PEARL
Parallel Event Access and Replay Library
Hybrid OpenMP/MPI-related part of the PEARL library.
Files

file pearl.h
Declarations of global library functions.
This part of the PEARL library provides all functions and classes that are specific to handling traces of hybrid OpenMP/MPI applications.
The following code snippet shows the basic steps required to load and set up the PEARL data structures to handle hybrid OpenMP/MPI traces (for information on how to handle serial, pure OpenMP or pure MPI traces, see the PEARL.base, PEARL.omp, and PEARL.mpi parts of PEARL).
```cpp
using namespace pearl;

// Initialize MPI, etc.
...

// Initialize PEARL
PEARL_hybrid_init();

// Open trace archive
TraceArchive* archive = TraceArchive::open(archive_name);

// Load global definitions
GlobalDefs* defs = archive->getDefinitions();

// Open trace container
const LocationGroup& process = defs->getLocationGroup(mpi_rank);
archive->openTraceContainer(process);

// Create thread team
#pragma omp parallel
{
    // Load trace data
    const Location& location = process.getLocation(omp_get_thread_num());
    LocalTrace*     trace    = archive->getTrace(*defs, location);

    // Preprocessing
    PEARL_verify_calltree(*defs, *trace);
    #pragma omp master
    {
        PEARL_mpi_unify_calltree(*defs);
    }
    PEARL_preprocess_trace(*defs, *trace);

    ...

    // Free trace data
    delete trace;
}

// Close trace container & archive
archive->closeTraceContainer();
delete archive;

// Free definition data
delete defs;

// Finalize PEARL
PEARL_finalize();

// Finalize MPI
...
```
Note that all of the aforementioned function calls except PEARL_hybrid_init() throw exceptions in case of errors. This has to be taken into account to avoid deadlocks (e.g., one process failing with an exception while the other processes wait in an MPI communication operation or OpenMP barrier).
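One common way to honor this requirement is to wrap the setup calls in a try/catch block and terminate all ranks if any call fails, rather than letting a single process die while its peers block. The sketch below illustrates only the guard pattern; `setup_stage()` is a hypothetical stand-in for a throwing PEARL call such as `TraceArchive::open()` or `PEARL_preprocess_trace()`, and the `MPI_Abort` call (shown as a comment) is what a real MPI tool would use in place of a plain error return.

```cpp
#include <cstdlib>
#include <iostream>
#include <stdexcept>

// Hypothetical stand-in for a throwing PEARL setup call (e.g.
// TraceArchive::open or PEARL_preprocess_trace); the `fail` flag
// simulates an error condition such as a missing trace archive.
void setup_stage(bool fail)
{
    if (fail)
        throw std::runtime_error("trace archive could not be opened");
}

// Guarded driver: catches any exception thrown during setup and turns
// it into a clean, collective shutdown instead of a silent deadlock.
int run_guarded(bool fail)
{
    try
    {
        setup_stage(fail);
    }
    catch (const std::exception& ex)
    {
        // Report the error on this rank.  In a real MPI program, follow
        // this with MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE) so that no
        // other rank is left waiting in a communication operation or an
        // OpenMP barrier.
        std::cerr << "PEARL error: " << ex.what() << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```

The same guard should enclose the parallel region as well, since `PEARL_verify_calltree()` and `PEARL_preprocess_trace()` are called per thread and may also throw.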
Copyright © 1998–2014 Forschungszentrum Jülich GmbH,
Jülich Supercomputing Centre
Copyright © 2009–2014 German Research School for Simulation Sciences GmbH, Laboratory for Parallel Programming