Up to 2.3 times lower latency and 3.9 times higher bandwidth for communication — without rewriting your MPI code
You may think that writing your code to run on a parallel system is all you need to do for better speed and efficiency, or that running on a small cluster spares you from scaling issues. In practice, however, parallel codes often lose more than 50 percent of their performance to load imbalance alone. And recovering that performance by adding dynamic runtime support yourself costs significant extra time and money — especially if you’ve already parallelized your code with MPI.
CharmMPI can help. It gives you the runtime adaptivity of our flagship product, Charm++: CharmMPI constantly and automatically inspects and optimizes your simulation runs as they execute, dramatically improving performance. And CharmMPI works with existing MPI applications written in C, C++, and Fortran. The result is faster, higher-resolution insights.
CharmMPI provides up to 2.3 times lower latency and 3.9 times higher bandwidth for communication within a shared memory node when compared to traditional process-based MPI libraries.
We developed CharmMPI in collaboration with some of the world’s best researchers building weather and climate, molecular dynamics, and computational engineering applications. So we’re confident that CharmMPI is ready to deliver results on your adaptive mesh refinement and other simulation codes.
Using MPI’s familiar API, CharmMPI can help you:
Alleviate scalability bottlenecks that are difficult to address directly in applications.
Tackle both static and dynamic load imbalances.
Reduce communication latencies that can otherwise be hard to compensate for.
Run through node failures.
Dynamically shrink and expand your computing runs based on available resource allocations.
What is CharmMPI?
CharmMPI provides dynamic runtime support for existing MPI applications. It is built on top of the popular and powerful Charm++ parallel programming system, and it enables MPI programmers to achieve scalable parallel performance with less time and effort than other MPI implementations require. With CharmMPI, you can optimize the execution of your program as it runs and carry out a variety of tasks that would otherwise require complex, application-specific programming.
If you’re ready to learn more, contact us today for a demo.