Despite the rapid growth in compute power, data volumes and the ambitions of domain scientists continue to grow faster. Compute power therefore remains at a premium, requiring us to find newer and more efficient ways to utilize this component of the e-infrastructure. When performing any large computation (e.g., simulations or signal processing), it should be remembered that some codes run best on a supercomputer while others run best on a cluster, and for some models accelerators such as GPUs are the most efficient option.
NLeSC develops the distributed & heterogeneous computing methods that allow individual components of large parallel applications to be deployed on the resource with the best performance, energy, or financial characteristics for that particular problem.
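Matching a component to the most suitable resource can be thought of as a scoring problem over the available options. The sketch below is purely illustrative (the resources, metrics, and numbers are hypothetical, not an actual NLeSC tool): each candidate resource is scored by a weighted sum of normalized runtime, energy, and cost estimates, and the component is assigned to the cheapest-scoring one.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """A compute resource with hypothetical per-task estimates."""
    name: str
    runtime_s: float   # estimated wall-clock time for the task
    energy_kj: float   # estimated energy use
    cost_eur: float    # estimated financial cost

def best_resource(resources, weights=(1.0, 1.0, 1.0)):
    """Pick the resource minimizing a weighted sum of runtime, energy, and cost.

    Each metric is normalized against the per-metric minimum, so the
    weights express relative importance rather than absolute units.
    """
    w_t, w_e, w_c = weights
    min_t = min(r.runtime_s for r in resources)
    min_e = min(r.energy_kj for r in resources)
    min_c = min(r.cost_eur for r in resources)

    def score(r):
        return (w_t * r.runtime_s / min_t
                + w_e * r.energy_kj / min_e
                + w_c * r.cost_eur / min_c)

    return min(resources, key=score)

# Illustrative numbers only: a GPU node that is fast but costly,
# a cluster that is cheap but slow, a supercomputer in between.
candidates = [
    Resource("gpu-node", runtime_s=10, energy_kj=50, cost_eur=2.0),
    Resource("cpu-cluster", runtime_s=60, energy_kj=40, cost_eur=0.5),
    Resource("supercomputer", runtime_s=8, energy_kj=120, cost_eur=5.0),
]

print(best_resource(candidates).name)                      # → gpu-node
print(best_resource(candidates, weights=(1, 1, 5)).name)   # → cpu-cluster
```

Shifting the weights changes the outcome: with equal weights the fast GPU node wins, while weighting cost heavily steers the same task to the cheaper cluster, which is the essence of choosing a resource per problem rather than per application.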
The application of distributed & heterogeneous computing has been crucial in projects such as eAstronomy, point cloud processing, digital forensics, astrophysics, and climate simulations.