Efficient Computing

Optimizing for hardware performance

Computer architectures and chip designs change frequently, and each significant change calls for new algorithms that exploit the hardware for its performance benefits.

As the ambitions of domain scientists grow, and with them the volume of data they analyze, so does the need to use computing resources efficiently.

Code can be optimized for performance and run on the most appropriate machine, including accelerator hardware, while keeping energy usage to a minimum.

Example: Distributed & heterogeneous computing

Despite the rapid growth in compute power, data volumes and the ambitions of domain scientists continue to grow faster. Compute power therefore remains at a premium, requiring ever more efficient ways to use this component of the e-infrastructure.

When performing any large computation (for example, simulations or signal processing), it should be remembered that some codes run best on a supercomputer while others run best on a cluster, that for some models accelerators such as GPUs are most efficient, and so on.

We develop distributed & heterogeneous computing methods that allow individual components of large parallel applications to be deployed on the resource with the best characteristics (performance, energy, financial cost) for that particular problem. The application of distributed & heterogeneous computing has been crucial in projects on point cloud processing, digital forensics, astrophysics, and climate simulations.
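The idea of matching each component to the resource with the best characteristics can be sketched as a simple weighted scoring problem. The resource names, score values, and weighting scheme below are illustrative assumptions, not the eScience Center's actual scheduling method:

```python
# Hypothetical sketch: pick the best resource for a workflow component
# by minimizing a weighted sum of relative scores (lower is better).
# Resource profiles and weights here are invented for illustration.
RESOURCES = {
    "supercomputer": {"runtime": 1.0, "energy": 3.0, "cost": 3.0},
    "cluster":       {"runtime": 2.0, "energy": 2.0, "cost": 1.5},
    "gpu_node":      {"runtime": 0.5, "energy": 1.5, "cost": 2.0},
}

def best_resource(weights):
    """Return the resource name minimizing the weighted sum of scores.

    weights: dict mapping criterion ("runtime", "energy", "cost")
    to its importance for this component; missing criteria count as 0.
    """
    def total(scores):
        return sum(weights.get(k, 0.0) * v for k, v in scores.items())
    return min(RESOURCES, key=lambda r: total(RESOURCES[r]))

# A throughput-critical simulation kernel: runtime dominates.
print(best_resource({"runtime": 1.0}))  # gpu_node
# A budget-constrained batch job: financial cost dominates.
print(best_resource({"cost": 1.0}))     # cluster
```

In a real system the scores would come from benchmarks or performance models, and the dispatcher would also account for data movement between resources, but the core trade-off (performance vs. energy vs. cost per component) is the same.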

This expertise area includes:

  • Accelerators
  • Distributed Computing
  • High Performance Computing
  • Numerical Modelling and Algorithms
  • Workflows and Orchestration


Technical Lead Efficient Computing: Dr. Jason Maassen

Jason is interested in topics related to large scale distributed computing. At the Netherlands eScience Center he works on climate research projects.

