High Performance Computing

Scientific research increasingly relies on complex simulations and the analysis of large volumes of data. Although computers have grown rapidly in computational power and storage capacity, many simulation and data-analysis tasks require so much computation that they far exceed the capabilities of a single machine.

For such applications, traditional high performance computing techniques can be used to increase performance. For example, by parallelizing codes using multi-threading and/or MPI, applications can run on fast multi-core systems, compute clusters or even supercomputers, which can increase the available computational power significantly. By using accelerators such as GPUs, the performance of specific types of tasks can be improved even further (see the GPU computing page).
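As a minimal sketch of this kind of parallelization, the example below distributes independent work units over MPI ranks using the mpi4py package. The work() function and the input range are placeholders for illustration, not taken from any specific application.

    # Minimal sketch: spread independent work units over MPI ranks (mpi4py assumed).
    from mpi4py import MPI

    def work(item):
        # Placeholder for the actual per-item computation.
        return item * item

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    items = list(range(100))
    # Each rank processes a disjoint, round-robin slice of the work.
    local_results = [work(item) for item in items[rank::size]]

    # Collect the partial results on rank 0.
    all_results = comm.gather(local_results, root=0)
    if rank == 0:
        flat = [r for part in all_results for r in part]
        print(f"collected {len(flat)} results")

Run with, for example, mpirun -n 4 python script.py; each process then handles a quarter of the work.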

Some applications can already benefit greatly from simple task farming or parameter sweep solutions, in which the (unchanged) application is run many times for different input data sets.
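A task farm of this kind can often be set up with very little code. The sketch below runs a hypothetical executable named ./simulate once per input file, with a fixed number of runs executing concurrently; the executable name and input file names are assumptions for illustration only.

    # Minimal task-farming sketch: run the same (unchanged) program for many
    # input files in parallel, using only the Python standard library.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_case(input_file):
        # Each task is one independent run of the application on one input set.
        result = subprocess.run(["./simulate", input_file],
                                capture_output=True, text=True)
        return input_file, result.returncode

    inputs = [f"case_{i:03d}.in" for i in range(64)]

    if __name__ == "__main__":
        # At most 8 runs execute at the same time.
        with ProcessPoolExecutor(max_workers=8) as pool:
            for name, code in pool.map(run_case, inputs):
                print(f"{name}: exit code {code}")

On a cluster, the same pattern is typically expressed as a job array in the batch scheduler rather than a single Python script.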

For more data-intensive applications, alternative techniques such as map-reduce may be more applicable. The performance of these applications is often limited by storage bottlenecks rather than by computational power.
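The sketch below illustrates the map-reduce pattern on a toy word-count task: the map phase counts words within each data chunk independently, and the reduce phase merges the partial counts. The in-memory chunks are a stand-in for data that would normally be read from distributed storage.

    # Minimal map-reduce sketch (word counting) using the standard library.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    def map_chunk(lines):
        # Map phase: count words independently within one chunk of the data.
        counts = Counter()
        for line in lines:
            counts.update(line.split())
        return counts

    def reduce_counts(a, b):
        # Reduce phase: merge partial counts into a single result.
        a.update(b)
        return a

    chunks = [
        ["high performance computing", "parallel computing"],
        ["data analysis at scale", "high throughput"],
    ]

    if __name__ == "__main__":
        with Pool() as pool:
            partial = pool.map(map_chunk, chunks)
        total = reduce(reduce_counts, partial, Counter())
        print(total.most_common(3))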

We have the expertise to choose and apply the most suitable HPC technology for speeding up an application.

