Guiding the design of molecular materials for sustainable energy applications hinges on the understanding and control of excitation dynamics in functional nanostructures. Performance in, e.g., organic photovoltaics, photocatalysis, or soft thermoelectrics, is determined by multiple electronic processes, which emerge from the interaction between electronic structure and nano- and mesoscale morphology.

Resolving this intimate interplay is crucial but extremely difficult as it requires linking quantum and classical techniques in an accurate and predictive way. In MULTIXMAS, we will develop bottom-up simulations of charge/exciton dynamics in large-scale morphologies. 

Hierarchical multiscale structure equilibration of the nanomaterial will be combined with excited-state electronic structure theory based on Many-Body Green’s Functions, parameter-free electron-dynamics models, and kinetic Monte Carlo. Essential method development is accompanied by the technological challenge of high-performance and high-throughput computing.
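
As a minimal sketch of the kinetic Monte Carlo ingredient (generic rate-based event selection; the hopping rates below are placeholders, not MULTIXMAS parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_step(rates, t):
    """One kinetic Monte Carlo step: choose an event with probability
    proportional to its rate and draw the waiting time from the total rate."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total
    return event, t + dt

# Illustrative hopping rates (s^-1) of a charge to four neighbouring sites;
# in a real simulation these would follow from the electronic-structure input.
rates = np.array([2.0e12, 5.0e11, 1.0e11, 3.0e12])
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t)
    print(f"hop to neighbour {event} at t = {t:.3e} s")
```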

As a prototypical system, we study charge generation in low-cost organic photovoltaic cells (OPVCs), for which a breakthrough increase in power conversion efficiency (PCE) from currently ~11% to at or above that of conventional silicon-based devices (20%) is required if they are to play a significant role in meeting the growing demand for renewable energy.

Our tools will provide a general framework for multiscale simulations of excitation dynamics in complex molecular systems, with relevance beyond energy-related applications.

The major bottlenecks for large-scale electric transport are the limited energy density and the safety issues of current Li-ion batteries. Solid-state batteries are intrinsically safer and promise higher energy densities.

However, their power densities are still far below what applications demand. One of the key challenges is to understand this limitation, which is determined by the complex interplay of charge transport processes.

Current modelling approaches cannot predict the properties of these next-generation systems; doing so requires the introduction of a physically realistic Gibbs free energy, whose exact shape determines the locally high Li-ion concentrations and strong ion-vacancy interactions in solid electrolytes.

The associated computational challenge is the very fine and inhomogeneous finite-element grid needed to describe the atomic-scale processes at the interfaces in complete, macroscopic batteries. Here we propose a fundamental approach in which detailed free-energy functionals for the solid electrolytes, determined by first-principles methods, are integrated into current state-of-the-art phase-field models.
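
As a generic illustration of this coupling (not the specific functional proposed here), phase-field models of this kind evolve the local Li concentration $c$ through a chemical potential derived from the free-energy functional $G[c]$:

\[
\mu = \frac{\delta G[c]}{\delta c} = \frac{\partial g(c)}{\partial c} - \kappa \nabla^2 c,
\qquad
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( M(c)\, \nabla \mu \bigr),
\]

where $g(c)$ is the homogeneous Gibbs free-energy density, $\kappa$ a gradient-energy coefficient and $M(c)$ the mobility; the shape of $g(c)$ is precisely what the first-principles input is meant to supply.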

The correct physical description at the atomic scale will result in a realistic, general description of the interfaces, which play a pivotal role in solid-state batteries. The proposed model will boost the understanding and guide the design of these important next-generation battery systems.

Non-equilibrium plasma sources are promising devices for the transformation of carbon dioxide into methane and other value-added chemicals.

Numerical simulation is an indispensable tool for the design and optimization of source concepts, but further progress demands a professionalization that requires a blend of mathematical, physical and eScience techniques.

We propose to develop new numerical schemes for the transport fluxes in plasmas with very large numbers of chemical components, and new tools for the introspection and “chemical reduction” of the complex chemical models for such plasmas.
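
As a much-simplified sketch of what “chemical reduction” can involve (a toy flux-based ranking; real reduction tools use more sophisticated criteria such as sensitivity analysis or directed relation graphs):

```python
import numpy as np

# Toy network: rows are reactions, columns are species (A, B, C, D); entries
# are stoichiometric coefficients, with one rate per reaction (arbitrary units).
species = ["A", "B", "C", "D"]
stoich = np.array([
    [-1,  1,  0,  0],   # A -> B
    [ 0, -1,  1,  0],   # B -> C
    [ 0,  0, -1,  1],   # C -> D (very slow)
])
rates = np.array([1.0, 0.8, 1e-6])

# Total production-plus-consumption flux per species
turnover = np.abs(stoich).T @ rates
threshold = 1e-3 * turnover.max()

keep = [s for s, f in zip(species, turnover) if f >= threshold]
drop = [s for s, f in zip(species, turnover) if f < threshold]
print("retain:", keep, "| candidates for removal:", drop)
```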

A major contemporary issue is the lack of reproducibility of results from plasma simulations. This is addressed by the application of modern web-based methods for the dissemination of tools and underlying data sets. 

The project will result in a vendor-neutral infrastructure that will be made available to researchers in plasma, combustion and chemical reactor science.

An important aspect is the adoption and further development of the “XSAMS” XML/Schema file format for atomic data and its promotion in the low-temperature plasma physics community. In-house testing and application will focus on microwave plasma systems for the production of “Solar Fuels” that are currently under development at the DIFFER institute and at Eindhoven University of Technology.

The in silico optimization of solar energy conversion devices in which light is used to separate charge and generate power requires advanced quantum mechanical approaches to describe the photon-harvesting component and the initial charge-propagation process in the excited state. 

Computing excited states, however, is highly demanding for electronic structure methods, which often struggle to ensure accuracy or to treat the large, relevant system sizes. To overcome these limitations, we work in the sophisticated framework of many-body quantum Monte Carlo (QMC) methods, which we have been actively developing in recent years for the accurate treatment of excited states in complex systems.
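
As a minimal illustration of the variational flavour of QMC (a toy ground-state example, far simpler than the excited-state machinery developed here), the sketch below estimates the energy of a 1D harmonic oscillator with a Gaussian trial wavefunction via Metropolis sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, alpha):
    # E_L = -0.5 psi''/psi + 0.5 x^2 for psi(x) = exp(-alpha x^2)
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=100_000, step=1.0):
    """Metropolis sampling of |psi|^2 and averaging of the local energy."""
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * (rng.random() - 0.5)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)

for alpha in (0.3, 0.5, 0.7):
    print(alpha, vmc_energy(alpha))   # minimum (exactly 0.5) at alpha = 0.5
```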

Here, we propose to professionally structure our methodology and further accelerate it for energy-related applications, delivering a set of open and reusable software tools that address three key elements of QMC simulations: fast computation of observables, effective non-linear optimization schemes, and efficient graphics-processing-unit (GPU) kernels.

With these enhanced tools, we will in parallel proceed to establish a computational protocol to optimize the primary elements of a dye-sensitized solar cell and provide robust reference data for the characterization of one of the major limitations in efficiency, namely, the charge-recombination process at the interface between dye and semiconductor.

Quantum Dots (QDs) are versatile nanoscale materials that are increasingly used to boost efficiency in lighting and solar energy conversion devices. 

While QDs can be tailored to exhibit desirable opto-electronic properties, their synthesis still requires a lengthy trial-and-error procedure to find the right starting reagents (precursors) and ideal experimental conditions. 

In this proposal, we aim to greatly speed up this process by developing a robust and reliable automated screening workflow in which quantum chemical software packages are combined with statistical data analysis tools. Unique and crucial in this approach is the ability to explicitly include the experimental conditions in all stages of the QD synthesis. 

In this manner, we create reliable models around which we design highly parallelized Python workflows to quickly filter out suitable precursors for the preparation of novel QDs.

The machine-learning libraries necessary for statistical analysis and pattern recognition will be deployed inside QMWorks, a Python package constructed to support massively parallel execution of quantum chemical modelling workflows. Using the multiscale modelling facilities in QMWorks, we will be able to avoid redundant calculations and achieve a prediction speed that allows for direct interaction with the experimental colleagues who will ultimately test the candidate materials.
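
A minimal sketch of such a screening loop is shown below; the helper functions are hypothetical stand-ins for the quantum chemical calls (handled by QMWorks in the actual workflow) and for the trained statistical model, and only the parallel filter-and-rank pattern is meant to be illustrative:

```python
from concurrent.futures import ProcessPoolExecutor

def compute_descriptors(job):
    """Hypothetical stand-in for a quantum chemical calculation that also
    records the experimental conditions (temperature, solvent) as features."""
    precursor, temperature, solvent = job
    return {"precursor": precursor, "size": len(precursor),
            "T": temperature, "solvent": solvent}

def predict_suitability(descriptors):
    """Hypothetical stand-in for a trained machine-learning model."""
    return descriptors["size"] / descriptors["T"]

def screen(precursors, temperature, solvent):
    jobs = [(p, temperature, solvent) for p in precursors]
    with ProcessPoolExecutor() as pool:          # one quantum job per precursor
        descriptor_sets = list(pool.map(compute_descriptors, jobs))
    scored = [(d["precursor"], predict_suitability(d)) for d in descriptor_sets]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    candidates = ["CdMe2", "Cd(OAc)2", "Cd(oleate)2"]
    print(screen(candidates, temperature=500.0, solvent="octadecene"))
```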

Large eddy simulations (LES) of turbulence resort to coarse-grained models of the small scales of motion for which numerical resolution is not available. 

LES can be applied to the aerodynamic analysis of wind farms at sea. However, the model that describes the nonlinear unresolved-resolved interactions is a major source of uncertainty. Therefore, we aim to study the nonlinear propagation of uncertainties in LES of wind farms.

To start, a comparative study of Polynomial Chaos, Gaussian-process and Karhunen-Loève-based surrogate models for uncertainty propagation (UP) is performed, and the best method is tailored to turbulence. The number of cores needed for this UP is so large that a space-only parallelization does not suffice; hence parallel-in-time (PinT) algorithms are applied.

In such algorithms, multiple time-step levels are introduced and the serial dependencies are shifted to the largest time step. Parareal is a prime example, which has been applied with success to many problems. For turbulent flows, however, Parareal suffers from convergence problems and artificial dissipation. Both problems are addressed by improving the coarse-time operator. The PinT software is set up such that it can be used for Navier-Stokes solvers; it may also be (re)used for the time integration of similar PDEs.
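
A minimal Parareal iteration for a scalar ODE is sketched below (toy fine and coarse propagators; a production PinT solver would wrap the Navier-Stokes time integrator instead):

```python
import numpy as np

def fine(u, t0, t1, f, n=100):
    """Accurate propagator: many small forward-Euler steps."""
    dt = (t1 - t0) / n
    for _ in range(n):
        u = u + dt * f(u)
    return u

def coarse(u, t0, t1, f):
    """Cheap propagator: a single forward-Euler step."""
    return u + (t1 - t0) * f(u)

def parareal(u0, times, f, iterations=5):
    n = len(times) - 1
    u = [u0] * (n + 1)
    # initial serial coarse sweep
    for i in range(n):
        u[i + 1] = coarse(u[i], times[i], times[i + 1], f)
    for _ in range(iterations):
        # fine solves are independent per interval -> parallel in time
        f_old = [fine(u[i], times[i], times[i + 1], f) for i in range(n)]
        g_old = [coarse(u[i], times[i], times[i + 1], f) for i in range(n)]
        new = [u0]
        for i in range(n):
            # Parareal correction: G(new) + F(old) - G(old)
            new.append(coarse(new[i], times[i], times[i + 1], f)
                       + f_old[i] - g_old[i])
        u = new
    return u

f = lambda u: -u                       # toy ODE du/dt = -u
times = np.linspace(0.0, 5.0, 11)
print(parareal(1.0, times, f)[-1], np.exp(-5.0))
```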

The key challenge we address in this project is to accurately and efficiently compute the effects of unavoidable fabrication disorder on functional 3D nanostructures that trap light for photovoltaic conversion. 

Traditionally, optical measurements of real nanophotonic structures are compared to an idealized model. Unfortunately, this does not allow one to assess the consequences of unavoidable fabrication imperfections, which hampers the rational development of efficient solar cells.

Recently, we pioneered X-ray holotomography as a probe of complex 3D nanostructures with 20 nm spatial resolution. When combined with Maxwell computations, this provides unprecedented opportunities to study real 3D nanofabricated structures for photovoltaics.

The giant tomography data set of voxels requires, however, important computational innovations: i) the use of polytopic meshes to allow significantly smaller meshes than dictated by the domain’s geometric complexity; ii) the development of discontinuous Galerkin discretizations for the Maxwell equations using polytopic elements; iii) the use of unit-cell Bloch-mode basis functions for robust numerical algorithms that greatly improve the computational efficiency of ultralarge superstructure computations. 

Since the software development will be based on the hpGEM discontinuous Galerkin toolkit, our project has spin-off to other applications, including DGEarth for seismics and HamWave for nonlinear water waves.

We will perform unprecedented large-eddy simulations (LES) of high-pressure liquid-fuel injection and reacting multiphase flows in modern energy conversion systems, such as rocket engines, gas turbines and Diesel engines, to provide detailed insight into high-pressure injection phenomena and contribute to the solid physical understanding necessary to further improve the efficiency of these technical systems. 

For this purpose, we recently developed a two-phase model based on cubic equations of state and vapor-liquid equilibrium calculations, which can represent supercritical states and multi-component subcritical two-phase states, and an efficient finite-rate chemistry model, which can accurately predict ignition and the transition between deflagration and detonation. 
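
For reference, one widely used cubic equation of state (the abstract does not specify which one is employed) is the Peng-Robinson form, which relates pressure $p$, temperature $T$ and molar volume $v$ as

\[
p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2bv - b^{2}},
\]

with substance-specific parameters $a$ and $b$ and a temperature-dependent correction $\alpha(T)$; vapor-liquid equilibrium then follows from equating the fugacities of the two phases computed from the same equation of state.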

However, combining these readily available models efficiently in a single high-fidelity multi-physics simulation is challenging. With any classical domain decomposition, their uneven computational intensity severely limits the scalability of the simulation as described by Amdahl’s and Gustafson’s laws. 
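
For reference, with a parallelizable work fraction $p$ and $N$ processors these laws give

\[
S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + p/N},
\qquad
S_{\mathrm{Gustafson}}(N) = (1 - p) + pN,
\]

so any component whose work cannot be spread over the available processors caps the achievable speedup of the coupled simulation.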

During this project, we will solve this scalability problem through a dynamic multi-level parallelization, which will be implemented in the form of a generic shared library for scalable high-performance multi-physics simulations. The library will be integrated into our existing and next-generation flow solvers and is anticipated to have a major impact on other multi-physics applications that require massively parallel high-performance computing.

The infant Universe (its first billion years) remains its least explored era. Although only sparse observations are available, the redshifted 21-cm emission of neutral hydrogen (HI), seen as spectral fluctuations at wavelengths of several meters, allows this era to be opened up for much more detailed study. The HI signal, however, is orders of magnitude fainter than most contaminating signals (e.g. (extra)Galactic foregrounds). Nonetheless, it is possible to detect and study this weak HI signal using the latest generation of low-frequency radio interferometers (e.g. LOFAR, MWA),
provided that all systematic (instrumental, ionospheric, etc.) errors are eliminated to sufficient levels (i.e. “calibrated”). These errors are determined and removed by solving a complex non-linear optimization problem with millions of unknown parameters, constrained by many terabytes of data.

Data parallelism is inherently exploited in the calibration of radio-interferometric observations, where calibration is done in parallel on data at different frequencies. However, to achieve the highest accuracy and precision in calibration, without biasing the weak HI signal, a global calibration scheme is needed. We have demonstrated that this can be done using consensus optimization. In this project we will develop this from a proof of concept into a fully capable, computationally efficient and scalable software system. The ensuing orders-of-magnitude improvements in accuracy and computational speed will not only enable the detection of this weak HI signal but also benefit a wider astronomical community (users of e.g. LOFAR, MWA, MeerKAT, ASKAP, APERTIF, SKA). The software developed in this project will be made publicly available for many other distributed optimization applications.
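
The sketch below shows the generic global-variable consensus ADMM pattern on a toy distributed least-squares problem; it illustrates the consensus idea only and is not the calibration algorithm developed in this project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: each "frequency chunk" i holds data (A_i, b_i) generated from
# a common ground-truth parameter vector x_true.
n_chunks, n_params, rho = 4, 5, 1.0
x_true = rng.normal(size=n_params)
A = [rng.normal(size=(50, n_params)) for _ in range(n_chunks)]
b = [Ai @ x_true + 0.01 * rng.normal(size=50) for Ai in A]

x = [np.zeros(n_params) for _ in range(n_chunks)]   # local estimates
u = [np.zeros(n_params) for _ in range(n_chunks)]   # scaled dual variables
z = np.zeros(n_params)                              # consensus variable

for _ in range(50):
    # local updates (would run in parallel, one per data chunk)
    for i in range(n_chunks):
        lhs = A[i].T @ A[i] + rho * np.eye(n_params)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # consensus (averaging) and dual updates
    z = np.mean([x[i] + u[i] for i in range(n_chunks)], axis=0)
    for i in range(n_chunks):
        u[i] += x[i] - z

print(np.linalg.norm(z - x_true))   # small: all chunks agree on one solution
```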

Image by: Adolf Schaller, NASA-MSFC

The discovery of the Higgs boson in 2012 by the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN, Geneva, is a prime example of the success of large-scale statistical data analysis in particle physics. At the LHC, approximately 10 petabytes of data are recorded every year of data taking. The scientific goal of the examination of proton-proton collisions is to explore whether previously unseen particles are produced in these collisions, whose presence may be indicative of previously unconfirmed or unknown fundamental physics.

As the sought-after particles may decay in a multitude of ways, and their decay products are buried among hundreds of other decay products per collision, constructing proof of the existence of these particles requires an exhaustive analysis of collision data. The final statistical evidence combines the results of the analysis of dozens of partial data samples that each isolate a signature of interest or measure an important background or nuisance parameter.

Collaborative statistical modelling

In recent years the concept of collaborative statistical modelling has emerged, where detailed statistical models of measurements performed by independent teams of scientists are combined a posteriori without loss of detail. The preferred tool for this, RooFit, allows users to build probability models from expression trees of C++ objects that can be recursively composed into descriptive models of arbitrary complexity.
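
As a small example of this composition pattern (shown through the Python bindings, assuming ROOT with RooFit is installed), a signal-plus-background model is built as a tree of objects and fitted to a toy dataset:

```python
import ROOT

# Observable and model parameters
x = ROOT.RooRealVar("x", "observable", 0.0, 10.0)
mean = ROOT.RooRealVar("mean", "signal mean", 5.0, 0.0, 10.0)
sigma = ROOT.RooRealVar("sigma", "signal width", 0.5, 0.01, 5.0)
tau = ROOT.RooRealVar("tau", "background slope", -0.5, -5.0, 0.0)

# Component pdfs, recursively composed into a signal + background model
sig = ROOT.RooGaussian("sig", "signal", x, mean, sigma)
bkg = ROOT.RooExponential("bkg", "background", x, tau)
nsig = ROOT.RooRealVar("nsig", "signal yield", 50.0, 0.0, 1e4)
nbkg = ROOT.RooRealVar("nbkg", "background yield", 500.0, 0.0, 1e4)
model = ROOT.RooAddPdf("model", "sig + bkg",
                       ROOT.RooArgList(sig, bkg), ROOT.RooArgList(nsig, nbkg))

# Generate a toy dataset from the model and fit the model back to it
data = model.generate(ROOT.RooArgSet(x), 550)
model.fitTo(data)
```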

Computational performance is a limiting issue

With the emergence of ever more complex models, computational performance is now becoming a limiting issue. This project aims to introduce eScience techniques to address it: vectorization and parallelization of the calculations will lead to significant performance improvements, while new structures to represent the combined data will simplify the process of building joint models for heterogeneous datasets.
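
As a simple illustration of the kind of gain vectorization offers (a plain NumPy toy, not the RooFit internals), the same Gaussian negative log-likelihood can be evaluated event by event or on the whole dataset at once:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=0.5, size=1_000_000)

def nll_loop(mu, sigma, x):
    """Event-by-event evaluation in a Python loop (slow)."""
    total = 0.0
    for xi in x:
        total += 0.5 * ((xi - mu) / sigma) ** 2
    return total + x.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def nll_vectorized(mu, sigma, x):
    """Same quantity evaluated on the full array at once (fast)."""
    return (np.sum(0.5 * ((x - mu) / sigma) ** 2)
            + x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

assert np.isclose(nll_loop(5.0, 0.5, data), nll_vectorized(5.0, 0.5, data))
```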

Useable in lateral directions

With the much-improved scalability and computational efficiency, the developed software can also become usable in lateral directions, such as spectral CT image reconstruction.

Image: CMS Doomsday at the CERN LHC by solarnu – https://www.flickr.com/photos/solarnu/2078532845
