Domain specific languages for exascale fluid simulations
08 Nov 2022
Dr Jianping Meng presents the first in a series of blog posts about domain specific languages for exascale fluid simulations, starting with Part 1 on Lattice Boltzmann modelling.

Exascale supercomputers have arrived, bringing computing power equivalent to harnessing millions of laptops to tackle major scientific challenges. The Frontier machine in the US achieved exaflop performance this year, while the UK plans to deploy such a computer by 2025 [1].

Such a fast computer can enable simulations at resolutions and accuracies never achieved before, and will allow us to understand complex fluid flows in both industrial and natural processes. However, it is a huge challenge to utilise such facilities effectively, since novel and rapidly changing technology, e.g., the graphics processing unit (GPU) shown in Fig. 1, is being used to improve both computing performance and energy efficiency. Keeping pace with such technology can require many years of effort spent modifying software. In the worst case, a traditional code might not be able to run on a machine like Frontier at all.

Figure 1. One of the new devices adopted in exascale computers like Frontier is the graphics processing unit,
which is traditionally used for computer games. It requires dramatically different programming technology.

To tackle this challenge, we are developing a domain specific language in collaboration with partners, namely the high-level mesoscale modelling system (HiLeMMS), for lattice Boltzmann fluid simulations. The aim is to enable the separation of concerns, i.e., the algorithms that encapsulate the mathematics and physics of the problem are separated from the computational science of their implementation. Thus, HiLeMMS will help to minimise the effort needed to develop and run application codes on the coming exascale supercomputers and improve researchers' productivity.

The lattice Boltzmann method is an emerging approach to fluid simulation that is gradually gaining popularity in a wide range of applications thanks to its simplicity and accuracy [2]. The algorithm can be imagined as a number of fictitious particles moving and interacting, as shown by the animation. The rules for their motion and interaction are specially designed to reproduce the fluid flow behaviour we experience daily. While the algorithm is very intuitive, the method has been mathematically validated in several ways [2].
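To give a flavour of how simple the core algorithm is, the sketch below is a minimal, illustrative D2Q9 lattice Boltzmann step on a periodic box with the standard BGK collision rule. It is a teaching toy, not HiLeMMS code:

```python
import numpy as np

# D2Q9 lattice: 9 discrete particle velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.6            # relaxation time (sets the fluid viscosity)
nx, ny = 32, 32      # periodic grid

# start from a small density wave at rest
x = np.arange(nx)
rho0 = 1.0 + 0.05 * np.sin(2 * np.pi * x / nx)[:, None] * np.ones(ny)
f = w[:, None, None] * rho0          # particle distributions, shape (9, nx, ny)
mass0 = f.sum()

def step(f):
    # macroscopic moments: density and velocity
    rho = f.sum(axis=0)
    u = np.einsum('qi,qxy->ixy', c, f) / rho
    # second-order equilibrium distribution
    cu = np.einsum('qi,ixy->qxy', c, u)
    feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2
                                    - 1.5 * (u**2).sum(axis=0))
    # collide: relax towards equilibrium (BGK rule) ...
    f = f - (f - feq) / tau
    # ... then stream: shift each population along its velocity
    for q in range(9):
        f[q] = np.roll(f[q], tuple(c[q]), axis=(0, 1))
    return f

for _ in range(100):
    f = step(f)
```

The "motion and interaction" of the fictitious particles are exactly the stream and collide stages above; mass is conserved to machine precision at every step.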


By domain specific language, we mean a kind of computer language specially designed for a particular domain, with deliberately limited expressiveness [3]. In this way we can capture many details from that domain which cannot easily be expressed using a general-purpose computer language alone. Moreover, the system speaks the same "language" as domain experts, i.e., the design is domain oriented rather than computer oriented, which makes it easy for them to use.

HiLeMMS is such a system for lattice Boltzmann modelling, consisting of a set of high-level abstractions, the backend codes and the necessary code-generation tools. Using the system, we do not need deep knowledge of, say, parallel computing or GPU programming. Instead, we write the application code once in a sequential style, and the code can be compiled and run on various platforms via code generation and metaprogramming techniques.
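As a rough illustration of this separation of concerns, the sketch below registers interchangeable "backends" that execute the same domain-level collision rule. All names here are hypothetical and illustrative; they do not reflect the actual HiLeMMS API:

```python
import numpy as np

BACKENDS = {}

def backend(name):
    """Register a loop-execution strategy under a name."""
    def register(fn):
        BACKENDS[name] = fn
        return fn
    return register

@backend('serial')
def run_serial(kernel, field):
    # plain nested loops, the "sequential style" a scientist writes
    out = np.empty_like(field)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = kernel(field[i, j])
    return out

@backend('vectorised')
def run_vectorised(kernel, field):
    # the same kernel run by a tuned backend (a stand-in for
    # generated GPU or parallel code)
    return np.vectorize(kernel)(field)

def relax(f, feq=1.0, tau=0.6):
    """Domain-level collision rule: written once, backend-agnostic."""
    return f - (f - feq) / tau

field = np.random.rand(16, 16)
a = BACKENDS['serial'](relax, field)
b = BACKENDS['vectorised'](relax, field)
```

The domain scientist only ever touches `relax`; which backend executes it, and on which hardware, is someone else's concern, and both produce identical physics.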

In the following, we present results from testing the computing performance on the UK's national supercomputer ARCHER2 (5,860 nodes and 750,080 cores in total) and the EPSRC Tier-2 high-performance computer Bede (a cluster with 128 NVIDIA V100 GPUs). The application code based on HiLeMMS simulates the Taylor-Green vortex problem. The code demonstrates excellent strong scaling (left) for up to 4,096 computing nodes (524,288 cores) on ARCHER2 and excellent weak scaling (right) for up to 64 NVIDIA V100 GPUs on Bede.
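For readers unfamiliar with these metrics, the snippet below shows how strong- and weak-scaling efficiency are conventionally computed. The timings are made-up illustrative numbers, not the measured ARCHER2 or Bede data:

```python
def strong_scaling_efficiency(t_base, n_base, t, n):
    """Fixed problem size: efficiency = speedup / ideal speedup."""
    speedup = t_base / t
    return speedup / (n / n_base)

def weak_scaling_efficiency(t_base, t):
    """Problem grows with resources: ideally the runtime stays flat."""
    return t_base / t

# e.g. doubling the node count halves the runtime:
# 100 s on 64 nodes -> 50 s on 128 nodes gives 100% strong efficiency
eff = strong_scaling_efficiency(100.0, 64, 50.0, 128)
print(eff)  # 1.0
```

An efficiency close to 1.0 at 4,096 nodes or 64 GPUs is what "excellent scaling" means in practice.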

Figure 2. Computing performance on ARCHER2 and Bede.


In my next blog post, I will be talking about application cases using HiLeMMS.  


[1] UK Research & Innovation opens £5m fund to develop exascale supercomputer software and algorithms, Data Centre Dynamics (datacenterdynamics.com).

[2] Shiyi Chen and Gary D. Doolen, Lattice Boltzmann method for fluid flows, Annu. Rev. Fluid Mech., Vol. 30, No. 1, pp. 329-364, 1998; C. K. Aidun and J. R. Clausen, Lattice-Boltzmann method for complex flows, Annu. Rev. Fluid Mech., Vol. 42, No. 1, pp. 439-472, 2010.

[3] Martin Fowler with Rebecca Parsons, Domain-Specific Languages, Addison-Wesley Professional, 2010.


Contact: Computing Insight UK