From March 28 to April 8, 2022. Please contact Prof. Marco Caliari for info and details.
The recent shift towards massively parallel compute architectures (such as graphics processing units, GPUs, and supercomputers with millions of cores) has major implications for the design of efficient numerical algorithms. In fact, in many situations the sequential performance of an algorithm gives a very misleading picture of its actual performance in real-world applications (e.g. performing computer simulations to optimize aircraft in an industrial setting or understanding the complex dynamics inside fusion reactors).
Fully exploiting modern high-performance computing systems requires algorithms that parallelize well (i.e. can be split into largely independent tasks). Many classic algorithms, such as the FFT or spline interpolation, however, require a large amount of communication between these tasks. Moreover, due to their increased arithmetic throughput, modern computer systems favor higher-order methods. This calls for the design of new numerical algorithms as well as the development of theoretical tools that help us understand which algorithms work well for which types of problems.
In this mini-course we will discuss how such numerical algorithms are constructed, consider their accuracy, and discuss the aspects relevant to implementing them efficiently. In particular, we will consider, both from an algorithmic as well as from a high-performance computing viewpoint, what is required to conduct large-scale computer simulations of kinetic plasma dynamics using a dynamical low-rank approach. We will also consider some examples of industrial interest (e.g. fluid flow over an airfoil).
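To give a flavor of the idea behind low-rank methods, the sketch below (not course material, just an illustration; the grid sizes, the model distribution function, and the rank are arbitrary choices) shows how a two-dimensional kinetic distribution f(x, v) sampled on a grid can be compressed by a truncated SVD, which by the Eckart–Young theorem is the best rank-r approximation in the Frobenius norm. Dynamical low-rank methods evolve such a factorization in time instead of the full grid.

```python
import numpy as np

# Sample a model distribution function f(x, v) on a tensor grid.
# The coupling between x and v makes F non-separable, but its
# singular values still decay quickly, so a low rank suffices.
x = np.linspace(0.0, 1.0, 200)
v = np.linspace(-6.0, 6.0, 150)
F = np.exp(-0.5 * (v[None, :] - np.sin(2.0 * np.pi * x[:, None])) ** 2)

# Best rank-r approximation via truncated SVD (Eckart-Young).
r = 5
U, s, Vt = np.linalg.svd(F, full_matrices=False)
F_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Storage drops from len(x)*len(v) entries for F to roughly
# r*(len(x) + len(v)) entries for the factors.
rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
print(rel_err)
```

In the dynamical low-rank approach this factorization is never rebuilt from the full matrix: the factors themselves are propagated by the evolution equation, which is what makes the method attractive for high-dimensional kinetic problems.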
Link to the Moodle page.