Monday, 5 July 2021
Time | Type | Contributors | Domain
11:00 - 11:30 CEST | Welcome | Ella Maillart
11:30 - 12:20 CEST | Keynote | Ella Maillart | CS and Math
12:20 - 13:30 CEST | Break
13:30 - 15:00 CEST | Paper | Henry Dunant | CS and Math, Life Sciences
13:30 - 15:00 CEST | Paper | Lise Girardin | CS and Math, Engineering
13:30 - 15:00 CEST | Paper | Ernesto Bertarelli | Chemistry and Materials, Physics
13:30 - 15:00 CEST | Paper | Mère Royaume | CS and Math
15:00 - 15:30 CEST | Break
15:30 - 17:30 CEST | Minisymposium | Louis Favre | CS and Math, Climate and Weather, Physics, Solid Earth Dynamics

Description: Computational geosciences leverage advanced computational methods to improve our understanding of the interiors of Earth and other planets. They combine numerical models to understand the current state of physical quantities describing a system, to predict their future states, and to infer unknown parameters of those models from data measurements. Such models produce highly nonlinear numerical systems with extremely large numbers of unknowns. The ever-increasing power and availability of High Performance Computing (HPC) facilities offers researchers unprecedented opportunities to continually increase both the spatiotemporal resolution and the physical complexity of their numerical models. However, this requires complex numerical methods and implementations that can harness HPC resources efficiently for problem sizes of billions of degrees of freedom. The goal of this minisymposium is to bring together scientists who work on theory, numerical methods, algorithms and scientific software engineering for scalable numerical modelling and inversion. Examples include, but are not limited to, geodynamics, multi-phase geophysical flow modelling, seismic wave propagation and imaging, seismic tomography and inversion of large data sets, development of elaborate workflows including HPC for imaging problems, ice-sheet modelling, and urgent computing for natural hazards.
15:30 - 17:30 CEST | Minisymposium | Ernesto Bertarelli | CS and Math, Physics, Engineering

Description: Heterogeneous computing architectures are unavoidable as we move towards the era of exascale computing. Computing nodes are being built with ever-increasing hierarchy depth, and hardware as well as performance portability are key capabilities for making efficient use of them. Particle-in-cell (PIC) methods are the method of choice in computational simulations of many physical applications, including but not limited to particle accelerators, nuclear fusion, and astrophysics. Portability in the context of PIC schemes is therefore essential for carrying out these extreme-scale simulations on current and next-generation architectures. This minisymposium will serve as a platform to discuss the architecture and performance of hardware-independent and, thus, portable frameworks for particle and grid computations.
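As a rough, minimal sketch of the kind of kernel such portable PIC frameworks have to target (not taken from any of the codes presented here; the 1D layout and nearest-grid-point interpolation are illustrative assumptions), a particle push interleaves a grid-to-particle field gather with a velocity and position update:

```cpp
#include <cstddef>
#include <vector>

// Minimal 1D electrostatic PIC push: gather the field at each particle's
// cell, then advance velocity and position. Periodic domain of length L.
struct Particles {
    std::vector<double> x, v;   // positions and velocities
};

void push(Particles& p, const std::vector<double>& E_grid,
          double dx, double dt, double qm /* charge-to-mass ratio */, double L)
{
    const std::size_t n = p.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        // Gather: nearest-grid-point interpolation of the electric field.
        std::size_t cell = static_cast<std::size_t>(p.x[i] / dx) % E_grid.size();
        double E = E_grid[cell];

        // Push: update velocity, then position.
        p.v[i] += qm * E * dt;
        p.x[i] += p.v[i] * dt;

        // Keep the particle inside the periodic domain [0, L).
        if (p.x[i] >= L) p.x[i] -= L;
        if (p.x[i] < 0.0) p.x[i] += L;
    }
}
```

In production codes a loop of exactly this shape is what gets wrapped in a portability layer so that the same source can run on CPUs and GPUs.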
15:30 - 17:30 CEST | Minisymposium | Mère Royaume | CS and Math

Description: HPC systems are increasingly heterogeneous and massively parallel. This creates unique challenges in fully utilizing all resources available on a node. Application developers have to expose enough parallelism to take advantage of the increasing core counts. At the same time, communication between both on-node and inter-node components is becoming harder to manage. The fork-join programming paradigm is the preferred choice for most applications because of its simplicity and often straightforward application to serial programs. However, it imposes significant limitations on performance with its implicit global barriers. Asynchrony is becoming a requirement to hide latencies and is even starting to see wider use in more traditional libraries. Relaxing data and task dependencies is also an important technique to expose more parallelism in an application. These are all ideas that task-based programming brings to users, and they are making their way into more established libraries and pushing applications, libraries, and languages in new directions. This minisymposium brings together implementers and users of task-based programming frameworks and aims to discuss the benefits, recent advances, and remaining challenges in making task-based programming usable and accessible to everyone.
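To make the contrast concrete, here is a minimal sketch using OpenMP tasks (assuming an OpenMP 4.5+ compiler; the work functions are invented for illustration and do not come from any of the frameworks presented): the two independent tasks may run concurrently, and the third is scheduled by the runtime once its inputs are ready, without a global barrier between phases.

```cpp
#include <cstdio>

// Toy task graph: work_a and work_b are independent; work_c depends on both.
void work_a(double* a) { *a = 1.0; }
void work_b(double* b) { *b = 2.0; }
void work_c(double a, double b, double* c) { *c = a + b; }

int main() {
    double a, b, c;

    #pragma omp parallel
    #pragma omp single
    {
        // Tasks with explicit data dependencies: no global barrier is needed
        // between phases, the runtime runs work_c once a and b are ready.
        #pragma omp task depend(out: a) shared(a)
        work_a(&a);
        #pragma omp task depend(out: b) shared(b)
        work_b(&b);
        #pragma omp task depend(in: a, b) depend(out: c) shared(a, b, c)
        work_c(a, b, &c);
        #pragma omp taskwait
    }
    std::printf("c = %f\n", c);
    return 0;
}
```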
15:30 - 17:30 CEST | Minisymposium | Jean-Jacques Rousseau | Emerging Applications

Description: Satellite Earth Observation (EO) data have already exceeded the petabyte scale and are increasingly freely and openly available from different data holdings. This poses a number of Big Data challenges impeding their full information potential from being realized. EO Data Cubes (EODC) are a new paradigm revolutionizing the way users can interact with EO data and a promising solution to store, organize, manage and analyze large volumes of EO data. Different implementations are currently operational and are paving the way to broaden the use of EO data to larger communities of users; support decision-makers with timely and actionable information converted into meaningful geophysical variables; and ultimately unlock the information power of EO data. This minisymposium aims to cover the most recent advances in EODC developments from four countries leading this technology: Australia, Switzerland, Mexico and Armenia.
15:30 - 17:30 CEST | Minisymposium | Henry Dunant | CS and Math, Chemistry and Materials, Physics, Life Sciences, Engineering

Description: Computer models and HPC simulations are becoming a very important approach to assist clinicians in treating diseases and devising new therapies. The goal of this minisymposium is to investigate several challenges associated with this approach. We propose a selection of talks inspired by the problems raised in the H2020 project INSIST, for instance the estimation of success scores of a medical intervention through modeling (e.g. thrombolysis and thrombectomy processes in the treatment of a stroke, or the impact of the lack of oxygen in the brain), and ways to build a virtual population of patients in order to propose new treatments and avoid in-vivo and in-vitro experiments. The problem of validation and uncertainty quantification of the numerical models will also be considered.
15:30 - 17:30 CEST | Minisymposium | Michel Mayor | Chemistry and Materials

Description: Machine-learned potentials are an important tool in materials science and chemistry since they provide a highly accurate and computationally efficient approximation of the potential energy surface. Most of the currently used methods, however, are based on a local description of atomic environments and are thus unable to describe effects which take place over long distances beyond the local cutoff, such as charge transfer or a change in the total charge of a system. The established methods can therefore not be applied to systems where such long-range effects play an important role, e.g. aromatic organic molecules, sp2-hybridized carbon systems or metal clusters adsorbed on doped substrates. This inherent shortcoming has recently gained attention, prompting the development of a new generation of methods. In this minisymposium, we will discuss, with prominent experts in the field, the current challenges of incorporating long-range interactions into machine-learned potentials.
15:30 - 17:30 CEST | Minisymposium | Jean Calvin | CS and Math, Climate and Weather, Physics

Description: Weather and climate models typically contain millions of lines of code which were in many cases developed over multiple decades. This makes it difficult to adapt these models to the emerging massively parallel supercomputers while still keeping the code readable, maintainable and ready for operational use. One major concern for many weather and climate prediction centers is the ability for domain scientists to explore new algorithms without the need to first create the infrastructure for those changes. One strategy to address these difficulties is the use of emerging hardware architectures which require a relatively small amount of code adaptation while still promising great performance and energy efficiency. Examples of such architectures are ARM processors and NEC vector engines. Exploration of new algorithms can also be facilitated by using directive-based approaches, including programming models like OpenMP and OpenACC. These programming models have the advantage that the original code base can in principle still be used on traditional architectures like CPUs as well as ARM processors and NEC Aurora Tsubasa vector engines. This minisymposium discusses these strategies by presenting porting efforts for different major weather models widely used in the weather and climate community.
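As a minimal illustration of the directive-based approach mentioned above (a sketch only, not taken from any of the models discussed; the loop and data clauses are illustrative assumptions), an OpenACC annotation offloads a loop while the same source remains valid serial code on a plain CPU when the directive is ignored:

```cpp
#include <cstddef>
#include <vector>

// A stencil-like smoothing loop annotated for offload. An analogous OpenMP
// "target teams distribute parallel for" directive could be used instead.
void smooth(const std::vector<double>& in, std::vector<double>& out)
{
    const std::size_t n = in.size();
    const double* pin = in.data();
    double* pout = out.data();

    // copyin/copy clauses describe the data movement to and from the device;
    // boundary entries of out are left unchanged.
    #pragma acc parallel loop copyin(pin[0:n]) copy(pout[0:n])
    for (std::size_t i = 1; i + 1 < n; ++i)
        pout[i] = (pin[i - 1] + pin[i] + pin[i + 1]) / 3.0;
}
```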
15:30 - 17:30 CEST | Minisymposium | Lise Girardin | CS and Math, Physics

Description: Superfluidity is a generic feature of many quantum systems at low temperatures. It has been experimentally confirmed in condensed matter systems like 3He and 4He liquids, in nuclear systems including nuclei and neutron stars, and in both fermionic and bosonic cold atoms in traps. Superfluids exhibit fascinating dynamical properties. Presently the dynamics can be modelled microscopically via a framework based on time-dependent density functional theory (TDDFT). Superfluid TDDFT is applicable to a wide range of physical processes involving superfluidity, including simulations of nuclear reactions (fission/fusion), modeling the superfluid interior of neutron stars, and the dynamics of ultracold atomic gases (quantum turbulence, dynamics of topological excitations). Since superfluidity is an emergent phenomenon, a large number of quantum particles are needed in order to simulate it correctly. This places high numerical and technical demands on evaluating superfluid TDDFT with classical computers. This minisymposium will present the most relevant applications of the TDDFT framework achieved with the help of computer systems like Summit (ORNL, USA) and Piz Daint (CSCS, Switzerland), together with the presently utilized numerical and technical solutions. Challenges for future exascale systems in the context of modelling superfluidity/superconductivity will also be highlighted.
17:30 - 18:00 CEST | Break
18:00 - 18:40 CEST | Interdisciplinary Dialogue | Ella Maillart
18:40 - 19:10 CEST | Poster | Ella Maillart
Tuesday, 6 July 2021
11:00 - 13:00 CEST | Minisymposium | Louis Favre | CS and Math, Solid Earth Dynamics

Description: Computational geosciences leverage advanced computational methods to improve our understanding of the interiors of Earth and other planets. They combine numerical models to understand the current state of physical quantities describing a system, to predict their future states, and to infer unknown parameters of those models from data measurements. Such models produce highly nonlinear numerical systems with extremely large numbers of unknowns. The ever-increasing power and availability of High Performance Computing (HPC) facilities offers researchers unprecedented opportunities to continually increase both the spatiotemporal resolution and the physical complexity of their numerical models. However, this requires complex numerical methods and implementations that can harness HPC resources efficiently for problem sizes of billions of degrees of freedom. The goal of this minisymposium is to bring together scientists who work on theory, numerical methods, algorithms and scientific software engineering for scalable numerical modelling and inversion. Examples include, but are not limited to, geodynamics, multi-phase geophysical flow modelling, seismic wave propagation and imaging, seismic tomography and inversion of large data sets, development of elaborate workflows including HPC for imaging problems, ice-sheet modelling, and urgent computing for natural hazards.
11:00 - 13:00 CEST | Minisymposium | Michel Mayor | CS and Math, Chemistry and Materials, Physics, Engineering

Description: CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of condensed phase systems. It can simulate the electronic structure and thermodynamic properties of liquids and solutions, complex materials and soft biological systems. CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of shared-memory multi-threading with OpenMP, distributed-memory MPI, and accelerators using e.g. CUDA. New low-scaling implementations of electronic structure methods enable simulations of systems containing millions of atoms for Density Functional Theory (DFT) and thousands of atoms for the Random Phase Approximation (RPA). These methods are based on sparse linear algebra. Performance, portability and ease of development are ensured by the accompanying development of a general sparse matrix/tensor library (DBCSR). The desire to perform calculations on a large number of materials of interest calls for automated workflows to organize massive amounts of data and calculations. This is enabled by combining CP2K with the Automated Interactive Infrastructure and Database for Computational Science (AiiDA). The applicability of CP2K to interesting large-scale problems is demonstrated by a DFT+U study of dislocations in functional oxide materials.
11:00 - 13:00 CEST | Minisymposium | Ernesto Bertarelli | CS and Math, Physics, Engineering

Description: Computational fluid dynamics is one of the main drivers of exascale computing, due both to its high relevance in today's world (from nanofluidics up to planetary flows) and to the inherent multiscale properties of turbulence. The numerical treatment is notoriously difficult because of the disparate scales in turbulence and the need to resolve local features. In addition, aspects such as the quantification of (internal or external) uncertainties are becoming a necessity, together with in-situ visualisation/postprocessing. The recent trend in numerical methods goes towards high-fidelity methods (for instance continuous and discontinuous Galerkin) which are suitable for modern computers; however, relevant issues such as scaling, accelerators and heterogeneous systems, high-order meshing and error control are still far from solved when it comes to the largest-scale simulations, e.g. in automotive and aeronautical applications. This two-part minisymposium brings together eight experts from various international institutions (Europe, America, Japan) to discuss current and future issues of extreme-scale CFD in engineering applications, with special focus on accurate CFD methods and their implementation on current HPC systems. The interaction between participants of the Horizon2020 Centre of Excellence Excellerat and external experts will be particularly fruitful.
11:00 - 13:00 CEST | Minisymposium | Mère Royaume | CS and Math, Chemistry and Materials, Physics, Engineering

Description: The advent of new supercomputing architectures often challenges current best practices and programming paradigms, and can render state-of-the-art software outdated. Given the research and development man-years spent on scientific applications, this risk should make the HPC community consider more sustainable and long-term development strategies. A case in point is GPU development, where one must carefully decide which platform to target, e.g., CUDA, OpenCL, ROCm. On the other hand, there exist a few examples of software that are agnostic to the underlying architecture, offering the possibility of a single code dealing with multiple architectures. Currently, scientists and engineers often develop their applications for a very specific architecture, spending valuable time optimizing and tailoring their codes. Furthermore, as the code moves to different platforms with different accelerators, it branches into multiple development streams, each dealing with its own platform-specific issues that are solved with diverging techniques. Thus, it is critical to open a wide discussion on frameworks, inherently parallel programming languages, compilers, platforms and combinations of them that will help the community choose a development pipeline, leading to HPC models that favor flexible, versatile and sustainable solutions.
11:00 - 13:00 CEST | Minisymposium | Jean-Jacques Rousseau | CS and Math, Emerging Applications

Description: With ever-increasing data sets on the one hand, and sophisticated models that account for the substantial heterogeneity observed in the real world on the other, researchers in economics and finance have started to leverage recent advances in machine learning to study questions of unprecedented complexity. This minisymposium brings together researchers from different application fields of finance and economics who develop and use scalable approaches from machine learning.
11:00 - 13:00 CEST | Minisymposium | Lise Girardin | CS and Math, Physics

Description: Magnetic fusion plasmas are subject to a plethora of collective effects such as electromagnetic waves and instabilities, spanning multiple time and length scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. Due to the extreme challenges at hand, fusion plasma computations have always exploited the largest HPC resources available at any point in time, and it is anticipated that this will remain so in the foreseeable future. Adapting the codes to the ever-changing landscape of new and emerging computer architectures is a non-trivial challenge. In this Part I of the minisymposium, the emphasis will be on code developments and software advances, in particular for ensuring the efficient exploitation of heterogeneous architectures.
11:00 - 13:00 CEST | Minisymposium | Jean Calvin | CS and Math, Climate and Weather, Physics

Description: The predictive skill of weather and climate models has significantly improved over the past few decades, thanks to a huge increase in resolution facilitated by increased supercomputing capacity. A million-fold increase in computational power has allowed the resolution of operational global weather models to increase from 500 km to 10 km since 1980, for example. Further increases towards 1 km resolution would deliver significant improvements in the skill of weather and climate simulations. However, these simulations are still not viable for operational predictions due to the vast increase in computational cost. The computational speed of global kilometer-scale simulations on today's supercomputers is below a practical level by at least two orders of magnitude. Furthermore, taking advantage of future exascale supercomputers with heterogeneous architectures will require a substantial rethink of traditional coding paradigms. This two-part minisymposium will bring together researchers on global kilometer-scale atmosphere and ocean models from around the world. Speakers will discuss both the scientific and computational challenges of 1 km resolution. They will present the state-of-the-art of their respective simulation systems and their roadmaps for the future. The challenge of kilometer-scale global simulations can be met, but only by the synthesis of ideas across Earth-System science and supercomputing.
13:00 - 14:00 CEST | Break
14:00 - 16:00 CEST | Minisymposium | Louis Favre | CS and Math, Solid Earth Dynamics

Description: Computational geosciences leverage advanced computational methods to improve our understanding of the interiors of Earth and other planets. They combine numerical models to understand the current state of physical quantities describing a system, to predict their future states, and to infer unknown parameters of those models from data measurements. Such models produce highly nonlinear numerical systems with extremely large numbers of unknowns. The ever-increasing power and availability of High Performance Computing (HPC) facilities offers researchers unprecedented opportunities to continually increase both the spatiotemporal resolution and the physical complexity of their numerical models. However, this requires complex numerical methods and implementations that can harness HPC resources efficiently for problem sizes of billions of degrees of freedom. The goal of this minisymposium is to bring together scientists who work on theory, numerical methods, algorithms and scientific software engineering for scalable numerical modelling and inversion. Examples include, but are not limited to, geodynamics, multi-phase geophysical flow modelling, seismic wave propagation and imaging, seismic tomography and inversion of large data sets, development of elaborate workflows including HPC for imaging problems, ice-sheet modelling, and urgent computing for natural hazards.
14:00 - 16:00 CEST | Minisymposium | Henry Dunant | Chemistry and Materials, Physics, Life Sciences, Engineering

Description: Simulations of physical phenomena consume a large fraction of computer time on existing supercomputing resources. Today, the challenge of scaling multiscale simulations is primarily addressed by brute-force search-and-sample techniques, which are computationally expensive. Emerging exascale architectures pose challenges for simulations such as efficient and scalable execution of complex workflows, concurrent execution of heterogeneous tasks, the robustness of algorithms on millions of processing cores, data and I/O parallelism, and fault tolerance. Incremental approaches to scaling simulations will therefore not succeed in achieving the required throughput and utilization on such machines. Machine learning (ML) techniques can be integrated with system and application changes to give many orders of magnitude higher effective performance. We term this convergence of high-performance computing (HPC) and ML methodologies and practice MLforHPC. Nowhere is the impact of MLforHPC methods likely to be greater than in multiscale simulations in the biological and material sciences, with early evidence suggesting several orders of magnitude improvement over traditional methods. Fueled by advances in statistical algorithms and runtime systems, ensemble-based methods have overcome some of the limitations of traditional monolithic simulations. Furthermore, integrating ML approaches with such ensemble methods holds even greater promise in overcoming performance barriers and enabling simulations of complex multiscale phenomena.
14:00 - 16:00 CEST | Minisymposium | Mère Royaume | CS and Math

Description: The next HPC systems are expected to feature millions of cores, leading to more parallelism for large-scale simulations and workflows in varied scientific domains. However, there is a price to pay: the gap between computing power and data movement performance will keep increasing and will inevitably intensify the bottleneck caused by data movement. Thus, it becomes necessary to consider data movement within those modern architectures at all stages of the data lifetime and to come up with software components for orchestrating this movement. In particular, we can identify several key aspects, among them data semantics for expressing application needs and memory abstraction for portability and extensibility given the multiplicity of memory and storage tiers. During this minisymposium, we will focus on those two facets and present very promising ongoing projects from academia and industry that propose to address the data orchestration challenge and facilitate efficient exploitation of future exascale systems. An open discussion with the speakers will close the session.
14:00 - 16:00 CEST | Minisymposium | Ernesto Bertarelli | CS and Math, Physics, Engineering

Description: Computational fluid dynamics is one of the main drivers of exascale computing, due both to its high relevance in today's world (from nanofluidics up to planetary flows) and to the inherent multiscale properties of turbulence. The numerical treatment is notoriously difficult because of the disparate scales in turbulence and the need to resolve local features. In addition, aspects such as the quantification of (internal or external) uncertainties are becoming a necessity, together with in-situ visualisation/postprocessing. The recent trend in numerical methods goes towards high-fidelity methods (for instance continuous and discontinuous Galerkin) which are suitable for modern computers; however, relevant issues such as scaling, accelerators and heterogeneous systems, high-order meshing and error control are still far from solved when it comes to the largest-scale simulations, e.g. in automotive and aeronautical applications. This two-part minisymposium brings together eight experts from various international institutions (Europe, America, Japan) to discuss current and future issues of extreme-scale CFD in engineering applications, with special focus on accurate CFD methods and their implementation on current HPC systems. The interaction between participants of the Horizon2020 Centre of Excellence Excellerat and external experts will be particularly fruitful.
14:00 - 16:00 CEST | Minisymposium | Jean-Jacques Rousseau | CS and Math, Emerging Applications, Chemistry and Materials, Climate and Weather, Life Sciences

Description: A common research problem in diverse domains is the extraction of combinatorial patterns from large datasets. For example, most human diseases arise due to the interactions of multiple genetic factors and lifestyle choices, and weather events such as tornadoes manifest due to the complex interactions of a host of meteorological states. Data in such domains is rapidly being gathered, yet identification of these high-dimensional patterns remains difficult due to the combinatorial explosion of the number of groups to be considered. One practical approach leverages the concept of guilt-by-association and models the data as a network. These networks typically represent factors as nodes and relationships between pairs of factors as edges between the corresponding nodes. A key benefit of network modeling is that the computation of simple pair-wise relationships can yield knowledge about unknown high-dimensional relationships. Network models have been widely employed, yet several challenges impede their full potential. This minisymposium focuses on these challenges, discusses current state-of-the-art approaches, and also presents promising directions for future research, such as the computation of 3-way relationships.
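As a toy sketch of the pairwise network construction described above (the data, threshold and choice of Pearson correlation are illustrative assumptions, not drawn from any of the presented works), factors become nodes and an edge is added when two factors correlate strongly across samples:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Pearson correlation between two equally sized sample vectors.
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double num = 0, da = 0, db = 0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db);
}

int main() {
    // rows = factors (nodes), columns = samples; made-up toy data
    std::vector<std::vector<double>> data = {
        {1.0, 2.0, 3.0, 4.0},
        {2.1, 3.9, 6.2, 8.1},
        {5.0, 1.0, 4.0, 2.0}};
    const double threshold = 0.9;

    // Add an edge for every pair of factors whose |correlation| passes the threshold.
    for (std::size_t i = 0; i < data.size(); ++i)
        for (std::size_t j = i + 1; j < data.size(); ++j)
            if (std::fabs(pearson(data[i], data[j])) > threshold)
                std::printf("edge: factor %zu -- factor %zu\n", i, j);
    return 0;
}
```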
14:00 - 16:00 CEST | Minisymposium | Michel Mayor | Chemistry and Materials, Physics, Engineering

Description: This minisymposium will focus on the tools and models required to accurately model material behavior under various mechanical stimuli. Continuum-scale models traditionally have difficulty accounting for specific mesoscale deformation behavior due to the larger length scales (tens to hundreds of microns) at which these models are applicable. Accurately modeling fracture in higher length scale models is limited in similar ways; the sub-scale features of interest, such as cracks and/or voids, and their interactions along boundaries cannot be resolved. Furthermore, when complex and extreme loading conditions are considered, the active deformation mechanisms can change, impacting overall material strength and damage evolution. Hence, current state-of-the-art models, particularly those active at larger length scales, cannot accurately predict material behavior, especially under dynamic loading conditions. To get around these issues, many multiscale approaches have been developed in which information is 'passed' from lower length scales up to higher length scales. While this approach is reasonable, what information is needed, how different models on different length scales connect, and the fidelity of these connections are still not clear. This minisymposium aims to address these issues by bringing together modelers who have been working on modeling materials across scales.
14:00 - 16:00 CEST | Minisymposium | Lise Girardin | Physics

Description: Magnetic fusion plasmas are subject to a plethora of collective effects such as electromagnetic waves and instabilities, spanning multiple time and length scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. Core and edge plasmas present very different physical conditions and, until now, these two regions were typically treated separately. Given that the two regions in fact interact strongly with each other, the challenge in achieving realistic simulations is to describe them in a unified framework. This Part II of the minisymposium will specifically address this issue. In particular, the pros and cons of treating this problem using different numerical approaches will be covered.
14:00 - 16:00 CEST | Minisymposium | Jean Calvin | Climate and Weather

Description: The predictive skill of weather and climate models has significantly improved over the past few decades, thanks to a huge increase in resolution facilitated by increased supercomputing capacity. A million-fold increase in computational power has allowed the resolution of operational global weather models to increase from 500 km to 10 km since 1980, for example. Further increases towards 1 km resolution would deliver significant improvements in the skill of weather and climate simulations. However, these simulations are still not viable for operational predictions due to the vast increase in computational cost. The computational speed of global kilometer-scale simulations on today's supercomputers is below a practical level by at least two orders of magnitude. Furthermore, taking advantage of future exascale supercomputers with heterogeneous architectures will require a substantial rethink of traditional coding paradigms. This two-part minisymposium will bring together researchers on global kilometer-scale atmosphere and ocean models from around the world. Speakers will discuss both the scientific and computational challenges of 1 km resolution. They will present the state-of-the-art of their respective simulation systems and their roadmaps for the future. The challenge of kilometer-scale global simulations can be met, but only by the synthesis of ideas across Earth-System science and supercomputing.
16:00 - 16:30 CEST | Break
16:30 - 17:30 CEST | Panel | Ella Maillart
17:30 - 19:00 CEST | Poster
Wednesday, 7 July 2021
11:00 - 13:00 CEST | Minisymposium | Ernesto Bertarelli | CS and Math, Physics, Engineering

Description: Heterogeneous computing architectures are unavoidable as we move towards the era of exascale computing. Computing nodes are being built with ever-increasing hierarchy depth, and hardware as well as performance portability are key capabilities for making efficient use of them. Particle-in-cell (PIC) methods are the method of choice in computational simulations of many physical applications, including but not limited to particle accelerators, nuclear fusion, and astrophysics. Portability in the context of PIC schemes is therefore essential for carrying out these extreme-scale simulations on current and next-generation architectures. This minisymposium will serve as a platform to discuss the architecture and performance of hardware-independent and, thus, portable frameworks for particle and grid computations.
11:00 - 13:00 CEST | Minisymposium | Louis Favre | CS and Math, Solid Earth Dynamics, Engineering

Description: Fractures are ubiquitous at different scales in subsurface regions and can strongly dominate the hydraulic and mechanical response of such regions. Understanding their distribution, connectivity, initiation, and propagation is fundamental for several applications, such as geothermal energy production, hydrocarbon exploration, hydraulic stimulation and induced-seismicity assessment, and CO2 storage. Modelling realistic fracture networks introduces several challenges, and several methods have been introduced in the literature to handle the multiscale and multiphysics phenomena underlying these geophysics applications, including phase-field models for fracture initiation and propagation and hydro-mechanical and thermo-hydro-mechano-chemical coupling for fractured poroelastic media. For these kinds of problems, accurate and realistic discretization methods for fractured porous media give rise to large-scale problems for which modern high-performance computing architectures, such as hybrid GPU-CPU supercomputers, are necessary for efficient simulations. The goal of this minisymposium is to bring together applied researchers and computational scientists working on the simulation of fractured porous media, with a particular focus on geoscientific applications. The presentations will focus on the major challenges of the field and the most recent developments in HPC and large-scale software.
11:00 - 13:00 CEST | Minisymposium | Mère Royaume | CS and Math, Emerging Applications, Chemistry and Materials, Physics, Life Sciences, Engineering

Description: Scientific campaigns are increasingly tightening the feedback and validation loop between simulation and observational data. The linking of experimental and observational data from empirically driven facilities with computational facilities is giving rise to cross-facility workflows and to the need to peer traditional modeling and simulation with high-performance data science. The dramatic increases in luminosity and data collection capabilities are also forcing HPC centers to support modes of operation such as interactivity and adaptive computation. As we scale up such pipelines for scientific discovery, cross-facility and intra-facility workflows require each participating facility and system to overcome interface hurdles to work seamlessly in an end-to-end manner. We bring together the perspectives of disparate collaborating facilities and experts, explore HPC and data-intensive science (including observational science) needs, and aim to offer a roadmap for how cross-facility collaborations can be made more effective by peering HPC and data-driven science. The session consists of talks by CSCS, PSI, and ORNL, and concludes with a panel. We summarize each facility's perspective and pose leading questions to our panel to invite targeted responses.
11:00 - 13:00 CEST | Minisymposium | Michel Mayor | CS and Math, Emerging Applications, Chemistry and Materials, Climate and Weather, Physics, Solid Earth Dynamics, Life Sciences, Engineering

Description: While parallel applications in all scientific and engineering domains have always been prone to execution inefficiencies that limit their performance and scalability, exascale computer systems comprising millions of heterogeneous processors/cores present a very considerable imminent challenge to be addressed by academia and industry alike. Ten HPC Centres of Excellence are currently funded by the EU Horizon2020 programme to prepare applications for forthcoming exascale computer systems [https://www.focus-coe.eu/index.php/centres-of-excellence-in-hpc-applications/]. The transversal Performance Optimisation and Productivity Centre of Excellence (POP CoE) [https://www.pop-coe.eu] supports the others, along with the wider European community of application developers, with impartial application performance assessments of parallel execution efficiency and scaling based on a solid methodology analysing measurements with open-source performance tools. This minisymposium introduces the POP services and methodology, summarising results provided to date for over 200 customers with particular focus on those from the HPC CoEs. Engagements with the HPC CoEs will be reviewed in the introductory presentation, covering climate and weather (ESiWACE), chemistry and materials (BioExcel/MaX), and computational fluid dynamics in engineering (EXCELLERAT). The CoEs for Computational Biomedicine (CompBioMed [https://www.compbiomed.eu/]), Solid Earth (ChEESE [https://cheese-coe.eu/]) and Energy-oriented applications (EoCoE [https://www.eocoe.eu/]) will then report their experience of collaborating with POP in preparing their flagship codes for exascale.
11:00 - 13:00 CEST | Minisymposium | Henry Dunant | CS and Math, Chemistry and Materials, Physics, Life Sciences, Engineering

Description: With the recent success of machine learning, interest has been sparked in various computational domains in using these methods to recast certain simulation problems as machine learning problems. In neuroscience, simulating biophysically detailed models of neurons and brain tissue has become an important tool, complementing experiments and theory. However, simulating even a single detailed neuron may require solving 10,000 differential equations, which is computationally costly and sparks the desire to explore whether a recasting of the problem is possible. This minisymposium explores this question through three presentations and a panel discussion: a first presentation will describe how quantum mechanics can be recast as a machine learning problem. Next, we will learn about the possibilities of approximating detailed models of neurons through deep artificial neural networks. The third talk will introduce how physics-informed neural networks can be used to compute forward and inverse problems for partial differential equations. Finally, the panel discussion will explore what we can learn from these approaches for recasting the simulation of brain tissue as a machine learning problem.
11:00 - 13:00 CEST | Minisymposium | Jean-Jacques Rousseau | CS and Math, Emerging Applications, Solid Earth Dynamics, Engineering

Description: Solving linear algebra problems is at the core of many scientific and engineering applications. As problem sizes grow, it is critical to maintain a reasonable run-time while using the computational resources at full capacity. The need for scalable algorithms is essential in preparing scientific applications for the new exascale ecosystem, and flexibility and efficiency of the related software frameworks on supercomputers with hybrid nodes lie at the core of this effort. This minisymposium will address the development of libraries of solvers and preconditioners for applications related to the current European transition towards renewable energies, in particular in the context of the "Energy Oriented Center of Excellence: toward exascale for energy" European project on High-Performance Computing and Sustainable Energies. The talks will cover topics such as acceleration and scalability techniques for direct solvers, e.g. the Block Low-Rank (BLR) feature or the use of GPUs. Recent developments in multigrid solvers, of both algebraic and geometric type, will be discussed, e.g. the usage of graph matching aggregation schemes and direct distributed coarse solvers with BLR features. The talks will illustrate the performance of these approaches on several applications, ranging from the simulation of hydrodynamic systems to large wind and materials science models.
11:00 - 13:00 CEST | Minisymposium | Lise Girardin | CS and Math, Physics

Description: Magnetic fusion plasmas are subject to a plethora of collective effects such as electromagnetic waves and instabilities, spanning multiple time and length scales. The low collisionality of reactor core plasmas makes a kinetic description mandatory for an accurate representation. Ions and electrons have very different dynamics, and therefore the problem is intrinsically multi-scale, both in space and time. As we go towards the plasma edge, collisionality increases and the relative fluctuation amplitudes become close to unity. In this Part III of the minisymposium, we shall focus on the fact that relying on just porting and optimizing existing codes on new-generation computers will not be sufficient to make a global, full fusion reactor ('from the magnetic axis to the wall') description tractable. Ongoing developments in new innovative mathematical representations, discretizations and algorithms are just as critical. Challenges related to transitioning from reduced to more accurate descriptions where needed will also be addressed, in particular the possible transition from fluid to gyrokinetic models as well as from gyrokinetic to fully kinetic ones.
11:00 - 13:00 CEST | Minisymposium | Jean Calvin | CS and Math, Emerging Applications, Climate and Weather

Description: The path towards exascale computing holds enormous challenges for the weather and climate modelling community regarding portability, scalability and data management. Exascale machines will allow running global kilometre-resolving simulations, which will eventually enable the representation of features impossible to capture until now. However, the new complexity of the Earth System Models (ESMs) under development could be critical for their deployment on exascale systems. Speakers from the HPC-driven European centre of excellence ESiWACE on exascale computing for weather and climate models will address this matter from different points of view; each speaker will present different challenges faced in their scientific domain and the approaches to mitigate them, including topics such as 1) the latest results and technical achievements in the context of ESiWACE, 2) the computational analysis of the O(1 km) global model and the challenges presented by the development of the new configurations, 3) the use of PSyclone in LFRic to achieve performance and portability on different architectures and accelerators such as GPUs, and 4) the new coupling generation for ESMs on exascale platforms through OASIS3-MCT. All of these pursue the main goal of preparing the weather and climate community to make use of exascale systems when they become available.
13:00 - 14:00 CEST | Break
14:00 - 16:00 CEST | Minisymposium | Lise Girardin | CS and Math, Physics

Description: Data locality on new and future generation platforms is expected to be of paramount importance for good performance. However, many applications in astrophysics are by nature multiphysics, involving multiple solvers with diverse data layouts and communication patterns. Additionally, because they can have a large dynamic range of scales, different resolutions in different parts of the domain are often necessary for computational efficiency. Thus, the constraints placed on data management by the physics models and their corresponding numerical methods can be at cross purposes with data locality. This minisymposium will examine constraints on data locality for two major classes of astrophysics applications, cosmology and supernovae. For each class we include two types of discretization approaches, adaptive mesh refinement and smoothed particle hydrodynamics, that have very different data layouts, access patterns and solver characteristics.
14:00 - 16:00 CEST | Minisymposium | Ernesto Bertarelli | CS and Math, Emerging Applications, Physics, Engineering

Description: In this minisymposium we address an important question: how do we future-proof scientific codes on a rapidly changing hardware landscape of heterogeneous computing platforms which, at present, consists of CPU+GPU systems with significant differences between the GPUs? Given that the languages/APIs/pragmas used to offload instructions and data to and from the GPUs on these systems (e.g. SYCL, HIP, OpenMP 5.x) are very different, the task of refactoring large scientific codes, each with their own dependencies on libraries, is a daunting one. Consequently, the questions that are uppermost in the minds of code developers are: a) how feasible is it to use a high-level hardware abstraction layer (HAL) that would make codes portable across the various heterogeneous computing platforms, and b) will these HALs continue to be developed as other accelerators become part of the hardware landscape? In this minisymposium we shine the spotlight on one such HAL, namely Kokkos, which is being developed by the US Department of Energy as part of the Exascale Computing Project (ECP). We have four talks on the usability of Kokkos, on the development of mesh- and particle-based scientific codes, and on a specialized scientific library, all of which leverage Kokkos for portability.
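For readers unfamiliar with the HAL approach, the following minimal Kokkos sketch (illustrative only; the kernel itself is deliberately trivial) shows the basic pattern: one parallel_for written once and compiled against whichever backend (OpenMP, CUDA, HIP, SYCL, ...) is selected at build time.

```cpp
#include <Kokkos_Core.hpp>

// Minimal Kokkos sketch: a single-source axpy kernel executed on the
// default execution space chosen when Kokkos was configured.
int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1 << 20;
        Kokkos::View<double*> x("x", n), y("y", n);  // device-resident arrays

        // y = 2*x + y, dispatched to the backend selected at build time.
        Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
            y(i) = 2.0 * x(i) + y(i);
        });
        Kokkos::fence();  // wait for the asynchronous kernel to finish
    }
    Kokkos::finalize();
    return 0;
}
```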
14:00 - 16:00 CEST | Minisymposium | Jean-Jacques Rousseau | CS and Math, Emerging Applications

Description: In responding to disasters such as wildfires, hurricanes, flooding, earthquakes, tsunamis, winter weather conditions, the spread of diseases, and accidents, technological advances are creating exciting new opportunities that have the potential to move HPC well beyond traditional computational workloads. While HPC has a long history of simulating disasters after the fact, an exciting possibility is to use these resources to support urgent, emergency decision making in real time. As our ability to capture data continues to grow significantly, it is only now possible to combine high-velocity data and live analytics with HPC models to aid in urgently responding to real-world problems, ultimately saving lives and reducing economic loss. To make this vision a reality, a variety of technical and policy challenges must be identified and overcome. Whether it be developing more interactive simulation codes which include real-time data feeds, improving in-situ data analysis techniques, developing new large-scale data visualisation techniques, or guaranteeing bounded and predictable machine queue times, the challenges here are significant. In this minisymposium, we will discuss this emerging HPC use case by bringing together experts in the field, researchers, practitioners, and interested parties from across our community to identify and tackle the issues involved in using HPC for urgent decision making.
14:00 - 16:00 CEST | Minisymposium | Jean Calvin | CS and Math, Climate and Weather

Description: Weather and climate prediction have made significant progress over the past decades. Despite this progress there are still substantial shortcomings, including insufficient parallelism, limited scalability, portability limitations, and increasing complexity in the applications. Weather extremes, for example, are still difficult to predict with sufficient lead time, and predicting the impact of climate change at a regional or national level is a big challenge. Improving these predictions promises important economic benefits. One of the key sources of model error is limited spatial and temporal resolution. Improving resolution translates into significant computational challenges. This makes it necessary to heavily restructure and optimise weather and climate models for the fastest available supercomputers. This minisymposium gives an overview of work on porting and optimising four popular earth system models for the supercomputer Summit. This includes optimisation for the NVIDIA V100 GPUs as well as the IBM Power 9 host CPUs and the Mellanox interconnect. Being able to make good use of fat nodes like those on Summit will be highly relevant for many domains within the HPC community.
14:00 - 16:00 CEST | Minisymposium | Henry Dunant | CS and Math, Life Sciences

Description: Deep learning (DL) has the capability to transform many scientific problems, including COVID-19 research, cancer studies, and a wide range of energy sciences. Ever more complex couplings of data sources, analysis, and learning methods demand additional resources in computing capability, data movement, and storage. In practice, the growing scientific usage of DL faces technical and engineering challenges in the exascale era. This minisymposium will illustrate two different learning-based applications and two related systems software contributions to DL at exascale. Exascale challenges for DL include scaling models, scaling workflows, integrating methods, integrating software, and down-scaling results for return to users with varying levels of available computing power. The presenters of this minisymposium are developing a suite of software to support scalable DL on exascale supercomputing resources. The software suite includes capabilities for rapid prototyping and application development, portability across machine types and scales, support for typical DL workflow patterns such as error analysis and hyperparameter optimization, and hierarchical parallelism via distribution of larger models across multiple nodes. We will also present a range of research results in the usage of deep learning ensembles, including recursive search patterns used to look for outliers in experimental data.
14:00 - 16:00 CEST | Minisymposium | Louis Favre | CS and Math, Solid Earth Dynamics

Description: Fractures are ubiquitous at different scales in subsurface regions and can strongly dominate the hydraulic and mechanical response of such regions. Understanding their distribution, connectivity, initiation, and propagation is fundamental for several applications, such as geothermal energy production, hydrocarbon exploration, hydraulic stimulation and induced-seismicity assessment, and CO2 storage. Modelling realistic fracture networks introduces several challenges, and several methods have been introduced in the literature to handle the multiscale and multiphysics phenomena underlying these geophysics applications, including phase-field models for fracture initiation and propagation and hydro-mechanical and thermo-hydro-mechano-chemical coupling for fractured poroelastic media. For these kinds of problems, accurate and realistic discretization methods for fractured porous media give rise to large-scale problems for which modern high-performance computing architectures, such as hybrid GPU-CPU supercomputers, are necessary for efficient simulations. The goal of this minisymposium is to bring together applied researchers and computational scientists working on the simulation of fractured porous media, with a particular focus on geoscientific applications. The presentations will focus on the major challenges of the field and the most recent developments in HPC and large-scale software.
14:00 - 16:00 CEST | Minisymposium | Mère Royaume | CS and Math

Description: Recently, hardware manufacturers have been responding to an increasing demand for low-precision functionality such as FP16 by integrating special low-precision functional units, e.g., NVIDIA tensor cores. These, however, remain unused even for compute-intensive applications if high precision is employed for all arithmetic operations. At the same time, communication-intensive applications suffer from the memory bandwidth of architectures growing at a much slower pace than the arithmetic performance. In both cases, a promising strategy is to abandon the high-precision standard (typically FP64) and employ lower or non-standard precision for arithmetic computations or memory operations whenever possible. While employing formats other than the working precision can yield attractive performance improvements, it also requires careful consideration of the numerical effects. On the other end of the spectrum, precision formats with higher accuracy than the hardware-supported FP64 can be effective in improving the robustness and accuracy of numerical methods. With this breakout minisymposium, we aim to create a platform where those working with multiprecision or interested in using multiprecision technology come together and share their expertise and experience.
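As a self-contained sketch of the general mixed-precision idea (not one of the specific methods presented in the session; the matrix, sweep counts and tolerance are illustrative assumptions), iterative refinement keeps residuals and the accumulated solution in double precision while the cheap approximate inner solve runs in single precision:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Mixed-precision iterative refinement on a small diagonally dominant system:
// the approximate solve uses float, residuals and updates stay in double.
int main() {
    const int n = 4;
    std::vector<double> A = {  // row-major, diagonally dominant
        4, 1, 0, 0,
        1, 4, 1, 0,
        0, 1, 4, 1,
        0, 0, 1, 4 };
    std::vector<double> b = {1, 2, 3, 4}, x(n, 0.0);

    for (int iter = 0; iter < 20; ++iter) {
        // High-precision residual r = b - A*x.
        std::vector<double> r(n);
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int j = 0; j < n; ++j) s += A[i * n + j] * x[j];
            r[i] = b[i] - s;
        }
        // Low-precision inner solve: a few Gauss-Seidel sweeps in float for A*d = r.
        std::vector<float> d(n, 0.0f);
        for (int sweep = 0; sweep < 10; ++sweep)
            for (int i = 0; i < n; ++i) {
                float s = static_cast<float>(r[i]);
                for (int j = 0; j < n; ++j)
                    if (j != i) s -= static_cast<float>(A[i * n + j]) * d[j];
                d[i] = s / static_cast<float>(A[i * n + i]);
            }
        // High-precision update of the solution, with a simple stopping test.
        double norm = 0.0;
        for (int i = 0; i < n; ++i) { x[i] += d[i]; norm += r[i] * r[i]; }
        if (std::sqrt(norm) < 1e-12) break;
    }
    std::printf("x[0] = %.15f\n", x[0]);
    return 0;
}
```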
14:00 - 16:00 CEST | Minisymposium | Michel Mayor | CS and Math, Chemistry and Materials, Physics, Engineering

Description: This minisymposium will focus on the tools and models required to accurately model material behavior under various mechanical stimuli. Continuum-scale models traditionally have difficulty accounting for specific mesoscale deformation behavior due to the larger length scales (tens to hundreds of microns) at which these models are applicable. Accurately modeling fracture in higher length scale models is limited in similar ways; the sub-scale features of interest, such as cracks and/or voids, and their interactions along boundaries cannot be resolved. Furthermore, when complex and extreme loading conditions are considered, the active deformation mechanisms can change, impacting overall material strength and damage evolution. Hence, current state-of-the-art models, particularly those active at larger length scales, cannot accurately predict material behavior, especially under dynamic loading conditions. To get around these issues, many multiscale approaches have been developed in which information is 'passed' from lower length scales up to higher length scales. While this approach is reasonable, what information is needed, how different models on different length scales connect, and the fidelity of these connections are still not clear. This minisymposium aims to address these issues by bringing together modelers who have been working on modeling materials across scales.
16:00 - 16:30 CEST | Break |
| |||||||||||||||||||
16:30 - 17:30 CEST | Paper |
| Ella Maillart | CS and Math Engineering | |||||||||||||||||
Presentations | |||||||||||||||||||||
17:40 - 18:30 CEST | Keynote Public Lecture |
| Ella Maillart | Emerging Applications |
Thursday, 8 July 2021 expand all · collapse all
Time | Type | Session / Presentation | Contributors | Location | Domain | Plan | |||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11:00 - 13:00 CEST | Minisymposium |
| Lise Girardin | Physics | |||||||||||||||||
DescriptionCosmology has undergone a revolution, from a rather philosophical enterprise to a data-driven, observational science. To make sense of the terabytes and petabytes of data streaming in from new and future facilities, we need large-scale N-body simulations that contain the relevant physics and cover a huge dynamical range to reach the required precision; in the future, such simulations will also provide a benchmark problem for exascale computing. These simulations combine many numerical challenges, including load-balancing multi-scale dynamical evolution, solving nonlinear finite-difference equations, managing complex data sets, and performing on-the-fly statistical analyses. This minisymposium reviews the methods behind the largest current simulations, which evolve over a trillion particles, and ongoing developments in including effects from General Relativity as well as relativistic fields and particles in the simulations. The presentations will provide both an overview of current results relevant for cosmology and a look inside the machinery (algorithms and their implementations) behind the latest generation of cosmological simulation codes. Presentations
| |||||||||||||||||||||
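As a toy illustration of the kind of computation these codes scale to trillions of particles, the following NumPy sketch performs direct-summation gravity with a kick-drift-kick leapfrog integrator. It is illustrative only; production cosmology codes use tree, particle-mesh, or hybrid gravity solvers with domain decomposition rather than O(N^2) summation, and all parameters below are arbitrary.

import numpy as np

def accelerations(pos, mass, softening=0.05):
    """Direct-summation gravitational accelerations (O(N^2), toy scale only, G = 1)."""
    d = pos[None, :, :] - pos[:, None, :]             # pairwise separation vectors
    r2 = (d ** 2).sum(-1) + softening ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                     # no self-interaction
    return (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog time integration."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

rng = np.random.default_rng(1)
n = 256
pos = rng.standard_normal((n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=10)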
11:00 - 13:00 CEST | Minisymposium |
| Louis Favre | Solid Earth Dynamics Engineering | |||||||||||||||||
DescriptionNumerical simulation has become a necessary tool in the field of geotechnical engineering. Geomaterials, the materials most commonly involved, exhibit strong discontinuity, heterogeneity, and anisotropy. To describe their mechanical behavior, various computational methods have been developed. As an important branch, discontinuous numerical methods are designed using a bottom-up strategy, in which the computational model is divided into a group of discrete elements to reproduce the response of its physical counterpart. Compared to continuous methods, such as the finite element method (FEM), discontinuous methods are regarded as superior at representing the characteristics of geomaterials and at producing results closer to those of laboratory testing. However, their application to practical cases in geotechnical engineering is hampered by extremely high computational requirements: millions if not billions of numerical elements are generally required for a discontinuous model of a large-scale slope or underground cavern. This mini-symposium presents new parallel computing algorithms for discontinuous numerical methods in geotechnical engineering, including but not limited to newly developed methods such as discontinuous deformation analysis, the discrete element method, and the four-dimensional lattice spring model. Presentations | |||||||||||||||||||||
11:00 - 13:00 CEST | Minisymposium |
| Jean Calvin | CS and Math Emerging Applications Climate and Weather Solid Earth Dynamics | |||||||||||||||||
DescriptionIn this mini-symposium, we examine the increasing role that AI/ML methods are playing in the Earth and climate sciences, with speakers relating how these new tools can be used judiciously. From the atmospheric sciences side, we will see how ML can help understand the variability of the stratospheric polar vortex and thus enhance seasonal forecasts. From the solid earth side, we discuss the usefulness of ML in gaining insight into earthquake dynamics. We further cover how to provision ML services for the Earth system sciences as these tools gain adoption within the scientific communities and need to be made available in a more systematic way. Finally, we peer into the future of both ML and Earth science, with thoughts on how sparsity in both areas will likely grow over time and on the expected interplay of this sparsity with current and alternative computer architectures. Presentations
| |||||||||||||||||||||
11:00 - 13:00 CEST | Minisymposium |
| Ernesto Bertarelli | CS and Math Climate and Weather Engineering | |||||||||||||||||
DescriptionUncertainty in the input data is a reality of engineering practice, albeit not always modelled in simulations. Accurate quantification of the resulting output uncertainty yields finer control on the robustness and cost of designs. However, this quantification requires exploring the parameter space – usually with numerous simulations – which may incur a prohibitive cost, especially for involved studies such as optimal design in fluid dynamics. Parallel computing can make such studies tractable, provided suitable methods are used to leverage it. This is an active field of research especially relevant for fluid dynamics, whose simulations are notoriously expensive and intricate, while being critical to many applications: aeronautics, civil engineering, meteorology, etc. Presented here are novel research developments from ExaQUte, an EU-H2020 project developing HPC methods for robust engineering design. The driving application is the optimisation of building shapes for civil engineering under uncertain wind loads. Therefore, this mini-symposium encompasses unsteady fluid problems, adaptive meshing, uncertainty quantification, robust shape optimisation and more. The methods discussed here are designed to leverage parallelisation with a modern framework for current and future distributed computing environments. Presentations
| |||||||||||||||||||||
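As a minimal, hypothetical illustration of the parallel parameter-space exploration described above (not ExaQUte code), the sketch below propagates an uncertain wind-speed input through an inexpensive surrogate "simulation" across worker processes and reports the mean and spread of the output load. The surrogate function and all values are invented.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def surrogate_simulation(wind_speed):
    """Stand-in for an expensive CFD run: returns a toy load value."""
    return 0.5 * 1.225 * wind_speed ** 2 * (1.0 + 0.1 * np.sin(wind_speed))

def monte_carlo_uq(n_samples=1000, mean_speed=25.0, std_speed=3.0, seed=0):
    """Plain Monte Carlo: sample the uncertain input, run samples in parallel."""
    rng = np.random.default_rng(seed)
    speeds = rng.normal(mean_speed, std_speed, n_samples)   # uncertain wind-speed input
    with ProcessPoolExecutor() as pool:
        loads = np.array(list(pool.map(surrogate_simulation, speeds)))
    return loads.mean(), loads.std()

if __name__ == "__main__":
    mean, std = monte_carlo_uq()
    print(f"mean load: {mean:.1f}, std: {std:.1f}")

Methods such as multilevel Monte Carlo reduce the number of expensive high-fidelity runs needed, but the embarrassingly parallel structure of the sampling loop is the same.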
11:00 - 13:00 CEST | Minisymposium |
| Michel Mayor | Chemistry and Materials Climate and Weather Physics Solid Earth Dynamics Life Sciences Engineering | |||||||||||||||||
DescriptionWhile parallel applications in all scientific and engineering domains have always been prone to execution inefficiencies that limit their performance and scalability, exascale computer systems comprising millions of heterogeneous processors/cores present a considerable and imminent challenge for academia and industry alike. HPC Centres of Excellence are funded by the EU Horizon 2020 programme to prepare applications for forthcoming exascale computer systems [https://www.focus-coe.eu/index.php/centres-of-excellence-in-hpc-applications/]. The transversal Performance Optimisation and Productivity Centre of Excellence (POP CoE) [https://www.pop-coe.eu] supports the others, along with the wider European community of application developers, with impartial assessments of parallel execution efficiency and scaling, based on a solid methodology that analyses measurements with open-source performance tools. This part of the minisymposium complements the first with three additional HPC CoE presentations on their experience collaborating with POP in preparing their flagship codes for exascale, and concludes with a discussion session involving the presenters from both parts. Presentations
| |||||||||||||||||||||
11:00 - 13:00 CEST | Minisymposium |
| Jean-Jacques Rousseau | CS and Math Emerging Applications Chemistry and Materials Climate and Weather Solid Earth Dynamics Life Sciences Engineering | |||||||||||||||||
DescriptionNowadays the complexity of scientific computing, in conjunction with hardware complexity, has become significant. On the one hand, scientific software applications require several dependencies for their compilation. On the other hand, users and developers of scientific applications target a wide range of diverse computing platforms, from laptops to supercomputers. This complexity poses challenges throughout the entire application workflow. We can distinguish at least five critical areas: 1) building applications, including all dependencies; 2) testing during development with Continuous Integration (CI) and automated build-and-test techniques; 3) deployment of applications via Continuous Deployment (CD) techniques; 4) packaging of applications with their dependencies for easy user-level installation and productivity; 5) software performance portability. The challenge in High Performance Computing is to develop techniques that maximize three application characteristics, productivity, performance, and portability, across the five areas listed above. In this minisymposium researchers and developers will discuss their successes and failures concerning this challenge. The minisymposium is split into two sessions of two hours each: the first session covers build tools and CI/CD techniques, while the second focuses on packaging applications and performance portability. Presentations
| |||||||||||||||||||||
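As a small, hypothetical illustration of area 2 (automated build and test), the following Python driver runs the kind of configure/build/test pipeline a CI job might execute on every commit. The commands and project layout are placeholders and are not taken from any project discussed in the sessions.

import subprocess
import sys

# Hypothetical build-and-test pipeline a CI job could run on every commit.
STEPS = [
    ["cmake", "-S", ".", "-B", "build", "-DCMAKE_BUILD_TYPE=Release"],
    ["cmake", "--build", "build", "--parallel"],
    ["ctest", "--test-dir", "build", "--output-on-failure"],
]

def run_pipeline():
    """Run each step in order; stop and report on the first failure."""
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("step failed, aborting pipeline", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())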
13:00 - 14:00 CEST | Break |
| |||||||||||||||||||
14:00 - 15:30 CEST | Networking |
| |||||||||||||||||||
15:30 - 16:30 CEST | Paper |
| Henry Dunant | Solid Earth Dynamics Engineering | |||||||||||||||||
15:30 - 16:30 CEST | Paper |
| Lise Girardin | Emerging Applications Engineering | |||||||||||||||||
15:30 - 16:30 CEST | Paper |
| Ernesto Bertarelli | Physics | |||||||||||||||||
16:30 - 17:00 CEST | Break |
| |||||||||||||||||||
17:00 - 19:00 CEST | Minisymposium |
| Jean Calvin | CS and Math Climate and Weather | |||||||||||||||||
DescriptionMany of the operational weather and climate prediction models worldwide have been developed over multiple decades. The algorithms used in these models were often designed well before the multicore era started. To take full advantage of emerging massively parallel heterogeneous supercomputers, it is necessary to investigate completely new algorithms which have never been used in operational weather and climate prediction before and which promise significant improvements in scalability and energy efficiency. In particular, high-order discontinuous and continuous Galerkin methods offer increased operational intensity, which allows better use of the available processor performance while reducing memory traffic, typically the bottleneck in most earth system models. At the same time, new time integration methods allow the number of time steps to be reduced or even the computation to be parallelized across time steps. This mini-symposium will present innovative work on these new algorithmic approaches and discuss their benefits in light of the upcoming exascale supercomputers. Presentations
| |||||||||||||||||||||
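The parallel-in-time idea mentioned above can be sketched with a minimal Parareal iteration for a scalar ODE. This is purely illustrative; operational models use far more sophisticated coarse and fine propagators, and the fine solves per time slice would run concurrently rather than in a Python loop.

import numpy as np

def parareal(f, y0, t0, t1, n_slices=10, iters=3, fine_substeps=50):
    """Minimal Parareal iteration for y' = f(y) on [t0, t1].

    G: one coarse explicit Euler step per time slice.
    F: many fine Euler substeps per slice (the part parallelized in practice).
    """
    dt = (t1 - t0) / n_slices

    def G(y, h):                        # coarse propagator
        return y + h * f(y)

    def F(y, h):                        # fine propagator
        hs = h / fine_substeps
        for _ in range(fine_substeps):
            y = y + hs * f(y)
        return y

    y = np.empty(n_slices + 1)
    y[0] = y0
    for n in range(n_slices):           # initial serial coarse sweep
        y[n + 1] = G(y[n], dt)

    for _ in range(iters):              # Parareal corrections
        fine = np.array([F(y[n], dt) for n in range(n_slices)])        # independent per slice
        coarse_old = np.array([G(y[n], dt) for n in range(n_slices)])
        y_new = y.copy()
        for n in range(n_slices):       # cheap serial coarse update
            y_new[n + 1] = G(y_new[n], dt) + fine[n] - coarse_old[n]
        y = y_new
    return y

# Decay problem y' = -y; the end value should approach exp(-2).
sol = parareal(lambda y: -y, y0=1.0, t0=0.0, t1=2.0)
print(sol[-1], np.exp(-2.0))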
17:00 - 19:00 CEST | Minisymposium |
| Lise Girardin | CS and Math Emerging Applications Physics Engineering | |||||||||||||||||
DescriptionCosmology and particle physics both strive to gain a deeper understanding of the history, composition, and inner workings of the Universe. While complementary in many aspects, the two fields also share many things: they try to answer the same open questions, they use similar detectors and analysis techniques, and they both have marvelously precise models. Both disciplines rely heavily on Monte Carlo simulation techniques, and both have ever-increasing datasets from planned next-generation experiments, which face yet-unsolved computing challenges related to triggering, data reconstruction and simulation, as well as data storage. Traditional computing approaches are not scaling to these new challenges and are limiting the physics output; new computing paradigms are needed to make progress. Recent developments in machine learning techniques coupled with custom hardware may offer promising directions for improvement. The symposium will highlight several prominent areas in this fast-emerging and thriving field with a diverse set of select international speakers from research universities, both in physics and computer science, research institutes, and industry. This rich diversity will provide different angles from which to shed light on the underlying challenges and the proposed methods to address them. Presentations
| |||||||||||||||||||||
17:00 - 19:00 CEST | Minisymposium |
| Louis Favre | CS and Math Emerging Applications Chemistry and Materials Climate and Weather Physics | |||||||||||||||||
DescriptionScientists in all domains increasingly rely on high-throughput applications that combine multiple components into ever more complex multi-modal workflows executing on heterogeneous systems. The complexity of these applications and their workflows hinders scientists' ability to generate results in a robust way. Robust science should assure performance scalability in space and time; trust in technology, people, and infrastructures; and reproducible or confirmable research in high-throughput applications. Today, high-throughput applications are far from achieving these goals. Through presentations of a set of comprehensive, cross-disciplinary studies of high-throughput applications for scientific discovery, we outline the challenges to reaching robust science, from hardware and systems all the way to policies and practices. We bring together a cross-disciplinary community, including computer and data scientists, physicists, natural scientists, and molecular dynamics scientists, to discuss practices and procedures that help define, design, implement, and use solutions for robust science. We take important steps toward a roadmap that enables high-throughput applications to withstand and overcome adverse conditions such as heterogeneous, unreliable architectures at all scales including extreme scale, lack of rigorous testing under uncertainties, unexplainable algorithms (e.g., in machine learning), and black-box methods. Presentations
| |||||||||||||||||||||
17:00 - 19:00 CEST | Minisymposium |
| Michel Mayor | CS and Math Chemistry and Materials Physics | |||||||||||||||||
DescriptionThe field of computational materials design has been pushed to new frontiers over the last decade. This is mainly rooted in recent advances in augmenting well-known simulation approaches, such as density functional theory (DFT) or molecular dynamics (MD), with data-driven approaches like machine learning. Representing DFT potential energy surfaces with neural networks leads to a great gain in computational efficiency without losing much of the accuracy of the DFT calculation. In addition to describing interatomic interactions, machine learning has also been applied to predict material properties based on first-principles calculations or to classify molecules based on thermodynamic properties. The ultimate goal of this new class of methodological approaches is to aid the development of novel materials, which may find applications in drug design, unconventional energy resources, or innovative semiconducting materials. This minisymposium will present state-of-the-art examples of the underlying method development, scientific applications, and implementation on high-performance computing platforms. Presentations
| |||||||||||||||||||||
17:00 - 19:00 CEST | Minisymposium |
| Jean-Jacques Rousseau | CS and Math Emerging Applications Climate and Weather | |||||||||||||||||
DescriptionLakes form an integral component of ecosystems and our communities, with a significant portion of the Swiss population living in their close proximity. A better understanding of internal lake processes can be obtained through the development of more accurate computational models and the use of newly available high-frequency sensor data. Enabled by powerful computational resources, researchers can now test and evaluate a multitude of model paradigms, calibrate and infer quantities governing the physical and ecological dynamical processes, and study the underlying fine-scale mechanisms. These methodologies can be coupled with state-of-the-art data assimilation techniques, allowing statistical inference of inaccessible quantities of interest and accurate forecasting for early warning systems, including quantification of the associated uncertainty. The goal of this minisymposium is to foster exchange on recent developments and methodologies pertaining to high performance computing in aquatic research, with a strong focus on lake phenomena. These discussions aim to help scientists better understand complex processes that are relevant to improving the quality of lake models and predictive frameworks. Presentations | |||||||||||||||||||||
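As a toy illustration of the data-assimilation step mentioned above (not taken from any lake model), the sketch below applies a stochastic ensemble Kalman filter update to an ensemble of two-component "temperature" states given a single observation; the state layout and all values are invented.

import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic ensemble Kalman filter analysis step for a scalar observation.

    ensemble: (n_members, n_state) array of model states
    obs_operator: maps a state vector to the observed quantity
    obs_var: observation error variance
    """
    n_members = ensemble.shape[0]
    hx = np.array([obs_operator(m) for m in ensemble])       # predicted observations
    x_mean, hx_mean = ensemble.mean(axis=0), hx.mean()
    x_anom, hx_anom = ensemble - x_mean, hx - hx_mean
    cov_xh = x_anom.T @ hx_anom / (n_members - 1)            # state-observation covariance
    var_hh = hx_anom @ hx_anom / (n_members - 1)             # predicted-observation variance
    gain = cov_xh / (var_hh + obs_var)                       # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n_members)
    return ensemble + np.outer(perturbed_obs - hx, gain)

rng = np.random.default_rng(0)
# Hypothetical state: [surface temperature, bottom temperature] for 50 ensemble members.
ensemble = rng.normal([18.0, 6.0], [1.5, 0.5], size=(50, 2))
analysis = enkf_update(ensemble, obs=19.2, obs_operator=lambda x: x[0], obs_var=0.2, rng=rng)
print(analysis.mean(axis=0))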
17:00 - 19:00 CEST | Minisymposium |
| Mère Royaume | CS and Math Emerging Applications Physics Engineering | |||||||||||||||||
DescriptionHardware manufacturers are responding to an increasing demand for low-precision functionality such as fp16 by integrating special low-precision functional units, e.g., NVIDIA Tensor Cores. These, however, remain unused even for compute-intensive applications if high precision is employed for all arithmetic operations. At the same time, communication-intensive applications suffer from the memory bandwidth of architectures growing at a much slower pace than the arithmetic performance. In both cases, a promising strategy is to abandon the high-precision standard (typically fp64) and employ lower or non-standard precision for arithmetic computations or memory operations whenever possible. While employing formats other than the working precision can deliver attractive performance improvements, it also requires careful consideration of the numerical effects. On the other end of the spectrum, precision formats with higher accuracy than the hardware-supported fp64 can be effective in improving the robustness and accuracy of numerical methods. With this breakout minisymposium, we aim to create a platform where those working with multiprecision or interested in using multiprecision technology come together and share their expertise and experience. Presentations
| |||||||||||||||||||||
17:00 - 19:00 CEST | Minisymposium |
| Henry Dunant | CS and Math Chemistry and Materials Life Sciences | |||||||||||||||||
DescriptionOne major potential and promise of big data analysis lies in the simultaneous mining and integration of multiple heterogeneous sources of data. In life sciences, recent years have seen the increasing availability of biological and bioinformatic databases using the Resource Description Framework (RDF), which facilitates automatic data processing and interoperability. However, there are major stumbling blocks on the path to mass adoption. The complexity of general-purpose models, inconsistent data models, and low usability are some of the challenges that hamper the use of RDF resources by the bulk of biological researchers. This mini-symposium brings together specialists on semantic data integration in life science and will provide a forum to explore innovative solutions to fulfil the potential of big data integration. Presentations
| |||||||||||||||||||||
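As a small, hypothetical example of the kind of RDF querying the abstract alludes to, the snippet below loads a couple of made-up protein triples with the rdflib Python library and runs a SPARQL query over them; the vocabulary is invented for illustration and is not taken from any of the resources discussed.

from rdflib import Graph

# Tiny, made-up RDF dataset in Turtle syntax (illustrative vocabulary only).
TURTLE = """
@prefix ex: <http://example.org/bio#> .
ex:P12345 a ex:Protein ;
    ex:organism "Homo sapiens" ;
    ex:hasFunction "kinase activity" .
ex:P67890 a ex:Protein ;
    ex:organism "Mus musculus" ;
    ex:hasFunction "transport" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# SPARQL query: functions of all human proteins in the toy dataset.
QUERY = """
PREFIX ex: <http://example.org/bio#>
SELECT ?protein ?function WHERE {
    ?protein a ex:Protein ;
             ex:organism "Homo sapiens" ;
             ex:hasFunction ?function .
}
"""
for protein, function in g.query(QUERY):
    print(protein, function)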
19:30 - 20:00 CEST | Diversion |
|
Friday, 9 July 2021 expand all · collapse all
Time | Type | Session / Presentation | Contributors | Location | Domain | Plan | |||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11:00 - 13:00 CEST | Paper |
| Ernesto Bertarelli | CS and Math | |||||||||||||||||
Presentations | |||||||||||||||||||||
11:00 - 13:00 CEST | Paper |
| Lise Girardin | CS and Math Physics Life Sciences | |||||||||||||||||
Presentations | |||||||||||||||||||||
11:00 - 13:00 CEST | Paper |
| Henry Dunant | CS and Math Climate and Weather | |||||||||||||||||
Presentations | |||||||||||||||||||||
13:00 - 14:00 CEST | Break |
| |||||||||||||||||||
14:00 - 16:00 CEST | Minisymposium |
| Mère Royaume | CS and Math Climate and Weather Engineering | |||||||||||||||||
Description Data movement is a constraining factor in almost all HPC applications and workflows. The reasons for this ubiquity include physical design constraints, power limitations, the relative advancement of processors versus memory, and rapid increases in dataset sizes. While decades of research and innovation in HPC have resulted in robust and powerful optimizing environments, even basic data-movement optimization remains a challenge. In many cases, fundamental abstractions suited to the expression of data are still missing, as is a model of the various memory types and features. Performance portability on exascale systems requires that heterogeneous memories be used intelligently and abstractly in the middleware/runtime rather than through explicit, laborious hand-coding. To do so, capacity, bandwidth, and latency considerations across multiple levels must be understood (and often modeled) at runtime. Furthermore, the semantics of data usage within applications must be evident in the programming model. Several research projects are presenting solutions either for a piece of this problem (EPiGRAM-HS with MAMBA, Tuyere) or for the problem holistically (DaCE). This minisymposium will present a sample of the most relevant research concerning programming abstractions, models, and runtimes for data movement from the perspectives of vendors (HPE/Cray), world-class supercomputer centers (EPCC, ORNL), and programming model developers (ETH). Presentations | |||||||||||||||||||||
14:00 - 16:00 CEST | Minisymposium |
| Louis Favre | CS and Math Physics Solid Earth Dynamics | |||||||||||||||||
DescriptionComputational geosciences leverage advanced computational methods to improve our understanding of the interiors of Earth and other planets. They combine numerical models to understand the current state of physical quantities describing a system, to predict their future states, and to infer unknown parameters of those models from data measurements. Such models produce highly nonlinear numerical systems with extremely large numbers of unknowns. The ever-increasing power and availability of High Performance Computing (HPC) facilities offer researchers unprecedented opportunities to continually increase both the spatiotemporal resolution and the physical complexity of their numerical models. However, this requires complex numerical methods and implementations that can harness the HPC resources efficiently for problem sizes of billions of degrees of freedom. The goal of this minisymposium is to bring together scientists who work in theory, numerical methods, algorithms, and scientific software engineering for scalable numerical modelling and inversion. Examples include, but are not limited to, geodynamics, multi-phase geophysical flow modelling, seismic wave propagation and imaging, seismic tomography and inversion of large data sets, development of elaborate workflows including HPC for imaging problems, ice-sheet modelling, and urgent computing for natural hazards. Presentations | |||||||||||||||||||||
14:00 - 16:00 CEST | Minisymposium |
| Ernesto Bertarelli | CS and Math Emerging Applications Climate and Weather Physics Engineering | |||||||||||||||||
DescriptionMost of the flow solvers, commercial as well as open-source, that are used for turbulent flow simulations are based on spatial discretizations that are nominally second-order accurate for evolving the compressible and incompressible Navier-Stokes equations on unstructured meshes that represent the underlying complex geometry. For canonical simulations of incompressible turbulent flows, on the other hand, where the geometry of the computational domain is much simpler, the solvers usually make use of FFT-based pseudo-spectral methods, possibly in conjunction with higher-order finite difference schemes. The construction of these solvers for optimal performance on GPU-based platforms, and the hardware abstractions used to offload computations to the GPU, is the subject of this mini-symposium. Secondly, this mini-symposium will feature a talk that assesses the performance of higher-order discretization schemes (with local support) on GPU-based platforms and their ability to represent fine-scale turbulent flow features when compared with the pseudo-spectral solvers traditionally used for DNS of canonical flows. Finally, this mini-symposium will also present the simulation of multiphase flows with a higher-order lattice Boltzmann method. Presentations | |||||||||||||||||||||
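A minimal illustration of the pseudo-spectral machinery referred to above (unrelated to the solvers presented in the session): differentiating a periodic field with NumPy's FFT, the core operation such solvers apply dimension by dimension.

import numpy as np

def spectral_derivative(u, length=2 * np.pi):
    """First derivative of a periodic 1D field via FFT."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x)
du = spectral_derivative(u)
print(np.max(np.abs(du - 3 * np.cos(3 * x))))          # error near machine precision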
14:00 - 16:00 CEST | Minisymposium |
| Jean Calvin | CS and Math Climate and Weather | |||||||||||||||||
DescriptionArchitectural specialization, driven by the limits imposed by the slowdown of Moore's Law, is here to stay. For weather and climate models, the increased complexity of hardware architectures poses a huge challenge. Balancing development speed, performance portability, efficiency, and maintenance cost of community-developed weather and climate models using the prevalent programming model of Fortran plus extensions has become increasingly hard and has slowed scientific productivity. A few efforts aim to solve this challenge by developing domain-specific language (DSL) compilers. Higher-level programming increases developer productivity and shifts the burden of generating efficient code for a given hardware architecture to the DSL compiler. Requirements on the DSLs are not unanimous, since the target architectures, the computational patterns of different models, and the preferred way of expressing a model vary among the major weather and climate model development efforts. In this mini-symposium, keynote speakers from various efforts around the world will talk about their approaches and the lessons learned from their work. We provide a platform to discuss how the future of domain-specific languages in weather and climate should look and how we can evolve our current ideas. Presentations
| |||||||||||||||||||||
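For readers unfamiliar with the computational patterns these DSLs abstract, here is a plain NumPy version of a five-point horizontal Laplacian, the kind of stencil a weather or climate DSL lets developers express once and compile to different architectures. This is illustrative only and does not use the syntax of any particular DSL.

import numpy as np

def horizontal_laplacian(field):
    """5-point Laplacian stencil on the interior of a 2D field.

    A DSL compiler would generate architecture-specific code (vectorized CPU
    loops, GPU kernels, ...) from a single high-level description of this pattern.
    """
    lap = np.zeros_like(field)
    lap[1:-1, 1:-1] = (
        field[2:, 1:-1] + field[:-2, 1:-1]
        + field[1:-1, 2:] + field[1:-1, :-2]
        - 4.0 * field[1:-1, 1:-1]
    )
    return lap

field = np.random.default_rng(0).standard_normal((64, 64))
print(horizontal_laplacian(field).shape)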
14:00 - 16:00 CEST | Minisymposium |
| Michel Mayor | Chemistry and Materials Physics Engineering | |||||||||||||||||
DescriptionThe discovery of new materials by means of computational approaches has seen a dramatic shift towards high-throughput approaches and big-data analysis. This has called for the development of techniques to deal with the generation, storage, and processing of large amounts of simulation data, complementing experimental data in the discovery effort. Sharing research data is critically important for making the experimental and computational results, and the overall scientific process, reproducible. In this symposium, we will discuss current approaches and challenges in providing efficient solutions for Findable, Accessible, Interoperable, and Reusable (FAIR) scientific data. Four speakers, Professor Giulia Galli (Department of Chemistry, University of Chicago, USA), Dr. Emanuele Bosoni (Institut de Ciencia de Materials de Barcelona, Spain), Professor Gian-Marco Rignanese (Institute of Condensed Matter and Nanoscience (IMCN), Université catholique de Louvain (UCL), Belgium), and Dr. Carl Simon Adorf (Laboratory of Theory and Simulation of Materials, École Polytechnique Fédérale de Lausanne, Switzerland), will focus the discussion on platforms to provide interoperability between materials databases (OPTIMADE); to curate scientific publications (QResp: generation of metadata, automatic access, and exploration of scientific data within a publication); and to enable the execution and exchange of computational workflows, making simulations accessible and interoperable between different quantum simulation engines (AiiDA, AiiDAlab). Presentations
| |||||||||||||||||||||
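As a hedged sketch of how interoperable materials databases can be queried, the snippet below assumes a provider implementing the standard OPTIMADE v1 structures endpoint; the base URL is a placeholder and the returned attributes depend on the provider.

import requests

# Placeholder base URL; any database implementing the OPTIMADE v1 API should work similarly.
BASE_URL = "https://example-provider.org/optimade/v1"

def query_structures(filter_expr, limit=5):
    """Query an OPTIMADE-compliant structures endpoint with a filter string."""
    response = requests.get(
        f"{BASE_URL}/structures",
        params={"filter": filter_expr, "page_limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])

# Example: structures containing both silicon and oxygen.
for entry in query_structures('elements HAS ALL "Si", "O"'):
    attrs = entry.get("attributes", {})
    print(entry.get("id"), attrs.get("chemical_formula_reduced"))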
14:00 - 16:00 CEST | Minisymposium |
| Jean-Jacques Rousseau | CS and Math Physics | |||||||||||||||||
DescriptionNowadays the complexity of scientific computing, in conjunction with hardware complexity, has become significant. On the one hand, scientific software applications require several dependencies for their compilation. On the other hand, users and developers of scientific applications target a wide range of diverse computing platforms, from laptops to supercomputers. This complexity poses challenges throughout the entire application workflow. We can distinguish at least five critical areas: 1) building applications, including all dependencies; 2) testing during development with Continuous Integration (CI) and automated build-and-test techniques; 3) deployment of applications via Continuous Deployment (CD) techniques; 4) packaging of applications with their dependencies for easy user-level installation and productivity; 5) software performance portability. The challenge in High Performance Computing is to develop techniques that maximize three application characteristics, productivity, performance, and portability, across the five areas listed above. In this minisymposium researchers and developers will discuss their successes and failures concerning this challenge. The minisymposium is split into two sessions of two hours each: the first session covers build tools and CI/CD techniques, while the second focuses on packaging applications and performance portability. Presentations
| |||||||||||||||||||||
14:00 - 16:00 CEST | Minisymposium |
| Henry Dunant | CS and Math Emerging Applications | |||||||||||||||||
DescriptionWHPC is the only international organization working to improve equity, diversity, and inclusion in High Performance Computing. The chapter is being formed by senior professionals, scientists, and engineers working in Switzerland, representing academia, large research centres, the national HPC centre, and the IT industry. The mission of WHPC is to "promote, build and leverage a diverse and inclusive HPC workforce by enabling and energising those in the HPC community to increase the participation of women and highlight their contribution to the success of supercomputing. To ensure that women are treated fairly and have equal opportunities to succeed in their chosen HPC career". The minisymposium will comprise a short introduction and three talks, followed by a panel discussion with all the speakers and organizers on actions, existing programmes, and emerging proposals to enable more diversity and inclusivity. Presentations
| |||||||||||||||||||||
16:00 - 16:30 CEST | Break |
| |||||||||||||||||||
16:30 - 17:20 CEST | Keynote |
| Ella Maillart | ||||||||||||||||||
17:20 - 17:50 CEST | Closing |
| Ella Maillart |