The challenges posed by modern hardware, combined with the rapidly growing data sizes produced by simulations, expose the fact that the software stack and its core constituents were designed in an era with very different constraints. Under current conditions it is fair to question whether scientific throughput is truly being maximised. In this talk, I discuss some of the ways in which scientific workflows are poorly supported by the modern software environment. I then describe a number of research projects that seek to change this situation by designing core data- and memory-centric abstractions for use at various levels of the software stack, such as middleware, programming environments, runtimes, and systems software. This work is being performed in collaboration between Cray and key customers, funded by the EU under the projects Maestro, Epigram-HS, EXPERTISE, and plan4res.
Workshop II: HPC and Data Science for Scientific Discovery