Big Data Meets Computation

January 30 - February 3, 2017


In High Performance Computing (HPC), one of the key challenges on the path toward exascale computing is overcoming the communication bottleneck. Data motion tends to limit overall performance and to determine the (enormous) energy consumption of future supercomputers; some even say "flops are for free." It is therefore crucial to develop novel ways of efficiently representing, reducing, reconstructing, and transferring huge amounts of data. At the same time, the analysis of large sets of (simulation) data requires sophisticated data analytics, which, in turn, is becoming increasingly compute-intensive itself and thus a major customer for HPC. Computing technology and Big Data technology are hence intrinsically linked, and the latest insights, methods, and algorithms have to be considered jointly in that context. The fusion of HPC and Big Data is a young field with a vast range of applications and huge potential. The present workshop aims to serve as a catalyst at this frontier, bringing together leading innovators and pioneers from applied mathematics, computer science, and various application areas.

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Additional participant support from:


Organizing Committee

Rick Archibald (Oak Ridge National Laboratory)
Hans-Joachim Bungartz (Technical University Munich (TUM))
Frank Jenko (University of California, Los Angeles (UCLA))
Stan Osher (Institute for Pure and Applied Mathematics)