Large data sets are increasingly being ingested (e.g., in data assimilation) and produced (e.g., in uncertainty quantification) by traditional simulations. Important questions that emerge are: What experience from traditional simulations is transferable to newly emerging big data applications? Conversely, what new optimal algorithms will emerge, motivated by data-intensive applications being pushed to large scales? How will they enrich traditional simulations?
Examples of topics to be discussed:
This workshop will bring together analysts and developers of computationally and data-intensive applications who are interested in early exploitation of extreme-scale computing platforms, with the goal of defining common ground and seeking new opportunities.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
(Technical University Munich (TUM), Computer Science)
Emmanuel Candes (Stanford University, Applied and Computational Mathematics)
Chris Johnson (University of Utah, Imaging & Biomedical Computing)
David Keyes (King Abdullah University of Science and Technology (KAUST), Applied Mathematics and HPC)
Marina Meila (University of Washington)