In just a few years, new technologies for massively parallel DNA sequencing have become widely available, reducing the cost of sequencing a genome by four orders of magnitude and placing the capacity to generate gigabases to terabases of sequence data in the hands of individual investigators. These “next-generation” technologies have the potential to dramatically accelerate biological and biomedical research by making comprehensive analysis of genomes and transcriptomes inexpensive, routine and widespread.
This is a dynamic moment in the field. The technologies themselves are evolving at a breathtaking pace, and the exploding volume of data has spurred the development of novel algorithmic approaches for primary analyses of sequence data.
This workshop, the first in a series of five, will bring together leaders in the field to present next-generation sequencing technologies and to discuss the mathematical and computational challenges they present. Specifically, we will provide an introduction to the core concepts driving the development of leading second- and third-generation technologies. This discussion will be linked to an extensive consideration of methods for base-calling and variant-calling, for aligning reads to reference sequences (e.g., genomes), and for de novo assembly of short reads into longer sequences.
(University of Washington)
Matteo Pellegrini (University of California, Los Angeles (UCLA))
Aviv Regev (Broad Institute)
Eric Schadt (Pacific Biosciences)
Jay Shendure, Chair (University of Washington)
Yun Song (University of California, Berkeley (UC Berkeley))