HPC Symposium 2012

Seventh Annual HPC Workshop

22-23 March 2012 :: Lehigh University :: Bethlehem, PA

Symposium Speakers

Keynote Speaker: Manish Parashar
Moving From Data to Insights—Addressing Data Challenges in Simulation-based Science

Department of Electrical and Computer Engineering
Rutgers, The State University of New Jersey

http://nsfcac.rutgers.edu/people/parashar/

Data-related challenges are quickly dominating computational and data-enabled sciences, and are limiting the potential impact of the end-to-end coupled application formulations enabled by current high-performance distributed computing environments. These data-intensive application workflows present significant data management, transport, and processing challenges, involving dynamic coordination, interaction, and data-coupling between multiple application processes that run at scale on different high-performance resources, as well as with services for monitoring, analysis, visualization, and archiving. In this presentation I will explore the data grand challenges of simulation-based science application workflows and investigate how solutions based on managed data pipelines, in-memory data staging, in-situ placement and execution, and in-transit data processing can be used to address some of these challenges at petascale and beyond.
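
As a rough illustration of the in-situ idea (a generic sketch, not code from the speaker's systems), the following C fragment runs a small analysis kernel on simulation data while it is still in memory and writes out only the reduced result instead of the full field; the function names are hypothetical placeholders.

    /* In-situ analysis sketch: the analysis runs on data still resident in the
     * simulation's memory, so only a small summary leaves the node each period.
     * advance_field and in_situ_mean are illustrative stand-ins. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000
    #define ANALYZE_EVERY 10

    static void advance_field(double *u, int n) {        /* stand-in for one solver step */
        for (int i = 0; i < n; i++) u[i] += 1e-3 * (double)(i % 7);
    }

    static double in_situ_mean(const double *u, int n) { /* analysis on in-memory data */
        double s = 0.0;
        for (int i = 0; i < n; i++) s += u[i];
        return s / n;
    }

    int main(void) {
        double *u = calloc(N, sizeof *u);
        if (!u) return 1;
        for (int step = 1; step <= 100; step++) {
            advance_field(u, N);
            if (step % ANALYZE_EVERY == 0)               /* reduced output only */
                printf("step %d: mean = %g\n", step, in_situ_mean(u, N));
        }
        free(u);
        return 0;
    }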

Manish Parashar is Professor of Electrical and Computer Engineering at Rutgers University. He is also founding Director of the Center for Autonomic Computing and The Applied Software Systems Laboratory (TASSL), and Associate Director of the Rutgers Center for Information Assurance (RUCIA). Manish received a BE degree from Bombay University, India, and MS and Ph.D. degrees from Syracuse University. His research interests are in the broad area of parallel and distributed computing and include Computational and Data-Enabled Science and Engineering, Autonomic Computing, and Power/Energy Management. A key focus of his research is on addressing the complexity of large-scale systems and applications through programming abstractions and systems. Manish has published over 350 technical papers, serves on the editorial boards and organizing committees of a large number of journals and international conferences and workshops, and has deployed several software systems that are widely used. He has also received numerous awards and is a Fellow of the IEEE/IEEE Computer Society and a Senior Member of the ACM. For more information please visit nsfcac.rutgers.edu/people/parashar/.


Keynote Speaker: Steve Plimpton
High-Performance Computing for Atomistic Materials Modeling

Scalable Algorithms Group
Sandia National Laboratories

http://www.sandia.gov/~sjplimp/

Advances in high-performance computing technology over the last decade have enabled increasingly sophisticated material models at the atomic scale. The length and time scales accessible to simulation have grown dramatically, due to faster clock rates and increased parallelism at both the chip level (multicore, GPUs) and the machine level (tens of thousands of cores). However, algorithm advances and the development of more accurate interatomic potentials have been equally important in enabling predictive simulations. These trends are illustrated by enhancements to the open-source LAMMPS molecular dynamics package, including recent efforts aimed at more accurate all-atom models, faster coarse-grained models, and coupling to capture multi-scale effects. Highlights of these efforts are demonstrated by vignettes from large-scale calculations.
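
As a generic illustration of the kind of pairwise interatomic-potential kernel at the heart of atomistic molecular dynamics codes such as LAMMPS (a toy sketch, not LAMMPS source), the following C fragment evaluates Lennard-Jones forces and energy over all atom pairs within a cutoff; production codes use neighbor lists and domain decomposition rather than the O(N^2) loop shown here.

    /* Toy Lennard-Jones kernel: all-pairs force/energy evaluation with a cutoff.
     * Illustrative only; real MD codes use neighbor lists and parallel
     * decomposition to avoid the O(N^2) pair loop. */
    #include <math.h>
    #include <stdio.h>

    #define NATOMS 64

    int main(void) {
        double x[NATOMS][3], f[NATOMS][3] = {{0}};
        const double eps = 1.0, sigma = 1.0, rcut2 = 2.5 * 2.5;

        for (int i = 0; i < NATOMS; i++)          /* 4x4x4 cubic lattice of atoms */
            for (int d = 0; d < 3; d++)
                x[i][d] = 1.1 * ((i >> (2 * d)) & 3);

        double epot = 0.0;
        for (int i = 0; i < NATOMS; i++) {
            for (int j = i + 1; j < NATOMS; j++) {
                double dx[3], r2 = 0.0;
                for (int d = 0; d < 3; d++) { dx[d] = x[i][d] - x[j][d]; r2 += dx[d] * dx[d]; }
                if (r2 >= rcut2) continue;
                double sr2 = sigma * sigma / r2, sr6 = sr2 * sr2 * sr2;
                double fpair = 24.0 * eps * sr6 * (2.0 * sr6 - 1.0) / r2;  /* |F|/r */
                epot += 4.0 * eps * sr6 * (sr6 - 1.0);
                for (int d = 0; d < 3; d++) { f[i][d] += fpair * dx[d]; f[j][d] -= fpair * dx[d]; }
            }
        }
        printf("potential energy = %g, force on atom 0 = (%g, %g, %g)\n",
               epot, f[0][0], f[0][1], f[0][2]);
        return 0;
    }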

Steve Plimpton is a staff member at Sandia National Laboratories, a U.S. Department of Energy lab. For many years he was in the Parallel Computational Sciences group. In 2001, he moved to a new Computational Biology group, and in 2007, joined the Scalable Algorithms group. Plimpton's work involves implementing and using scientific simulations designed for parallel supercomputers. Often this includes the creation of efficient parallel algorithms. The applications he works on typically use particles, finite elements, or partial differential equations. His group is now applying some of these simulation tools and algorithms to biology and informatics problems.


John Cavazos
Auto-tuning a High-Level Language Targeted to GPU Codes

Department of Computer and Information Sciences
University of Delaware

http://www.eecis.udel.edu/~cavazos/

Determining the best set of optimizations to apply to a kernel to be executed on the graphics processing unit (GPU) is a challenging problem. There are large sets of possible optimization configurations that can be applied, and many applications have multiple kernels. Each kernel may require a specific configuration to achieve the best performance, and moving an application to new hardware often requires a new optimization configuration for each kernel. In this talk, I will discuss our work on applying optimizations to GPU code using HMPP, a high-level directive-based language and source-to-source compiler that can generate CUDA / OpenCL code. Programming with high-level languages has often been associated with a loss of performance compared to using low-level languages. In this talk, I will show that it is possible to improve the performance of a high-level language by using auto-tuning. I will discuss how we performed auto-tuning on a large optimization space on GPU kernels, focusing on loop permutation, loop unrolling, tiling, and specifying which loop(s) to parallelize, and I will show results on convolution kernels, codes in the PolyBench suite, and an implementation of belief propagation for stereo vision. The results show that our auto-tuned HMPP-generated implementations are significantly faster than the default HMPP implementation and can meet or exceed the performance of manually coded CUDA / OpenCL implementations.
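
The sketch below illustrates the general auto-tuning idea under simplified assumptions (it does not use HMPP and runs on the CPU rather than a GPU): several candidate values of one tunable parameter, the tile size of a blocked matrix multiply, are run and timed, and the fastest configuration is kept. In the setting described in the talk, each candidate configuration would instead correspond to regenerated GPU code.

    /* Auto-tuning sketch: time the same kernel under several candidate
     * configurations (here, tile sizes) and keep the fastest one. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define N 256

    static double A[N][N], B[N][N], C[N][N];

    static void matmul_tiled(int tile) {
        memset(C, 0, sizeof C);
        for (int ii = 0; ii < N; ii += tile)
            for (int kk = 0; kk < N; kk += tile)
                for (int jj = 0; jj < N; jj += tile)
                    for (int i = ii; i < ii + tile; i++)
                        for (int k = kk; k < kk + tile; k++)
                            for (int j = jj; j < jj + tile; j++)
                                C[i][j] += A[i][k] * B[k][j];
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = i - j; }

        int tiles[] = {8, 16, 32, 64}, best = tiles[0];
        double best_t = 1e30;
        for (size_t v = 0; v < sizeof tiles / sizeof tiles[0]; v++) {
            clock_t t0 = clock();
            matmul_tiled(tiles[v]);
            double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("tile %2d: %.3f s\n", tiles[v], t);
            if (t < best_t) { best_t = t; best = tiles[v]; }
        }
        printf("best tile size: %d\n", best);
        return 0;
    }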

Prof. John Cavazos was one of the first researchers to introduce the use of machine learning to optimize the compiler itself. He is working on techniques that allow the compiler to learn about the underlying architecture in order to determine the best set of optimizations for a particular program and architecture. Prof. Cavazos is also investigating the use of existing and novel techniques to improve the performance of Java programs without losing their flexibility and portability.


Boyce Griffith
Cardiac Fluid-Structure and Electro-Mechanical Interaction

Charney Division of Cardiology, Department of Medicine
New York University School of Medicine

http://www.cims.nyu.edu/~griffith/

The heart is a coupled electro-fluid-mechanical system. The contractions of the cardiac muscle are stimulated and coordinated by the electrophysiology of the heart; in turn, these contractions affect the electrical function of the heart by altering the macroscopic conductivity of the tissue and by influencing stretch-activated transmembrane ion channels. To develop a unified approach to modeling cardiac electromechanics, we have extended the immersed boundary (IB) method for fluid-structure interaction, which was originally introduced to model cardiac mechanics, to describe cardiac electrophysiology. The IB method for fluid-structure interaction uses Lagrangian variables to describe the elasticity of the structure and Eulerian variables to describe the fluid. Coupling between the Lagrangian and Eulerian descriptions is mediated by integral transforms with Dirac delta function kernels. An analogous approach can be developed for the bidomain equations of cardiac electrophysiology. Quantities associated with the cell membrane, such as the ion channel gating variables, are tracked along with the intracellular variables in Lagrangian form. Employing an Eulerian description of the extracellular space, on the other hand, makes it straightforward to extend that space beyond the myocardium, into the blood and into the extracardiac tissue, both of which are electrically conducting media that couple directly to the extracellular space of the myocardium. In the electrophysiological IB method, interaction between Lagrangian and Eulerian variables occurs in a manner that is completely analogous to the corresponding coupling between Lagrangian and Eulerian variables in the conventional IB method for fluid-structure interaction.

In this talk, I will describe the IB method for both fluid-structure and electro-mechanical interaction, and I will present applications of this unified methodology to models of heart function that couple descriptions of cardiac mechanics, fluid dynamics, and electrophysiology. Additional applications of the IB method will also be presented.
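
To make the Lagrangian-Eulerian coupling concrete, here is a one-dimensional toy sketch (not code from the speaker's solvers) in which Lagrangian point forces are spread onto a uniform Eulerian grid through a regularized delta function, using Peskin's commonly used four-point kernel.

    /* 1D toy of the IB spreading step: Lagrangian forces F_k at positions X_k are
     * transferred to a uniform Eulerian grid via a regularized delta function.
     * The same kernel is used, in reverse, to interpolate grid values back to the
     * Lagrangian points. */
    #include <math.h>
    #include <stdio.h>

    #define NGRID 64

    /* Peskin's 4-point regularized delta function phi(r), support |r| < 2 */
    static double phi4(double r) {
        double a = fabs(r);
        if (a < 1.0) return 0.125 * (3.0 - 2.0 * a + sqrt(1.0 + 4.0 * a - 4.0 * a * a));
        if (a < 2.0) return 0.125 * (5.0 - 2.0 * a - sqrt(-7.0 + 12.0 * a - 4.0 * a * a));
        return 0.0;
    }

    int main(void) {
        double h = 1.0 / NGRID;                 /* Eulerian grid spacing on [0,1) */
        double f[NGRID] = {0};                  /* Eulerian force density         */
        double X[2] = {0.31, 0.62};             /* Lagrangian marker positions    */
        double F[2] = {1.0, -0.5};              /* Lagrangian point forces        */

        for (int k = 0; k < 2; k++)             /* spread: f(x_j) += F_k * delta_h */
            for (int j = 0; j < NGRID; j++) {
                double xj = (j + 0.5) * h;
                f[j] += F[k] * phi4((xj - X[k]) / h) / h;
            }

        double total = 0.0;                     /* check: total force is conserved */
        for (int j = 0; j < NGRID; j++) total += f[j] * h;
        printf("sum of spread force = %g (expected %g)\n", total, F[0] + F[1]);
        return 0;
    }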

Boyce Griffith received a B.S. in Computer Science and a B.A. in Mathematics and in Computational and Applied Mathematics from Rice University in 2000, and completed his Ph.D. in Mathematics at the Courant Institute of Mathematical Sciences at New York University in 2005. Since 2008, he has been on the faculty at the NYU School of Medicine, where he is an Assistant Professor of Medicine in the Leon H. Charney Division of Cardiology. Griffith's research involves the development of mathematical models and numerical methods for simulating physiology, especially cardiovascular physiology.


Michela Taufer
GPU-enabled Macromolecular Simulation: Challenges and Opportunities

Department of Computer and Information Sciences
University of Delaware

http://gcl.cis.udel.edu/personal/taufer/

GPU-enabled simulation of fully atomistic macromolecular systems is rapidly gaining momentum, driven by massive parallelism and by the parallelizability of many components of the underlying algorithms and methodologies. Massive parallelism on the order of several hundred to a few thousand cores presents opportunities as well as implementation challenges. In this talk I will visit key aspects of the realism of macromolecular systems (i.e., the realism of the mathematical models and the validity of the simulations) when these methods are specifically adapted to GPUs, along with some of the underlying challenges and the solutions devised to tackle them.

Michela Taufer has been an Assistant Professor in Computer and Information Sciences at the University of Delaware since September 2007. She earned her MS in Computer Engineering from the University of Padova and her Ph.D. in Computer Science from the Swiss Federal Institute of Technology (ETH). She was a post-doctoral researcher supported by the La Jolla Interfaces in Science Training Program (LJIS) at UC San Diego and The Scripps Research Institute. Before joining the University of Delaware, Michela was a faculty member in Computer Science at the University of Texas at El Paso.

Michela has a long history of interdisciplinary work with high-profile computational biophysics groups in several research and academic institutions. Her research interests include software applications and their advanced programmability in heterogeneous computing (i.e., multi-core platforms and GPUs); cloud computing and volunteer computing; and performance analysis, modeling, and optimization of multi-scale applications. She has been serving as the principal investigator of several NSF collaborative projects. She also has significant experience in mentoring a diverse population of students on interdisciplinary research. Michela's training expertise includes efforts to spread high-performance computing participation in undergraduate education and research, as well as efforts to increase the interest and participation of diverse populations in interdisciplinary studies.

Michela has served on numerous IEEE program committees (SC and IPDPS among others) and has reviewed for most of the leading journals in parallel computing.


Skylar Tibbits
Computational Construction

Department of Architecture
The Massachusetts Institute of Technology

http://architecture.mit.edu/faculty/skylar-tibbits

Currently, our construction processes are plagued by outdated techniques, analog methodologies, and tolerance-prone assemblies. Recently emerging opportunities to use computational and digital information in automated assembly processes have come at a critical time for the discipline, a time when we are striving to build at extremely large scales and at nano-scales, with extreme efficiency, reduced energy consumption, minimal construction budgets, and a variety of accumulating constraints. Cellular automata and other computational models for embedding computational logic within fundamental physical blocks may offer new methodologies for construction. The material parts of the future may have multiple states and embedded logic, and may switch between states based on neighboring constraints. These new construction processes will contain “computational materials” (not necessarily requiring embedded electronics and motors, but rather simple low-level computation and digital information) as well as computational processes for constructing complex assemblies through local rule-sets. These systems may explore self-assembly, self-repair, and self-replication processes, aiming at complete autonomy and/or guided interaction between user and material.
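
As a toy illustration of the local rule-set idea, and without assuming anything about a particular material system, the following C program evolves a one-dimensional cellular automaton in which each cell's next state depends only on its own state and its two neighbors.

    /* Toy 1D cellular automaton: each cell updates from a purely local
     * 3-cell neighborhood, loosely illustrating simple state-switching logic
     * embedded in building blocks. The specific rule (elementary CA rule 110)
     * is arbitrary here. */
    #include <stdio.h>
    #include <string.h>

    #define WIDTH 64
    #define STEPS 16
    #define RULE 110

    int main(void) {
        unsigned char cur[WIDTH] = {0}, nxt[WIDTH];
        cur[WIDTH / 2] = 1;                                   /* single seed cell */

        for (int t = 0; t < STEPS; t++) {
            for (int i = 0; i < WIDTH; i++) putchar(cur[i] ? '#' : '.');
            putchar('\n');
            for (int i = 0; i < WIDTH; i++) {
                int l = cur[(i + WIDTH - 1) % WIDTH], c = cur[i], r = cur[(i + 1) % WIDTH];
                int pattern = (l << 2) | (c << 1) | r;        /* 3-cell neighborhood */
                nxt[i] = (RULE >> pattern) & 1;               /* look up next state  */
            }
            memcpy(cur, nxt, sizeof cur);
        }
        return 0;
    }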

Skylar Tibbits is a trained Architect, Designer and Computer Scientist whose research currently focuses on developing self-assembly technologies for large-scale structures in our physical environment. Skylar graduated from Philadelphia University with a five-year Bachelor of Architecture degree and a minor in experimental computation. Continuing his education at MIT, he received a Master of Science in Design + Computation and a Master of Science in Computer Science.

Skylar is currently a lecturer in MIT's Department of Architecture, teaching graduate and undergraduate design studios and co-teaching How to Make (Almost) Anything, a seminar at MIT's Media Lab. Skylar was recently awarded a TED2012 Senior Fellowship and a TED2011 Fellowship, and was named a Revolutionary Mind in SEED Magazine's 2008 Design Issue. His previous work experience includes Zaha Hadid Architects, Asymptote Architecture, SKIII Space Variations, and Point b Design. Skylar has exhibited work at a number of venues around the world, including the Guggenheim Museum NY and the Beijing Biennale, and has lectured at MoMA, SEED Media Group's MIND08 Conference, Storefront for Art and Architecture, the Rhode Island School of Design, the Institute for Computational Design in Stuttgart, and The Center for Architecture NY. His work has been featured in numerous articles, and he has built large-scale installations around the world, from Paris, Calgary, and New York to Frankfurt and MIT. As a guest critic, Skylar has visited a range of schools including the University of Pennsylvania, Pratt Institute, and Harvard's Graduate School of Design.

Skylar Tibbits is the founder and principal of SJET LLC. Started in 2007 as a platform for experimental computation + design, SJET has grown into a multidisciplinary, research-based practice crossing disciplines from architecture + design and fabrication to computer science and robotics.


John Urbanic
OpenMP and OpenACC: How to Program Many Cores

Pittsburgh Supercomputing Center

We will discuss why many-core programming has become a necessity for high-performance computing, and even for efficient desktop computing, and survey the current and upcoming hardware that all programmers will have to use: CPUs, GPUs, and various mixes of the two. Although the programming techniques involved are often much simpler than the message-passing methods used for massively parallel computing, they still require forethought coupled with realistic expectations. We will learn the standard technique for programming multi-core CPUs, OpenMP, which will put us in an excellent position to learn its GPU-oriented counterpart, OpenACC. OpenACC is a new standard of great interest because it enables programmers to program GPUs efficiently and portably without resorting to the very low-level CUDA language that has previously been the dominant option.
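
As a minimal sketch of the two directive styles (compiler flags and clause choices vary by toolchain), the same loop is shown below with an OpenMP pragma for CPU threads and an OpenACC pragma for accelerator offload; compilers without directive support simply ignore the pragmas and run the loops serially.

    /* The same vector operation with an OpenMP pragma (CPU threads) and an
     * OpenACC pragma (accelerator offload). Build with a directive-aware
     * compiler, e.g. enabling OpenMP and/or OpenACC support; otherwise the
     * pragmas are ignored and the code runs serially. */
    #include <stdio.h>

    #define N (1 << 20)

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* OpenMP: distribute the loop iterations across CPU cores */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        /* OpenACC: offload the same loop to an accelerator such as a GPU */
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expect 6.0 after both passes */
        return 0;
    }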

John Urbanic is a Parallel Computing Specialist at the Pittsburgh Supercomputing Center and has been involved with parallel high-performance computing for 20 years. He has won the Gordon Bell Prize for massively parallel earthquake simulations and has developed code and algorithms that scale well to hundreds of thousands of cores on the largest platforms. A physicist with a B.S. from Carnegie Mellon University and an M.S. from Pennsylvania State University, John has worked in a variety of scientific domains and in areas of practical impact on scalable computing performance, from I/O to mathematical algorithms.


John Wofford
HPC Growing Pains: IT Lessons Learned from the Biomedical Data Deluge

Center for Computational Biology and Bioinformatics (C2B2)
Columbia University

Over the past few years the amount of biomedical data has grown dramatically, and along with this growth the sophistication and complexity of the tools used to process, store, and analyze the data have grown in kind. In the past four years we have seen a greater than 30x increase in both storage and computing requirements at C2B2. This increased demand has pushed us from having a few small clusters and file servers in data closets to housing a Top500 supercomputer and a distributed “scale-out” NAS platform in a power-dense 3,000 sq. ft. datacenter specifically designed for research computing. In this talk I will outline some of the lessons we have learned in this rapid expansion, emphasizing concerns surrounding large-capacity, high-performance data storage.

John Wofford is the Director of Information Technologies for the Center for Computational Biology & Bioinformatics (C2B2) at Columbia University. His responsibilities also extend to:

  • Herbert Irving Comprehensive Cancer Center (HICCC),
  • Institute for Cancer Genomics (ICG),
  • J.P. Sulzberger Columbia Genome Center, and
  • Columbia Initiative in Systems Biology (CISB).

Wofford received his Bachelor of Liberal Arts from St. John's College, Santa Fe, NM. Before coming to Columbia, he worked in Los Alamos National Laboratory's T-06 (Theoretical Astrophysics & Cosmology) division, where he researched supercomputing applications for astrophysical data and simulation.


Lei Xie
Bringing the Power of High Performance Computing to Systems Pharmacology

Department of Computer Science
Hunter College, City University of New York

http://compsci.hunter.cuny.edu/~leixie/

Recent advances in computational techniques and high performance computing have enabled the prediction of proteome-wide drug-target networks. Concurrent developments in systems biology allow for the prediction of the functional effects of system perturbations using large-scale network models. Integration of these two capabilities with structure-based drug design provides a framework for correlating protein-ligand interactions with drug response phenotypes in silico. This combined approach has been applied to repurpose existing drugs to treat infectious diseases and to investigate the hypertensive side effect of the cholesteryl ester transfer protein inhibitor torcetrapib in the context of human renal function. Bridging high performance computing with systems pharmacology has important implications for drug discovery and personalized medicine.

Lei Xie is an Associate Professor of Computer Science at Hunter College and in the Ph.D. programs in Computer Science, Biology, and Biochemistry at the City University of New York. He currently works in the areas of bioinformatics, systems biology, and drug discovery. His research focuses on establishing causal relationships between genotypes and phenotypes and on revealing mechanisms of action of existing and preclinical drugs. The primary goal is to bridge basic sciences with clinical research by developing and using tools drawn from a broad range of areas in computer science, such as machine learning, computational geometry, graph theory, and image processing.

