2009 HPC Day >> Keynote Speaker
Friday, April 3rd, 2009 1:10pm - 2:20pm, RBC 85
The anticipated availability of massively parallel peta- and exascale computers in the next few years offers the climate community a golden opportunity to dramatically advance our understanding of the Earth’s climate system and climate change, if only these machines can be harnessed to the task. Unfortunately the fit is not perfect. First, massively parallel systems impose stringent and unavoidable Amdahl’s-law requirements on application scalability. Second, the trade-off between resolution and integration rate, both critical factors in climate modeling, is severe. Third, the increasing complexity of modern supercomputer systems, e.g. in the number of cores on a chip and the number of chips in a system, heightens the tension between architectural trends and programmability. Finally, the size and complexity of climate applications make them difficult to port, adapt, and validate on new architectures.
This talk will discuss on-going efforts within the DOE SciDAC and NSF PetaApps programs
to both seize this important scientific opportunity and address the increased complexity
of petascale systems. Efforts to develop lightweight, incremental, and beneficial scaling
improvements on existing ocean, land and sea-ice components of the Community Climate System
Model (CCSM) will be demonstrated and discussed. The scalability and performance of
these components have improved to the point that simulations coupling a 50 km
atmospheric component to 10 km eddy-permitting ocean and sea-ice components are now
being attempted or contemplated at Lawrence Livermore National Laboratory, the National
Energy Research Scientific Computing Center, the National Institute for Computational
Sciences, and elsewhere.
A 25 km version of the High-Order Method Modeling Environment (HOMME), a scalable dynamical
core currently being evaluated within the Community Atmosphere Model (CAM), will soon be
tested coupled to CCSM. Pushing to even higher, cloud-resolving resolutions will require
a breathtaking number of innovations in numerical methods and computer architecture.
Dr. Richard Loft is the Director for Technology Development in the Computational and Information Systems Laboratory of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. With a B.S. in Chemistry and an M.S. and a Ph.D. in Physics, Dr. Loft has been involved with massively parallel computing since joining Thinking Machines Corporation as an Application Engineer in 1989. Throughout his career he has contributed to the understanding and effective use of parallelism as applied to the National Science Foundation's Grand Challenge simulations. He has made significant contributions to the design and performance of a variety of Earth System models, and he helped develop one of the first climate simulation schemes that coupled an atmosphere model with an ocean model. He contributed to a team that developed an efficient spectral-element-based primitive-equations dynamical core on the cubed-sphere. This core has evolved into the High Order Method Modeling Environment (HOMME), which is currently being integrated with the CCSM Community Atmosphere Model (CAM) to transfer its capabilities to a broad community of climate researchers. The algorithmic aspects of this work were recognized with an honorable mention in the IEEE/ACM Gordon Bell competition at the international Supercomputing 2001 conference.
In 2005, Dr. Loft was co-PI on an NSF MRI grant that brought a 2,048-processor IBM Blue Gene/L system to the Colorado Front Range. The successful deployment and use of the Blue Gene/L system as a computational science research platform has led to 59 publications at last count. Since 2005, Dr. Loft has led NCAR's participation in the NSF-funded TeraGrid project as resource provider principal investigator (RPPI), and he successfully deployed the IBM Blue Gene/L as a TeraGrid resource on August 1, 2007. In 2007, Dr. Loft established the Summer Internship in Parallel Computational Science (SIParCS) program at NCAR.
Each summer, the 10-week program brings upper-division undergraduates and first- and second-year graduate students into contact with practical, HPC-related applied mathematics and computational science problems derived from the mission and needs of NCAR's Computational and Information Systems Laboratory.