Press Release

Green computing, “fast data” dominate Cornell HPC workshop


Contact: Paul Redfern
Cell: (607) 227-1865

FOR RELEASE: June 13, 2008

ITHACA, N.Y. – The Cornell Center for Advanced Computing hosted a regional high-performance computing workshop for IT professionals on May 2 at its Ithaca, New York campus. Fifty-five IT directors, CIOs and systems managers attended the event, including professionals from Corning, Kodak, Lockheed Martin, Time Warner, North Shore LIJ Health System, CBORD, Syracuse University, and Penn State.

Dell led off the event and was followed by speakers from Cluster Resources, Cisco, Intel, Microsoft, and DataDirect Networks. Cornell has a long history of collaborating with Dell, encouraging the company to enter the high-performance computing market and, subsequently, deploying Dell’s first “Top500” supercomputer. Today, Cornell continues to deploy the latest Dell technology, including a recently installed Combustion And Turbulence Simulator (CATS) cluster running parallel FLUENT, and soon-to-be-deployed Dell PowerEdge R900 and M600 clusters. Cornell operates a heterogeneous computing and data storage environment consisting of Linux, Windows, and UNIX-based clusters.

Tim Carroll, Senior Manager HPCC at Dell, noted that the biggest HPC deployment issue faced today is improperly specified power and cooling. "Get your facilities folks engaged early and ensure that they understand what the power issues are," he explained. "It's no longer just a question of doing a simple tonnage calculation." Equally important, he added, is carefully estimating what it will take to get the application up and running. While Dell deploys a very high volume of industry-standard clusters, the company is building more and more custom systems for HPC and Web 2.0 applications. One customer, for example, will deploy over forty thousand custom-built Dell servers over the next three years.
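For readers unfamiliar with the "simple tonnage calculation" Carroll refers to, the sketch below shows the basic arithmetic of converting a cluster's electrical load into cooling tons. All input figures are illustrative assumptions, not numbers from the workshop, and Carroll's point is that this estimate is only a starting point.

```python
# Back-of-the-envelope cooling estimate for an HPC deployment.
# All inputs are illustrative assumptions, not figures from the workshop.

KW_PER_TON = 3.517         # 1 ton of cooling = 12,000 BTU/hr ~= 3.517 kW of heat removal

racks = 20                 # assumed number of racks
kw_per_rack = 25.0         # assumed power draw per fully loaded rack (kW)
overhead_factor = 1.3      # assumed allowance for PDU/UPS losses and room load

total_heat_kw = racks * kw_per_rack * overhead_factor
cooling_tons = total_heat_kw / KW_PER_TON

print(f"Estimated heat load: {total_heat_kw:.0f} kW")
print(f"Cooling required:    {cooling_tons:.0f} tons")
```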

Michael Jackson, President of Cluster Resources, believes that usability is one of the biggest inhibitors to HPC becoming a mass-market technology. He detailed solutions to this usability problem as well as an innovation called the Moab Hybrid Cluster, an HPC solution that dynamically switches cluster nodes between Linux and Windows based on workload and application needs. Scheduling systems such as Moab can help decrease power consumption as well. "Power savings for idle nodes, thermal balancing, and grid level workload placement based on local power costs will ultimately deliver a more cost efficient and green sensitive computing environment," Jackson noted.
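The idle-node power savings Jackson mentions can be pictured as a simple scheduler policy. The sketch below is a minimal illustration of that idea only; it is not Moab's actual API, and the node model and threshold are assumptions chosen for the example.

```python
import time

IDLE_SHUTDOWN_SECONDS = 15 * 60  # assumed policy: power off nodes idle for more than 15 minutes

class Node:
    def __init__(self, name):
        self.name = name
        self.powered_on = True
        self.idle_since = None   # timestamp when the node last became idle, or None if busy

def apply_idle_power_policy(nodes, now=None):
    """Power off nodes that have sat idle longer than the threshold.

    Illustrative only: a production scheduler would also weigh reservations,
    wake-up latency, thermal balancing, and local power costs before acting.
    """
    now = now or time.time()
    for node in nodes:
        if node.powered_on and node.idle_since is not None:
            if now - node.idle_since > IDLE_SHUTDOWN_SECONDS:
                node.powered_on = False   # in practice: an out-of-band power-off or suspend
                print(f"powering off idle node {node.name}")
```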

Cisco announced that its Nexus 5000 Series switches will be available this month. The Nexus 5000 is the first open-standards-based access-layer switch to support I/O consolidation at the rack level. The switch is designed to meet the demands of next-generation HPC and data storage systems built on dense multisocket, multicore servers. The Nexus 5000 extends Fibre Channel traffic over 10 Gigabit Ethernet networks, consolidating I/O onto one set of cables and eliminating redundant adapters, cables, and ports. "While InfiniBand is still the way to go for applications requiring ultra low latency, 10 Gigabit Ethernet networks will meet the needs of many HPC applications while providing a lower total cost of ownership," explained Ji Lim, Consulting Systems Engineer at Cisco.
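To make the consolidation argument concrete, the quick tally below compares cables per rack before and after carrying Fibre Channel traffic over the Ethernet links. The per-server link counts are assumptions for illustration, not Cisco figures.

```python
# Illustrative cable-count comparison for rack-level I/O consolidation.
# Per-server link counts are assumptions chosen for the example.

servers_per_rack = 32

# Separate fabrics: dedicated Ethernet links plus Fibre Channel links per server.
ethernet_cables = servers_per_rack * 2       # assumed dual Ethernet links
fibre_channel_cables = servers_per_rack * 2  # assumed dual Fibre Channel links
before = ethernet_cables + fibre_channel_cables

# Consolidated: Fibre Channel traffic rides over 10 Gigabit Ethernet,
# so each server needs only its Ethernet links (and one set of adapters).
after = servers_per_rack * 2

print(f"Cables per rack before consolidation: {before}")
print(f"Cables per rack after consolidation:  {after}")
```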

Steve Lionel, Senior Member of Technical Staff at Intel, provided an update on Intel processors and discussed Intel software development tools. Intel's current 5400-series quad-core processor, featuring state-of-the-art 45nm High-K process technology, and its associated platforms are the most popular building blocks for HPC clusters. Intel will also launch a 6-core 45nm processor (code-named “Dunnington”) for 4-socket platforms later this year. An even more significant HPC platform, built on a new microarchitecture, is due by the end of 2008. Code-named “Nehalem,” it will scale from 2 to 8 cores, with "simultaneous multithreading" providing 4 to 16 hardware threads. Nehalem, with its integrated memory controller, is expected to deliver 4 times the memory bandwidth of today's highest-performance Intel Xeon processor-based systems. Its 731 million transistors allow it to feature a large Level-3 cache. Nehalem will also introduce multiple point-to-point interconnect links, called QuickPath Interconnect (QPI), each capable of several times the bandwidth of the current front-side bus.
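A back-of-the-envelope calculation shows where a bandwidth gain of that order can come from. The bus and memory speeds below are era-typical values assumed for illustration, not figures from the talk; the exact ratio depends on which parts are compared.

```python
# Peak memory-bandwidth arithmetic: shared front-side bus vs. an integrated
# memory controller. Speeds are assumed, era-typical values, not figures
# from the talk.

bytes_per_transfer = 8                 # 64-bit data path for the FSB and for a DDR3 channel

fsb_mts = 1066                         # assumed front-side bus speed (MT/s)
fsb_gbs = fsb_mts * 1e6 * bytes_per_transfer / 1e9

ddr3_mts = 1333                        # assumed DDR3 speed (MT/s)
channels = 3                           # triple-channel integrated memory controller
imc_gbs = channels * ddr3_mts * 1e6 * bytes_per_transfer / 1e9

print(f"Shared front-side bus:        {fsb_gbs:.1f} GB/s per socket")
print(f"Integrated memory controller: {imc_gbs:.1f} GB/s per socket")
print(f"Ratio:                        {imc_gbs / fsb_gbs:.1f}x")
```

Under these assumed speeds the ratio comes out to roughly 3.8x per socket, in the neighborhood of the quoted factor of four.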

The successor to Windows Compute Cluster Server 2003 will be called Windows HPC Server 2008 and will be built on Windows Server 2008 64-bit technology. Brian Hammond, HPC Specialist at Microsoft, discussed the key features of Windows HPC Server 2008, such as new high-speed networking, efficient cluster management tools, and a job scheduler whose flexibility enables integration between Windows- and Linux-based platforms. "Microsoft is developing a Windows parallel programming ecosystem that provides a rich suite of tools for multi-core and cluster programming," explained Hammond. "These tools, tightly integrated with the Windows HPC platform, are appealing to ISVs who would rather focus on optimizing their product’s performance than maintain multiple operating systems."

The workshop ended with a discussion of storage technologies led by Randy Kreiser, Chief Storage Architect at DataDirect Networks. The company's latest storage product, the S2A9900, delivers 6 GB per second of throughput using inexpensive SATA disks. With DataDirect technology, the I/O system localizes error management at the drive level, and up to 1.2 petabytes of storage fit in two racks.
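For a sense of scale, the short calculation below shows why commodity SATA drives can sustain those figures in aggregate. The drive capacity is an assumption for illustration, not a DataDirect specification.

```python
# Rough arithmetic behind the density and throughput figures.
# Drive capacity is an illustrative assumption, not a DataDirect spec.

target_capacity_tb = 1200        # 1.2 PB expressed in TB
sata_drive_tb = 1.0              # assumed 1 TB SATA drives
drives_needed = target_capacity_tb / sata_drive_tb

system_throughput_mbs = 6000     # 6 GB/s expressed in MB/s
per_drive_mbs = system_throughput_mbs / drives_needed

print(f"Drives for 1.2 PB at 1 TB each:   {drives_needed:.0f}")
print(f"Average load per drive at 6 GB/s: {per_drive_mbs:.0f} MB/s")
```

Spread across on the order of a thousand spindles, each inexpensive SATA drive only needs to deliver a few megabytes per second on average.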

Live workshops and online educational events are provided by the Cornell Center for Advanced Computing to benefit the national HPC community and are made possible thanks to the support of the National Science Foundation, New York State, and members of Cornell's corporate program. Cornell also collaborates with the New York State Grid (NYSGrid) in order to increase the impact of HPC on New York State research, education, and innovation.