The massive data centers needed to store and run a university’s information systems eat up huge amounts of energy and staff time. To help curb the costs associated with these resources, researchers at Carnegie Mellon University in Pittsburgh, Pa., have opened a dual-purpose facility that is both a working data center and a research vehicle for the study of data center automation and efficiency.

The 2,000-square-foot Data Center Observatory (DCO) has the ability to support 40 racks of computers, which would consume power at a rate of up to 774 kilowatts–more than the combined consumption of 750 average-sized homes, according to Carnegie Mellon. In addition to supporting a variety of Carnegie Mellon research activities, from data mining to simulations and three-dimensional visualizations, the center also will be used to study dense computing environments.
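As a rough back-of-the-envelope check, those figures work out to roughly 19 kilowatts per rack and about one kilowatt per home in the comparison. The short calculation below is only illustrative: the 774-kilowatt, 40-rack, and 750-home figures come from Carnegie Mellon, while the per-rack and per-home breakdowns are derived here for context.

    # Back-of-the-envelope arithmetic on the DCO's stated capacity (Python).
    # Only the 774 kW, 40-rack, and 750-home figures come from the article;
    # the per-rack and per-home breakdowns are illustrative derivations.
    TOTAL_LOAD_KW = 774       # maximum power draw cited for the facility
    NUM_RACKS = 40            # racks the 2,000-square-foot room can support
    HOMES_COMPARED = 750      # "more than ... 750 average-sized homes"

    kw_per_rack = TOTAL_LOAD_KW / NUM_RACKS               # about 19.4 kW per rack
    implied_kw_per_home = TOTAL_LOAD_KW / HOMES_COMPARED  # about 1.0 kW per home

    print(f"Per-rack load:     {kw_per_rack:.1f} kW")
    print(f"Implied home draw: {implied_kw_per_home:.2f} kW")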

The DCO’s principal research goals are to better understand and mitigate the costs and complexities of human administration, the challenges of power and cooling, and failures and their consequences. Researchers also aim to understand resource utilization patterns and opportunities to reduce costs by sharing resources among users.

“One thing that motivated the project was that people have come to us consistently over the years and said, ‘This data center stuff is really hard and really expensive,'” said Bill Courtright, executive director of Carnegie Mellon’s Parallel Data Lab (PDL), which specializes in the study of storage systems. “When you asked them to quantify that [statement], you often got the answer, ‘Because it’s just really hard and really expensive.'”

The DCO joins what Carnegie Mellon officials call “a long tradition of weaving infrastructure research into campus life,” which the university says keeps it at the forefront of technology.

The initiative is a large-scale collaborative effort primarily between Carnegie Mellon’s College of Engineering and School of Computer Science and American Power Conversion (APC) Corp., a manufacturer of commercial uninterruptible power supplies. APC says it is providing engineering expertise and its InfraStruXure network infrastructure system for powering, cooling, racking, and managing equipment in the DCO.

For departments that show interest, the DCO will centralize their research clusters, many of which are now scattered among buildings throughout campus.

“Each little research group has its own little cluster of machines,” Courtright explained. “We’re offering to run their stuff for them. They give over their administrative duties to us, and we will do their stuff for them. We want to study data centers. We’re asking that they give that part over to us–whether they’re doing simulations, or statistical analysis, or whatever. They save money, we do some of their administrative functions, and our research works better, because we have a research lab.”

Courtright continued: “For example, we met with someone this week that does very large simulations; they just bought a big slug of 40 machines. The administration is just [extra] overhead for them. They have 40 computers, they don’t have the space for them, and they just want their simulations to work.”

Courtright said it will be important to amass quantifiable data on the human administration of data functions. “Anecdotally, we know that human costs are a dominant part of the total cost of ownership for data centers, but exactly where people spend their time isn’t well understood. One of the things that makes the DCO so interesting is that, for the first time, university researchers will be able to study human costs and efficiencies in a working data center,” he said.

“We have started collecting time on all the administrators who work [in the center],” Courtright continued. “They record the amount of time they spend on each task: time spent installing machines, debugging them, et cetera. … We are looking at where, overall, the administrative time goes. I know I need 10 guys [in the center], for whatever reason, but understanding [that] reason is trickier, and nobody I know of has done that yet. Is there one reason that takes up a lot of time, or are there 10 reasons?” He concluded: “The fact that it’s a real working data center keeps us honest. We’re not looking at this in a vacuum, in lab conditions. We’re getting our hands dirty.”
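The bookkeeping Courtright describes amounts to tagging each administrator’s time entries with a task category and tallying the totals. The sketch below shows the idea in Python; the task categories and hours are hypothetical examples, not DCO data.

    from collections import defaultdict

    # Hypothetical administrator time log: (task category, hours spent).
    # The categories and numbers are made-up examples, not DCO measurements.
    time_log = [
        ("install machines", 6.5),
        ("debug hardware", 3.0),
        ("apply updates", 2.5),
        ("debug hardware", 4.0),
        ("user requests", 1.5),
    ]

    # Tally hours by task category to see where administrative time goes.
    totals = defaultdict(float)
    for task, hours in time_log:
        totals[task] += hours

    grand_total = sum(totals.values())
    for task, hours in sorted(totals.items(), key=lambda item: -item[1]):
        print(f"{task:<18} {hours:5.1f} h  ({hours / grand_total:6.1%})")

A tally like this is the raw material for the “pie chart” of administrative time that Courtright mentions later in the article.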

Energy efficiency is also one of the center’s major concerns. For some time now, the amount of power consumed by commodity servers has been increasing, as has the number of servers placed in a facility.

Greg Ganger, a professor of electrical and computer engineering and director of the PDL, said the move toward greater energy efficiency is a national one.

“These large clusters of power-hungry machines, along with rising energy prices, are generating huge energy bills, forcing data center owners nationwide to seek more energy-efficient solutions,” Ganger said.

Courtright said APC has been instrumental in shaping the center’s design. According to Courtright, traditional data centers are rarely scalable and are not designed around the available space, making them even more inefficient power guzzlers.

“We’re actually seeing machines now that draw as much as 70 percent of peak power when they’re idle,” he said. “In the past, we’ve had rooms with under-floor air pressurization. There are several vendors that make that kind of gear. We’ve had labs in the past that were plagued by hot and cold pockets of air,” leading to greater costs in cooling the systems. “There is great inefficiency in slamming stuff in the room and hoping it works.”
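To see why that idle draw matters, consider a single server that idles at 70 percent of its peak power. The arithmetic below is purely illustrative: only the 70 percent figure comes from Courtright, while the peak wattage, idle hours, and electricity price are assumed values chosen to make the numbers concrete.

    # Illustrative arithmetic on idle power draw (Python).
    # Only the 70% idle-to-peak ratio comes from the article; the peak
    # wattage, idle hours, and electricity price are assumed values.
    PEAK_WATTS = 300            # assumed peak draw of one commodity server
    IDLE_FRACTION = 0.70        # "as much as 70 percent of peak power when idle"
    IDLE_HOURS_PER_YEAR = 4000  # assumed hours the machine sits idle
    PRICE_PER_KWH = 0.10        # assumed electricity price in dollars

    idle_kwh = PEAK_WATTS * IDLE_FRACTION * IDLE_HOURS_PER_YEAR / 1000
    print(f"Energy burned while idle: {idle_kwh:.0f} kWh per year")
    print(f"Cost of doing nothing:    ${idle_kwh * PRICE_PER_KWH:.0f} per year")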

Courtright returned to the example of the department running large-scale simulations.

“Forty computers drive a lot of heat. When you centralize [the data center], you centralize cooling, then you get less cost for cooling university-wide,” he said. “Put them in one room with one big air conditioner versus a bunch of rooms with individual cooling units. At the university level, it’s a win just from a cost perspective.”

According to Courtright, APC is working with Carnegie Mellon engineers to solve the problems created by the massive amount of energy needed to run a data center, and the difficulty of designing around a more energy-efficient model.

The DCO uses APC’s InfraStruXure on-demand architecture for network-critical physical infrastructure (NCPI). APC calls NCPI “the foundation upon which all highly available networks depend.” The modular design allows racks of computing devices to be scaled to environments ranging from spaces as tight as a wiring closet to large-scale data centers like the one at Carnegie Mellon, according to APC.

In Carnegie Mellon’s center, the cooling distribution system features what the company calls hot-aisle containment: the backs of the racks face one another in rows throughout the facility, and the hot air they expel is captured in the contained aisle between the racks and removed by an exhaust system. This in-row cooling, as opposed to the in-room cooling method traditionally used in data centers, gives operators greater control over the data-crunching environment.

The hot blown air “doesn’t mix with the room air,” according to Jason Juley, an engineer for APC. This system, along with technology that helps balance the humidity in the data-center room, reportedly allows the cooling units to run at their optimum level.

“We look forward to working with Carnegie Mellon to help them solve the many challenges of designing and deploying high-density data centers in the future,” said Dwight Sperry, APC vice president of Enterprise Systems and Business Networks. “The Data Center Observatory faces many of the challenges common to all data center planners, such as space constraints, cooling high-density systems, and the unpredictability of future growth. APC’s InfraStruXure offers a space-saving, scalable, redundant, and pay-as-you-grow modular design that addresses all these concerns and [performs] at a lower total cost of ownership compared to legacy systems.”

In about a year, Courtright said, Carnegie Mellon hopes to have some initial findings from its research. The university plans to share these findings on the conference circuit, in academic and trade publications, and in other public venues. Courtright said the school might even make raw data from the unit available.

“The first thing we want to get to is a pie chart to [figure out] where the time was spent” in administering the data center, Courtright said. “If there’s this huge time sink in one dimension, that’s probably something pretty interesting. But just getting the basic pie chart will be interesting. I haven’t seen anyone be able to do that before.”

Links:

Carnegie Mellon’s Data Center Observatory
http://www.pdl.cmu.edu/DCO

American Power Conversion Corp.
http://www.apc.com