A computing strategy well known among geeks and gurus is about to burst into mainstream consumer electronics, as teams of researchers work to improve speed and efficiency with what is known as parallel computing.

Parallel computing involves breaking a large, complex problem or task into smaller, discrete components; using multiple computers or processors to work on those components simultaneously (or "in parallel"); and then assembling the separate results into a unified solution, rather than having a single processor complete each piece one after another. Ideally, this makes processing faster and more efficient, because more "engines" are working at the same time.
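
For readers who want to see the pattern in concrete terms, here is a minimal sketch in Python. The task (summing squares over a large range) and the four-way split are illustrative assumptions, not anything specific to the research described in this article.

from multiprocessing import Pool

def sum_of_squares(bounds):
    # Each worker handles one discrete piece of the larger problem.
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    # 1. Break the big task into smaller, independent chunks.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    # 2. Hand each chunk to a separate process so they run at the same time.
    with Pool(processes=workers) as pool:
        partial_results = pool.map(sum_of_squares, chunks)
    # 3. Assemble the partial results into a single answer.
    print(sum(partial_results))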

 Earlier this year, Microsoft and Intel announced a joint research initiative aimed at speeding the progress of developments made in parallel computing. (See "New research to focus on parallel computing.") The companies partnered with the University of California, Berkeley, and the University of Illinois at Urbana-Champaign to create a Universal Parallel Computing Research Center (UPCRC) at each university.

Microsoft, Intel, IBM, and others already deliver hardware and software that can take advantage of dual- and quad-core PCs.

And while the average computer user might not be familiar with parallel computing, researchers say it won’t be long before every off-the-shelf laptop is employing this programming strategy.

Parallel computing is steadily moving into consumer electronics: most laptops now have dual-core chips, and quad-core chips are gaining popularity, said Marc Snir, co-director of Illinois’ UPCRC.

Researchers at the Illinois UPCRC are focusing on how to take advantage of parallelism in those consumer devices.

"Using more transistors to speed up a single-core processor doesn’t bring any real return, so the solution that all microprocessor manufacturers are moving toward is putting more cores, more threads, onto one chip, and building a parallel computer onto a chip," Snir said.

Today, users will see chips with four cores, and each core might execute more than one sequential program, or more than one thread, he said, adding that those numbers are expected to double in the coming years.

"So, the amount of parallelism on a chip is increasing very rapidly," said Snir. "It tells us we’re getting more and more transistors on a chip, and where before we equated that to more performance on a single thread, now that translates to larger levels of parallelism on a chip."

This leads researchers to the next step: Virtually all the software running on clients, laptops, and cell phones today has been developed to run sequentially. To take advantage of the growing number of cores and increase performance, programmers must write parallel programs for the client environment: programs that break a processing task into multiple chunks that can be processed simultaneously.
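
A rough illustration of that rewrite, assuming a simple CPU-bound task (the analyze function below is just a placeholder for real per-item work, not part of any product mentioned here): the same job can be written to run one item at a time or spread across all of a machine's cores.

from concurrent.futures import ProcessPoolExecutor

def analyze(item):
    # Placeholder for real per-item work (decoding an image, filtering data, etc.).
    return item * item

if __name__ == "__main__":
    items = list(range(1_000))

    # Sequential version: one core handles every item in turn.
    sequential = [analyze(x) for x in items]

    # Parallel version: the same work, spread across the machine's cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(analyze, items))

    assert parallel == sequential  # same answer, computed concurrently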

One ultimate goal is to make parallel programming synonymous with programming, Snir said, because every computer eventually is going to be a parallel computer, and ideally every program will run in parallel.

Parallelism has been present for years in the worlds of supercomputing, graphics processing, and gaming, but it’s now becoming more common in general computing. Some graphics processors have 10 concurrent threads, and IBM’s Cell processor, used in video game consoles, has eight or nine concurrent threads, Snir said.

Historically, such technology has been aimed at those who needed advanced computing for scientific discovery, said Dan Reed, multi-core computing strategist for Microsoft.

"The doubling [of processing speeds] will continue for the foreseeable future," said Snir. "When we speak of long-term research, we think of laptops with portable devices that are as powerful as a supercomputer today–but of course, a laptop today is more powerful than a supercomputer from the 1970s."

He added: "Portable and mobile devices in 20 years may have 1,000-way parallelism."

As for how applications will take advantage of the performance parallel computing offers, Snir said the general expectation is that much of it will go toward providing higher-level, more intelligent interfaces to the user.

More progress is still needed in that area, but developments such as speech and image recognition, combined with a more intelligent analysis of information, could be in our future.

Better real-time graphics could replace current interfaces, for instance. A "portable digital assistant" could know everything its owner has done, see what that person sees, and retain information about the user’s interactions and goals, Snir said, noting that such interfaces are far from reality today.

Reed’s view of parallel computing’s future echoes Snir’s, and he said that computing systems are becoming much more environmentally aware, not just in a "green" sense, but in modes of interaction.

"Speech recognition and vision and image recognition [are leading to] a much more natural interaction, where we move from a world where the computer is something on which you type and read responses to one where you think about moving closer to a semi-intelligent system, where it can observe interactions, moods, and information based on contexts–those are more natural, effective interactions," he said.

And to do those things efficiently, the world needs more computing power than it has now.

"That’s where multi-core computing comes into play," Reed said. "In the mobile devices we take for granted now, with each substantive increase in computing power, we can start to embed intelligence into everyday objects."

Researchers are already working on graphics applications and a variety of smart applications relevant to the portable environment. They also want to make parallel programming simpler and more approachable for mainstream programmers.

Exposing parallelism in a way that makes it seem powerful but still manageable will require more sophisticated programming environments, which Snir likened to a new car.

"Under the hood it’s sophisticated, but that sophistication makes the operation easier," he explained. "We want to hide that complexity under the hood."

Snir said parallelism hasn’t become completely mainstream in the programming world, "because it has been the focus of relatively few highly skilled programmers, and the market is small."

"As parallelism goes mass-market, we can think of more sophisticated programming environments and more specialized tools that deal with different forms of parallelism," he said.

Reed said researchers also are working to address perceptions and misconceptions about the computing field, and to make sure students understand what it really involves.

"Computing, in a sense, is really about logical problem solving, and it’s applicable to essentially any domain. Programming is only a small piece of it," he said. "Problem solving, for example, is applicable in many domains, and computing teaches you many of those fundamental skills."

Those behind parallel computing want to make it easy, so mainstream developers can craft applications that continue to deliver what we’ve "taken as a birthright," Reed said: faster computers.

"If we are going to continue to do that," Reed said, "we have to change the way that we develop software. That has some pretty deep implications [for] how we teach people how to develop programs, and that speaks to K-20 computing education, not only in rethinking computing and writing software, but also day-to-day issues of how we do it in practice."

Reed said the world has traditionally thought of sequential programming as the norm, and parallel programming as a special case.

"We’re about to turn that on its head," he said.
