The supercomputer on your desktop

Applications traditionally handled by the biggest of Big Iron are heading to the desktop

High-performance computing (HPC) has almost always required a supercomputer - one of those room-size monoliths you find at government research labs and universities. And while those systems aren't going away, some of the applications traditionally handled by the biggest of Big Iron are heading to the desktop.

One reason is that processing that took an hour on a standard PC about eight years ago now takes six seconds, according to Ed Martin, a manager in the automotive unit at computer-aided design software maker Autodesk Inc. Monumental improvements in desktop processing power, graphics processing unit (GPU) performance, network bandwidth and solid-state drive speed, combined with 64-bit throughput, have made the desktop increasingly viable for large-scale computing projects.

Thanks to those developments, a transition to "a supercomputer on your desk" is in full force.

Earthquake simulations, nuclear-stockpile simulations and DNA research are staying put on traditional supercomputers for now. But as processors gain more and more cores over the next 10 years, even those activities, or portions of them, could conceivably make their way to the desktop.

In the meantime, here are some examples of high-performance applications that are already running on smaller computers.

Building better drugs for anesthesia

Today, doctors know how to administer anesthesia-inducing drugs, and they know the effects, but they do not actually know what the drugs' molecules are doing when the patient drifts off to sleep. Answering that question requires intense computational power to see not only when the anesthetic enters the respiratory system, but also how it starts making changes.

At Temple University, researchers have developed models that measure the effects of applying anesthesia on molecules within nerve cells. The models currently run on a supercomputer, but plans are underway to perform the calculations on an Nvidia GPU cluster with four nodes. This will both save money and give researchers more flexibility to conduct tests when they're ready to do so (instead of having to wait for their scheduled time to use a supercomputer).

In that scenario, each GPU has the computational power of a small HPC cluster, and the simulation involves mathematical operations on the scale of those a GPU normally performs to, say, render pixels in a video game.
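For a sense of what that arithmetic looks like, here is a minimal sketch in Python with NumPy: one vectorized expression evaluates a toy interaction term for a million particles at once, rather than looping over them one by one. The particle count and the formula are illustrative assumptions, not the Temple group's code; GPU array libraries such as CuPy expose the same write-it-once, apply-it-to-millions idiom, with the hardware evaluating the element-wise operations in parallel.

```python
import numpy as np

# Illustrative only: a million particles with random 3-D positions.
rng = np.random.default_rng(0)
pos = rng.random((1_000_000, 3))

# One data-parallel step: the distance of every particle from a reference
# point and a toy interaction energy, computed for all particles in one
# pass instead of a million-iteration Python loop.
ref = np.array([0.5, 0.5, 0.5])
r = np.linalg.norm(pos - ref, axis=1)   # 1,000,000 distances at once
energy = 0.5 * r**2                     # assumed harmonic form, k = 1
print(energy.sum())
```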

Dr. Axel Kohlmeyer, a researcher on the project, says the best way to understand the simulation is to imagine a box filled with rubber balls interconnected by springs, where each ball is a slightly different size and moves at a slightly different rate. Some springs are stronger or weaker than others, and some of the balls move faster or react differently. In the simulation, Kohlmeyer can follow the movements of all the molecules to see the effects of anesthetics in the human body.

"Groups of particles will form and go where they like to be as determined by the magnitude of their interactions," says Kohlmeyer, explaining how the simulation evolves to the point where the interactions become balanced. Temperature variants produce vibrations and introduce new molecular activity. "The computational model is actually simple, but the challenge is you need so many millions of interactions. We do not want to just know the interactions at one point, but rather how they change over time."

Having to repeat the calculations very often is another part of the challenge, he adds.
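To make the analogy concrete, here is a minimal sketch in Python of such a ball-and-spring system, stepped forward with the velocity Verlet integrator commonly used in molecular dynamics. The particle count, spring strengths, masses and timestep are illustrative assumptions, not the Temple group's parameters; the point is the shape of the work, where every timestep recomputes every interaction, over and over.

```python
import numpy as np

# A toy version of the "balls and springs" picture: n particles in one
# dimension, neighbours joined by springs of varying stiffness, advanced
# with velocity Verlet. All values are illustrative assumptions.
n, dt, steps = 1_000, 0.005, 10_000
k = np.linspace(0.5, 2.0, n - 1)          # springs of different strengths
m = np.linspace(0.8, 1.2, n)              # balls of slightly different mass
x = np.arange(n, dtype=float)             # rest positions, spacing of 1
x += 0.05 * np.random.default_rng(1).standard_normal(n)  # thermal jiggle
v = np.zeros(n)

def forces(x):
    # Each spring pulls its two endpoints back toward a rest separation of 1.
    stretch = (x[1:] - x[:-1]) - 1.0
    f = np.zeros_like(x)
    f[:-1] += k * stretch
    f[1:] -= k * stretch
    return f

f = forces(x)
for step in range(steps):                 # the part that repeats, and repeats
    v += 0.5 * dt * f / m                 # half kick
    x += dt * v                           # drift
    f = forces(x)                         # recompute every interaction
    v += 0.5 * dt * f / m                 # second half kick
print("mean displacement:", np.mean(np.abs(x - np.arange(n))))
```

Scale that loop up from a thousand particles to millions, in three dimensions and with far richer interactions, and the appeal of running it on massively parallel GPU hardware becomes clear.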

For Kohlmeyer, the goal is to discover when the condition of not feeling anything actually occurs in the human body. This could lead to the creation of new kinds of anesthetics or help doctors determine why problems such as memory loss can occur after surgery.

Researchers at the Ohio Supercomputer Center (OSC) in Columbus, Ohio, have found that not every simulation requires a traditional supercomputer. Don Stredney, the director and interface lab research scientist for biomedical applications at OSC, points to a limitation that's common with supercomputers: Batch processes are static and run on a scheduled time frame. They cannot provide real-time interaction, so they can't mimic a real surgical procedure. Desktop workstations that cost $6,000 to $10,000 allow his team to run simulations that show, in real time, how a surgery changes a patient's anatomy, he says.
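The contrast Stredney draws is easy to see in code. Below is a minimal sketch, not OSC's software, of the real-time constraint: every frame the simulation must read the surgeon's input, update the tissue model and redraw within a fixed budget, assumed here to be 16 milliseconds, or roughly 60 frames per second. The function names are placeholders.

```python
import time

FRAME_BUDGET = 0.016        # seconds per frame (assumed 60 Hz target)

def read_tool_input():
    return 0.0              # placeholder for a haptic device or mouse

def update_anatomy(dt, tool):
    pass                    # placeholder: deform or cut the tissue model

def render():
    pass                    # placeholder: redraw the updated anatomy

for _ in range(600):        # about ten seconds of interaction
    start = time.perf_counter()
    update_anatomy(FRAME_BUDGET, read_tool_input())
    render()
    elapsed = time.perf_counter() - start
    # Unlike a batch job, there is no catching up later: if a frame runs
    # long, the interaction stutters and the illusion of surgery breaks.
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))
```

A batch job on a shared supercomputer can take as long as it needs and deliver results hours later; this loop has no such luxury, which is why the work lives on a workstation sitting next to the user.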

Stredney says his field has benefited from innovations in computer gaming, because the standard consumer GPU became much more powerful, delivering better realism at a much lower cost. His researchers use commodity PCs running standard GPUs such as those from AMD's ATI unit and Nvidia Corp., not high-end GPU clusters. When data sets grow too large for some simulations, however, they find they need to return to the supercomputer.

What drives Stredney's group back to the supercomputer, he says, is the "exponentially increasing size of data sets, images in the gigabyte-per-slice range and multiscale data sets that are now routinely being acquired at half-terabyte levels." Ever-larger data sets and the complex interaction required for real-time visual and auditory simulations "require more sophisticated systems," he says.

