John Robb discusses possible futures for Grid Computing. I resonate with his choice #1: “An application specific backwater. Networks of voluntary PC networks slaved to work on specific projects/games and clusters of servers dedicated to specific apps.”
Grid computing is far from new: chip design groups, supercomputing shops, and folks like Pixar have been doing it for years and years. In my experience, there are two fundamental issues:
Most single large applications are very difficult to break into pieces that can run reasonably in parallel (my years as an architect of massively parallel supercomputers drove this home).
The economics of most deployed business applications are such that only a small portion of total cost is attributable to hardware. Lowering that cost while raising larger cost items, such as resource management and administration, isn’t a compelling proposition.
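The first issue is essentially Amdahl’s law: even a small serial (non-parallelizable) fraction of a program caps the speedup you can get from throwing more grid nodes at it. A minimal sketch (the function name is mine, for illustration):

```python
def amdahl_speedup(serial_fraction: float, nodes: int) -> float:
    """Best-case speedup for a program whose given fraction must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# With just 5% of the work serial, even 1000 grid nodes
# yield less than a 20x speedup over a single machine.
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(0.05, n), 1))  # 6.9, 16.8, 19.6
```

This is why applications that are already “embarrassingly parallel” (rendering frames, searching independent chunks of radio data) fit the grid model, while most single large business applications don’t.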
I think people will continue to find cool and useful applications that can harvest cycles (SETI@Home comes to mind and I can certainly imagine some bio-informatics stuff). It just won’t replace most uses of larger servers (IMHO).
Closing thought: I think it was Dave Patterson (UC-Berkeley) who raised the following analogy. Think about all of the wasted automobile-hours while cars are parked! If we could only get a really good algorithm for how to make use of them, we could dramatically reduce the need for additional vehicles! While the analogy isn’t perfect, it brings to light some of the same challenges that face general purpose grid computing.