FS3, Many-Core Processing, and the Future of Computing
A recent article from HPCwire (the High Performance Computing news site) quoted Burton J. Smith (currently with Microsoft, formerly with Cray) as saying,
"There are two possible future scenarios: either computers get a lot cheaper but not much faster, or we use parallel computing to sustain continued performance improvement. In the first case, computing becomes a "mature" industry, and hardware and software become commodities. In the second, consumers will continue to enjoy the benefits of performance improvements, but successful software and hardware providers will have to embrace parallelism to differentiate themselves and compete."
Very few people comprehend that the computing world, and the world as a whole, face this challenge. I look at the situation with mixed optimism because of history. There are no guarantees we'll find a good solution. After all, the science fiction and predictions of fifty years ago pictured us with George Jetson personal rocket ships by the year 2000, along with regular trips to the moon and other achievements which have not yet come to pass. But even the civilian aerospace efforts are beginning to make real strides, as other posts on this blog have noted. As the magic 8 ball says, "outlook good" and "signs point to yes."
Over the past fifty years of computing, people have generally found a way around most roadblocks. One might even say that, for true hackers, finding the way around roadblocks is half the fun, if not more. If you read "Hackers" by Steven Levy, it quickly becomes apparent that getting around artificial barriers was a challenge relished by the founders of the personal computer revolution. The same is true for many in today's ongoing mini-revolutions.
Although a way will be found around the difficulties in many-core programming, there will be a period of inefficiency and churning as hundreds of companies and organizations and thousands of IT students and professionals theorize, experiment and collaborate on good ways to program for the new hardware paradigm. Companies such as PeakStream and RapidMind are already producing commercial software to harness the power of many-core programming. Many other formal and informal groups around the world are thinking about and working on approaches to this issue.
The FireSeed Streaming Supercomputer (FS3) project is one of the efforts underway to unleash the power of tens of thousands of linked processing cores. If you'd like to get involved in an ad hoc tech project tackling many-core computing, contact me and join the fun!
FS3 is currently focusing on the NVIDIA GPU. The 128 cores of single-precision processing muscle found on today's NVIDIA 8800 GTX for $600 may morph into 256 or 512 cores of double-precision goodness for the same price before the end of 2008. Linking ten boxes, each holding a pair of SLI'd (Scalable Link Interface) 512-core cards, would then provide 10,000+ processing cores for under $50,000.
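The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. All of the figures here (cores per card, cards per box) are this post's projections, not measured hardware specs:

```python
def cluster_cores(boxes, cards_per_box, cores_per_card):
    """Total processing cores across a cluster of SLI'd boxes."""
    return boxes * cards_per_box * cores_per_card

# Today: a single 8800 GTX with 128 single-precision cores
today = cluster_cores(boxes=1, cards_per_box=1, cores_per_card=128)
print(today)  # 128

# Projected: ten boxes, each with two SLI'd 512-core cards
projected = cluster_cores(boxes=10, cards_per_box=2, cores_per_card=512)
print(projected)  # 10240
```

At the projected $600 per card plus the cost of the host machines, that 10,240-core figure is what puts the cluster under the $50,000 mark mentioned above.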
The only question is: can the computing world effectively make use of those 10,000 cores, which will be available to every startup company, every biotechnology and nanotechnology company, every university, and many wealthy or dedicated computing enthusiasts? And what about the 10,000,000 linked cores available to large corporations and government agencies? Don't think that No Such Agency and other groups aren't already thinking about and working on this issue.
Lastly, what will happen when an exceedingly brilliant and insightful twelve-year-old has a 1,000-core desktop computer sitting in their bedroom?