Computing has been essentially the same for over sixty years. An input signal is processed by a device (a switch, an amplifier, a logic gate, and so on) and then passed on to another device where it is processed again, eventually producing some kind of output, which could be a flashing light or the information you are reading now. Of course, things have moved on from the early days, when replacing burned-out vacuum tubes was a major task, but despite advances in power and speed driven by better control over device and material parameters, the basic von Neumann architecture has remained essentially the same.
Of course, nature has invented far better ways of doing things. The human brain stands at the pinnacle of evolutionary information architecture, and DNA stores information at a higher density and with greater persistence than anything we have ever invented. We can see fragments of our genetic code reaching back almost to the first life, whereas I bet the collection of DVDs in my living room will be as unreadable as video tapes or floppy disks within ten years.
So the development of technology has to take two tracks. Huge effort goes into making things better, faster, and cheaper (or at least into maintaining decent margins); in the semiconductor industry, if you don't keep up with your rivals you are finished pretty quickly. At the same time, there is always the search for new and disruptive paradigms. But that is tough when you are running on the hamster wheel just trying to maintain position, and as many companies found with their nanotechnology efforts, anything unrelated to shipping silicon out of the door tends to be frowned on by management and shareholders more concerned with the quarterly numbers.
IBM's announcement of the first working cognitive computing chips, covered nicely by Dean Takahashi at VentureBeat, is one of those potentially disruptive technologies, but don't expect the next iPad to be powered by one. While nanotechnologies have given us the tools to at least partially understand how nature works, replicating it is a different matter. It is a very different architecture, one where connections are far more important than raw speed. But every advance in materials science, nanotechnology, and neuroscience takes us a step closer to developing something that lets us get off the treadmill of Moore's Law.
Singularitarians may be disappointed to find that they won't be able to download their brains onto a chip anytime soon. What this work shows is that we can replicate some of the functions of the brain, the ones we understand, but there are many others that we cannot. Moving to organic materials, where gigahertz speeds are unobtainable but fabrication is cheap, may move things forward, helping to realise the long-cherished idea of distributed intelligence, or the Internet of Things.
And getting off the treadmill is the key to innovation. It's something that governments and large corporations just don't get. Companies are often too focused on improving shipping products to look at anything else, and governments fundamentally misunderstand how innovation works, clinging to a linear model as dated as the von Neumann architecture. As a result, innovation often comes from an unexpected source, as many mobile phone makers found to their cost; five years on, many are still scrambling to replicate the iPhone.
Part of IBM's success has been due to its excursions into nanotech, cognition, and many other seemingly unrelated areas, which over time have enabled new technologies while generating substantial licensing revenues. Yes, blue-sky research is a luxury, but if your nose is too close to the grindstone, how many opportunities will you miss?