The list of new and emerging technologies enabled by the convergence of nanotechnology, life sciences and information technology is one item longer today following the announcement from the University of Glasgow about inorganic biology.
The project head, Lee Cronin explains that “All life on earth is based on organic biology (i.e. carbon in the form of amino acids, nucleotides, and sugars etc) but the inorganic world is considered to be inanimate.
“What we are trying to do is create self-replicating, evolving inorganic cells that would essentially be alive. You could call it inorganic biology.”
But Professor Cronin has just used a number of phrases, perhaps intentionally, which will trigger yet another debate about playing God, stir up worries about what happens when his creations escape from the lab and take over the world, and bring up the subject of responsible versus irresponsible innovation.
Whether developing a technology such as inorganic biology is classified as responsible or irresponsible depends as much on your ethical and religious views as it does on the science. The only sure thing is that once the genie is out of the bottle the technology will be developed anyway, and, as with many other technologies, we have to attempt to manage it in a way that gives us the best shot at producing beneficial effects.
Responsible innovation is something that seems to be trending, at least in Europe, as a way of ensuring that new and emerging technologies do not create any unpleasant side effects. To some extent it seems similar to the precautionary principle, which has been used as an argument against everything from GMOs to nanotechnology, and can be an effective tool for swaying political opinion against any new technology.
I would suggest, however, that thinking about responsible innovation should start only when a technology reaches the stage of commercialisation, and that everything up to that point is just scientific curiosity. The howls of “what if science creates a monster?” have to be balanced against the progress that science has made over the past three hundred years, and while the products of science have not always been beneficial, we can live lives free of cholera and access whatever information we want whenever we want. It is impossible to see, from the lab bench, the final application of any technology – neither the inventors of the transistor nor science fiction writers predicted the mobile phone, and I can’t remember anyone in the dot-com era predicting Facebook or Twitter.
So responsible innovation should be something for companies to practise rather than scientists, just like open innovation. It’s an idea that fits nicely alongside the drift towards sustainability, shifting from the linear take-make-waste model that has been used ever since the industrial revolution to a more cyclical zero-waste one enabled by life sciences. But the concept of responsible innovation needs more definition. Was the development of nuclear weapons responsible innovation, as some would argue that they ended the Second World War and prevented a third one, or does their acquisition by rogue states such as North Korea render the whole field irresponsible? Was the development of polymers responsible, as it enabled huge advances in quality of life, or irresponsible, as much of the plastic waste produced ends up in landfills or in the world’s oceans?
While industry is changing, and far more questions are being asked about safety and ethics than in the mid twentieth century, the idea of responsible innovation becomes far more dangerous in the hands of governments and regulatory bodies. An increasing number of publicly funded projects require applicants to answer all kinds of questions about the ethics and sustainability of the proposed research. Adding a fluffy, ill-defined term such as ‘responsible’ to the mix raises the risk of research being judged by personal rather than scientific criteria. It would certainly be irresponsible to start demanding answers about responsibility too early, before an end use or application of the technology has been defined – something that would risk putting the brakes on innovation and add to regulatory confusion. The use of nanotechnology in food, drugs or solar cells, for example, requires vastly different regulatory structures, even if the same nanomaterials are used for each application.
Is inorganic biology responsible or irresponsible innovation? It is far too early to answer that question, and we shouldn’t even try until we know what it will be used for. It may yet prove to be a scientific dead end, in which case much of the debate about ethics, safety and regulation will end up as productive and relevant as the debate about ‘grey goo.’