Having spent a few weeks looking at public engagement with science, and not being particularly impressed, I found a Nature article by ethicist Jens Clausen concerning brain–machine interfaces a breath of fresh air. Unusually for an article on ethics, it deals with the facts and resists the temptation to imagine some kind of dystopian or utopian future that would throw up a whole slew of far more interesting and complex ethical issues.
Rather than getting worked up into a lather over the “Ethics of Killer Robots” (I’m not joking: this is a very serious issue for some members of one of the sub-branches of nanofiction, and I’m sure they’ll be in touch with me at some stage to castigate me for belittling them), Clausen concludes simply that
Brain–machine interfaces promise therapeutic benefit and should be pursued. Yes, the technologies pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy. Ethics is well prepared to deal with the questions in parallel to and in cooperation with the neuroscientific research.
There, that solves the issue quite neatly, and hopefully we can all now get back to getting on with the future rather than worrying too much about the consequences of things yet to be invented, which is a pointless exercise. Here’s why.
James Burke had an interesting series in the 1970s called Connections, which neatly explained how one technological breakthrough or societal change enabled another round of disruption. For us, looking back through history, it is quite obvious that the mechanisation of agriculture led to huge productivity improvements, meaning that people could live in cities without starving to death and get on with things more interesting than tilling the soil: science, philosophy and literature, for example. What many of the public engagement projects attempt to do is to turn the process around and imagine what a piece of technology or science could grow into at some point in the future.
(As an aside, or perhaps a disclaimer: since an increasing part of my consultancy work is adding current trends to common sense via some rather complicated maths to put well-defined probabilities on various things occurring, I have to admit that not all futurologists are mad, nor are all technology predictions fantastical. However, a disappointing number of predictions take a rather linear, one-dimensional view of technology and simply envisage ‘things’ being cheaper/faster/smaller, without taking into account the way that ‘things’ are used or whether the underlying science will survive the quite extreme tests that both the free market and peer review will subject it to.)
Can anyone name a single case where any technology prediction over a period of more than ten years has been anywhere near correct? After all, shouldn’t we all have atomic-powered flying cars and robot butlers by now? If I suggested that it should have been possible to predict the invention of mobile phones or the Internet on the basis of Michael Faraday’s 1839 experiments, most people would think I was barking mad, so it beats me why people think it rational that, by observing a nanotechnologist fiddling about with a nanotube in a lab, any conclusions can be drawn about the effect of a technology on society and all its associated ethical baggage.
Trying to predict what sort of societal changes science will engender in ten or twenty years is pretty much impossible. Where I sit, in the middle of the City of London, most people will tell you, off the record and unfortunately over a beer these days rather than over a magnum of Krug, that they really don’t have a clue what will happen next week. So claiming to be able to predict the societal and ethical effects of a few bits of science is so ridiculous as to cause any reasonably sane person to suffer apoplexy at the thought of spending money on it.
And we thought the bankers were mad?
3 Comments
Glad to see literature made it onto the list of worthwhile pursuits. Beer snobs get no respect.
Seriously though, it is bizarre that when time-to-market has been so significantly reduced, it still seems to take 20 years minimum to effect real change. I’m thinking about working in the magazine business in the early 80s and art directors debating what ‘digital photography’ would mean. 20 years later they finally got a chance to find out.
Tim,
Agree that predicting the unpredictable is hardly productive. But anticipating the possible can be helpful, in that just occasionally it might help avoid a right royal cock-up!
Author
Andrew, good point and I agree, but it has to be done as quantitatively as possible, and I think that is something missing from many current predictions. In the financial world we tolerate a certain level of risk, and especially now it is important to have an idea of the probability of various things happening – credit defaults, grey goo, etc. – and these numbers are constantly revised (or should be). Obviously it is an imperfect science, but decisions need to be justified and asses need to be covered, so there has to be some methodology behind the predictions. If there is, then you can trust your gut and the various other intangibles that separate a triumph from a cock-up; if not, the process becomes totally random, which wastes time/money/careers.
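The idea of constantly revising a probability estimate as new evidence arrives can be sketched with a simple Bayesian update. This is purely an illustration, not the author’s actual method: the event, the prior and the likelihood numbers below are all hypothetical.

```python
# Hypothetical sketch: revising a probability estimate as evidence arrives,
# using Bayes' rule on a single yes/no event. All numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(event | observation) given a prior and the two likelihoods."""
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Start with a 5% estimate that some technology is widely adopted in 10 years.
p = 0.05

# Each observation: (P(obs | adoption), P(obs | no adoption)).
observations = [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2)]
for lik_true, lik_false in observations:
    p = bayes_update(p, lik_true, lik_false)
    print(f"revised probability: {p:.3f}")
```

The point is the methodology rather than the numbers: each new piece of evidence moves the estimate by an auditable amount, so the revision process can be justified rather than left to gut feel alone.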