Reblogged from Matthew Wright:
One of the main tropes of science fiction has to be the self-aware robot or computer – one mobile, the other not, but both presented as able to think as we do, although often better.
Often, Frankenstein-style, the AI develops malevolence. That was a trope long before HAL; virtually all of Asimov’s robot stories from the 1940s onwards were designed to counter the notion of the AI turning on its creators. Asimov’s answer – which, apparently, was proposed to him by John W. Campbell – was the ‘laws of robotics’, under which machines simply couldn’t harm humans.
Inevitably, these laws didn’t work, and Asimov knew it; many of his stories involved finding ways in which the laws failed. He spelled out the main point of failure in one of the final robot novels: all the builder had to do was program a different definition of ‘human’ into a robot. More recently, work on robotics has shown that such laws are impractical in any case, not least because they require value judgements that current machine technology cannot provide.
That also highlights the other problem – for all the work done to date and all the conceits of creating ‘smart’ machines, the AI we’ve come up with is nothing like the notion of a self-aware, thinking machine in our image, mentally or otherwise. And part of the reason is that, to this day, nobody actually knows where consciousness comes from, how it’s generated, or even what it actually is. Oh, we have some very good guesses and theories. But actually knowing? No.
Continue reading at Matthew Wright