From the archives, 2014:
The movement of the robotic face of the Geminoid DK robot is uncanny. It is almost… but not quite… perfect, and it has a vaguely disquieting effect, looking remarkably like Henrik Schärfe, who was part of its creation. Other robots in this generation are becoming capable of ever greater emulation of human expression and eye-to-eye recognition… they can even smile. With some you have to do a double-take at first glance, just to check what you are really looking at. It is easier, perhaps, with the cartoon-character faces… less unnerving. There is a science to that; the unease is referred to as the ‘uncanny valley’, where the almost-but-not-quite-perfect representation of a human face, for example, is profoundly disturbing to us.
The theory goes that while we can readily accept a quasi-human intelligence packaged as an android when it looks like a robot, the closer it gets to being humanoid the more disturbing we find it… until, with a perfect imitation of human form, we are once again able to empathise with the machine.
The current generation of David Hanson’s character robots have a remarkable, evolving intelligence. I’m no scientist, but from what I understand the creators have installed a vast database and a connection to the web, so that by using facial recognition and imitation these robots can make and maintain an expressive conversation with people. Analysis of words, speech patterns and learned, observed behaviour allows them to interact individually with their interlocutors, changing their behaviour to suit each person they encounter. Their own speech draws upon that huge database of knowledge, programmed patterns of speech and response… and even humour. The phrases are not pre-determined, and in several of the videos I have watched the creators themselves seem genuinely surprised by some of the responses the machines give.
Many see the creation of super-human artificial intelligence as the goal to aim for; others, such as Professor Stephen Hawking, warn that it might well be our greatest creation… but could also be our last. The debate continues about how we ensure a moral and ethical code for AI, and science fiction has explored that idea from many angles over the years. You have to wonder how we would fare in a world where justice assimilates all the facts but none of the human emotions that lead us to act the way we do.
I couldn’t help pondering, and for myself I have to wonder… both with the eyes of a child who sees sci-fi fantasy becoming reality and with the mind of the adult who questions. Although the possibilities are incredibly exciting, they also raise some fundamental questions. What, actually, is the difference between the programming of the human brain through the learning, observation and experience of life and the advanced AI that seems almost with us? Every word, phrase, or idea is born of the data we have acquired, one way or another. Even the expressions of basic emotion can be traced to chemical reactions.
We are an organic life-form. Or does that simply mean we wear out and our parts are less readily replaceable? Our memories may be incredible, but our ability to recall stored information will likely never match a super-computer for speed or accuracy. And, of course, we can create miniature versions of ourselves, giving life to babes who take years to mature into adults… which, from the point of view of pure efficiency, can’t be called ideal. Robots can build and programme robots a darned sight quicker than we can raise children.
We could say it is the soul that defines us, that separates us from being no more than organic computers… For many that is answer enough: that spark of the divine within each of us… but not everyone believes in the soul. “For those who believe, no proof is necessary. For those who don’t believe, no proof is possible,” wrote Stuart Chase. How, then, do we separate ourselves from the programming that leaves us far behind the coming generation of androids, in a way open to believer and non-believer alike?
I think the answer to that lies in the higher human emotions. Logic can compute justice, but can it know mercy? Although the designers are creating programmes to simulate empathy, can it ever really be felt, and can a machine ever truly understand compassion? The programming of a machine is based on logic… could it understand how a parent can lay down their life for a child, or a man risk his own for a dog in danger? All these things stem from the one thing… Love.
And Love knows no logic.
A lot to mull over in this post. Have you changed your mind any since you first wrote it?
No, Bernadette, I haven’t. We can know Love in a way no robot can… they may simulate it, but they will never feel it.
I think it’s the people who work in this field we should worry about. I don’t think that, in their pursuit of AI machines, they care about the consequences of developing the perfect machine. Driverless trucks transporting goods are already very close, but what happens to the thousands of drivers put out of work?
There lies just one of the problems, Mary. From automated checkouts to driverless vehicles, all built by robots, we are potentially making ourselves redundant.
Why humanize a robot? They will never be human and we don’t need fake humans.
I agree.
Fascinating and frightening. I had trouble believing that video was actually of a robot!
It is getting more and more difficult to tell… at least on film.
This is scary
It is…
I can’t help but wonder if androids would be a better alternative to our human legislators, at least here in the US! 😉
Not unless they can truly understand humanity… with all its foibles. Though I can see what you mean 😉
This reminds me of Robin Williams’ Bicentennial Man. I would prefer highly sophisticated computers to look like machines, not people. There are those who think that if robots become sophisticated enough, humans will become irrelevant.
I think it will be an interesting and potentially fatal future if we forget the purpose of tool-building.