Owning the Robots? – the rise of AI

Teletype

It’s 1950. The room we are in is more like an old school hall than the site of a leading-edge experiment in the definition of intelligence. Before us is a teletype – a machine we can type on, which will relay our message, and any response generated, across a telecommunications link to a similar machine somewhere out of sight.

That’s a fairly simple requirement by today’s standards, but back then it was how you communicated remotely.

We’ve just typed a question on our end of the teletype link: “What will you do on Tuesday?”

There is a pause while our recipient in this experiment thinks about this. Then our machine chugs into action, the paper moving up one line to allow the response to appear. A bit like an old teleprinter giving a football score, the characters appear, noisily, one by one.

“Perhaps I should ask you what you will be doing on Tuesday?” reads the message.

We smile. There is both humour and cheek in the response. For the past hour we have been trying to decide whether our correspondent on the other side of the link is a machine or not. This test for intelligence – whether we can tell the difference simply by talking to him, her or it – has been proposed by Alan Turing, one of Britain’s leading code-breakers of World War II, and the genius credited with cracking the Nazis’ ‘uncrackable’ Enigma encryption machine as part of the Bletchley Park initiative.

Four years from now, Turing will be found dead from cyanide poisoning at his home near Manchester; the half-eaten apple found beside him will never be tested for the poison. Two years before his death he was convicted of gross indecency for a homosexual relationship and, given the choice between prison and chemical castration, accepted the hormone treatment. It caused him to develop breasts, and is thought to have deepened the depression that led to his death. His home was full of advanced computing equipment, which was removed by the local police.

Our little test with the teleprinter never actually took place. Like many such mental devices that Turing put forward, it was conceived as a reliable ‘thought experiment’ in logic. His proposition was:

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”
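In modern terms, the test has a very simple shape. What follows is a hypothetical outline in Python – the judge and respondent interfaces are invented here for illustration, not anything Turing specified – of the blind, text-only interrogation at its heart.

# A hypothetical outline of the imitation game (interfaces invented
# for illustration): the judge sees only typed text, never the
# respondent, and must decide from the replies alone.

def imitation_game(judge, respondent, rounds=10):
    """Run a blind, text-only interrogation and return the judge's verdict."""
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)      # judge composes the next question
        answer = respondent.reply(question)   # hidden party answers in text only
        transcript.append((question, answer))
    # The machine 'passes' if the judge cannot tell it from a human.
    return judge.verdict(transcript)          # returns "human" or "machine"

Everything that might give the game away – voice, face, appearance – is stripped out by the teletype; only the quality of the conversation remains.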

The success of Turing’s code-breaking also led to significant moral dilemmas. It has been alleged that Churchill’s wartime government faced agonised decisions when intercepted messages gave advance knowledge of enemy attacks. If you knew that a certain convoy of ships was being targeted, did you alert it to the danger? We would probably respond ‘of course!’, but to do that repeatedly would have revealed to the enemy that the codes had been broken – and the critical, potentially war-winning, advantage would have been lost.

What followed has never been officially verified, but it has been widely spoken of. The government – and Turing himself was involved in this – had to devise a method, an algorithm, that would randomise the interventions while maintaining an acceptable level of success. Each intervention also had to be weighed in terms of its strategic importance to the war effort and the country’s future. ‘Playing God’ was reputedly one of the expressions that Turing used at that time. It was during this era that the foundations of what became known as Game Theory were laid by mathematicians on both sides of the Atlantic. Not unlike the thinking behind cybernetics (one of the early branches of ‘robotic’, artificially intelligent behaviour), this new science of confrontation and strategic warfare grew in importance until it became a key part of the planning of possible nuclear-war scenarios.
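The details of any such algorithm were never published, but its logic can be sketched. The Python snippet below is purely hypothetical – the function name, weighting and rates are invented for this article – and shows a rule that intervenes often enough to save lives, yet randomly and rarely enough that the pattern of interventions does not betray the broken codes.

import random

# Hypothetical sketch of the wartime dilemma's logic (nothing like the
# real method, if one existed, was ever published): act often enough to
# matter, rarely enough that interventions look like chance.

def should_intervene(strategic_value, base_rate=0.3):
    """Decide whether to act on an intercepted warning.

    strategic_value: 0.0-1.0 weighting of how much the target matters
    base_rate: background probability of acting, kept low so that
               successes look like luck rather than foreknowledge
    """
    probability = min(1.0, base_rate + 0.5 * strategic_value)
    return random.random() < probability

The essential trade-off is visible even in this toy: raise the base rate and more convoys are saved today, but the enemy becomes more likely to infer that Enigma has been compromised.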

Turing was concerned with what behaved as though it were intelligent, understanding that although the teletype didn’t look like a human, or even a robot, this wouldn’t matter so long as its appearance was masked. The problems generated by the cracking of the Enigma machine came about because of the success of the technology; they were not directly related to the technology itself.

Artificial Intelligence (AI) has moved on considerably since the 1950s, and now poses moral dilemmas and consequences much greater than those of revealing that codes have been cracked. AI, embedded in all manner of domestic products, now makes decisions for us, computing in tiny fractions of a second what the line of least risk might be for a given situation, or what the most efficient ‘way forward’ might be…

The problems generated by the success of the technology haven’t gone away. A recent example is the logical programming of automated ‘driverless’ cars, which use AI to control themselves – something quite unheard of in Turing’s time, and viewed as fantasy even a few years ago.

Dilemmas often centre on a ‘thought experiment’, much as Turing’s test of intelligence did all that time ago. One of the latest such mental devices, derived from philosophy, is known as the ‘Trolley Problem’. Imagine that you are standing beside a railway track, and that, coincidentally, there is a lever nearby that activates a set of points to change the direction of any train coming your way. You’re having a bad day, and it gets worse when you turn around and see that a number of people have been tied down to the main track – the normal route of the train. Before you can adjust to the shock, you look at the other track (the one the points would switch the train onto) and see that a single person is tied there. Your attention is dragged back to the main track – and you see that a runaway train is coming down the line…

You – simulating AI – have the only ‘control freedom’ possible: the train can’t be stopped and is either going to kill the group or the single person. You get to decide. Most people would switch the points so that fewer people were killed. But then, just before the train reaches you, your smartphone tells you that the people on the main track are bad guys who have just robbed a bank. Now, what do you do?
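The logic at stake can be caricatured in a few lines of code. This is a toy sketch only – the functions and scores are invented for illustration, and bear no relation to how any real vehicle or signal box is programmed – showing first the naive body-count rule, then the twist in which lives are weighted by some ‘worthiness’ score.

# Toy sketch of the Trolley Problem decision (illustrative only).

def choose_track(main_count, branch_count):
    """Naive rule: switch the points only if the branch kills fewer people."""
    return "branch" if branch_count < main_count else "main"

def choose_track_weighted(main_scores, branch_scores):
    """Twisted rule: minimise the total 'worth' of the lives lost,
    each person carrying a score supplied by some trusted platform."""
    loss_if_main = sum(main_scores)        # worth lost if the train stays on main
    loss_if_branch = sum(branch_scores)    # worth lost if we switch the points
    return "branch" if loss_if_branch < loss_if_main else "main"

# Five 'bad guys' scored low versus one bystander scored high: the
# weighted rule reverses the naive body-count answer.
print(choose_track(5, 1))                        # -> branch
print(choose_track_weighted([0.1] * 5, [0.9]))   # -> main

One extra input – the score – is enough to flip the decision, which is exactly the unease the thought experiment is designed to expose.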

This is not as far-fetched as it may first appear. Say this was a driverless car, and a connected social media platform that you trusted kept a ‘good persons score’ on each satellite-tagged person in the vicinity. In the event of your car being about to smash into the car in front of you – the one with five occupants – your car would ‘know’ that the lady with a pram on the pavement, whom you would kill if your car took its usual ‘avoid collision’ action, was a much better person, and therefore more worthy of saving.

It’s horrific… but this ‘world’ is very close to where we are now as a technological society. What’s happening in this example is that the ability to make changes in real time is forcing us to re-examine that tiny window previously covered by the word ‘accident’.

If AI can change the outcome, then who owns the programming the AI will use?

While technology is advancing massively, politics and humanitarianism seem to have been left far behind, leaving the rich and powerful to decide things – as they did with the large-scale manipulation of political data on both sides of the Atlantic during the past twelve months.

Next week we will go deeper into the fascinating but potentially nightmarish world of AI.

©Copyright Stephen Tanham 2017