Yes, Master Skywalker...?
Sean Walsh explores the moral agency that comes with cognitive structure. He asks why C-3PO shouldn't have as much right to the position as Luke Skywalker.
The chief economist of the Bank of England, Andy Haldane, said recently that developments in machine intelligence were likely to have massively deleterious consequences for the UK job market. Developing technologies will, he argued, cause a Fourth Industrial Revolution, making hundreds of thousands of jobs obsolete.
It's tempting to reply that the effect of all this could be mitigated by making the first of those jobs the ones currently occupied by senior economists. Especially those in the Bank of England. But instead perhaps we should be grateful that those economists are taking a time out from scaring us all about the consequences of Brexit – even if it's simply so they can panic us about something else.
This is one of the things Mr Haldane said: "That hollowing out is going to be potentially on a much greater scale in the future when we have machines both thinking and doing – replacing both the cognitive and the technical skills of humans".
So note that, then: it is a central thesis of Mr Haldane's analysis that machines will replace the cognitive skills of humans. There is a standard philosophical strategy that suggests itself here. I call it this: hold your horses.
When you claim that machines will be developed to the point where they "replace" the cognitive skills of humans, you are making a claim not about machines but about the nature and character of human rationality and consciousness. In assuming that such a state of affairs is possible you are endorsing the following reductive scheme: that souls are no more than minds, that a mind is nothing but a brain, and that a brain is simply a type of computer. The assumption is what philosophers call materialism: that human consciousness and rationality are identical to brain states, identical to brain function, or that they don't exist at all (some philosophers actually argue the latter, nuts though it is).
But materialism (in any of these forms) is, to put it gently, eminently contestable.
For one thing, it is very difficult to see how the rich depth, colour, texture and generally kaleidoscopic character of human consciousness can be reduced to the grey mechanism of the human brain. The story that the neuroscientists tell us about the brain mechanisms that underpin experience invariably leaves something out – what it feels like.
Second, those same neuroscientists tell us that the brain is all causation. But human persons (as opposed to beings) inhabit a world not of causes but of reasons. I have reasons for acting in the way I do, and those reasons will be invisible to the causal account of what's going on in my brain when I decide to act on them.
Thirdly, and relatedly, persons exist not simply as objects in the world but as subjective viewpoints on it. As well as "my consciousness" there is "something it is like to be me that only I can know". Any attempt to say that we are "nothing but our brains" is an attempt also to dissolve our perspective on this world, to assimilate the subject to the object. It is not clear that it is even intelligible to suppose that science could ever explain how this is possible.
These considerations do not perhaps conclusively demonstrate that a person is not a machine and that therefore a machine could never be a person (I have other arguments for that). But they do show that Mr Haldane's concern is predicated on an unexamined assertion: it may well be that the "cognitive skills" of a machine could never replace those of a human person. In which case the apocalyptic vision he offered up last week might turn out to be a sort of Project Fear 3.
But another thought occurs: if a machine comes into being that is as cognitively well-developed as a human person, why does the human person get first dibs on whatever jobs are going? With cognitive structure comes moral agency, surely? C-3PO has as much right to the position as Luke Skywalker.
Artificial intelligence theorists have an expression: garbage in, garbage out. What goes on in a computer simulation of intelligence is only as good as the parameters chosen by the programmer (in much the same way as the conclusions of the Brexit model were only ever going to be as good as the nonsense fed into the model by Mr Haldane's Treasury colleagues). The idea of a self-programming machine with the same sophistication as that of a normal human is currently for the birds.
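The "garbage in, garbage out" point can be made concrete with a toy sketch. Everything below – the scorer, the inputs, the weights – is hypothetical and chosen purely for illustration; the point is only that the machine's "verdict" is fixed entirely by the parameters its programmer picked:

```python
# A toy "simulation of intelligence": a rule-based job-automation-risk
# scorer. Its output is wholly determined by programmer-chosen weights
# (hypothetical numbers, for illustration only).

def automation_risk(routine, creative, weights=(0.9, -0.9)):
    """Return a risk score in [0, 1] from two traits, each in [0, 1]."""
    w_routine, w_creative = weights
    score = 0.5 + 0.5 * (w_routine * routine + w_creative * creative)
    return max(0.0, min(1.0, score))

# Garbage in, garbage out: flip the programmer's weights and the
# "intelligent" verdict on the very same job flips with them.
print(automation_risk(0.9, 0.1))                       # → 0.86 (high risk)
print(automation_risk(0.9, 0.1, weights=(-0.9, 0.9)))  # → 0.14 (low risk)
```

The machine has not "thought" anything; it has mechanically reflected the assumptions fed into it – which is precisely the sense in which a model is only as good as its parameters.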
There may well be something to Mr Haldane's argument. But to the extent that it rests on his claim quoted above, we need not be too concerned.