Archive for Cyborgs

Machine intelligence – where did it go?

Fiction has tended to regard artificial intelligence as a threat or challenge to humanity. The science fiction film 2001: A Space Odyssey has the computer HAL (one letter back in the alphabet from IBM) plotting to dispose of the human copilots it despised. Blockbusters such as The Matrix and Terminator are similarly dystopian. Steven Spielberg’s AI is unusual in that it is the machine (in a human embodiment) that is persecuted. But if fiction needs big narratives to make a point, reality tends to be more complex and ambiguous, a place where the value of machine intelligence is contested.

The impressive growth in the sophistication of computer technology through the latter part of the 20th century, coupled with laboratory experiments in artificial intelligence, convinced many that fiction would soon become fact, and that intelligent machines would relieve us of those tiresome tasks that require us to think.

So where are the intelligent machines now? They didn’t arrive fully formed like their fictional counterparts, but sneaked into our lives in the shape of AI programs in everyday applications. Google uses automatic class detection, an AI classification technique, to link documents to search terms. Not very exciting maybe, but it’s been the search engines that have levered the web into everyday life. More recently, both Google and Microsoft have been investing heavily in speech recognition software to enable voice-driven mobile search technology.
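To give a flavour of what this kind of everyday AI looks like, here is a minimal sketch of automatic document classification. It is an illustrative stand-in, not Google’s actual method: it uses scikit-learn’s naive Bayes classifier, and the toy documents and topic labels are invented for the example.

```python
# A toy document classifier: learns to assign topic labels to short
# documents so they can be linked to search terms. Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: documents paired with topic labels.
docs = [
    "how to train a neural network",
    "stock market closes higher today",
    "new species of frog discovered",
    "central bank raises interest rates",
]
labels = ["technology", "finance", "science", "finance"]

# Turn each document into word counts, then fit the classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)

# Classify an unseen document.
print(model.predict(["interest rates and the stock market"]))
# -> ['finance']
```

Unglamorous as it is, this sort of statistical pattern-matching is what quietly replaced the fully formed intelligent machine of fiction.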

On a more familiar dystopian theme, the US government has a research programme on “sentiment analysis”: AI programs that automatically inspect publications in the US and abroad for unsympathetic or negative opinions of America. And, of course, there’s always robot wars.


Who’s afraid of a cyborg future?

We probably now have a closer relationship with our technologies than we do with pets. Some may be willing to forgo them, and some communities prosper without them, but in the West most people would feel a distinct loss if they were denied a mobile phone, car or networked computer. Where a separation has been enforced, many experience an emotional void, like the loss of a friend or even a limb. Few people are troubled by this relationship because it appears to be at arm’s length and discretionary.

One day Michael Chorost suddenly and unexpectedly went profoundly deaf. In his autobiography Rebuilt: How Becoming Part Computer Made Me More Human he describes the experience of having his hearing restored through a digital cochlear implant. Drawing on first-hand experience, he describes the increasingly blurry distinction between artificial and human intelligence, and gives a unique perspective on the difficult relationship between humans and technologies. He talks lucidly of his cyborg identity on AfterTV.

But what about Chorost’s claim that his digital implant made him feel more human? Restorative prostheses such as Chorost’s have a long history that includes spectacles, pacemakers and so on, so his experience may be uncontroversial. But why wait until you’ve lost something? The most common cyborg development is technology that augments or enhances human powers, such as the enhanced vision systems used in combat to prevent so-called ‘friendly fire’ killings. In fact most consumer technologies fit into this category. Mobile phones augment our power to communicate; ICTs connect us to eBay and extend the market into our homes.

We thereby technologize the environment we live in and become low-grade cyborgs ourselves in order to thrive in it. Obvious examples are the communication and transport technologies that have transformed the world so radically over the past century. Less obvious candidates are the vaccinations and immunizations that transfigure our bodies to allow us to live in high population densities without risk of genocidal plagues of disease. Timothy Luke calls this the ‘end of nature’ or ‘denature’: humans are no longer confronted by a vast realm of autonomous, unmanageable, non-human wild activity, but by a ‘planned habitat’ that requires scientific management.


Can a machine think?

…or reason, or be intelligent, or know what it is doing, and so on. These are challenges that have exercised philosophers, artificial intelligence scientists and many others since Alan Turing first posed the question in his seminal paper for Mind, Computing Machinery and Intelligence. Rumour has it that in the 1980s Prime Minister Thatcher saw a certain Professor Fredkin on TV explaining that superintelligent machines would soon surpass the human race in intelligence, and that if we were lucky they would find human beings interesting enough to keep around as pets. She decided that the ‘artificial intelligentsia’, whom she was just proposing to give substantial research funds under the Alvey Initiative, were seriously deranged, and slashed their budget.

Ill-considered pronouncements like the one above are often made about AI by people too easily impressed by computers, but they detract from a serious point that Turing was attempting to consider. Turing was an academic mathematician who during the Second World War was engaged by military intelligence to produce mathematical procedures that could decipher coded German messages. Working through these procedures was extremely complex and could only be done by rooms full of ‘computers’, which in the days before digital computers were humans, usually women, employed to make routine calculations.

Turing was impressed that although these people were unaware of what they were doing, collectively their achievement, code-breaking, should be judged as intelligent. As he was also working on rudimentary computing devices, he was aware that the tasks done by ‘human computers’ could one day be done by machines. In his celebrated Turing Test he subsequently speculated on whether intelligence had to come from a human, or indeed whether it needed a source that was singular, human or conscious at all.

In the Chinese Room thought experiment, John Searle reverses the set-up: rather than have a machine pretend to be human, he makes the human act like a machine, attempting to prove Turing wrong – that appearance and essence can be very different. We may want to attribute intelligence based on what seems to be the case, but we have no grounds for doing so. While these may appear to be esoteric academic disputes, they address issues that confront us in our everyday lives, issues of which, it seems, our understanding is extremely shallow.
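Searle’s point can be made concrete in a few lines of code. The sketch below, with a rule book invented purely for illustration, answers questions by blind symbol lookup: from the outside the replies look sensible, yet nothing in the program understands the exchange.

```python
# A toy Chinese Room: responses come from matching input symbols
# against a rule book, with no grasp of what the symbols mean.
RULE_BOOK = {
    "what colour is the sky": "the sky is blue",
    "what is two plus two": "two plus two is four",
}

def room(symbols: str) -> str:
    """Return whatever the rule book dictates for the input symbols."""
    return RULE_BOOK.get(symbols.lower().strip("?"), "please rephrase")

print(room("What colour is the sky?"))  # -> the sky is blue
```

The person in Searle’s room is doing exactly what this program does: following rules for shuffling symbols, and nothing more.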

Carol Beer in “Little Britain” gets a lot of laughs every time she says “computer says no”, but how often do we, in our everyday lives, blindly follow instructions without any understanding of what they mean? Surely these scenarios have their representation in both the thought experiments above.

Perhaps we can take comfort in relying on people who know better than us, but what happens if the experts are doing it too? An interesting point was made by the US economist Frech in a paper titled European versus American Economics, Artificial Intelligence and Scientific Content, which noted that it was the US economists who won all the Nobel prizes. Unlike their European counterparts, the US academics were concerned almost exclusively with symbols and the relationships between them, and had little or no appreciation of what they meant in the outside world – just like the person in the Chinese Room.

So if Nobel prizewinners are doing it, what hope is there for the rest of us?
