
Forget artificial intelligence. It's artificial idiocy we need to worry about




Machines are good at some things and OK at others, but completely useless when it comes to understanding

 

Tom Chatfield

 

The Guardian, Monday 6 January 2014 15.00 GMT

If a computer could learn to identify you with absolute accuracy, would that mean it knew what it means to be you? Photograph: Carsten Koall/AFP/Getty Images

 

Massive, inconceivable numbers are commonplace in conversations about computers. The exabyte, a one followed by 18 zeroes worth of bytes; the petaflop, one quadrillion calculations performed in a single second. Beneath the surface of our lives churns an ocean of information, from whose depths answers and optimisations ascend like munificent kraken.

This is the much-hyped realm of "big data": unprecedented quantities of information generated at unprecedented speed, in unprecedented variety.

From particle physics to predictive search and aggregated social media sentiments, we reap its benefits across a broadening gamut of fields. We agonise about over-sharing while the numbers themselves tick upwards. Mostly, though, we fail to address a handful of questions more fundamental even than privacy. What are machines good at; what are they less good at; and when are their answers worse than useless?

Consider cats. As commentators like the American psychologist Gary Marcus have noted, it's extremely difficult to teach a computer to recognise cats. And that's not for want of trying. Back in the summer of 2012, Google fed 10 million feline-featuring images (there's no shortage online) into a massively powerful custom-built system. The hope was that the alchemy of big data would do for images what it has already done for machine translation: that an algorithm could learn from a sufficient number of examples to approximate accurate solutions to the question "what is that?"

Sadly, cats proved trickier than words. Although the system did develop a rough measure of "cattiness", it struggled with variations in size, positioning, setting and complexity. Once expanded to encompass 20,000 potential categories of object, the identification process managed just 15.8% accuracy: a huge improvement on previous efforts, but hardly a new digital dawn.
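
To make the recipe the article describes concrete, here is a minimal, hypothetical sketch in Python of a classifier that learns a statistical score of "cattiness" from examples. Everything in it is invented for illustration: the synthetic 8x8 "images", the labels and the choice of a simple logistic regression are assumptions of mine, not a description of Google's 2012 system, which learned features from unlabelled video frames on a custom-built cluster. The point it demonstrates is the article's: what such a model ends up with is a probability score, not a concept of a cat.

    # A toy sketch, not Google's system: an ordinary supervised classifier
    # trained on synthetic "images", used only to show that what it learns
    # is a statistical measure of "cattiness".
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical data: 2,000 8x8 grayscale "images" flattened to 64 numbers.
    # Class 1 stands in for "cat", class 0 for "not cat"; the pixels are random,
    # with a faint mean shift standing in for whatever real cats have in common.
    n = 2000
    X = rng.normal(size=(n, 64))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 0.25

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # All the model ever "knows" about cats is this score.
    print("held-out accuracy:", round(model.score(X_test, y_test), 3))
    print("'cattiness' of one image:", round(model.predict_proba(X_test[:1])[0, 1], 3))

Change the data, the labels or the model and the score changes with them, which is exactly why such systems struggle once the categories multiply.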

If computers remain far worse than us at image recognition, a certain over-confident combination of man and machine can elsewhere take inaccuracy to a whole new level. As arch-pollster Nate Silver noted in his 2012 book The Signal and the Noise, many data-crunching models have a terrible record when it comes to predictions – and their designers only tend to realise this when it's too late. "There are entire disciplines in which predictions have been failing, often at great cost to society," Silver argues, with fields at fault including biomedical research, national security, financial and economic modelling, political science and seismology. The Fukushima nuclear facility was designed to withstand what plenty of experts predicted was the worst possible scenario. Instead, it revealed the tragic divide between their models' versions of "worst" and reality's.

Identifying cats is a far cry from predicting the future. Yet they share a common feature: that the processes underpinning them are not something we are able fully to explain to either machines or ourselves.

This wouldn't matter if every data-led undertaking was, like Google's cat-spotting exercise, conducted as an experiment, with clear criteria for failure, success and incremental improvement. Instead, though, the banality of phrases like "big data" tends to conceal a semantic switcheroo, in which the results a system generates are considered an impartial representation of the world – or worse, an appealingly predictable substitute for mere actuality.

Yet there's no such thing as impartial information any more than there's a way of measuring someone's height without selecting a unit of measurement. Every single byte of data on earth was made, not found. And each was manufactured according to methods whose biases are baked into their very being.

When Facebook asks me what I "like", it's making the convenient assumption that I feel one of two ways about everything in the world – indifferent or affectionate. When it aggregates my responses together with a billion other people's, marvellous insights emerge. But these remain based on a model of preference that might kindly be called moronic, and that is more likely to provide profitable profiling for marketing purposes than to transform our understanding of the human mind.
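
A tiny sketch makes the poverty of that model visible. The feelings, the item and the to_like_bit helper below are all made up for illustration – this is not Facebook's actual data model – but they show how a spread of attitudes gets flattened into one bit per person before anything is aggregated.

    # Hypothetical illustration of a one-bit preference model; the data and the
    # helper function are invented, not an actual platform's schema.
    from collections import Counter

    # What seven people might actually feel about the same article.
    feelings = ["loved it", "hated it", "found it boring", "liked it",
                "liked it ironically", "didn't finish it", "loved it"]

    def to_like_bit(feeling: str) -> int:
        # The platform keeps a single bit per person: "liked" or not.
        return 1 if "loved" in feeling or feeling.startswith("liked it") else 0

    likes = [to_like_bit(f) for f in feelings]
    print("likes:", sum(likes), "of", len(likes))   # what gets aggregated
    print("what the bit discards:", Counter(feelings))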

Similarly, every measurement embodies a series of choices: what to include, what to exclude. If a computer could learn to recognise images of cats with absolute accuracy, would that mean it knew what a cat was? Not unless you redefined cats as silent, immobile, odourless sequences of information describing two-dimensional images. If a computer could learn to identify you with absolute accuracy via surreptitiously scraped data from your social media presence, phone calls and banking activities, would that mean it knew what it means to be you? Not unless you redefined a person as a series of traceable numbers monitored ceaselessly by a series of machines. Which, of course, sounds like an excellent idea to some people.

Perhaps the greatest illusion that a phrase like big data embodies, then, is that information is ever separate from life. "Personal data" sounds so much less personal than those things it touches on: conversations, meetings, trips, possessions, earnings, relationships, beliefs, self-expression. Yet there is no place where it ends and the real us begins. We are, among other things, what the world believes and permits us to be – a negotiation that demands flexibility on both sides.

As those in the business of selling computers enjoy pointing out, 90% of the data in existence was produced in the last few years. Yet it seems unlikely that we're now nine times smarter than everyone else who ever lived – while a billion bad assumptions make for nothing more than one really big, really bad answer. Forget artificial intelligence – in the brave new world of big data, it's artificial idiocy we should be looking out for.

 

• This article was amended on 7 January 2014. An earlier version referred to the exabyte as 18 zeroes worth of bits rather than bytes.
