
Artificial Stupidity - Stories of Dumb Algorithms

We have seen the future, and in many ways it’s already here. Algorithms and AI will tackle not just the 3D jobs we hate – Dull, Dirty and Dangerous – but also write contracts, diagnose and treat disease, and possibly even do away with the need for accountants altogether. That’s the hope – I mean hype – and who is to naysay it?

Nobel prize winner Paul Krugman predicted in 1998 that the internet would have no more impact on civilization than a fax machine. I certainly don’t want to commit a similarly embarrassing error commenting on whither AI: it’s here to stay, and then some. But I can’t help feeling that in the early experimental stages – which we are still in – it’s capable of breathtaking stupidity. And it’s every digital citizen’s duty to rise up and use their RI (Real Intelligence) to point out some of its lunacy. At the very least it’s great therapy!

LinkedIn was recently excited to offer me a job as Chief Compliance and Regulatory Officer for a government body. Not bad pay and conditions, but had you tried hard (I mean really hard) to come up with a role less suited to my style, competence and inclinations, you’d have been hard pressed to do better. Let me explain: this year our daughter’s heartfelt wish for her parents is that they become more compliant. It embarrasses her when we break rules and can’t follow the guidelines or instructions, often wilfully so.

Also, my personal profile – presumably the data that this piece of smart AI mined – is littered with phrases like ‘risk taking,’ ‘thinking unconventionally’ and ‘breaking the rules.’ All in the context of innovative thinking, which I teach, and not criminality of course, but still… LinkedIn had billed this as a prime opportunity made just for me, not a random mailing that could easily have suited about 2 billion others.

While we’re on LinkedIn (which makes me feel more positive about being locked out): every few months it sends me an alert saying there’s someone in Oxford I have a lot in common with, and really should connect with. His name is… Nigel Barlow! I communicate that I’d love to meet with him, but the trail goes dead there. No reply. Is this some kind of existential philosophical game the platform is playing with me, an ongoing inquiry into my inner self that will be revealed in the digital mirror of my own profile? Nope – I realise algorithms don’t joke, even in a wry, silicon-inflected way.

 

A Pure Numbers Play?

“Nigel, you’re just not getting this,” said a friend who is steeped in IT. (I think his job title is ‘Digital Architect.’)

“It’s a numbers game. A bot can send out a squillion of these ‘targeted’ ads in less time than it takes you to boot up your PC. Doesn’t matter if 99.9% of them are way off – the others hit their target.”

This struck me as an example of the old saw that a broken clock is right twice a day. I get it, but I can’t square this randomness with the claim that the message has been personalised based on my browsing data, predilections, life choices, dating history and pet names. Witness the hype from a San Francisco-based recruitment outfit that mailed me claiming their 25 years’ investment in AI ‘matches your data to a job profile in a way no human can achieve.’

So what did they offer me? An exciting role as a… Trainee Customs Agent! Plus a number of other junior office-based jobs with titles like Import/Export Agent, Junior Sales Agent and so on. How is your human RI getting on with detecting a pattern here? Not hard, is it… my website and social media profiles use a client’s description of my style of working as an Agent Provocateur.

I wrote back a nice note thanking them, but proposing that although their algorithm was clearly well-intentioned, it needed to get out a bit more. And would it like to learn a little French, to avoid the risk of unconscious bias in its future suggestions? Yes, foolish to expect an answer, but I tried to create a dialogue. No reply.

 

It’s CONTEXT, Stupid

One thing we humans can congratulate ourselves on is modulating our thinking according to the nuances and context of a situation. Usually, though not always, we can recognise that something is meant ironically or light-heartedly. Binary can’t yet manage this subtlety:

What’s digiteese for ‘tongue in cheek’?

We subliminally draw on our own mental database of situations to make judgements. You would know that a man in his 60s who has always worked for himself would be totally unsuitable for a dull office-based job where the typical applicant is aged 28. Or that for someone who is essentially rebellious, suggesting he manage ‘legal compliance within a regulatory framework’ is like appointing the Yorkshire Ripper to run the prison parole board. Well, not quite, but nearly as ridiculous, when you consider all that cool data that a million cookies have stored away on me in some cyberspace warehouse.

 

Living in the Past

What we’d expect at the very least is that algorithms could be future-orientated and proactive, rather than trapped under an avalanche of retrospective information scree. Most of us have experienced this: we buy a hedge strimmer, and then for the rest of our lives are bombarded with adverts and messages for essentially the same product. Now, while this might soothe cognitive dissonance if the new information reveals, on comparison, that you made an okay choice, it’s clearly not going to create a new buying pattern where you annually upgrade your hedge-cutting tool. One strimmer is probably enough for a lifetime – be more original, you binary oafs!

Well, sometimes the algorithm is smart. One of the first applications of guided consumer data we were impressed with was ‘if you liked this, why not try this…’ Especially with music choices on Spotify or book recommendations on Amazon. (Which has morphed into EVERYTHING recommendations, on the store formerly known for books and music and now known as THE EVERYTHING ON THE PLANET WE CAN FLOG YOU – AND MORE – STORE.)

Often the music tips were interesting and seemed to make sense. But my main human objection is that I don’t want to listen to something similar: I want to be actively introduced to different, tangential, odd or conflicting sounds. For me this is a descriptor of creative rather than artificial intelligence – human curiosity guides us towards the new and unexplored.

Nice to know that someone listening to Sufjan Stevens also likes Grizzly Bear and Sharon Van Etten, but after 15 minutes of the beautiful but heart-wrenching sound of Sufjan I’d like to try something completely different, please. After all, that newness I yearn for is the ‘nova’ in innovation. Consumer algorithms are such fuddy-duddies, stuck in the past, mining over and over the piles of data they have been amassing.

But unlike us, they never tire, and so will doubtless hit their target – us – if they just keep on trying. Which? magazine is an excellent consumer guide whose cookies should know me by now, but their mailing today tells me, “Nigel – here’s our recommendation on top car scratch removers – and best dishwasher tablets!” I’d been searching their site a lot for financial investment advice, so it’s hardly surprising they’ve nailed me so accurately. (For the algorithm, Siri or Alexa: that’s called ‘sarcasm.’ Never mind…)

You see, for now I don’t even own a car. Although I’ll be thrilled to get the technical lowdown on new ammunition for our dishwasher.

There’s an old saying that you should never ask a barber if you need a haircut, or an optician whether you should buy glasses. In a similar vein, don’t believe a digital marketer when they tell you how precise the targeting of their online ads is. They’re often trying to sell you a haircut you don’t need. For now, anyway…

 

Cotswolds, England

February 2021