Artificial intelligence started, like many aspects of technology, as something only science fiction could dream of. Machines that could think – what a concept!
In the 1940s, computers were already a reality. The US Navy even had a computer – the Torpedo Data Computer – that did complex trigonometry and was small enough to fit aboard a submarine. World War II saw the use of other computers, like Britain’s Colossus, to break military and diplomatic codes. ENIAC, built by Americans, was faster even than Colossus, and an impressive feat of technology.
Glen Beck (background) and Betty Snyder (foreground) program ENIAC in the Ballistic Research Laboratory's building 328. (U.S. Army photo)
We can agree that figuring out how to hit a moving underwater target certainly qualifies as thinking, but it doesn’t quite reach the level of reasoning. Our quest to build a machine that could both think and reason like a human became a science fiction dream – and in the 1960s, a nightmare in the form of the HAL 9000 computer from 2001: A Space Odyssey.
By the 1990s, though, artificial intelligence was much more the stuff of reality than sci-fi. In the 21st century, many of us carry a sophisticated form of AI in our pockets every day in the form of Siri, a feature of Apple’s iOS platform. I often hear my daughter say, “Hey Google” to her computer to take advantage of that company’s AI. AI is here to stay, it would seem, and we’ve known that for a while.
Because artificial intelligence has advanced so quickly, we’re constantly offloading tasks to AI. Quite often these tasks involve rote work – monotonous jobs that we could, of course, do ourselves but see as a waste of time, because data sorting just isn’t fun. After all, a computer doesn’t get bored; it just does what we tell it to. It doesn’t need to sleep, doesn’t take smoke breaks, doesn’t complain to HR about the lighting in the server room – at least not yet!
Sifting through thousands (or even tens of thousands) of news stories an hour became something we quickly shunted to AI systems. There’s some complicated programming involved, but we rely on programmers to be smart and resourceful, so it wasn’t long until AI systems emerged that could pick out the needle about Goldman Sachs from the haystack of the day’s Wall Street news. As AI became more refined, the systems were able to throttle the firehose flow of information into a trickle of useful, actionable news.
With any kind of technological advancement, getting halfway to the goal is relatively easy. Each subsequent halfway point, though, is harder to reach, and the difficulty often grows exponentially rather than linearly. By the time 90% efficiency is achieved, millions of dollars and countless man-hours have been spent.
The human factor in content curation, which fell out of favor as computers became highly sophisticated, is making a strong comeback because – thanks in large part to social media – the trickle of information has once again become a river of data. AI systems can tell you which articles have the keywords you’ve specified, but they lag in deciding whether those articles are ones you really should read. Humans are able to offer two things that computers can’t.
- Humans understand the difference between relevance and importance. An AI system can pick out articles that mention “the Coca-Cola Company,” “profit,” “earnings,” and “quarter,” but it’s far more difficult to separate an earnings announcement from an announcement about the earnings announcement. They’re both relevant, but only one is important.
- Humans communicate better with other humans than they do with computers, even sophisticated AI systems. Trying to explain the difference between an earnings announcement and an announcement about an earnings announcement to a computer might be something an experienced programmer can handle, but a CEO, an executive VP, a senior manager or a line-level employee is going to have trouble accomplishing that task.
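The relevance-versus-importance gap described above can be illustrated with a minimal sketch. This is a hypothetical example, not any production system: a naive keyword filter (the kind of matching the article describes) happily flags both a real earnings announcement and a mere announcement *about* the announcement, because both contain the same keywords.

```python
# Hypothetical sketch: a simple keyword filter cannot separate
# relevance from importance. The keywords and headlines below are
# invented for illustration.
KEYWORDS = ["coca-cola", "earnings", "quarter"]

def is_relevant(headline: str) -> bool:
    """Flag a headline if it contains every keyword (case-insensitive)."""
    text = headline.lower()
    return all(keyword in text for keyword in KEYWORDS)

headlines = [
    "Coca-Cola reports record earnings for the quarter",      # the real news
    "Coca-Cola to announce quarterly earnings next Tuesday",  # news about the news
]

# Both headlines pass the filter, even though only the first
# is the announcement itself.
flagged = [h for h in headlines if is_relevant(h)]
print(flagged)
```

Both headlines come through the filter; telling them apart requires the judgment a human reviewer supplies, which is exactly the 10% the article argues still matters.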
I’m an example of just how important the human factor is when it comes to content curation. Before I came to Acquire Media, I worked for a company with a robust AI system that sorted, classified and categorized content. It worked well, and when the company hit a rough patch financially, management decided the AI worked well enough that they no longer needed any human editors. The layoffs eliminated the entire editorial department.
About a year and a half later, they started looking into why they were losing customers faster than they were gaining new ones. One of the biggest complaints was the quality of the content. “But our AI is awesome!” they said. “How can this be?” The problems were varied, but they manifested in content categories with accuracy sometimes as low as 20 to 25%. They hired me a few months later, and I spent the better part of the next five years tuning and tweaking the AI and the content categories to get back up to that magical 90% accuracy level we sought.
The human factor is critical to content curation because, in the end, having a human review the content yields the best results and gives any business the best return on investment for its content dollars. It’s not the 90% of the work an AI system can do that makes the difference in the business world; it’s the 10% that still has to be done by a real live person that is truly critical.