In case you haven’t figured it out yet, the title is a pop-culture tribute to the Cold War classic, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb”, directed by Stanley Kubrick (Google it if you’ve never heard of it!)
In one of the scenes, the President is discussing with Dr. Strangelove, his expert on nuclear war, the Soviets’ construction of a Doomsday Machine, which will destroy all life on Earth in the event that the Soviet Union comes under nuclear attack. Bewildered by this revelation, the President tries to understand why anyone would build such a machine:
President Merkin Muffley: How is it possible for this thing to be triggered automatically and at the same time impossible to untrigger?
Dr. Strangelove: Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy… the FEAR to attack. And so, because of the automated and irrevocable decision-making process which rules out human meddling, the Doomsday machine is terrifying and simple to understand… and completely credible and convincing.
Watching Kevin Slavin’s TED presentation about how algorithms shape our world, I grew increasingly alarmed as I listened. I would be the first to admit that my imagination tends to take a fantastical turn, and images of ‘The Matrix’ and ‘Terminator’ gripped me as I imagined a world with algorithms going crazy and eventually bringing us back to the Stone Age or something…
But haven’t we gone down this path of fantasy before, with the Y2K bug? Before the dawn of the year 2000, analysts were warning that date-handling bugs would make computers misread the two-digit year ‘00’ as 1900 and instantly crash. And now we have the Flash Crash of 2:45.
Post-apocalyptic fears aside, is journalism better served by algorithms that aggregate data, or by humans who assimilate data and postulate? Personally (and at the risk of sounding like a Luddite), I feel that while algorithms can help the journalist to understand the vast ocean of data surrounding us, and make order out of chaos, it is still important that humans retain the ability to assimilate data and postulate.
For all the vaunted advances in perfecting algorithms and enhancing our ability to comprehend information, algorithms are still nothing more than mathematical procedures designed to detect repeated patterns and to take action based on those patterns.
Let’s consider a hypothetical example… A journalist enlists the aid of an algorithm to track the stock market movements for a given day, setting the condition that the journalist will be alerted if the market moves erratically. Now, should war suddenly break out, or a financial crisis be announced, the algorithm will be activated and the reporter alerted. However, any journalist worth their salt would have already looked at the world news and postulated potential outcomes of global events, and would hence already have been on the alert even before the algorithm set off an alarm.
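To make the hypothetical concrete, such an alert condition can be sketched in a few lines of Python. The threshold, function name, and price readings below are all invented for illustration; a real system would stream live quotes from a market-data feed rather than scan a fixed list.

```python
# A minimal sketch of the hypothetical market-alert condition described
# above: flag any reading where the price jumps more than `threshold`
# (as a fraction) relative to the previous reading.

def erratic_moves(prices, threshold=0.05):
    """Return the indices where consecutive prices differ by more
    than `threshold` (e.g. 0.05 = a 5% move)."""
    alerts = []
    for i in range(1, len(prices)):
        change = abs(prices[i] - prices[i - 1]) / prices[i - 1]
        if change > threshold:
            alerts.append(i)
    return alerts

# A calm morning followed by a sudden 10% drop: only the drop trips it.
readings = [100.0, 100.5, 101.0, 90.9, 91.0]
print(erratic_moves(readings))  # prints [3]
```

The point of the example is how mechanical the rule is: the program knows nothing about wars or crises, only that one number differs from the previous one by more than an arbitrary cutoff.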
But what happens if the algorithm sets off an alarm in response to other algorithms going haywire? While I am no computer scientist, I wonder if the Crash of 2:45 was caused by the statistically unlikely event of all these algorithms being set off at the same time. A kind of ‘Perfect Storm’, if you will… where all these various conditions are met simultaneously, affecting all the algorithms in a kind of domino effect. And all we can do is quickly press the big red ‘STOP’ button in front of us. But what if it’s too late?
Yes, algorithms can help to ease the task of the journalist, allowing the journalist to focus on more important issues, like finding the angle for the story, while the algorithm trawls the vast seas of data for what we need. But at the end of the day, we still need to sift through the data collected and postulate theories based on it. No program, no matter how advanced, can possibly mimic our ability to extrapolate patterns and draw conclusions from those extrapolations.
Which brings us back to Dr. Strangelove… if we surrender ourselves to the “automated and irrevocable decision-making process which rules out human meddling”, are we then not doomed? Man isn’t just his own worst enemy; his creations are too.