On election night 60 years ago, CBS News used a UNIVAC I computer to predict that Eisenhower would win the 1952 presidential election by a landslide.
The network initially balked at UNIVAC’s oracular result, noting that it was based on a mere 1 percent sample, but at 9:15 pm, hours ahead of the other networks, Walter Cronkite shared the news with America. The announcement was received with predictable skepticism and controversy, but when the final tally came in, UNIVAC had missed the electoral total by less than 1 percent and the popular vote by less than 3 percent. Political forecasting was changed forever.
Noting the prominence of advanced analytics and big data in the current election cycle, it seems that we have reached a new UNIVAC moment. The stars this time aren’t computers, but algorithms and the human rocket scientists wielding them to squeeze insights from vast pools of data. And unlike 1952, when an unsuspecting public didn’t meet UNIVAC until election night, the new analytics have been front and center for months. Eavesdrop on the pundits and you are likely to hear mention of Bayesian inference, Markov processes, and other once-arcane concepts peppered into the usual political chatter.
Sixty years ago, it took the better part of a day before CBS—and the public—knew whether UNIVAC was right. This time we might have to wait a little longer before learning how the new generation of quant-based forecasters performed. But right or wrong, it is clear that algorithms have changed the political landscape as profoundly as UNIVAC did 60 years ago. We wonder how long it will be before a machine edges out the humans and once again takes center stage on election night.
Paul Saffo is a senior fellow for the Council’s Strategic Foresight Initiative and managing director for Foresight, Discern Analytics.