Whatever shapes voting shapes democracies. Increasingly, artificial intelligence, whether unethically coordinated behavior on Facebook designed to sway opinion on issues or AI-enabled election polling, is becoming a go-to tool for shaping the vote. As the race for the US presidency picks up speed, we should expect the use of AI in election prediction to expand as well.
Prediction is not simply a practice; it is an industry. Its product, however, is less better analytics than voter confidence: removing it, reinforcing it, or in some cases replacing the need to research a decision at all. The prediction industry helps shape what the public should worry about, what it should accept as unlikely, and what deserves attention; it serves as a broker of the modern political imagination. Yet faith in many such human brokers is eroding, with the private sector demanding sources of input into decision-making beyond just the polls and pollsters.
Over recent campaigns, the prediction industry has increasingly turned to machine learning to drive forecasting. Some systems, such as MogIA and BrandsEye, succeeded in predicting the 2016 US election by leveraging alternative sources of data such as tweets, even as more widely watched forecasters such as the BBC and FiveThirtyEight missed the result. Others, such as Cambridge Analytica directly, or the research of Alto Analytics, demonstrate the power of AI to shape political preferences.
Our concern, however, should not only be whether the predictions are accurate, but how AI, and the decisions that come with its design, reshapes the role of prediction in public discourse. As prediction helps divide the arguments people and pundits use to persuade one another, AI may be held up as lending an additional level of certainty to these points and arguments: a tool for providing ostensibly empirical backing to loose judgements and poor representations of uncertainty. The underlying pitch is simple: as humans with limited time and limited information, we make limited decisions; what if someone could give us far more information, so that we could make better decisions despite time and other constraints?
Each concern and promise points to a need for clearer public representation, not only of the means by which predictions are generated but of the features and design of the machine learning systems themselves. Designing an algorithm around historical trends, as much of political prediction does, can be fundamentally misleading and distracting, confusing data on past wins in the political game with an understanding of the game itself. Building systems around such trends is not to say that history repeats itself; it is often to say we can only reasonably expect history to repeat itself.
Indeed, more often than not, attempts at prediction tell us more about the people trying to predict than about the world they are trying to understand. Prediction tells us how these people, and at times the prediction industry at large, see the problem: how they find data, how they ask and frame questions, and how they understand what will convince and persuade members of the public. AI systems are extensions of the mental models of their production teams; they express those teams' assumptions and biases as much as the inevitable biases of their data. No one wants biased data, but real-world data is inevitably subject to inconsistency. Still, it is not enough simply to acknowledge such mental models; steps need to be taken to ensure that cognitive diversity is reflected across the chains of design and use, lest AI simply reflect our varieties of ignorance.
Yet concern and attention need to remain not simply at the level of responsible design of AI in political prediction, but also with the responsible use of, and reference to, such AI-driven data points by the mass media and the public, in how news reports leverage predictions for viewers and for clicks. Some competition may even be beneficial, if we assume the industry is geared towards a race to the top in which better predictions beat out worse ones, and if more sincere expressions of a prediction's uncertainty and limited use are taken in good faith. For now, we are not so optimistic.
Following the prediction industry can have an agenda-shaping effect: directing the news and reporting environment on who to listen to, determining which candidates are worth attention, or worse, shaping how the public argues about and investigates the key issues of the election, and what people find worth citing when trying to persuade one another.
Still, when used appropriately, we believe AI can be a powerful tool for empowering public conversations and strengthening the quality of political debate, though doing so requires taking a hard look at whether AI is needed at all. AllSides, for example, presents news and content from across the political spectrum to make space for differing opinions and to highlight cases of polarization and media bias. There is a case for leveraging natural language processing to scale such a system even further.
Over the coming years the debate around AI, public information, and the fairness of our democracy will continue. Part of this debate should begin with improving, or indeed demanding, transparency and the responsible design and use of AI in media outlets and the prediction industry. Efforts could include audits of prediction algorithms, their application, and initiatives to improve the validity of data collection and use. We need to have higher standards for any industry dedicated to shaping voter perspectives.
Algorithms should not be surrogates for historical decisions. Indeed, democracy without the possibility of public persuasion no longer counts as democracy. Each generation rediscovers the past for itself, and how we use and understand AI in electoral races will be a fundamental reflection of the common public consciousness of our time. So, what do we want future generations to know about us?