
Navigating the Intersection of Media, Democracy, and Artificial Intelligence: A Call to Address the Human Dimension
I’ve posted similar comments on the Nieman Journalism Lab’s Facebook page. I have also commented on posts by the Knight Commission on Trust, Media and Democracy, and I have submitted several pitches to the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund.
In response to your questions, I’ve written a short summary below. I have been working on these issues for ten years and wrote a dissertation on the subject. My work has included developing a content and brand strategy pitch for a major cable news broadcaster that sought to mediate the ideological divide in the presentation of political news.
The most important aspect that is often overlooked in these ongoing conversations about trust, media, and democracy is the inherent human dimension.
There is one raw and uncomfortable fact that the Knight Commission on Trust, Media and Democracy needs to front-load into its assessment of trust, media, and public life: when different people are confronted with the same facts, they can reach very different conclusions about the risks, the blame, and the best steps for resolving a politically charged question or circumstance.
Human beings have divergent core intuitions about “reality” and about the solutions to political problems. That divergence, not necessarily low-quality reporting rife with spin and lies, is at the root of the current distrust in political journalism.
The same problem explains why Fox News was created: conservatives distrust liberals, and liberals distrust conservatives. Teaching people how to critique, assess, and understand media is also important. Still, before establishing a contextual long view of the content, history, and effects of media from these various perspectives, it is important to highlight the underlying problem: people understand the world differently.

Each political group wants its narrative framed in a way that aligns with its core intuitions about how the world is and should be—and these core intuitions are divergent. It is important to reflect upon this fact in light of questions that James Madison outlined in Federalist #10: How do we guard against the influence of factions with the understanding that factions are always going to exist?
In his time, Madison had the wisdom to understand that our common political impulses, passions, and interests both align and divide us, but they are always present and inherently human. So, in Madison’s view, any attempt to eliminate factions is problematic because it amounts to extinguishing a form of liberty.
So, when thinking about how to teach people to consume media, there is a question about the objective: are you aiming to indoctrinate, or to transform society as part of a real or perceived faction? In my view, that is an unrealistic ideal.
As it relates to political attitudes, beliefs, and values, transformation occurs at the individual level, by circumstance, by choice, or by understanding. It does not often occur through the presentation of facts or the assertion of a well-constructed, rational, persuasive argument; the learner has to be a participant in a transformative process that is deeply personal.
In our politics (and consequently in our political news), human beings will forever disagree about what should, must, and ought to be, and any assessment of politically related journalism is inherently part of this very human problem. It is important to start with a clear assessment of “reality.”
Given our perpetual disagreements about reality (a recent example: Moore vs. Jones in Alabama), an assessment of distrust in journalism needs to be set into a context that treats the practice of political journalism as also being about the divergent meanings we can derive from the same presented facts, and about the various narratives that can be built from those facts concerning what is and should be in our politics.
Therefore, rebuilding trust is a long-term process that requires the commitment of publishers, platforms, and consumers over many years to understand and address the inherently human problem of news as it relates to politics. The following video on Facebook highlights that this is not simply an American challenge — it is a human challenge:
https://www.facebook.com/Vox/videos/784994918354779/
Also, regarding the frontiers of AI and big data and the question of making “the robots work for us,” new tools are needed, and it is important to understand the influence, effect, and impact of operations like Cambridge Analytica on trust, media, and democracy.
And while I anticipate that the Department of Homeland Security, the CIA, the FBI, the State Department, the NSA, and others understand big data and AI behavioral modeling (e.g., Cambridge Analytica) as a problem for democracy, they are also likely hamstrung in getting Congress to enact new regulatory action requiring “fairness” in media and social media, because it will be viewed (from the Republican perspective) as having a “chilling effect on free speech.”
So, regarding your question about the trade-offs of subjecting social media platforms to more scrutiny and regulation, we know from experience that regulatory action is going to be a challenge for the same reason we ended up here: the repeal of the Fairness Doctrine allowed for the emergence of Fox News, and as a long-term result, our public discourse became fractured.
With the Fairness Doctrine as an example: if you try to implement the social media regulations that I personally believe are needed to preserve democracy and ensure fair elections (admittedly, Democratic, left-leaning ideas), many in the opposition on the right will argue that any regulation of social media has a chilling effect on free speech. So, currently, there is little connection between our divergent American ways of knowing; transforming public perception is therefore only possible through public dialogue and discourse.
AI- and big-data-enabled tools like those of Cambridge Analytica are making the ideal of transforming public discourse more difficult, because they let political operatives know how individual citizens think and understand precisely what individual citizens want and need to hear, relative to their political views, in order to elicit their strongest emotional response.
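The mechanics of this kind of micro-targeting can be sketched in a few lines: score each citizen’s psychometric profile against several message variants and serve whichever variant the model predicts will provoke the strongest response. The following is an illustrative toy in Python; the trait names, weights, and scoring rule are invented for the example and are not drawn from any real campaign system.

```python
# Toy sketch of trait-based message targeting (illustrative only).
# Each citizen has a psychometric profile; each message variant has
# weights describing which traits it appeals to. The "operative"
# serves whichever variant scores highest for that citizen.

PROFILES = {
    "citizen_a": {"fear_of_crime": 0.9, "economic_anxiety": 0.2, "institutional_trust": 0.1},
    "citizen_b": {"fear_of_crime": 0.1, "economic_anxiety": 0.8, "institutional_trust": 0.6},
}

MESSAGE_WEIGHTS = {
    "crime_wave_ad":   {"fear_of_crime": 1.0, "economic_anxiety": 0.1, "institutional_trust": -0.5},
    "jobs_promise_ad": {"fear_of_crime": 0.0, "economic_anxiety": 1.0, "institutional_trust": 0.3},
}

def predicted_response(profile, weights):
    """Dot product of traits and message weights: a crude response score."""
    return sum(profile[trait] * w for trait, w in weights.items())

def best_message(profile):
    """Pick the message variant the model predicts will hit hardest."""
    return max(MESSAGE_WEIGHTS, key=lambda m: predicted_response(profile, MESSAGE_WEIGHTS[m]))

for citizen, profile in PROFILES.items():
    print(citizen, "->", best_message(profile))
```

Real systems replace the hand-set weights with models fitted to behavioral data at scale, but the logic is the same: different citizens see different messages, chosen to press on what each already believes and fears.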
The research that made these tools possible emerged from psy-ops-related defense research conducted by the DoD, IARPA, and universities. It was previously used to advance the foreign-policy ambitions and activities of the DoD, the State Department, the CIA, and others; now it has been commercialized and used to influence U.S. election outcomes.
The problem is that the American public does not see any of this in motion. Predictive analytics, algorithms, and big data mining are all used to manipulate citizens’ emotional responses through media and social media, while the political content exchanged via American media and social media businesses is muddled, confusing, and contradictory in its presentation of political news.
For anyone interested in being discerning and critical of news information related to politics, there is a problem in knowing what or who to believe when it comes to evaluating political “facts” relative to what we each believe we should, must, and ought to do in American politics.
At the risk of affirming a “dupe narrative,” I believe that less discerning people, rather than questioning the veracity of what they read, hear, and view, are often victims of a predatory game played with the invisible tools of predictive analytics, algorithms, and big data mining, as well as the intentional stoking of fears. We all have innate intuitions, and mainstream media and social media can be used dubiously to stir our emotions about what we already believe to be true, even when it is a misrepresentation of the facts.
In my view, our democracy is failing because media and social media are ill-equipped and unprepared to address the challenges presented by predictive analytics, algorithms, and big data mining (the “robots”). Thus far, media and social media have failed to fully understand, and to develop new approaches and tools to contend with, the inherently human dimension: the fact that political opponents view the world differently.
I’m happy to contribute to this conversation further. If you want additional feedback, feel free to get in touch. Thanks for reading.