
Using Human Stories to Bring Critical Work on AI to the General Public

Investigative journalists are increasingly looking into the use of AI and algorithms in areas sensitive to citizens’ rights. However, this kind of work tends to reach only niche audiences. How can we engage the general public with stories that uncover a flawed algorithm or a poorly designed AI tool? Here are some ideas.

by Pablo Jiménez Arandia

In 2020, a few months after the Covid pandemic began, I received a request from a non-profit organisation in Spain. They asked me to make a documentary podcast about how algorithms and AI were impacting the world of work and the rights of thousands of people across various sectors, from riders working for delivery companies to unemployed communities working as data labellers to train AI models.

To be honest, at that time I barely knew what an algorithm was, let alone how to build an AI system. I had never done any specific training in data science or related areas. However, shortly after starting the project I realised how much opportunity there was to dig into how data-driven systems impact our lives and rights.

Ever since that first project, I have dedicated most of my investigative work to algorithmic and AI accountability. And though I am now more comfortable with technical data and information, I am still far from an expert, at least from a technical perspective.

I share this personal story not because talking about myself comes naturally — in fact this is the first time I’ve ever started an article in this way in my nearly 15 years in journalism — but because I believe my own experience investigating and reporting on AI technologies can be useful to many who want to delve into this beat but may feel intimidated.

In this article, I will explain how a reporter like me can investigate these types of stories. I aim to show how to turn a supposed shortcoming into a strength, in service of bringing stories about a flawed algorithm or a harmful AI system not only to niche readers, but also to a non-specialised, general audience who might not be aware of the risks posed by data-driven technologies.

Cross-disciplinary work is key

AI accountability stories try to fight back against the opacity surrounding data-driven technologies in the public and private sectors. These stories might, for example, delve into the data behind a predictive AI model used by a bank to approve or deny a mortgage or investigate the manner in which an obscure algorithm guides decisions taken by police forces in a small municipality.

Every AI accountability story has both a technical part and a socio-political one. And both deserve the same level of attention.

In addition to understanding the technical side of an AI tool or AI system, we need to investigate the social and cultural context in which it was implemented: Who designed it and how? Which communities are affected? What are the business or ideological decisions behind it? Why was it launched? What are the main goals of the project?

Cross-disciplinary work is key to finding the information you need in order to tell a complex story that reaches beneath the surface.

To gain a holistic understanding, stories must incorporate a rich variety of interdisciplinary perspectives. This can mean collaborating with other reporters who have different skills and expertise from your own. It can also mean searching for the right sources to have a more comprehensive view of the system at the heart of your investigation and its deployment in the real world.

Screenshot from Machine Bias (ProPublica; 2016), taken by the author.

For example, let’s say you want to look into an AI tool aimed at teenagers. A holistic investigator would talk to the engineers who designed it, as well as educators, sociologists, families and, of course, the users themselves: in this case, the teenagers. Let’s say now that our story is about an algorithm used in prisons. Interviews should include lawyers, prison officials, psychologists and the inmates themselves, the people screened by the system. These examples may seem obvious, but many of today’s tech stories show that this practice is not as common as it should be.

But even when I have a sense of the kinds of people I would like to interview for an AI accountability story related to the public sector, I face a familiar set of questions: Where should I start looking? How can I get a first grasp of the system I want to investigate?

The answer in my case has almost always been the same: submitting one or several freedom of information (FOIA) requests to the right government department. While this reporting tool has its own limitations and can vary wildly by country (and even, in some cases, by state or municipality within a given country), in general terms transparency laws are a great ally for reporters working on this beat.

The information you can request varies greatly depending on the system and the area where it was implemented. For example, if you are investigating the AI projects of a whole government department, you can often submit a general request asking for AI-related reports and other documentation in its hands. If you already have some insight into a specific model, you may ask for the data dictionaries feeding the tool, the handbooks for the end-users who act on its outcomes, or the tests conducted before the system was deployed.
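To make the jargon less intimidating: a data dictionary is essentially a catalogue of the variables a model consumes. The Python sketch below shows what an excerpt might look like once you have it in a workable form; every variable name and description is invented for illustration and does not come from any real system.

```python
# Hypothetical excerpt of a data dictionary for a fictional benefits-fraud
# scoring tool. All variable names and descriptions are invented; a real
# FOIA response might arrive as a PDF table or a spreadsheet instead.
data_dictionary = [
    {"variable": "age",            "type": "integer", "description": "Applicant's age in years"},
    {"variable": "months_on_file", "type": "integer", "description": "Time registered with the agency"},
    {"variable": "prior_flags",    "type": "integer", "description": "Number of earlier fraud flags"},
    {"variable": "risk_score",     "type": "float",   "description": "Model output between 0.0 and 1.0"},
]

# Knowing which variables feed a model tells you which groups could be
# disproportionately affected, and who to interview about each input.
for row in data_dictionary:
    print(f"{row['variable']:<15} {row['type']:<8} {row['description']}")
```

Even a bare listing like this is valuable: each variable points you towards the communities it describes and the officials who chose to include it.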

On many occasions, the response to my FOIA has included information or documents that were far too technical for me, material I needed a data science specialist to review. If that happens to you, rely, again, on teamwork. Send the response over to a data reporter who can analyse it, or share it with trusted sources to get a good understanding of the information you have obtained. From those insights you can start building the stories you are aiming to tell.

Real-life stories have impact

Over the past few years, as automated systems like AI models have proliferated, so have great journalistic investigations covering the risks and actual harms caused by these tools. Many of the more recent groundbreaking investigations have innovated, for example, with data visualisations that explain in an accessible way how a complex automated system works and, most importantly, how it impacts people's lives. If you want to use data visualisations in an accurate and effective way, it is crucial to work closely with data reporters and designers pushing in the same direction.

Data visualisations can bring an investigation to life, but in my opinion the stories that stand out above the rest have one thing in common: reporters and editors decided to put real-life examples and human stories at the forefront of their narrative, making them a core part of their storytelling.

These model stories, many of which are included below, portray, for example, how an AI tool used by social services to predict which children could be at risk may flag parents with disabilities, or the way an algorithmic system used in United States prisons discriminates against Black people.

Screenshot from Is data neutral? How an algorithm decides which French households to audit for welfare fraud (Le Monde, Lighthouse Reports; 2023), taken by the author.

Read the pieces listed below for inspiration on new stories that critically examine the implications of the tech surrounding us, especially tools used in areas that are sensitive from a human rights perspective. As you read them, think about how the reporters might have structured and carried out their investigations. If you are a researcher or journalist interested in covering such topics, you may even find their contact details and write to them. Also, an increasing number of organisations now include a methodology section that explains in detail how an investigation was carried out.

In my own experience, finding one or more powerful human-centred stories that illustrate the bigger picture can completely change the impact of a piece of research. This is why I always try to prioritise finding real human stories that portray those impacts.

To take a recent example: on several occasions I have investigated a risk-assessment algorithm called RisCanvi, which is used in the prisons of Catalonia, Spain. In 2024, I gained access to technical details that allowed me and the reporting team I was part of to recreate how the system works internally and how the different risk factors shape the model's final outcome, and therefore the assessments of present and future inmates. The technical research was central to our findings, but I also knew that framing the story around a real case of someone assessed by the algorithm would make the piece much more compelling for a general reader.
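To give a sense of what "recreating" such a system can mean in practice, here is a minimal, purely illustrative sketch of a points-based risk protocol. The factors, weights and threshold below are invented for the example and do not reproduce RisCanvi's actual internals.

```python
# Purely illustrative sketch of a points-based risk-assessment protocol.
# The factors, weights and threshold are invented and do NOT reproduce
# RisCanvi's real internals.
RISK_FACTOR_WEIGHTS = {
    "prior_violent_offences": 3,   # hypothetical weight
    "substance_abuse_history": 2,  # hypothetical weight
    "age_under_30": 1,             # hypothetical weight
}

def risk_score(case_file: dict) -> int:
    """Sum the weights of every risk factor present in a person's file."""
    return sum(w for factor, w in RISK_FACTOR_WEIGHTS.items() if case_file.get(factor))

def risk_level(score: int, threshold: int = 4) -> str:
    """Map the numeric score to the label end-users (e.g. prison staff) see."""
    return "high" if score >= threshold else "low"

# A single changed input can flip the label a person is given:
person = {"prior_violent_offences": True, "age_under_30": True}
print(risk_score(person), risk_level(risk_score(person)))  # -> 4 high
```

Laid out this way, it becomes easier to show readers why a single characteristic, such as age, can tip a person's final label from one risk category to another.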

Thanks to an association of inmates' relatives who fight for better living conditions for incarcerated people in Catalonia, I met Jesús. After several long conversations with him, I understood how Jesús had interacted with this protocol over the years, and how his own characteristics (such as his age, the type of crime he committed, and his history of drug abuse) influenced RisCanvi's results.

This is just one example of how we can search for human-interest stories and why they can be a game changer for investigations. Finding them can be a time-consuming task that does not always yield the expected results. But, as a very general tip, a good starting point is to build sources in organisations that work with the affected groups and communities you would like to connect with during your investigations (e.g., NGOs, unions, and public interest law firms).

Jesús, one of the people portrayed in Un algoritmo define el futuro de los presos en Cataluña: ahora sabemos cómo funciona ("An algorithm defines the future of prisoners in Catalonia: now we know how it works", El Confidencial; 2024). Author: Javier Luengo.

Investigations to inspire you, and other resources

The following list includes some specific examples of investigative stories on AI accountability published in the last few years that deserve a careful read. The pieces listed are a good source of inspiration on how to put human stories at the forefront of critical work around AI and its impacts.

Note: This list is not intended to be exhaustive, but rather a diverse compilation of projects I admire:

Before wrapping up, here are a few additional resources for reporters interested in covering AI systems at different levels:

Credits and Licensing

  • Author: Pablo Jiménez Arandia
  • Editorial support & copy-editing: Tyler McBrien, Laura Ranca, Jasmine Erkan
  • Illustration & design: Exposing the Invisible

CC BY-SA 4.0 - This article is published by Tactical Tech's Exposing the Invisible (ETI) project, and licensed under a Creative Commons Attribution-ShareAlike 4.0 International license

Contact us with questions or suggestions: eti-at-tacticaltech.org (GPG Key / fingerprint: BD30 C622 D030 FCF1 38EC C26D DD04 627E 1411 0C02).

About the author: Pablo Jiménez Arandia is an investigative reporter and freelance journalist based in Spain. He covers stories on the intersection between technology, social justice and human rights. Explore some of his work here: https://pablojimenezarandia.com/.


This content is part of the resources produced under the Collaborative and Investigative Journalism Initiative.

Disclaimer:

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
