
What can the Digital Services Act do for you? Tips for navigating platform [in]transparency

The article discusses the tools and resources provided by the EU's Digital Services Act (DSA) to enhance transparency in the digital sphere. However, despite some progress in holding big tech platforms accountable under the DSA, significant gaps remain or are, in fact, widening. For journalists, researchers, and digital rights activists, this means that existing tools and mechanisms are insufficient to uncover the full evidence behind platform practices, hindering their ability to investigate these services and their consequences effectively. This limitation is particularly concerning in areas such as the platforms’ use of complex algorithms to shape public opinion, abusive data collection practices, and the spread of misinformation during critical times like elections.

by John Albert

When the EU’s Digital Services Act (DSA) was passed into law in 2022, it was hailed as a landmark achievement in platform regulation. Among its key promises was mandating transparency — requiring online platforms to disclose how their systems work so that regulators, journalists, researchers, activists and the public can hold them accountable for the risks they create.

The Digital Services Act (DSA) is a harmonized set of rules applying across the EU to large online platforms (including social media, search engines, and online marketplaces) as well as to hosting and intermediary services. It introduces stricter requirements for the largest platforms and search engines with over 45 million users in the EU, like assessing so-called systemic risks and improving transparency. Enforcement is split between national authorities and the European Commission, which oversees these largest platforms directly and can fine them up to 6% of their annual turnover for breaking the rules.

Fast forward two years: in August 2024 Meta closed down CrowdTangle — an online tool extensively used by researchers and journalists to trace the spread of information (and mis/disinformation) on Facebook and Instagram — leaving researchers scrambling for alternatives. Ad repositories are riddled with missing data and broken filters; online platforms have reneged on commitments to fact-checkers (see Facebook’s recent move); and newly published systemic risk reports — a DSA innovation — often read like recycled PR.

These setbacks might seem like signs of failure. But here’s the difference: platform transparency is no longer voluntary. The DSA serves as a legal backstop, creating obligations for platforms to provide data and tools that didn’t exist before. Public scrutiny of these gaps doesn’t just highlight flaws; it can help inform enforcement actions, pushing platforms to improve.

That is the focus of this article: transparency, as a concrete set of tools that can support digital investigations under the DSA, with examples of how researchers can critically engage with these tools. In a nutshell, I will cover:

  1. The various DSA transparency tools designed to support public scrutiny of major social media platforms and search engines; 
  2. The limitations of these tools and how they’re being implemented; 
  3. The potential for an accountability feedback loop, where exposing transparency gaps can trigger enforcement actions, creating pressure for platforms to improve; 
  4. Why this process will take time, active engagement, and complementary investigative approaches, and why transparency won’t solve everything.

As cybernetics pioneer Stafford Beer put it: “The purpose of a system is what it does”, not what it’s intended to do.

So, what is the DSA’s transparency framework actually doing in practice? And how useful is it for digital investigators seeking to hold platforms to account?

DSA transparency tools: What’s in them and where to find them

One tangible impact of the Digital Services Act (DSA) is that it has created new transparency rules that force platforms to produce and publish a lot of data, or else risk significant fines. These rules are designed to create new pathways for regulators, researchers, journalists and the wider public to scrutinize platform behavior.

Thanks to the DSA, we now know things like:

  • How many content moderators platforms employ, including language capabilities relevant to different EU member states. 
  • How many posts have been algorithmically blocked or down-ranked and on what grounds.  
  • How many users formally challenged moderation decisions, and the outcomes of those challenges.
  • How platforms assess and manage “systemic risks” (e.g., risks to fundamental rights, elections, public health, and minors) and the steps they are taking to mitigate those risks.

Where can you find these data?

Listed below are the various transparency tools and reports available under the DSA: 

1. Content moderation transparency 

  • Transparency reports: Regular summaries of platforms’ content moderation activities, such as the total volume of content removed, appeals processed, and the use of automated tools. 

  • The DSA Transparency Database: A centralized repository containing granular “statements of reasons” for individual moderation actions, such as why specific posts or accounts were flagged, removed, or restricted. 
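
To give a sense of what working with the database looks like in practice, below is a minimal Python sketch that summarizes one of its downloadable daily dumps by platform. The file name and column names (such as “platform_name” and “automated_detection”) are assumptions based on the database’s published schema and may not match the actual export, so treat this as a starting point rather than a recipe.

    # Minimal sketch: summarizing a daily dump downloaded from the DSA
    # Transparency Database. The file name and column names (for example
    # "platform_name" and "automated_detection") are assumptions based on the
    # database's published schema and may differ from the actual export.
    import pandas as pd

    # A daily dump of statements of reasons, downloaded and unzipped beforehand.
    df = pd.read_csv("sor-global-2025-01-15-full.csv")  # hypothetical file name

    # Count how many statements each platform submitted that day, and how many
    # of those moderation decisions were reported as fully automated.
    summary = (
        df.groupby("platform_name")
          .agg(statements=("platform_name", "size"),
               automated=("automated_detection", lambda s: (s == "Yes").sum()))
          .sort_values("statements", ascending=False)
    )
    print(summary.head(10))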

2. Advertising transparency

  • Ad repositories: Publicly accessible repositories with data on ads, including their content, targeting criteria, duration, and spend. You can find more details and a list of ad repositories in this guide to investigating digital ad libraries (also archived here).

3. Algorithmic transparency 

  • Recommender system disclosures: Insights into how platforms prioritize and suggest content to users (these are typically located in the platforms’ transparency centers - here are examples from Meta and TikTok). Platforms must disclose the parameters and functioning of their recommender algorithms, and users are given the option to switch to a chronological feed or other non-personalized settings for viewing content.

4. Systemic risk oversight 

  • Systemic risk assessments: Platforms are required to publish annual systemic risk assessments detailing societal risks stemming from their services such as harms to democratic processes or public health, along with their mitigation strategies. These assessments are reviewed by independent auditors, who also produce audit reports and track the implementation of recommended measures. 
  • Code of Practice on Disinformation reports: Voluntary updates on actions platforms take to reduce the spread of disinformation, such as limiting the monetization of misleading content or improving fact-checking.

Major platforms including Google, YouTube and LinkedIn recently withdrew their fact-checking commitments under the Code, raising concerns about the Code’s effectiveness as a benchmark for DSA enforcement.

5. Data access for public interest researchers

Two tiers of access aim to facilitate independent research: 

  1. Public data access: Enables public interest researchers to monitor publicly available data, such as trends in content dissemination. Once informally known as the “CrowdTangle” provision, its implementation has been complicated by platforms restricting access to appropriate tools.
  • The DSA 40 Data Access Collaboratory is a project and platform that has been tracking public data access applications, providing support to researchers, lawmakers, and the public. Its mission is to promote the application of Article 40 of the Digital Services Act (DSA), which "requires platforms to grant data access to researchers and non-profit organisations in order to detect, identify and understand systemic risks in the European Union" (source: DSA 40 Data Access Collaboratory).
  • The Digital Democracy Monitor of the non-profit organisation Democracy Reporting International maintains a Data Access Resource, which outlines the types of data each platform provides, their vetting procedures, and how these connect to the data access obligations under the DSA.
  2. Data access for vetted researchers: Allows researchers who fulfill specific vetting criteria to request internal platform data critical for studying societal risks. While promising, the implementation is complex and untested, leaving open practical questions around access and enforcement. The DSA 40 Data Access Collaboratory has also been tracking these data access applications.

I recently wrote a blog post summarizing the state of play on this provision. For the purposes of this piece, I’ll focus instead on making the most of the transparency tools that are publicly accessible.

These transparency mechanisms are valuable for understanding platform operations and strengthening accountability. For example, journalists investigating disinformation campaigns during elections can use ad repositories to uncover covert targeting strategies, while researchers can scrutinize systemic risk reports to evaluate platforms’ actions to safeguard vulnerable populations.

All of this sounds promising. But does it really deliver?

The limits of platform transparency

Take systemic risk reports, for example. While they are, in principle, a step forward in transparency, observers have noted that the first reports largely resemble a public relations exercise. They tend to lack depth or verifiability, neglect input from affected groups, and sidestep key issues like platform design — failing to address whether certain features push users toward addictive rabbit holes or self-harm.

The Electronic Frontier Foundation (EFF) provides an accessible overview and analysis of the systemic risk categories covered by the DSA:

  • "The DSA’s non-conclusive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, protection of minors and public health, and on a person’s physical and mental wellbeing." (see more in: "Systemic Risk Reporting: A System in Crisis?" by Svea Windwehr, EFF, January 16, 2025)

The DSA Transparency Database has also drawn criticism for inconsistencies and gaps in the data it contains. Some platforms use vague language to describe content moderation actions, limiting the database’s utility for researchers and regulators. And when it comes to ad repositories, a Mozilla and CheckFirst investigation found that virtually all Big Tech platforms' ad transparency tools were plagued by missing data, bugs, and poor features, making them ineffective for meaningful oversight — particularly troubling when it comes to tracking election disinformation.

Platforms’ Application Programming Interfaces (APIs) for accessing publicly available data have also been a flashpoint. The DSA mandates these tools be accessible to public interest researchers, but that didn’t stop Meta from shutting down CrowdTangle (RIP) and replacing it with the far more restrictive Meta Content Library, whereas X put its researcher API behind a prohibitively expensive paywall. Such actions undermine the DSA’s promise of transparency and frustrate efforts to hold platforms accountable.

An API (Application Programming Interface) is a structured way for one piece of software to request data or services from another, for instance to query a dataset. Facebook’s Ad Library API allows users to query ad data using a particular set of commands developed by Facebook.
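
To make that concrete, here is a minimal Python sketch of a query to Meta’s Ad Library API (the “ads_archive” endpoint of the Graph API) for ads about a given keyword that reached audiences in an EU country. The API version, parameters and fields shown are assumptions based on Meta’s public documentation and may have changed; a valid access token tied to an identity-verified account is also required.

    # Minimal sketch: querying Meta's Ad Library API for ads matching a keyword.
    # The API version, parameters and field names below are assumptions and may
    # have changed; a valid access token from a verified account is required.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

    response = requests.get(
        "https://graph.facebook.com/v19.0/ads_archive",
        params={
            "access_token": ACCESS_TOKEN,
            "search_terms": "election",
            "ad_reached_countries": '["RO"]',
            "ad_delivery_date_min": "2024-11-01",
            "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend",
            "limit": 25,
        },
        timeout=30,
    )
    response.raise_for_status()

    # Print the sponsoring page, start date and reported spend range for each ad.
    for ad in response.json().get("data", []):
        print(ad.get("page_name"), ad.get("ad_delivery_start_time"), ad.get("spend"))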

These setbacks aren’t just concerning; they highlight why active scrutiny is essential. Transparency is meant to be the foundation for accountability under the DSA, but realizing that promise requires critical engagement with both platforms and the regulation itself.

The transparency/enforcement feedback loop

To hold the largest platforms accountable, national regulators and the European Commission have begun exercising their new enforcement powers under the DSA. Alongside investigations into platforms’ failures to adequately identify and manage systemic risks, many early enforcement actions have focused on apparent failures to provide the public with adequate transparency tools.

The Commission has published an overview of its main enforcement activities over the largest online platforms and search engines.

For example:

  • X: Under investigation for failing to provide an adequate ad repository and restricting data access for researchers by putting its API behind a prohibitively expensive paywall. 
  • Facebook and Instagram: The Commission launched proceedings in part after Meta deprecated CrowdTangle without offering an equivalent tool for researcher access in the run-up to European Parliament and member state elections. 
  • TikTok: Investigations are ongoing into TikTok’s failure to maintain a reliable ad repository and its shortcomings in providing researchers access to publicly available data.

While these enforcement actions aim to improve public transparency, it’s important to note the asymmetries they don’t address. Regulators and auditors like EY and Deloitte have privileged access to platform data that is unavailable to researchers and the public, creating significant information gaps.

Even so, these actions by the Commission demonstrate how DSA enforcement can potentially help enable greater public scrutiny. By addressing transparency shortfalls, the Commission can pressure platforms to provide the public with data necessary for stronger collective oversight. Independent investigations that make use of — and critique — platforms’ transparency tools can, in turn, inform further enforcement action, creating a positive feedback loop for more meaningful transparency.

A work in progress

Predictably, platforms’ transparency under the DSA has been far from perfect. Speaking about the risk reports, my former colleague Oliver Marsh aptly likened them to the “first pancake” — not very good, but a start. It will take sustained pressure to improve the quality of these reports and make transparency more meaningful.

We also can’t rely on transparency alone to understand how Big Tech platforms operate and impact our information environment. Adversarial research methods, like creating sock-puppet accounts (i.e. fictitious accounts for online research purposes) or scraping public data, will remain essential for investigators to provide insights that platforms’ curated transparency reports fail to offer, or to expose flaws in their public databases.
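
As a simple illustration of the scraping approach, below is a minimal Python sketch that collects text from a publicly accessible page. The URL and the CSS selector are placeholders; in practice, platforms render much of their content with JavaScript and actively restrict automated access, so real projects typically involve browser automation and careful legal and ethical review.

    # Minimal sketch: collecting publicly accessible data from a web page.
    # The URL and CSS selector are placeholders; real platforms usually require
    # browser automation (e.g. Playwright) and impose rate limits or blocks.
    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/public-posts"  # placeholder URL

    html = requests.get(URL, headers={"User-Agent": "research-project/0.1"}, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    # Extract the text of every element matching a (hypothetical) post selector.
    posts = [element.get_text(strip=True) for element in soup.select("div.post-text")]
    for post in posts:
        print(post)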

Encouragingly, Article 40.12 of the DSA could help protect scrapers working in the public interest:

  • Art 40.12. "Providers of very large online platforms or of very large online search engines shall give access without undue delay to data, including, where technically possible, to real-time data, provided that the data is publicly accessible in their online interface by researchers, including those affiliated to not for profit bodies, organisations and associations, who comply with the conditions set out in paragraph 8, points (b), (c), (d) and (e), and who use the data solely for performing research that contributes to the detection, identification and understanding of systemic risks in the Union pursuant to Article 34(1)." (Source: Article 40, Data access and scrutiny - the Digital Services Act (DSA))

We should also, as Rachel Griffin writes, be wary of overly relying on technocratic solutions in platform governance. Framing regulation as risk mitigation risks stabilizing a system that softens Big Tech’s harmful practices while leaving their commercial logics unchallenged. This approach also reinforces political agendas, allowing both corporate and governmental powers to shape regulation in ways that often overlook marginalized groups and broader societal change. 

But working within the DSA’s framework still has value. It equips us with tools to ask more nuanced questions about how, for example, platforms identify and manage societal risks—and to push for accountability in ways that were previously impossible.

See, for example, this investigation by Global Witness on the Romanian presidential election of 24 November 2024. Their findings may be used as evidence by the Commission as it investigates TikTok’s efforts to mitigate electoral interference risks.

Final thoughts: Experiment with the tools

If you’re investigating a major platform or search engine, take a look at the transparency tools available under the DSA. They might help you uncover valuable insights, or expose flaws that could help trigger or support an official investigation. Either way, it’s worth experimenting. By testing what’s possible under the DSA, we may help realize its fuller potential.

We should also recognize what the DSA can’t or won’t do. By prioritizing risk management, it may reinforce corporate logics and political agendas rather than dismantling them. The DSA does not fundamentally alter platforms’ extractive business practices; it’s probably not the right instrument to address deeper issues like media concentration or surveillance capitalism, much less Big Tech’s environmental impact.

Moreover, while the DSA aims to enhance transparency, it also grants authorities broad powers to request user data, and platforms are obliged to comply. This raises concerns about government overreach and should remind us that transparency isn’t just about holding platforms accountable — it also requires scrutiny of how state institutions use their power in the digital sphere.

These are serious concerns, demanding different investigative approaches and ways of thinking beyond what’s offered by the DSA. Yet this doesn’t change the fact that the DSA creates many potential pathways to scrutinize platform data. By critically engaging with these transparency tools, we can help shape them so that they genuinely enhance accountability and, ideally, support broader societal values like democracy, human rights, and mental well-being. The DSA may not be perfect, but it’s a start — and through collective effort, it could become a valuable instrument for change.

About the author

John Albert is an associate researcher at the University of Amsterdam’s Institute for Information Law and contributor to the DSA Observatory. He writes on the DSA’s implementation and enforcement, most recently covering the Romanian elections and the first rollout of systemic risk reports. Previously, he was policy and advocacy manager at the Berlin-based NGO AlgorithmWatch; in a former life, he was a video journalist and rock music teacher.

Contact us

You can reach out to us with questions about the article or other projects of Exposing the Invisible and Tactical Tech by writing to: eti@tacticaltech.org (GPG Key / fingerprint: BD30 C622 D030 FCF1 38EC C26D DD04 627E 1411 0C02).



This article is part of the resources produced under the Collaborative and Investigative Journalism Initiative.

Disclaimer:

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
