Over the last few years, I have taken part in numerous meetings about how to square the circle of formulating policy demands that both make a difference and have at least a vague prospect of being adopted by policy-makers. I often get to observe how this exercise kills any meaningful debate and narrows perspectives. It leaves us with demands that feel like they steer us into a slightly less bad future, but no further. In this blog post, I want to illustrate this dynamic using a concrete example: an essay by a leading figure of the progressive policy space in a liberal publication. I do not consider the policy demands it contains a “good compromise”, but a narrow limitation that stifles the debate about what we actually want.
The essay I am talking about is “AI agents are coming for your privacy, warns Meredith Whittaker”, published in September 2025 in The Economist (archived version provided by her here). To be clear upfront: this is a critique of structural features of policy debates, not of Meredith Whittaker as a person or as President of the Signal Foundation. I have seen her at a number of events and found her contributions to be very thoughtful and pleasantly broader than “just” messaging. As far as I can tell, she is involved in various projects that currently defend us against the most terrible outcomes, including Signal, the messenger, and the AI Now Institute, which challenges current AI narratives and trajectories.
However, even Whittaker is not immune from the forces that turn resistance and wishes for real change into demands that have lost their spark. Many people may once have had a sense of a different future, but had to trade it for short-term impact: impact that consists in articulating demands so close to what will happen anyway that they stand a real chance of being fulfilled.
The stakes are huge
In her introduction, Whittaker conveys a sense of unease about the industry promising “AI agents” that will allow us “to put our brain in a jar while a bundle of AI systems does our living for us.” This is a fantastic phrase for bringing out what the promises of AI companies mean when you take them to their logical conclusion: AI agents threaten to make human agency increasingly obsolete. And they threaten, or promise, depending on how you feel about it, to strip the human element from human connections, turning them into pure transactions. Or, in Whittaker’s words: “Why waste time on wooing when you can leave it to your botservant to turn on the charm?”
The remainder of the article, however, homes in on a solution that will, at best, equip us to partly solve the problem, but which I think is better characterised as utterly inadequate: some privacy, some transparency, some choice. Let’s look at these in turn.
Some privacy, some transparency, some choice
On privacy, Whittaker writes, “privacy must be the default, and control must remain in the hands of application developers exercising agency on behalf of their users. Developers need the ability to designate applications as “sensitive” and mark them as off-limits to agents, at the OS [operating system] level and otherwise.” Essentially, Whittaker wants us to turn away from the OS developers, because they want to force AI agents upon us without regard for our privacy, and towards app developers to fix the problem. However, if we look at the evidence, app developers do not have a good track record of being pro-privacy. We all remember the outrage over random torch apps requesting access to contact lists. But the problem is systemic, and large-scale studies of apps and data practices show that app developers cannot be trusted: around 40% of apps do not comply with their stated data-transfer policy, i.e. they transmit data while failing to disclose it (non-compliance is more pronounced on Google’s platform than on Apple’s, and only slightly lower for EU users than for non-European ones). App developers snatch more data as they become more experienced, and in apps targeting young users. And they collect even more data in concentrated markets and when they have a higher market share. In short, the baseline is terrible, with extensive data collection across markets, widespread lying about these practices and particularly outrageous behaviour when zooming in on market power or on apps targeted at young users. These are the developers in whose hands “control must remain”, and whom we should trust to exercise “agency on behalf of their users”? Signal may put effort into reducing its data needs to the minimum required for functionality and disclose them truthfully, but it is clearly not representative. So while OS developers with AI agents threaten to undermine app developers’ ability to limit data collection, making a bad situation even worse, it seems futile to rely on app developers as the guardians of privacy. On this note, I would just like to refer back to Malte’s explanation of why privacy is not meant to meaningfully change data collection practices.
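To make the demand concrete, here is a minimal sketch of the kind of OS-level gate Whittaker seems to have in mind: the app developer sets a flag in a manifest, and the OS component that mediates agent access must honour it. Everything in the sketch is hypothetical; the `sensitive` flag, the `AgentRuntime` class and its methods are my illustration, not any existing platform API.

```python
# Hypothetical sketch of an app-level "sensitive" designation that an
# OS agent runtime is obliged to honour. None of these names correspond
# to a real platform API.

from dataclasses import dataclass


@dataclass
class AppManifest:
    app_id: str
    # Set by the app developer; the OS is supposed to treat it as binding.
    sensitive: bool = False


class AgentAccessDenied(Exception):
    """Raised when an agent tries to read data from a sensitive app."""


class AgentRuntime:
    """OS component that mediates every agent read of app data."""

    def __init__(self, manifests: dict[str, AppManifest]):
        self.manifests = manifests

    def read_app_data(self, app_id: str) -> str:
        manifest = self.manifests[app_id]
        if manifest.sensitive:
            # Privacy as the default: the agent never sees the data.
            raise AgentAccessDenied(f"{app_id} is off-limits to agents")
        return f"<contents of {app_id}>"  # placeholder for real app data


runtime = AgentRuntime({
    "org.signal": AppManifest("org.signal", sensitive=True),
    "com.example.calendar": AppManifest("com.example.calendar"),
})

print(runtime.read_app_data("com.example.calendar"))  # allowed
try:
    runtime.read_app_data("org.signal")
except AgentAccessDenied as err:
    print(f"blocked: {err}")
```

Note that even in this sketch, the developer’s flag only protects users if the OS vendor’s runtime actually enforces it; the enforcement sits with exactly the party whose incentives Whittaker distrusts.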
On transparency, Whittaker writes, “radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it.” If “vague assurances and marketing-speak are no longer acceptable” [emphasis mine], they were in the past, right? I think she is trying to convey that something is different now, that more is at stake, but her choice of words – trying to come across as moderate and measured – legitimises corporate lies in other contexts. What is more, I do not think demanding transparency should ever be considered “radical” because transparency, on its own, does not change anything. It is only meaningful if it lays the ground for further intervention, if it gives those who observe something they do not like a way to change it. Transparency is nice, but useless without control.
On choice, Whittaker also calls for more changes in OS design and cybersecurity, without which “we risk creating a future in which a few powerful companies decide that the convenience of leaving restaurant-booking or prioritising tasks to AI is more important than cyber-security, healthy competition and the right to private communication.” This is an incredibly ambiguous ending, especially considering the strong description of the problem of increasingly obsolete human agency. What exactly are we to recognise as the problem with this future? Is the problem that “a few powerful companies” make that decision? If it is left to individuals to choose that “convenience […] is more important than cyber-security, healthy competition and the right to private communication”, does this make convenience an acceptable outcome? Or does Whittaker believe that individuals would always opt for cybersecurity, competition and privacy? Of course, I agree that AI companies should not be making those decisions. But neither should this be put on individuals. Whittaker does not explicitly address this question, but I would expect more than silence from her on it; I would be surprised if she did not have a view, and probably a very educated one.
Even all the elements of her solution taken together would still be vastly insufficient to prevent AI agents from eroding human agency and handing it to companies. What we apparently have to make up our minds about is whether we want future no. 1, with unhinged AI agents, or future no. 2, in which AI agents can obtain data within certain limits influenced by app developers, their workings are transparent, and maybe even some of their harms to cybersecurity are mitigated. This is a terribly narrow window, in which the best choice is the lesser evil, but by no means a positive vision. To be fair, Whittaker does not claim to be painting one. But by describing a real problem and pairing it with an inadequate solution, she makes it harder for others to reopen the window she has almost closed. For example, arguing against the inclusion of AI agents at OS level becomes harder because Whittaker focuses on how to adjust their design rather than on whether they are desirable in the first place. Design fixes do not address the question of human agency and human connection.
Poor policy demands are a structural problem
Again, Whittaker is not the problem, nor is her essay particularly bad or wrong. She is, for many, an idol in the fight for digital rights (on the cyberlibertarian ambivalence of the term, see here), so her words carry a lot of weight. Her role is to defend the Signal messenger and its mission against attacks from various sides, including governments and AI firms. The essay puts into clear and simple words how AI agents work and at which levels very specific harms can be addressed. She does a great job of focussing on specific policy demands that readers of The Economist can get behind, and this strategy is exactly what people in those policy meetings I mentioned would recommend. Still, I personally would have loved to read a piece with solutions that address the problems she starts with. I would generally like people to write op-eds that do not close so many windows. We urgently need more air to discuss policy demands that open up a more positive future rather than, at best, reduce harms.
Photo by Luis Villasmil on Unsplash

