On Signal’s Meredith Whittaker in The Economist. Or: The origins of poor digital policy demands

However, even Whittaker is not immune from the forces that turn resistance and hopes for real change into demands that have lost their spark. Many people may once have had a sense of a different future, but had to trade it for short-term impact: impact that consists in articulating demands so close to what will happen anyway that they stand a real chance of being fulfilled.

In her introduction, Whittaker conveys a sense of unease about the industry promising “AI agents” that will allow us “to put our brain in a jar while a bundle of AI systems does our living for us.” This is a fantastic phrase to bring out what the promises by AI companies mean when you take them to their logical conclusion: AI agents threaten to make human agency increasingly obsolete. And they threaten or promise, depending on how you feel about it, to strip the human aspect from human connection, turning connections into pure transactions, or, in Whittaker’s words: “Why waste time on wooing when you can leave it to your botservant to turn on the charm?”

The remainder of the article, however, homes in on a solution that will, at best, equip us to partly solve the problem, and that I think is better characterised as utterly inadequate: some privacy, some transparency, some choice. Let’s look at them in turn.



On transparency, Whittaker writes, “radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it.” If “vague assurances and marketing-speak are no longer acceptable” [emphasis mine], they were in the past, right? I think she is trying to convey that something is different now, that more is at stake, but her choice of words – trying to come across as moderate and measured – legitimises corporate lies in other contexts. What is more, I do not think demanding transparency should ever be considered “radical” because transparency, on its own, does not change anything. It is only meaningful if it lays the ground for further intervention, if it gives those who observe something they do not like a way to change it. Transparency is nice, but useless without control.

On choice, Whittaker also calls for more changes in OS design and cybersecurity, without which “we risk creating a future in which a few powerful companies decide that the convenience of leaving restaurant-booking or prioritising tasks to AI is more important than cyber-security, healthy competition and the right to private communication.” This is an incredibly ambiguous ending, especially considering the strong description of the problem of increasingly obsolete human agency. What exactly are we to recognise as the problem with this future? Is the problem that “a few powerful companies” make that decision? If it is left to individuals to choose that “convenience […] is more important than cyber-security, healthy competition and the right to private communication”, does this make convenience an acceptable outcome? Or does Whittaker believe that individuals would always opt for cybersecurity, competition and privacy? Of course, I agree that AI companies should not be making those decisions. But neither should they be put on individuals. Whittaker does not explicitly say where she stands, but I would expect more than silence from her on this question, and I would be surprised if she did not have a view, and probably a very educated one.

Even all of the elements of her solution, taken together, would still be vastly insufficient to prevent AI agents from eroding human agency and putting it into the hands of companies. What we have to make up our minds about, apparently, is whether we want future no. 1 with unhinged AI agents or future no. 2 in which AI agents can obtain data within certain limits influenced by app developers, their workings are transparent and maybe even some of their harms to cybersecurity are mitigated. This is just such a narrow window, where the best choice is the lesser evil, but by no means a positive vision. To be fair, Whittaker does not claim to be painting one. But by describing a real problem and writing about an inadequate solution, she makes it harder for others to open up the window she almost closed. For example, arguing against the inclusion of AI agents at OS-level becomes harder because Whittaker focuses on how to adjust their design rather than whether they are desirable in the first place. Design fixes do not address the question of human agency and human connection.