For today’s post we’re going to try something a little different. One of the things that I want to do with Learning From Examples is use it as a platform for speaking to experts—whether that’s in AI or further afield—to give you all a break from my opinions (especially the bad ones).
For the first outing, I spoke to Blair Attard-Frost, a PhD candidate and SSHRC Joseph-Armand Bombardier Canada Graduate Scholar at the University of Toronto’s Faculty of Information. Blair conducts exploratory and applied research on the design and implementation of AI policies and strategies, the political economy, ecology, and ethics of AI value chains, and the application of queer/trans philosophy to AI governance.
If you want to know more about their work, you can read working papers on the governance of artificial intelligence in Canada (here) and the ethics of AI value chains (here). You can also read their chapter, Queering Intelligence, which introduces a theory of intelligence as performance and a critique of individual and artificial intelligence (here).
Final note before we kick off: as always, I’m interested in what you think of this format. This time around, I’m especially keen to hear whether you have any suggestions for who else I should speak to. Let me know what you think by emailing me at hp464@cam.ac.uk.
Harry: We both spend a lot of time thinking about AI governance, which is why I thought it was interesting to see you talk about something I was unfamiliar with: countergovernance. What is that all about and how would you explain it to someone who doesn’t know much about the concept?
Blair: My thinking about countergovernance is influenced by an excellent paper on the topic by Rikki John Dean. Dean defines countergovernance as "citizen opposition or contestation constituted with a direct and formal relation to power." Countergovernance is the agonistic response of a marginalised community to sources of power that have failed to serve their needs. This agonistic response is collectively organised by the community and directed toward sources of power with the goal of causing those sources of power to re-organise around the community's needs. Building on Dean's definition, we might broadly think of AI countergovernance as organised, community-led opposition constituted with a direct and formal relation to powerful actors in the AI governance space.
Harry: Where did it come from? Is it rooted in any specific problems that you see with traditional approaches to AI governance?
Blair: I don’t think AI governance has been around long enough as a field to have culturally established traditions, but many emerging norms in state-led approaches to AI governance (as opposed to organisational/corporate governance of AI) share some common problems.
These include:

- a lack of agility & enforceability in regulatory frameworks;
- the erosion of democratic regulatory institutions amidst the rise of private regulatory services providers;
- inadequate environmental protections throughout the AI system lifecycle (including both its software and hardware components);
- inadequate IP & privacy protections in data supply chains;
- inadequate worker protections in data supply chains & hardware supply chains;
- inadequate worker protections against harmful applications of AI in the workplace, like worker surveillance and unsafe automation solutions.
There’s also a lack of redistributive policies to ensure that the economic gains from mass AI adoption are equitably distributed rather than amplifying existing economic inequalities, as well as a lack of policies & programmes for providing workers displaced by AI with financial, material, educational, or wraparound supports.
Harry: And in terms of the growth or emergence of countergovernance, were there any specific events that we can point to that help us understand where it came from? Or are we really looking at something that is less inspired by discrete events or actors?
Blair: I'm not sure if AI countergovernance can be thought of as a singular, unified movement. It might be more accurate to think of it as a plurality of localised community backlash movements against powerful AI governance actors. One specific example happening right now is the WGA and SAG-AFTRA strikes. Hollywood writers, actors, and artists are on strike for many reasons, but the industry's governance of generative AI is a major point of contestation in their negotiations with the studios.
Another example: here in Canada, the proposed AI legislation currently tabled in our Parliament has been subjected to intensive public criticism and opposition on many grounds. A brief I submitted to the parliamentary committee tasked with studying the legislation highlights many of those criticisms, including the legislation’s undemocratic public consultation process, inadequate enforcement and public oversight mechanisms, narrow jurisdictional scope, and weak coordination mechanisms.
We're using many countergovernance strategies in hopes of causing the government to better organise its AI policy approach around the needs of vulnerable people, including community-led audits and evaluations of the government's AI governance initiatives, petitions and open letters, media engagement, and proposals for forming citizens' assemblies.
Harry: To return to the definitional point, I’m interested in getting closer to how this approach works in practice versus established methods of AI governance. How does countergovernance differ from the governance mechanisms that already exist within the AI field? Are there any overlaps, or are we looking at two very different programmes that operate at different points in service of different goals and distinct parties?
Blair: Here again, I don’t think any sort of "traditional" mechanisms have yet formed in the field of AI governance, but there are certainly emerging norms. State-led AI governance initiatives and industry AI governance initiatives have so far not done nearly enough to address the ethical concerns I mentioned earlier: democratic institutions, worker protections, IP & privacy protections, environmental protections, economic justice, and socio-economic safety.
Harry: What advantages does countergovernance bring to the AI industry (e.g. in areas like privacy or explainability)?
Blair: Countergovernance is meant to bring greater advantages to marginalised communities. AI countergovernance is often very deliberately intended to disadvantage industry actors that already enjoy too many advantages in the dominant, state-led systems of governing AI.
That being said, successful countergovernance movements might influence some industry actors to re-organise around more socially responsible and democratically accountable practices—if there is a business case to be made for adopting those practices. The incentive structures of capitalism often create challenges to corporate social responsibility and public accountability, but that is why it's important to have a robust, capable, and well-resourced public sector that is able and willing to hold industry power to account.
Harry: Are there any examples of AI systems or platforms that are currently operating under countergovernance principles? Or if not, are there other movements that we can look at for inspiration?
Blair: Glaze immediately comes to mind. Glaze is a system that prevents generative AI systems from mimicking an artist's style by inserting a set of near-invisible perturbations into images that are precisely computed to disrupt generative model training. We can think of this as a practice of AI countergovernance, as it's a direct contestation of the tech industry's current power to train GAI models on stolen IP without creator consent. Generally though, AI countergovernance primarily operates through social, political, and organisational mechanisms, with technological mechanisms being used in service to collective organising. In this sense, Glaze has become an important technological mechanism within the broader countergovernance systems of artists who are collectively organising against exploitative GAI practices, such as the #CreateDontScrape movement.
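To make that mechanism a little more concrete: a Glaze-style cloak is essentially an adversarial perturbation, optimised so that a model's image encoder "sees" a different style in the cloaked image while the pixels barely change to a human eye. Below is a minimal, hypothetical sketch of that idea in PyTorch. It is not the actual Glaze implementation: the `compute_cloak` function, the ResNet stand-in extractor, and the budget and loss choices are all illustrative assumptions.

```python
# Minimal, illustrative sketch of a Glaze-style "style cloak" (NOT the actual
# Glaze code). Idea: optimise a near-invisible perturbation so that a feature
# extractor "sees" a different target style in the cloaked image.
import torch
import torch.nn.functional as F
import torchvision.models as models

def compute_cloak(image, style_target, extractor, budget=8 / 255, steps=200, lr=1e-2):
    """Return a cloaked copy of `image` whose features are pulled toward
    `style_target`'s features, under an L-infinity pixel budget.
    Both inputs are (1, 3, H, W) tensors with values in [0, 1]."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feats = extractor(style_target)
    for _ in range(steps):
        optimiser.zero_grad()
        feats = extractor((image + delta).clamp(0, 1))
        F.mse_loss(feats, target_feats).backward()  # pull features toward the target style
        optimiser.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the perturbation near-invisible
    return (image + delta).clamp(0, 1).detach()

# Illustrative usage with a stand-in extractor. Glaze targets the image encoder
# of a generative model; a frozen ResNet here is purely a placeholder.
extractor = torch.nn.Sequential(
    *list(models.resnet18(weights=models.ResNet18_Weights.DEFAULT).children())[:-1]
).eval()
for p in extractor.parameters():
    p.requires_grad_(False)
```

The design point this sketch captures is that the disruption is targeted: the perturbation is optimised against a model's feature space rather than being random noise, which is why it can mislead style-mimicry training while remaining imperceptible.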
Harry: Ok, and what could go wrong for these sorts of movements? What are the most significant challenges faced by the AI countergovernance movement?
Blair: AI countergovernance movements will always face a lack of political power and institutional support, but being faced with that challenge and responding to it is precisely what makes them countergovernance movements. Industry has much greater access to government institutions, and much greater political and economic power to lobby for its interests in the development of legislation, regulations, standards, and guidelines, than marginalised communities do. But here again, that’s just part and parcel of the power structures inherent to capitalism.
Harry: Before we wrap things up, I want to talk a little bit about what comes next. How do you see the countergovernance movement evolving? Do you anticipate a convergence with traditional governance methods, or will they remain distinct?
Blair: They will remain distinct because countergovernance is by definition always a collectively organised backlash against dominant governance practices that fail to serve the needs of a particular community. However, I hope that in time, some of the current concerns driving AI countergovernance movements (especially around worker and environmental protections, IP & privacy protections, economic justice, and social safety nets) will be better recognised by dominant actors and incorporated into AI governance systems going forward.
Harry: Final question from me: what would you want policymakers, AI practitioners, and the general public to understand most about the countergovernance approach in AI? Could you sum it up for each of these groups?
Blair: For policymakers, if you do your due diligence and actively engage marginalised and vulnerable communities in policy co-design, you avert the risk of a countergovernance backlash. If you need to expand your organisational capacities to handle a greater scale of public engagement, invest in expanding them. It is your duty as a public servant to ensure the public sector has the resources necessary to serve and protect the public interest and to hold irresponsible private interests to account.
For practitioners, countergovernance happens because your AI practices or business model lacks social responsibility. You've likely excluded marginalised and vulnerable communities from your decision-making processes throughout the lifecycle of your AI systems. Learn about best practices for participatory AI design and corporate social responsibility, and apply them to your own work.
And for the general public, you can govern AI yourself if you want to. Read what laws, regulations, and other policies are currently in development or being used to regulate AI where you live. If you feel like they aren't serving your needs, get organised with like-minded people, talk about your hopes and fears regarding AI, and make your own policies, community resources, and shared guidelines on AI that will serve and protect the needs of your community. Labour unions, professional associations, cooperatives, artist collectives, community organisations, and advocacy groups can and should take AI governance into their own hands if top-down governance approaches led by state power are not meeting their needs.