Who decides what is art? [TWIE]
The Week In Examples #51 | 7 September 2024
Friends! This week I was glad to get an essay out after a bit of a break. Check it out if you want to know what the deal is with the Turing test, and stay tuned for more long-form writing now that I have a bit more time (I recently broke through a particularly stubborn PhD deadline). As usual, it’s hp464@cam.ac.uk if you want to get hold of me.
Three things
1. LLMs are changing how we talk
Remember a few months ago when people started picking up on ChatGPT and its ilk using the word ‘delve’? The explanation behind the spike in popularity is that ‘delve’ is more commonly used in Nigeria, where some AI firms employ workers to improve the quality of their model outputs. Sometimes that involves a simple thumbs up or thumbs down, but other times it might mean rewriting responses directly.
So, humans change how AI communicates. But does AI change how we talk? The Max Planck Institute for Human Development thinks so. In a new paper, researchers transcribed 300,000 YouTube videos from 2020 to 2024. They found that words such as ‘realm’, ‘meticulous’, ‘adept’, and (of course) ‘delve’ show a significant uptick since ChatGPT fired the starting gun on the large model era in late 2022.
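To get a feel for what this kind of analysis involves, here is a minimal sketch (not the paper’s actual pipeline) of how you might track the frequency of ‘ChatGPT words’ in transcripts over time. The transcripts and word list below are hypothetical stand-ins for the real data.

```python
from collections import Counter
import re

# Words whose usage reportedly rose after ChatGPT's release (illustrative list).
TARGET_WORDS = {"delve", "realm", "meticulous", "adept"}

# Hypothetical mini-corpus: year -> list of transcript snippets.
transcripts = {
    2020: ["today we look at the realm of home cooking",
           "a simple pasta recipe anyone can follow"],
    2024: ["let us delve into this meticulous and adept analysis",
           "we delve deeper into the realm of prompt engineering"],
}

def frequency_per_million(texts, targets):
    """Occurrences of target words per million tokens in a set of texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in targets)
    return 1_000_000 * hits / max(len(tokens), 1)

for year, texts in transcripts.items():
    print(year, round(frequency_per_million(texts, TARGET_WORDS), 1))
```

A real study would of course normalise across channels, topics, and speaker demographics before attributing any uptick to the models themselves.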
I like this work a lot. It reminds us that while technology is socially constructed, it also shapes the social conditions in which it is created. For AI, that means getting to grips with the ‘systemic’ impact of the technology on society (which groups like UK AISI are doing) and determining how these factors influence the values imbued within models.
2. Another AI art debate
Science fiction writer Ted Chiang wrote a piece in the New Yorker arguing that AI will never be capable of creating art. His case rests on the idea that to make art is to make choices, and that by using AI systems, one delegates those choices to an agent that either averages choices that others have made or mimics the choices of someone else.
Chiang compares our moment to the emergence of photography in the 19th century. For photography, he says that the centrality of choices—such as where to place the camera, how to modify the exposure, and how to compose the subject—means that we ought to count photography as a form of art.
I have some problems with this idea. Assuming for a moment that the essence of art is something that ought to be decided by New Yorker columnists (sorry folks, beauty is not in fact in the eye of the beholder – it has been trademarked by the Condé Nast media group), a quick look at the photography analogy raises some questions.
The obvious point is that image generators (and AI approaches more broadly) do in fact allow for choices to be made. You pick a subject, modify the virtual composition, and edit individual sections. Perhaps your prompt was based on an analysis of a book conducted by an LLM, or maybe you trained your own image generator on a tightly curated set of examples. OK, so is it the number or type of choices that counts? Maybe so (though for the record I think art is more than the sum of a decision tree). Regardless, using Chiang’s framing, who gets to decide how many choices it takes to make art, or which are the right choices?
3. Governing dual use technologies
In a new paper, researchers examined a handful of ‘dual use’ technologies—technologies that can be used for both civilian and military purposes—and applied the lessons learned from their governance to AI.
The group looks at five major international agreements and institutions: the International Atomic Energy Agency (IAEA), the Strategic Arms Reduction Treaties (START), the Organisation for the Prohibition of Chemical Weapons (OPCW), the Wassenaar Arrangement, and the Biological Weapons Convention (BWC). For each, they consider its purpose, the core governance mechanisms involved, the governance structures that underpin these powers, and any relevant instances of non-compliance. The authors look at the IAEA, for example, describing how the body can conduct inspections and refer cases of non-compliance to the UN Security Council (as in the case of Iran in the 1990s and 2000s).
Based on this example and others, the group advocates for the importance of developing “robust verification methods to detect non-compliance with international agreements” for AI development and deployment. They reckon that verification is (perhaps unsurprisingly) essential in any governance regime, because it allows authorities to identify examples of non-compliance and take corrective action.
Another recommendation is the introduction of benefit-sharing agreements to incentivise participation in governance programmes, an approach that has been the bedrock of the IAEA’s success (something I wrote about a while ago in the context of AI and nuclear technologies).
Best of the rest
Friday 6 September
CERN for AI: The EU’s seat at the table (ICFG)
How to Build the British ARPA (Statecraft)
AI prompt engineering: A deep dive (Anthropic)
The Behavioral and Social Sciences Need Open LLMs (OSF)
AI Isn't Magic, but Can It Be 'Agentic'? (NYT)
OpenAI Considers Higher Priced Subscriptions to its Chatbot AI; Preview of The Information’s AI Summit (The Information)
Thursday 5 September
Good technology doesn’t speak for itself (Substack)
Time 100 in AI (Time)
Yuval Noah Harari: What Happens When the Bots Compete for Your Love? (NYT)
AlphaProteo generates novel proteins for biology and health research (Google DeepMind)
Wednesday 4 September
Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance (arXiv)
Getting the machine learning: Scaling AI in public services (Reform)
Dialogue You Can Trust: Human and AI Perspectives on Generated Conversations (arXiv)
More is More: Addition Bias in Large Language Models (arXiv)
A New Group Is Trying to Make AI Data Licensing Ethical (WIRED)
Tuesday 3 September
Prioritizing International AI Research, Not Regulations (Lawfare)
AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities (arXiv)
The overlooked need for Ethics in Complexity Science: Why it matters (arXiv)
A+AI: Threats to Society, Remedies, and Governance (arXiv)
Fair Railway Network Design (arXiv)
Monday 2 September (and things I missed)
Common Elements of Frontier AI Safety Policies (METR)
The Future of International Scientific Assessments of AI’s Risks (CEIP)
Beyond Preferences in AI Alignment (arXiv)
Media literacy tips promoting reliable news improve discernment and enhance trust in traditional media (Nature > open access)
Kids who use ChatGPT as a study assistant do worse on tests (Hechinger Report)
The Future of International Scientific Assessments of AI’s Risks (University of Oxford)
Job picks
Some of the interesting (mostly) AI governance roles that I’ve seen advertised in the last week. As usual, it only includes new positions that have been posted since the last TWIE (but lots of the jobs from the previous edition are still open).
Research Scientist, Frontier Safety & Governance, Google DeepMind (London)
Head of Frontier AI Regulatory Framework, UK Government (London)
Research Operations Administrator, Global and Emerging Risks, RAND (US)
Consumer Communications Lead, Anthropic (US)
Europe Policy Communications Lead, OpenAI (London)
I have been thinking about Chiang's article and also don't like the photography metaphor. It is far from perfect, but a comparison that works better, at least for me, is found art and collage. This emphasizes both the repeated choices and the reliance on the work of others in creation. This is kind of what William Gibson seemed to have in mind when he had an AI creating simulacra of Joseph Cornell's boxes in his 1986 novel, Count Zero. In his future world these were accepted without question as art, but the source was unknown.
Has anyone done a study yet of how different sorts of artists and the general public use GenAI?