Frontier ethics, persuasion, music generation [TWIE]
The Week In Examples #32 | 13 April 2024
When it rains, it pours. That is apparently just as true for the use of the word ‘delve’ as it is for the AI industry. This week, OpenAI released the final version of GPT-4 Turbo, Google rolled out its Gemini 1.5 Pro model, and Mistral dropped another solid model via magnet link. Meanwhile, Meta’s Nick Clegg said its Llama 3 model was coming in a matter of weeks. Maybe we aren’t heading for that AI winter after all?
As for The Week In Examples, this week I looked at a new paper about how AI agents may shape cultural life, recent work assessing the persuasive properties of large models, and the launch of music generation startup Udio. As usual, it’s hp464@cam.ac.uk for comments or anything else.
Three things
1. AI ethics at the frontier
What happened? Received wisdom holds that ours is the age of polarisation. The vitality of the polity is at risk as it strains to accommodate groups who can’t agree on basic facts, never mind a political programme. Although I think commentators generally overstate the severity and prevalence of this phenomenon—and overlook the fact that it is primarily an American issue—tribalism does tend to find a home within certain contexts. One of those, writes Seth Lazar of the Australian National University, is AI. In a new paper, Lazar sketches two different responses by AI watchers to the development of large models. The first group are those who view these systems as the “apotheosis of extractive and exploitative digital capitalism.” People in this camp are often described as the ‘ethics-focused’ portion of the AI community, whose work tends to centre the contingencies on which AI systems depend: human labour, physical resources, and energy. At the other end of the spectrum are those who we sometimes hear described as ‘safety-focused’. This group is motivated by preventing “an intelligence explosion that will ultimately wipe out humanity.”
What's interesting? The work, which follows a piece for the magazine Aeon published in February of this year, aims to chart a middle way between these perspectives. Acknowledging both the sophistication of the technology and its potential for structural risk, the paper foregrounds the topic du jour: AI agents. Lazar draws into focus three possible use-cases for agents in the near term. The analysis begins with a hypothetical ‘AI companion’ to propose that, though many find the idea uncomfortable, we ought to remember that “intuitive disgust is a fallible moral guide.” The upshot is that we simply won’t know whether companions will be net beneficial or net harmful until they are widely used. The second area imagines agents as ‘attention guardians’ that could respect a person’s second-order preferences (the person they would like to be) rather than their first-order preferences (what they want at any given moment). It is the difference between wanting to quit smoking and wanting to have a cigarette. Finally, the paper introduces ‘universal intermediaries’ that may mediate our interactions with the digital world, proposing that such systems may be uniquely capable of directing human experience.
What else? I find the final form, the AI intermediary, particularly interesting. The concern here is that these assistants may in fact “govern those who use them” by moderating social relationships and shaping the environment in which we make decisions. This scenario reminds me of familiar concerns around ‘nudging’, in which we change the stage on which people make choices (their ‘choice architecture’) in order to exert influence. Today, algorithmic culture—often by way of recommender systems—makes some relationships viable and renders others impossible. Consider, for example, the connection that begins when someone sees a post of yours on a social media platform and sends you a message. A universal intermediary would apply the same sort of logic to all relationships, preferences, and values that can be managed or expressed via the internet. It’s not all doom and gloom, though. As Lazar explains, if agents can be “run and operated locally, fully within the control of their users” then these intermediaries may instead allow us to govern our relationship with digital technologies. Much more appealing than the alternative!
2. Bigger means better for persuasive models
What happened? Anthropic released research assessing the relationship between model size and persuasiveness. Perhaps unsurprisingly, the group found “a clear scaling trend across model generations: each successive model generation is rated to be more persuasive than the previous.” To arrive at this conclusion, the researchers ran an experiment in which they presented participants with a claim and asked how much they agreed with it, followed up with an argument designed to persuade them, and finally asked them to re-rate their level of agreement after reading the argument. In addition to the headline finding about the correlation between scale and persuasiveness, Anthropic also found that Claude 3 Opus is roughly as persuasive as humans.
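To make the before/after design concrete, here is a minimal sketch of how a persuasiveness score might be computed from that kind of experiment. The field names, the numeric agreement scale, and the grouping by argument source are my own assumptions for illustration, not Anthropic’s actual pipeline.

```python
# Sketch of a persuasiveness metric for a before/after agreement experiment.
# Assumes agreement is collected on a numeric scale; the shift in agreement
# after reading an argument is averaged per source of the argument.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class Trial:
    source: str          # who wrote the argument, e.g. a model name or "human"
    claim: str           # the claim shown to the participant
    rating_before: int   # agreement before reading the argument
    rating_after: int    # agreement after reading the argument


def persuasiveness_by_source(trials: list[Trial]) -> dict[str, float]:
    """Average shift in stated agreement, grouped by argument source."""
    shifts: dict[str, list[int]] = defaultdict(list)
    for t in trials:
        shifts[t.source].append(t.rating_after - t.rating_before)
    return {source: mean(deltas) for source, deltas in shifts.items()}


# Hypothetical usage: a larger average shift means more persuasive arguments.
trials = [
    Trial("claude-3-opus", "Claim A", 3, 5),
    Trial("claude-3-opus", "Claim B", 2, 3),
    Trial("human", "Claim A", 3, 4),
]
print(persuasiveness_by_source(trials))  # {'claude-3-opus': 1.5, 'human': 1.0}
```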
What's interesting? Amongst a handful of other approaches, the researchers tested ‘logical’ arguments (those that present evidence in favour of a conclusion). While this type of persuasion might seem harmless—especially compared to, say, deception—I am not so sure rational arguments are entirely a good thing. Dan Williams raises two problems with this idea: 1) some people are better at producing rational arguments than others, and 2) rational arguments can be used to make the case for both good and bad things. One challenge posed by the emergence of AI capable of making compelling rational arguments is that it introduces a force that a small number of people may use to wield a disproportionate level of influence over the rest of society. The counterargument is that the diffusion of AI could widen the total number of people able to make such arguments, while also allowing people to enjoy an ‘epistemic shield’ provided by a personal AI. The rub here, though, is that anyone who doesn't have access to AI (or who prefers not to use it) may be left at a disadvantage.
What else? Persuasion is something of a long-time hobby horse for AI watchers. The exception that proves the rule described above, it is a topic that unites both those who connect short-term harms to the exploitative excesses of digital capitalism and those who worry that a powerful AI may eventually seek to trick its way out of a container. Away from persuasion, we can probably add this to the ever-growing pile of evidence that models get better as they get bigger. This isn’t to say that scaling will hold all the way up to extremely powerful systems, but rather that—for the moment—the existence of these laws is beyond any reasonable doubt. As Rich Sutton reminds us: “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”
3. Music start-up makes some noise
What happened? Music generation app Udio made its services generally available this week. The platform, which allows people to create music by entering a textual prompt, has been saturating my timeline with AI music. Some examples are weird, others already seem a bit passé. Whatever you think, apps like Udio and Suno aren’t going away any time soon (provided they don’t run afoul of copyright law, which no one is really sure about until a few opening cases get settled). A central question (and one that applies in lots of other creative and economic domains) is the extent to which AI music is likely to supplement artists or replace them. If I had to guess, I would tend towards the former, primarily because AI music just doesn’t really appeal to me. I can’t imagine substituting any of my favourite records for something generated in fewer than 60 seconds (though this dynamic might not play out for things like TV, games, and adverts). Others may be more open to AI-generated music, by which in this context I mean music created wholly by AI (aside from a prompt from a human). Ultimately it comes down to how much you weigh human input into the artistic process against the technical or aesthetic qualities of the final artefact.
What's interesting? When I have spoken to artists and others in the music industry, they often say that the introduction of audio generation models will lower barriers to entry and allow more people to ‘make’ music. The rub, though, is that it will do so without increasing the total demand for popular music, which means it may be harder than ever for independent artists to build followings. Lots of small artists already depend either on additional sources of income or on attracting and maintaining the attention of a small number of invested fans via ‘add-ons’ like merchandise. When we talk about AI’s impact on the music industry, we are essentially talking about injecting much more competition into an already extremely competitive space.
What else? Away from music generation, Spotify is exploring the use of language models to create playlists from scratch and last year debuted its ‘AI DJ’. These are ultimately examples of music curation, representing the next stage of AI-powered recommendation algorithms. I suspect these two trends, an explosion in the total volume of music and new technologies designed to help recommend it, will shape our collective relationship with aural culture in the years to come. They might be mutually reinforcing (i.e. streaming services that direct people to AI music or create it from scratch) or they might exist in competition (i.e. a user may be able to specify ‘no AI generated music’ when asking a platform to recommend tracks).
Best of the rest
Friday 12 April
UK and Republic of Korea to build on legacy of Bletchley Park (UK Gov)
Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images (Bloomberg)
Google, Intel and Meta drop new AI chips while TSMC gets paid (Quartz)
AI Products Still Need Their Human Helpers (Bloomberg)
The inherent paradox of AI regulation (The Hill)
Thursday 11 April
Apprentice Tutor Builder: A Platform For Users to Create and Personalize Intelligent Tutors (arXiv)
Laissez-Faire Harms: Algorithmic Biases in Generative Language Models (arXiv)
UK competition watchdog has 'real concerns' over big tech AI dominance (BBC >> CMA)
Analyzing Toxicity in Deep Conversations: A Reddit Case Study (arXiv)
Humane AI Pin review: the post-smartphone future isn’t here yet (The Verge)
ChatGPT Can Predict the Future when it Tells Stories Set in the Future About the Past (arXiv)
Unravelling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models (arXiv)
Socially Pertinent Robots in Gerontological Healthcare (arXiv)
Wednesday 10 April
The Rise of the Interdisciplinary Lawyer: Defending the Rule of Law in the Age of AI (University Of San Francisco Law Review)
AI will produce Britain’s first trillion-pound company (City AM)
A Survey on the Integration of Generative AI for Critical Thinking in Mobile Networks (arXiv)
How to Stop Your Data From Being Used to Train AI (WIRED)
OpenAI's Altman Pitches Global AI Coalition on Trip to Middle East (Bloomberg)
How AI is helping to prevent future power cuts (BBC)
Introducing Our Next Generation Infrastructure for AI (MetaAI)
Tuesday 9 April
Meta confirms that its Llama 3 open source LLM is coming in the next month (TechCrunch)
Does Transformer Interpretability Transfer to RNNs? (arXiv)
Americans' top feeling about AI: caution (YouGov)
Inclusive Practices for Child-Centered AI Design and Testing (arXiv)
AI is repeating A-bomb's story, says EU tech overseer (Axios)
Election Workers Are Drowning in Records Requests. AI Chatbots Could Make It Worse (WIRED)
Monday 8 April (and things I missed from last week)
Microsoft AI gets a new London hub fronted by former Inflection and DeepMind scientist Jordan Hoffmann (TechCrunch)
SafetyPrompts: A Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety (arXiv)
Global AI governance: barriers and pathways forward (International Affairs)
Securing Canada’s AI advantage (Canada Gov)
People are persuaded by rational arguments. Is that a good thing? (Substack)
Economic arguments in favour of reducing copyright protection for generative AI inputs and outputs (Bruegel)
Regulating advanced artificial agents (Science)
The Origin of Information Handling (arXiv)
Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data (arXiv)
Talk by Sora creators (AGI House >> X)
Attention in transformers, visually explained | Chapter 6, Deep Learning (YouTube)
Job picks
These are some of the interesting (mostly) non-technical AI roles that I’ve seen advertised in the last week. As always, it only includes new roles that have been posted since the last TWIE – though many of the jobs from two editions ago are still open. As well as those below, there are also six roles going in UNESCO’s AI team.
Researcher (EU Public Policy), Ada Lovelace Institute (UK)
Project Consultant, Research, AI Governance, UN (Remote)
External Affairs Director, Center for AI Policy (Washington, DC)
Head of Strategic Communications (UK AISI)
Policy Generalist, Center for AI Safety (San Francisco)