Misinformation, adoption, start-ups [TWIE]
The Week In Examples #39 | 8 June 2024
This week’s email has it all: new work challenging popular beliefs about misinformation, research into the usage patterns of generative AI amongst young adults, and a stocktake of the AI start-up landscape amidst a hefty new investment. Now and always, email me at hp464@cam.ac.uk with things to include, comments, or anything else!
Three things
1. Misunderstanding misinformation
A new study published in Nature argues that misinformation is a poorly understood phenomenon. As the authors explain: “Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization.”
The group makes the case that claims about the purported harms of algorithmically-powered misinformation are not supported by studies of misleading content. Take reporting from the BBC this week with the headline “TikTok users being fed misleading election news, BBC finds”.
The report says that these videos have racked up “hundreds of thousands of views”, which—for an app with daily UK views in the billions—is probably best described as a microbe inside a drop in the ocean. Sticking it on the BBC homepage dramatically overplays the scale of the problem, which in turn may give readers the wrong impression about the importance of misinformation in shaping the outcome of the UK election.
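To put rough numbers on that, here is a quick back-of-the-envelope sketch. The 500,000 and one billion figures are illustrative assumptions standing in for ‘hundreds of thousands of views’ and ‘daily UK views in the billions’, not numbers taken from the BBC report.

```python
# Back-of-the-envelope share of one day's UK TikTok viewing taken up by
# the misleading election videos. Both inputs are illustrative assumptions.
misleading_views = 500_000        # "hundreds of thousands of views" (assumed)
daily_uk_views = 1_000_000_000    # "daily UK views in the billions" (assumed lower bound)

share = misleading_views / daily_uk_views
print(f"Share of one day's UK views: {share:.4%}")  # prints 0.0500%
```

Even on that generous reading, the misleading videos account for around 0.05 per cent of a single day’s viewing.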
In other words, loose reporting about the supposed effect of ‘misinformation’ could well be described as, well, misinformation (which often includes misleading but not outright false content). I mention the press because the paper calls out the role of journalists along with politicians, but it is worth saying that not all coverage of misinformation blithely toes the party line. The Atlantic, for example, wrote a great new piece explaining that an expected crisis of epistemic security simply “didn’t happen” in the Indian election.
To round things off, I want to direct you to an excellent recent essay by Dan Williams of the University of Sussex. The blog makes the case that, even if we reject the mistaken beliefs that 1) misinformation is the most worrying form of bad information and 2) misinformation is easy to identify, that does not mean that society is enjoying a period of epistemic vitality.
Quite the opposite. As Williams explains: “First…communication can be - and frequently is - highly misleading without ever involving blatantly false or fabricated content. Second, once you broaden the focus on bad information to include any content that might be misleading even if it does not involve outright falsehoods and fabrications, bad information is not easy to identify.”
2. Generational AI
Last week, I looked at a University of Oxford report indicating that young people are ahead of their older counterparts in using AI. This week, we are back for more, with the American nonprofit Common Sense Media finding that 4 per cent of young people aged 14-22 use AI every day.
A further 11 per cent said they used the technology once or twice a week, while 12 per cent reported using AI once or twice a month. These are interesting results, but it is worth saying that they are a bit long in the tooth. Given the survey was conducted in October and November 2023, it is possible that these figures have shifted in the last six months or so.
Fortunately, we also have recent work to explore. Harvard University released its inaugural Undergraduate Survey on Generative AI, which ran in April 2024 (while this is obviously more up to date, the habits of Harvard students may not be quite the same as those of other American young people).
The survey asked students how often they use AI, with one in four respondents saying that they rely on it on a daily or almost daily basis. That figure represents a roughly six-fold increase on the statistic from the Common Sense Media survey, which may be down to selection effects, differences in when the surveys were conducted, or a combination of the two. If I had to bet, I might upweight the former, given that a one-in-four result is also a significant jump on the poll covered last week (though that comparison only looks at daily use).
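For what it’s worth, the six-fold figure is just the ratio of the two daily-use numbers. A quick sketch, treating ‘one in four’ as a flat 25 per cent:

```python
# Ratio of the Harvard daily-use figure to the Common Sense Media figure,
# using the percentages quoted above.
harvard_daily = 0.25        # roughly one in four respondents
common_sense_daily = 0.04   # 4 per cent of 14-22 year olds

ratio = harvard_daily / common_sense_daily
print(f"Harvard daily use is about {ratio:.1f}x the Common Sense figure")  # roughly 6x
```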
3. AI video startup records $80M funding round
Outside of well-established AI developers, there are basically two types of generative AI start-up that investors are writing big cheques for. First, there are scaffolders (less charitably called ‘wrapper’ companies) that build on top of models released by Google or OpenAI. Cognition AI, which built the AI ‘software engineer’ Devin, belongs to this group.
Second, we have the companies that don’t just think they can do a better job of packaging and deploying other developers’ technology, but whose raison d’être is to compete with them directly as model makers. Pika Labs, a one-year-old company from San Francisco, is one of these firms. It recently closed an $80M funding round, which it said would be used “to accelerate our progress in building the best video foundation model.” Think something like OpenAI’s Sora or the new model from a Chinese group released this week.
According to Crunchbase, AI funding held strong in the first quarter of the year despite a reduction in major rounds (like Microsoft’s investment in OpenAI last year). The first quarter saw over $12bn invested in AI startups across 1,100 deals, with the total amount of capital deployed representing about a 4% increase on the final quarter of 2023.
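Working backwards from those Crunchbase figures gives a rough sense of scale. In the sketch below, the $12bn, 1,100 deals, and 4% numbers are as reported; the average deal size and the Q4 2023 total are implied arithmetic.

```python
# Implied figures from the Crunchbase numbers quoted above.
q1_2024_total = 12_000_000_000   # "over $12bn" invested in Q1 2024 (reported)
q1_2024_deals = 1_100            # deal count (reported)
qoq_increase = 0.04              # ~4% up on Q4 2023 (reported)

avg_deal_size = q1_2024_total / q1_2024_deals
implied_q4_2023 = q1_2024_total / (1 + qoq_increase)
print(f"Average Q1 2024 deal size: ~${avg_deal_size / 1e6:.0f}M")    # ~$11M
print(f"Implied Q4 2023 total: ~${implied_q4_2023 / 1e9:.1f}bn")     # ~$11.5bn
```

On those inputs, the average deal comes out at roughly $11M and the implied Q4 2023 total at around $11.5bn.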
I expect to see more of the same throughout 2024 as investors stay motivated by NVIDIA’s seemingly unstoppable rise, though don’t expect the party to last forever. All things, good or otherwise, must come to an end. I suspect the scaffolders will find that foundation model developers are perfectly capable of building the vast majority of products themselves. The new wave of model-making start-ups, however, may fare better (especially in modalities like video, where you can get more bang for your buck).
Best of the rest
Friday 7 June
Testing and mitigating elections-related risks (Anthropic)
Open Source AI: The Overlooked National Security Imperative (Just Security)
Banks say growing reliance on Big Tech for AI carries new risks (Reuters)
Silicon Valley in uproar over Californian AI safety bill (FT)
Concern rises over AI in adult entertainment (BBC)
Thursday 6 June
Nvidia value surges past $3tn and overtakes Apple (BBC)
AI used to predict potential new antibiotics in groundbreaking study (The Guardian)
US antitrust enforcer says ‘urgent’ scrutiny needed over Big Tech’s control of AI (FT)
US regulators to open antitrust inquiries of Microsoft, OpenAI and Nvidia, NYT reports (Reuters)
Could AI put an end to animal testing? (BBC)
Wednesday 5 June
Hawking was wrong: Philosophy is not dead, and it has kept up with modern science (Substack)
What is the Best Way for ChatGPT to Translate Poetry? (arXiv)
Are language models rational? The case of coherence norms and belief revision (arXiv)
‘The Stakes Are Incredibly High.’ Two Former OpenAI Employees On the Need for Whistleblower Protections (TIME)
Securing Research Infrastructure for Advanced AI (OpenAI)
Code & conduct: How to create third-party auditing regimes for AI systems (Ada Lovelace Institute)
Situational Awareness (Leopold Aschenbrenner)
How AI could roil the next economic crisis (Axios)
Tuesday 4 June
To Believe or Not to Believe Your LLM (arXiv)
Llama 3 updated model card (Hugging Face)
Why Would You Suggest That? Human Trust in Language Model Responses (arXiv)
AMD's Su: AI 'mega-cycle' calls for faster pace of chip rollouts (Fierce Electronics)
AI Employees Fear They Aren’t Free to Voice Their Concerns (The Wall Street Journal)
Can California fill the federal void on frontier AI regulation? (Brookings)
Google’s AI Overviews misunderstand why people use Google (Ars Technica)
AMD’s AI Ambitions Take Aim at Intel and Nvidia (PYMNTS)
Monday 3 June
FineWeb: decanting the web for the finest text data at scale (Hugging Face)
Harvard Undergraduate Survey on Generative AI (arXiv)
Scientists should use AI as a tool, not an oracle (Substack)
Problematizing AI Omnipresence in Landscape Architecture (arXiv)
Selling Data for AI May Be Publishers’ Salvation (The Information)
AI safety becomes a partisan battlefield (Axios)
AI isn't a daily habit yet for teens, young adults (Axios)
Nvidia dominates the AI chip market, but there’s more competition than ever (CNBC)
An official court ruling draws the line on AI use in journalism, but is it enough? (The Hill)
The Billion-Dollar Price Tag of Building AI (TIME)
Job picks
Some of the interesting (mostly) non-technical AI roles that I’ve seen advertised in the last week. As usual, the list only includes new positions that have been posted since the last TWIE (but lots of the jobs from the previous edition are still open).
Expression of interest, Research / Consulting Teams, Institute for Law and AI (Remote)
Head of Policy, SaferAI (Paris)
Program Manager, Artificial Intelligence, US Government (Remote, US)
Director of Government Affairs, Institute for AI Policy and Strategy (Washington D.C.)
Delivery and Engagement Lead, International AI Safety Report, UK AI Safety Institute (London)
Senior Tech Policy Specialist, Simon Institute for Longterm Governance (Geneva)
Director, Content and AI Policy, Risk, Compliance, Integrity, Google (US)
Communications Specialist, FAR AI (Remote)
Insider Risk Forensic Investigator Lead, Anthropic (San Francisco)
Policy Fellow, AI Governance, Standards and Regulation, Alan Turing Institute (London)