The Week in Examples #2 [26 August]
Including China and the global safety summit, barriers to conscious AI systems, and the new era of AI sovereignty
And we are back for the second edition of The Week In Examples. Please read on for an end-of-week roundup of news from industry, government, and civil society, along with opinion and analysis of all things AI. As a reminder, the format is: my thoughts on the three most interesting pieces I’ve seen this week, links to other AI news you might have missed, and a vignette of the week to finish.
I want The Week In Examples to be a useful resource, so I’ll need your help to review, refine, and improve. Please don’t hesitate to write to me at hp464@cam.ac.uk with feedback. (It helps a lot!)
Three things
1. Calls for China to attend UK AI summit
What happened? Huw Roberts of the University of Oxford wrote to the Financial Times to argue that excluding China from the AI summit would be a mistake. The letter came days before the UK announced that the summit would take place 1-2 November at Bletchley Park, home to the UK’s codebreakers in the Second World War.
What’s interesting? Roberts makes three arguments in favour of China’s inclusion. First, that the risks posed by AI transcend borders (Roberts suggests that if, for example, western firms agree norms preventing the use of AI to create chemical weapons, a bad actor could simply use Chinese systems if similar safeguards do not exist for that country’s systems). Second, that any international agreement will receive pushback in the US if it is perceived to provide China with advantages. Third, that while the US and UK have largely been slow to regulate AI, China has pushed through a raft of new laws.
What else? Personally, I find it difficult to see a situation in which China allows its systems to be used for extremely dangerous activities while the UK and US do not. This is all the more unlikely given, as Roberts notes, China has been “proactive” on regulation while the US and UK have “largely been dragging their feet.” We should also remember that the summit is not the only chance to secure international agreement. Agreement on the initial charter for the IAEA, for example, took dozens of meetings over the space of well over a decade. Some included both the United States and the Soviet Union, while others included one but not the other.
2. No barriers to building conscious AI, say researchers
What happened? Researchers including Robert Long from the Center for AI Safety and Yoshua Bengio of the University of Montreal published a paper surveying different theories of consciousness and applying them to recent AI systems. They suggest that, while no current AI systems are conscious, there are no obvious technical barriers to building conscious systems in the future.
What’s interesting? The group assessed different theories of consciousness (like recurrent processing theory, global workspace theory, predictive processing, and attention schema theory) to create ‘indicator properties’ of consciousness that can be described in computational terms. These properties were then applied to different architectures (including the transformer) to make the case that none currently satisfy the criteria associated with these indicators. The paper goes on to argue, however, that there is nothing to prevent new AI systems fulfilling the conditions of these indicators in the future.
What else? Ultimately, no-one really knows what consciousness is (hence the researchers surveying many different theories to derive shared indicators). It’s also not clear whether something is lost when expressing indicators in computational terms to allow for their application to different architectures. I like that the paper acknowledges that existing risks, such as concerns around bias, the use of AI to enable repression, or the automation of jobs, do not turn on whether or not AI is conscious (page 68). I would go even further and argue that future outcomes are also not dependent on the consciousness question. The public imagination tends to get hung up on whether AI could be conscious, but it is not at all clear to me that consciousness is a prerequisite for something to be smarter than us (or indeed a lot smarter than us). There are, though, obvious epistemic, moral, and practical benefits to determining whether AI systems are conscious.
3. AI sovereignty is here to stay
What happened? Former Google researchers Llion Jones and David Ha have set up Sakana AI in Japan. Jones was one of the researchers behind the transformer, while Ha led Google’s AI research arm in Japan before a brief stint at Stability AI. Sakana is derived from the Japanese word さかな (sa-ka-na), meaning fish; the name seeks to evoke the idea of “a school of fish coming together and forming a coherent entity from simple rules.”
What’s interesting? Sakana is launching with a pretty explicit sovereignty angle. Ha wrote on Twitter that "I believe that the AI field is either too Western-Centric or China-Centric, depending on where you live. I think it is time for Japan to shine in the AI space. My ideal future is one where technology will NOT be dictated by the Bay Area or by Beijing." The move comes shortly after France’s Mistral AI launched with a $113M seed round and support from Cédric O, the former digital minister for Emmanuel Macron’s government, to build foundation models in France.
What else? I suspect there will be more to come as far as efforts to build local foundation models are concerned, but I am less sure how much staying power some of these projects will have. It is, after all, one thing to raise money to train a model for millions of dollars, but quite another to raise money for training runs that cost billions of dollars. While it is certainly possible that these sorts of efforts will continue to proliferate and be backed to the hilt even as the cost of staying competitive skyrockets, support will be contingent on how soon AI begins to make a dent in the social, economic, and political environment.
Best of the rest
Friday 25 August
Time Magazine: The Heated Debate Over Who Should Control Access to AI
The Guardian: The professor’s great fear about AI? That it becomes the boss from hell
CNBC: Alibaba launches AI model that can understand images and have more complex conversations
Financial Times: Can AI crack comedy?
The Guardian: New York Times, CNN and ABC block OpenAI's GPTBot web crawler from scraping content
Financial Times: What can a virtual village made up of AI chatbots tell us about human interaction?
Thursday 24 August
The Register: Hollywood studios agree AI-generated content should not reduce humans' pay or credit
Harvard Medical School: Scientists Discover Previously Unknown Way Cells Break Down Proteins
The Verge: Google, Amazon, Nvidia, and others put $235 million into Hugging Face
The Verge: Meta launches own AI code-writing tool: Code Llama
Associated Press: Nvidia’s rising star gets even brighter with another stellar quarter propelled by sales of AI chips
Reuters: AI startup Modular raises $100 mln in General Catalyst-led funding
TechCrunch: a16z-backed AI video generator Irreverent Labs raises funding from Samsung Next
Reuters: Britain will host AI summit at World War Two code-breaking centre
Wired: The Myth of ‘Open Source’ AI (Opinion)
Wednesday 23 August
The Algorithmic Bridge: ChatGPT Killed the Old AI. Now Everyone Is Rushing to Build a New One (Substack)
TechCrunch: OpenAI brings fine-tuning to GPT-3.5 Turbo
Reuters: Germany plans to double AI funding in race with China, U.S.
Washington Examiner: Leading House Democrat advocates new agency to regulate AI
Associated Press: Gov. Evers creates task force to study AI’s effect on Wisconsin workforce
The Hill: AI cheating is hopelessly, irreparably corrupting US higher education (Opinion)
Tuesday 22 August
Axios: Meta releases an AI model that can transcribe and translate close to 100 languages
TechCrunch: Meta confirms AI ‘off-switch’ incoming to Facebook, Instagram in Europe
Reuters: China's Baidu beats quarterly revenue estimates, cheers generative AI progress
The Guardian: Nvidia shares hit all-time high as chipmaker dominates AI market
The Washington Post: Top senator calls for a new GI Bill for AI
Bloomberg: AI Funds Don’t Like AI Stocks (Opinion)
Monday 21 August
The New York Times: How Nvidia Built a Competitive Moat Around A.I. Chips
The Information: Beware of Fear-Mongering in AI Regulation; Meta Scores Another Point for Open-Source
Reuters: Arm IPO to put SoftBank's AI hard sell to the test
TechCrunch: Developers are now using AI for text-to-music apps
Reuters: YouTube starts Music AI incubator with Universal Music as partner
The Guardian: ‘Very wonderful, very toxic’: how AI became the culture war’s new frontier
The Hill: AI art can’t earn copyright, judge rules
The Guardian: UK to spend £100m in global race to produce AI chips
TechCrunch: The human costs of the AI boom