2 Comments
Michael Spencer

Do you think that the U.S. approach, which is clearly not about maintaining "social stability" and may even risk under-regulating generative AI, might actually have depolarizing effects on things like the health of democracy, social media, misinformation, and some aspects of job displacement under capitalism?

It appears that Europe, China, and the U.S. have fundamentally different approaches to risk mitigation here. How do you expect that to play out in the near future?

Harry Law

At the moment, there are more similarities between the European and Chinese approaches than you might think: both are working towards a major piece of 'horizontal' legislation, both recognise the need to regulate the value chain (though admittedly China focuses on algorithms), and both recognise the need for labelling, explainability, bias mitigation etc. (though again this manifests itself in different ways).

Ultimately, while the motivations might be different, the interesting thing is that there's a degree of convergence with respect to the levers that governments can pull. For example, where Europe is interested in watermarking/labelling to maintain epistemic security and combat misinformation, China is more interested in doing so to maintain social stability. That means that even though the final version of these measures will be conditioned by different values and implemented in different ways, there's a degree of overlap in the focus and substance of the approaches.

As for the US, it will be interesting to see how governance proceeds at the state versus federal level, and whether for the latter we'll get more than voluntary codes like the NIST risk management framework or NTIA's recent call for input.
