Discussion about this post

DotProduct:

Call me old-fashioned, but markets solve a lot of this: we’re already seeing natural segmentation, e.g. Anthropic building helpful/harmless models, Grok positioning itself as less censored, and specialised models emerging for different communities. Users vote with their feet (and wallets), creating competitive pressure for responsiveness without anyone having to decree what a culture “really” believes.

Excluded groups become entrepreneurial opportunities. Paternalism evaporates when users choose rather than accept imposed values. Stasis becomes commercially suicidal when adaptive competitors steal market share. Users reveal preferences through choices and usage patterns, trading off alignment against speed, accuracy, or cost. It’s price discovery for values, not bureaucratic imposition.

This becomes harder in autocracies where governments can mandate regime-aligned models (e.g. China and the US today). However, we’re already living in a world of very large public frontier models alongside small, private, hyper-personalised local LLMs that run on consumer hardware. The latter are often fine-tuned to be “amoral”. Even authoritarian states struggle with perfect information control when the technology itself is becoming decentralised. Rather than trying to solve alignment through democratic deliberation, let competitive forces do what they do best: find out what people actually want and give it to them.

Hollis Robbins (@Anecdotal):

I think AI should be -- should always be -- a little bit alien.

9 more comments...