At the height of his influence, Henry Ford used the media, staged exhibitions, and became the world’s largest film distributor to mould the United States into a car-loving country.
For Saffron Huang, deciding who gets to wield that kind of influence comes down to one problem: the ability of everyday people to govern transformative technology. The former DeepMind researcher is fond of quoting the famed late biologist Edward O. Wilson: “We have paleolithic emotions, medieval institutions, and godlike technology.”
We’re stuck with our emotions and blessed by our technology, but our institutions need an upgrade. Saffron’s solution is upgrading what she terms our Collective Intelligence: how society chooses and delivers its goals. Examples include concepts you’ve heard before, like national democracy, capitalism, or the courts.
These systems have enabled incredible progress, but Saffron isn’t the only one to notice that they’ve fallen off the pace in recent decades. Climate change, pandemics, and growing inequality are all examples of what happens when democracy falls behind and capitalist markets operate unconstrained. With transformative technologies, like AI, blockchain, and biotech, the change promises to be even more dramatic.
“How can we have more collective control over our frontier technologies?”
— Saffron Huang, co-founder, Collective Intelligence Project
As the co-founder of a new think tank called the Collective Intelligence Project (CIP), Saffron’s goal is to tackle “the problems that existing collective systems could not solve.” She founded the organisation with colleague and friend Divya Siddarth; the pair left DeepMind and Microsoft, respectively, last year to help reimagine how society decides – and gets – what it wants.
With support from luminaries like former CEO of the Wikimedia Foundation, Katherine Maher, esteemed economist and author Daron Acemoğlu, plus researchers from the world’s leading universities, AI firms, civic groups, and governments, the organisation has big plans to give society the ability to shape transformative technologies for the better.
“The idea of the Collective Intelligence Project,” Saffron explains on Curation, the Culture3 podcast, “is essentially asking how we can have more collective control over our frontier technologies, so that we can steer it towards the collective benefit.” So far, she warns, technology looks to be heading in the opposite direction. Rather than “benefiting people broadly,” cutting-edge tools like ChatGPT are, for example, built on “an underclass of data labellers” who create datasets to improve AI tools and moderate their content.
That imbalance illustrates what the Collective Intelligence Project calls the transformative technology trilemma. The idea is that transformative technologies make trade-offs between progress, safety, and participation. “Lots of people take two out of those three things,” Saffron explains.
For example, the UK Government's creation of an AI Safety Summit for leading AI firms drew criticism that a small group of elites were writing their own rule book. Professor of Computer Science at the University of Southampton, Dame Wendy Hall, who co-chaired the Government’s 2017 AI review, warned that the advice was “mainly coming from the big tech companies.” She cautioned, “is it right for the people making money to be the people designing the regulation? You need voices other than the tech companies themselves.”
Social media illustrates the free market failure, where progress and participation come “at the sacrifice of safety.” Meanwhile, the “community-led approach sacrifices progress, but gains safety and participation.” The third failure is the “authoritarian camp”. For example, governments might track the use of GPUs to control which organisations develop cutting-edge AI systems.
“You need voices other than the tech companies.”
— Dame Wendy Hall, Professor of Computer Science, University of Southampton
Saffron’s ultimate vision, she affirms, is engaging society to direct transformative technologies on a path that is safe, creates progress, and is inclusive and participatory. She points to how, in the 20th century, the world successfully regulated the car industry to introduce mandatory seat belts that have saved millions of lives, or how “we successfully regulated planes, and now giant flying metal objects can land safely, thousands of times per hour, around the world.”
Partnerships between governments and the private sector, as well as voluntary standards-setting committees between companies, represent moves in the right direction today, though Saffron admits that “right now, I’m not sure there’s an institution I would point at and say they’re doing an amazing job.”
That said, most transformative technologies are relatively new, so Saffron remains optimistic — “it’s always going to take time to figure things out.” And, singling out investments made by the UK Government in Foundation AI models and the Advanced Research and Invention Agency (ARIA), she suggests that “a renaissance for UK public interest technology” could be on the horizon. “The UK is doing a reasonably good job of trying to be a leader in AI regulation,” she adds.
Amidst those hints of optimism, CIP exists to research and trial new democratic institutions to navigate that trilemma, and be a hub for others trying to do the same. “We think about it in two parts,” Saffron explains, undaunted by the challenge. “An information problem – how do we process, understand, and combine people’s inputs and values – and then the incentive problem, designing bespoke institutions that get the collective good you want.”
The ‘information problem’ comes first. “In certain respects,” Saffron continues, “we think real democracy has never been tried.” Traditional representative democracy, where citizens vote for a politician or party every few years, struggles to deliver enough information for governments “to understand and enact what voters want.”
She’s interested in ideas that “better surface and aggregate that information,” like quadratic voting, where each voter spends a budget of credits and the cost of votes for a specific cause grows with the square of the number cast, so voters can express strong preferences, but at an increasingly steep price. Inspired by CIP advisor and Microsoft researcher Glen Weyl, Saffron says that quadratic voting is a potential answer to the question: “if people really care about a particular idea, can you weigh that (strong preference) more?”
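The mechanics are simple enough to sketch in a few lines. The credit budget, issue names, and tally below are illustrative assumptions, not a description of any system CIP has built:

```python
import math

def cost(votes: int) -> int:
    """Under quadratic voting, casting n votes on one issue costs n**2 credits."""
    return votes ** 2

def max_votes(credits: int) -> int:
    """The most votes a voter can afford on a single issue within their budget."""
    return math.isqrt(credits)

def tally(ballots: list[dict[str, int]]) -> dict[str, int]:
    """Sum each voter's votes per issue. Votes count linearly in the tally,
    but each voter paid quadratically to cast them."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# A voter with 100 credits can afford at most 10 votes on one issue (10**2 = 100),
# or spread them out: 5 votes on each of four issues costs 4 * 25 = 100 credits
# yet buys 20 votes in total, so concentrating influence is deliberately expensive.
```

Compared with one-person-one-vote, which treats an indifferent voter and a passionately committed one identically, this lets intensity of preference show up in the result while the quadratic cost curve keeps any single voter from dominating.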
It could be seen as naive to suggest that the world’s problems can be solved simply by better understanding what voters want. Saffron is well aware that the incentives governments – and other organisations – face can still lead them astray.
“A lot of people in tech do really value having a good impact,” she admits, “but incentives can lead them in a different direction.” She emphasises the venture capital industry as an example, where investors are incentivised to prioritise quick wins within a few years.
“The way folks usually get that return on investment is you either go public or you get sold,” Saffron explains. Both routes end up creating increasingly larger companies, and society loses access to what small and medium businesses can offer. “People have described VC funding like jet fuel: it’s not good for everything. You don’t want jet fuel to be valorised as the only valid and important outcome. Is there a way that the incentives can be better aligned with our values?”
“In certain respects, we think real democracy has never been tried.”
— Saffron Huang, co-founder, Collective Intelligence Project
Other approaches to building the businesses of tomorrow pale in comparison to venture capital. Business loans are tough to get, particularly for start-ups that spend heavily on software, whilst bootstrapping a company without any funding faces the same challenge with even less support.
Saffron is interested in alternatives like community-based models, such as Kickstarter-style experiments where backers support a project in its early days, and Exit to Community models (Perpetual Purpose Trusts are an example), where stakeholders, typically customers or employees, take ownership of an organisation once it’s running steadily, with the purpose of keeping it that way.
The trouble is that, when it comes to transformative technologies, people often don’t know what to think. “We’ve run public input processes into AI, but people don’t feel they know enough to have an opinion,” Saffron admits. “I think there’s a way we sell ‘tech’ as this magical, fearsome thing, and that’s just a cultural choice which creates certain power dynamics for the tech industry.”
“Are we building solutions without problems?”
— Saffron Huang, co-founder, Collective Intelligence Project
Saffron’s fundamental motivation for her work is navigating questions like “what ends are we trying to enable with our increasingly capable means? What are we doing this for?”
Those questions stem from the worry that this cultural narrative often ends up as “technology for technology’s sake.” She muses, “historically, we innovated towards filling our needs. Are we a historical anomaly, building solutions without problems?”
That might be better than building problems without solutions, but Saffron is in the business of matching those two opposites. With partners in the USA, Taiwan, and the UK, the team are scaling up their Alignment Assemblies programme this year to engage society writ large on artificial intelligence.
In other words, learning how people want AI to behave, guiding AI models in that direction, and figuring out whether that’s really happening. “It’s going to be a busy rest of the year,” Saffron summarises. She cautions that “real change happens slowly – and with lots of people doing their bit,” but the CIP research engine is quickly getting into gear.