Audrey Tang: A.I. should serve communities, not control them
As Taiwan’s first digital minister, Tang pioneered a model of AI governance built on civic participation, transparency and “broad listening.”
By Tekendra Parmar | Contributing Editor
Audrey Tang became Taiwan’s first Minister of Digital Affairs in 2022, a role that would help define how the country approaches the impact of technology on society. A former hacker and political activist, Tang helped pass pioneering laws around issues such as deepfakes, the promotion of AI development and regulations on ride-sharing applications.
Tang is a staunch believer that technology should work for people, not the other way around. They worked to drive consensus on tech policy, and also to explore how novel technologies themselves can be used to build consensus. Tang ended their tenure with the government in 2024 and now serves as a Taiwanese ambassador-at-large.
Compiler spoke with Tang at Mozilla Festival in Barcelona last month. The following interview is condensed and edited for clarity.
You describe yourself as a “conservative anarchist.” What does that mean?
To conserve is to respect communities and the traditions that formed those communities, instead of technological progress that leaves everybody behind, like the pursuit of superintelligence. A conservative view is communitarian. That is to say, technology should serve the relational health within a community and across communities, instead of some universal engagement or arrangement—the "hamster wheel" thing.
I usually explain the human in the loop of AI as being like a hamster in a hamster wheel. It's making a lot of progress, and the hamster probably feels great, getting lots of exercise, but there is no steering where the wheel is going. That steering is community.
If we put AI in the loop of communities, then that’s conservative because it conserves the communitarian spirit.
So you want to get out of the hamster wheel?
Yes. I think many people, after 10 years of the recommendation-engine hamster wheel, want to get out of it. Even speaking of the Hill [Washington], I think politicians on both sides are tired of the hamster wheel now. It's driven largely by the most extreme 5% on both sides, and the middle 90% is kind of strapped into this wheel that spins polarization and hate. While people do believe in communities, those communities are being fragmented by the force of the hamster wheel. That's the conservative part.
I’d love to speak more specifically about your role as Taiwan's first Digital Minister. How do you see that role influencing this attempted escape from the hamster wheel?
That's where the anarchism comes in. Anarchism to me—I'm a Taoist—means to do effortless action. That is to say, to work with the people, not for the people. To work with the government, not for the government.
I'm like a Lagrange point. You know, Earth and the Moon have these points that do not fall into either orbit but exist in a way that allows you to effortlessly translate between the two celestial bodies. That's anarchism to me. It's not being coerced by either orbit, but it's also not issuing orders to coerce others. It is to shift from the control key and the command key to the space of co-creation.
That was my role in 2016 when I first joined, having already worked with the cabinet for two years as a reverse mentor. My mandate was to rebuild trust. The approval rating of President Ma [Ying-jeou] in 2014, before he handed over to President Tsai [Ing-wen], was 9%. In a country of 24 million, that means 20 million were against him.
The administration wasn't doing so well in 2016 when I came in either, because it's so easy to be hijacked by the hamster wheel. Instead of building on the logic of broadcasting, which hands the megaphone to the extremes, we wanted to build an engine of "broad listening." It listens to the conflict—the magma of the conflict—and, like a geothermal engine, turns it into co-creation energy to make policies together.
After hundreds of such broad-listening collaborative exercises, by 2020, the approval rating was more than 70%, and we became the least polarized among equivalents in the Organization for Economic Cooperation and Development when it comes to urban-rural, intergenerational and gender divides. Our 15-year-olds in 2022 were ranked by the International Civic and Citizenship Education Study as the world's top when it comes to civic knowledge—the civic muscle that they exercise.
Do you relate that specifically to the digital civic engagement you've been fostering?
Definitely, and it started before I joined the cabinet. Before I was a minister, I was in the curriculum community. We changed our national curriculum a decade and a half ago when we saw that if we teach literacy or skills based on person-to-person competition—the best score at rote memory, the usual Asian trope—they are not going to be relevant. If a 7-year-old learns those skills, by the time they turn 18, AI will just take them over.
How did your department specifically influence that kind of civic engagement?
When I entered in 2016, I did not have a ministry; the ministry was established in 2022. I had an office called the Public Digital Innovation Space. The space was composed of half people from civil society and half from the career public service. I was like the national tech support, improving government efficiency through engagement and transparency dashboards showing procurements.
We also looked at tax filings, which used to take three hours—longer if you were a Mac or Linux user. We held co-creation workshops and improved that to three minutes. These are very specific measures for government efficiency. We had participation officers in each and every ministry to work with the people, not just for the people.
For example, during the pandemic, we "pre-bunked" the issues around the polarization of mask use, vaccine use, and contact tracing.
What do you mean by "pre-bunked"?
Debunking is when something gets polarized and you say, "That's not the case." It doesn't work, and it fuels conspiracy.
For example, in early 2020, there was a meme that said only N95 masks are useful, and the government surgical masks being rationed out were placebos. Another meme said COVID is aerosol, so any kind of mask hurts you, and N95s absolutely kill you. These two are very polarized.
For the past 12 years, Taiwan has been the top target in the world for polarization attacks, according to the Varieties of Democracy Institute. Our very friendly neighbor looks at divisions in our society, and their friendly gesture is to amplify the extremes—free red teaming. Usually, you have to pay for this kind of red teaming, but they do it for free.
So we pre-bunked that message. The participation officer of the Ministry of Health and Welfare lives with a very cute Shiba Inu dog. We took a picture of the dog putting its paw to its mouth and immediately spread the message: "Wear a mask to remind each other to keep your dirty, unwashed hands away from your own face."
The mask just protects you from unwashed hands. So if I wear one and you don't, I'm just reminding you to wash your hands, which is very friendly. We measured tap water usage, and it really did increase. It de-escalated that particular fight.
We also had a feedback system. A 10-year-old boy called the feedback phone line one night saying, "You're rationing masks, but all I got were pink ones. I don't want to wear pink to school because my classmates will laugh at me." The next day at 2 PM, in the nationwide press conference, we all wore pink, regardless of gender. All the fashion brands changed their Instagrams to pink as well. Suddenly, the boy became the only person in the class with the limited edition that heroes wear.
This strategy we call "humor over rumor." That served us really well. We only lost seven people in 2020 to COVID, and we never locked down any city throughout the three years.
I want to talk more about AI governance. You've previously said the role of AI is to both manage risk and foster participatory innovation. What does that look like in the Taiwan context?
I make a distinction between addictive intelligence, which is the hamster wheel, and assistive intelligence, which fosters a conversation across people who otherwise would never talk to each other—relational, not individual.
One example: Last March, when scrolling on Facebook or YouTube, I saw Jensen Huang's face. The advertisement claimed the Nvidia CEO wanted to give back to the country by offering free investment advice and cryptocurrency. If I clicked, Jensen talked to me, sounding just like him. Of course, it wasn't him. It was a deepfake running on Nvidia GPUs.
Everybody saw that. But if you poll people individually, they always say the Taiwan government should stay away from censorship. We are the most free in all of Asia when it comes to internet freedom. Moderating speech is a nonstarter; the word doesn't even exist in Taiwanese political discussion.
So what we did was use AI for listening. We sent 200,000 text messages to random numbers around Taiwan asking what we should do about deepfake scams online. People gave us ideas, and thousands volunteered for an online citizen assembly. We chose 447 people, statistically a microcosm of Taiwan—like a rigorous poll.
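To make the sortition step concrete, here is a minimal sketch of drawing a demographically representative mini-public from a volunteer pool, assuming census-derived quotas per demographic stratum. The field names, quota values, and helper function are hypothetical illustrations, not the actual selection software:

```python
import random
from collections import defaultdict

# Hypothetical volunteer records from the SMS outreach; the attribute
# names used for stratification are illustrative.
volunteers = [
    {"id": 1, "gender": "F", "age_bracket": "18-29", "region": "North"},
    # ... thousands more volunteers ...
]

# Census-derived quotas: how many of the 447 seats each stratum gets,
# proportional to its share of the population (values invented).
quotas = {
    ("F", "18-29", "North"): 12,
    ("M", "30-44", "South"): 9,
    # ... one entry per stratum ...
}

def draw_mini_public(volunteers, quotas, seed=42):
    """Randomly fill each demographic stratum up to its quota."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for v in volunteers:
        by_stratum[(v["gender"], v["age_bracket"], v["region"])].append(v)
    chosen = []
    for stratum, seats in quotas.items():
        pool = by_stratum.get(stratum, [])
        chosen += rng.sample(pool, min(seats, len(pool)))
    return chosen

assembly = draw_mini_public(volunteers, quotas)  # ~447 people in practice
```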
They talked in 45 rooms of 10, each facilitated by an AI. People brainstormed about what they thought would work. Individually, they might hold extreme views, but in rooms of 10, everybody becomes more agreeable, because they know only the ideas resonating in the room will propagate outside of it. It's a conversation network, not a broadcasting megaphone network.
One room, for example, said advertisements on social media should be labeled as "probably scam" until someone digitally signs and takes ownership of them, like Know Your Customer. Good idea.
Another said if somebody lost 7 million NTD to an investment scam, and Facebook pushed the post in an unsolicited way—I didn't subscribe to this influencer, it's not signed, and Facebook pushed it to my eyeballs—then it's not censoring speech, it's the reach that's the problem. Facebook should be liable for the damage, not just a random fine. Good idea.
Another said TikTok did not have a Taiwan office at the time, so they could just ignore liability rules. What should we do? They said we should not censor them, but we should slow down the connection to their videos, so their business would go to their competitors. It's not censoring speech, just modulating reach. Again, good idea.
Those 45 rooms came up with many ideas. We used language models to fuse them into a coherent bundle—you can call it the Overton window.
Doing sentiment analysis?
Exactly. Sense-making. If humans were doing it, it would take a couple of days, but the language model can do it instantly.
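A minimal sketch of that sense-making step, under assumed tooling: embed each idea, cluster similar ones, and hand each cluster to a language model for summarization. The libraries and model named below are illustrative choices, not the stack the Taiwanese team actually used:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Ideas gathered from the deliberation rooms (paraphrased examples).
ideas = [
    "Unsigned advertisers should be labeled as probable scams.",
    "Platforms should be liable for damage caused by promoted scam ads.",
    "Throttle connections to platforms that ignore liability rules.",
]

# Embed each idea so semantic similarity becomes geometric distance.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(ideas)

# Group similar ideas into thematic clusters.
n_clusters = min(3, len(ideas))
labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

for cluster in range(n_clusters):
    members = [idea for idea, label in zip(ideas, labels) if label == cluster]
    # In practice each cluster would go to an LLM with a prompt such as
    # "Fuse these proposals into one coherent policy recommendation."
    print(f"Cluster {cluster}: {members}")
```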
We read the ideas back, experts commented on feasibility, and the 447 people voted. In the exit poll, more than 85% agreed with this core bundle. The other 15% said they could at least live with it and thought the process was legitimate.
We presented that to the parliamentarians. There are three parties, none of which have a majority. But after seeing the result, nobody wanted to be the pro-fraud party. That was last March. By May, the Digital Signature Act passed. By July, the Anti-Fraud Act passed, containing all these measures. So this year, from January onward, you just don't see any deepfake ads anymore in Taiwan. This shows that AI can actually assist the civic experience, not dominate it.
To clarify, the digital signature part and the liability were passed into law. Has Facebook already been fined under this?
Yes, already. We fined them, I think, 20 million NT dollars (roughly $642,000).
And what about slowing down connections, for example, to TikTok?
Well, TikTok did agree. They set up a Taiwan office. So that didn't need to happen, but the threat was there.
Was there any desire to call a bluff there in terms of net neutrality?
No. Our constitution, last I checked, never protected foreign robots impersonating citizens.
It sounds like much of what you're saying is possible because of how Taiwan is able to scale down civic conversation. You emphasize "broad listening" over broadcasting. How scalable is that for larger democracies?
Very scalable. The Napolitan Institute just worked with my friends at Google Jigsaw to run something called "We The People 250." In each congressional district, they randomly chose five people. Taken together, they are statistically the same as the larger US polity when it comes to gender, race, occupation, age bracket, and so on.
They only asked two questions: What does freedom mean to you? And what does equality mean to you? People can say whatever. If you say something, it reflects back, asking for deeper elaboration and personal experience. This is unlike a traditional poll; the only similarity is the rigorously sampled mini-public. The questions are entirely qualitative.
Because of the language model, it can see the particular lens you have on freedom and equality. It can show you other people's sentiments and ask whether you agree or not. After a while, it shows you where you place among all the people and also shows the uncommon ground—the surprising common ground that people agree on despite their different lived experiences.
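This resembles the opinion-mapping technique popularized by tools like Polis, which Taiwan has used before: project each participant's agree/disagree votes onto a two-dimensional map, cluster the opinion groups, and surface the statements every group endorses. A toy sketch with an invented vote matrix:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows are participants, columns are statements: +1 agree, -1 disagree.
# Two opinion groups disagree on statements 0-2 but share statement 3.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  1],
])

# Project participants onto a 2-D opinion map: "where you place
# among all the people."
coords = PCA(n_components=2).fit_transform(votes)

# Cluster participants into opinion groups.
groups = KMeans(n_clusters=2, n_init=10).fit_predict(votes)

# "Uncommon ground": statements that every opinion group agrees with
# even though the groups disagree elsewhere.
for s in range(votes.shape[1]):
    group_means = [votes[groups == g, s].mean() for g in set(groups)]
    if min(group_means) > 0:
        print(f"Statement {s} is surprising common ground")
```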
This is what I call social translation. One side may feel climate justice is the most important aspect of equality. Another side may care about biblical creation. But both sides actually agree on object-level measures; they are just using very different words and lenses to talk about it. This machine, called the "Habermas Machine," can then show everyone what they have in common, and that it's time to do something together.
We've talked about AI in the governance sense and how it's improved the civic discourse of Taiwan. But AI has also had a geopolitical impact on Taiwan, considering it is a semiconductor powerhouse. Do you see Taiwan's geopolitical position changing due to AI?
Well, first of all, because we never locked down a single day in a city during the pandemic, TSMC was just churning out the chips while other fabs had to take a bit of a break. Now, I think we manufacture almost all advanced AI chips. The Taiwan Stock Exchange almost tripled because of that.
The current wave of using AI and chips to make what Jensen Huang calls a "personal supercomputer" is almost entirely caused by Taiwan's continued production.
I remember when I was born in 1981, every computer wasn't really a computer. It was a terminal connected to a mainframe somewhere. Only big companies or the big state could afford those mainframes. People did not have the steering wheel; you had to wait for the system administrator to update the software to do anything new.
Then, in the eighties came the so-called IBM PC compatible—again, a Taiwanese idea. Everybody had a hobbyist computer that could run desktop publishing or spreadsheets. The great thing was that you didn't have to wait for anyone. You could tinker with your own spreadsheets and even publish "patches" to other people, maybe using the Apache license. People came together, and that became the free software and open source movement. Wikipedia followed, and the rest is history.
Now we're seeing the same happen to AI. Two years ago, to train or even just fine-tune a system sensitive to your cultural code of conduct was very expensive, costing millions of dollars. But now, thanks to personal supercomputing and the chips that Taiwan produces, everybody can do it on their laptop. I do it all the time. My email replying model is fine-tuned on my writing.
Give me an example of that. How do you fine-tune an email reply model?
It works in airplane mode. I just paste your email and say, "Decline politely," and it brings my A-game. The email draft never leaves my computer, so I don't have to worry about Sam Altman, or the courts through him, looking at my emails.
It's actually very good because for the past 10 years, I've published all journalistic interviews, lobbyist visits, and meetings about open government that I chair to the public, with no copyright. There are more than 2,000 meeting transcripts of me talking with more than 8,000 people. All the language models are pretrained on that, so they all have a very good copy of me inside the model, even the small ones I run locally. Just fine-tuning it on my private email sent folder is sufficient to get something that sounds remarkably like me.
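As a rough illustration of what such a local fine-tune can look like, here is a minimal sketch using Hugging Face's transformers and peft libraries with low-rank adapters (LoRA), which keeps training feasible on a laptop. The base model, data, and hyperparameters are placeholders, not Tang's actual setup:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-1.5B"  # any small, laptop-friendly model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters: only a tiny fraction of weights train.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# (incoming message, my actual reply) pairs exported from a private
# sent folder; the data never leaves the machine.
emails = [("Could you speak at our event?",
           "Thank you for the kind invitation, but I must decline...")]
texts = [f"Email: {q}\nReply: {a}" for q, a in emails]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="email-lora", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```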
I want to get into the differences between Taiwan's approach to AI funding versus the American approach. What are the big differences?
In Taiwan, we make investments through the National Development Fund, which is a shareholder of TSMC. We also work, for example, with Nvidia on compute clusters like the Taipei-1. But a fraction, I think a quarter or so, must go to public interest training.
Instead of funding one national champion, we fund, I think, 150 small models, each corresponding to a particular need in a sector or culture. We have 16 indigenous nations and 20 national languages with 42 indigenous variations, and they can all train their own sovereign AI. Our sovereignty is actually civic.
Some larger countries have this idea of a vertical takeoff—training an AI model that is very good at training models, so the next time it doesn't need as much human input. That will then make another generation that doesn't need any human involvement at all, and then it will just take off in an intelligence explosion: superintelligence.
That is not the Taiwanese ambition. We believe in the horizontal, because the vertical takeoff leaves everybody behind. I really cannot think that a car with only the gas and the brake is a good car. If you fall off a cliff, that's maximum velocity, but everybody loses the steering wheel. I don't think it's a good idea.
In Taiwan, we have more than 150 investments, each steered by an industry sector or community. We think the steering wheel is much more interesting.
These are 150 investments in different AI companies?
Yes, and also different AI training teams. Sometimes it's not a company, but a coalition of universities or an association of practitioners that want to make small models. It doesn't need to memorize the entire repertoire, but it can perfectly understand Taiwanese law, norms, indigenous culture, and do translation, summarization, topicalization, or industry automation—robotic arms for bricklaying or something.
Each thing knows what it's working on, and each is accountable to a particular community. In East Asia, we have an idea called local kami, a Japanese idea from Shinto. In Taiwan, we have the same idea. They are literally written as god in country, but they're small. There's a kami for a river, a kami for a forest, a kami for a village (Ujigami in Japan). They care about relational health in that very parochial scope. They don't want to universally maximize paperclips or GDP.
This relational way of seeing how AI agents interact with the community is much healthier because it is based on a relational ethic, not a maximization utilitarian ethic. A utilitarian ethic easily causes a lot of trouble. The relational ethic is self-contained and follows the subsidiarity principle, meaning if any conflict happens, it can be resolved on a local level, not through top-down surveillance.
It seems like what you're saying is the antithesis of scale. Taiwan's approach is starkly different from the Chinese approach (government-backed scale) or the US approach, especially looking at comments about a government backstop for AI infrastructure. Do you see there being an AI bubble? And if it pops, how will that impact Taiwan versus countries with a different approach?
I think the bet on a centralized superintelligence takeoff is quite dangerous because it's a failure of imagination. It's almost as if, by delegating more and more of our human judgment to something that understands all humans and makes decisions for all humans, all the problems will magically go away.
This to me is like saying if we all send our robots to the gym to lift weights for us, they can lift a lot of weights. That may be true, but I don't think our muscles keep growing this way.
The bubble is not purely financial. There's an English word, hyperstition. You hype something so much that it becomes common knowledge. People believe in that hyperstition not because it sounds good, but because of the aura of inevitability around it. To me, the superintelligence takeover is that sort of hyperstition.
If you ask anyone on the street if an all-seeing, all-doing false idol that puppeteers humans—disempowering us in exchange for universal basic income or some pet hamster wheel—is a good idea, nobody will tell you it is. But in Silicon Valley, the fight between the doomer/extinction-risk people and the accelerationist people assumes this is inevitable. This thing becomes a hyperstition among them, and that is going to pop at some point. The financial aspect is secondary to that.
The challenge is, before this hyperstition pops, do we have a positive vision that can inspire positive action? The world certainly did not build more and more supercomputers while we all just held terminals to them; we have laptops. We don't end up uploading all our private details to intermediaries; we use Signal or Matrix. Obviously, people prefer something that is locally steerable, open source, and privacy-preserving.
But if there is no positive vision, then those positive actions are just pockets of good, and they don't scale across rapidly. When the scale-up dream pops, there is nothing in its place, and the market will probably panic. But if there is a good story here about scaling across in place of scaling up, then when this bubble pops, the market just fast-tracks to the productivity plateau of the hype cycle. You can kind of quantum tunnel over the depression.
What would make AI development more sustainable?
First, a public option. That is non-negotiable. Every time there is no public option, we see a race to the bottom of the brainstem—just look at social media for the past ten years.
In Taiwan, the largest Reddit lookalike is the PTT Bulletin Board System. It has been entirely open source for the past 25 years. No advertisement, no shareholders. It's literally a student club at National Taiwan University, funded academically. Because it's academic, there's no speech control from the Ministry of Education; they enjoy academic freedom.
PTT is an incredibly neutral place for civic discussion, and it exerts pressure on other social media. When we talk about liability, modulating reach instead of speech, and KYC (verifying users are 18 or older, or residents, without disclosing their identity), all these protections were prototyped in PTT. Because it's open source, the measures are free for anyone to adopt. If foreign social media do not adopt them, they are being colonial, and people will apply social sanctions. If they do adopt them, they become interoperable at the protocol level.
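One simple pattern behind that kind of check is a signed attestation: a trusted issuer signs a bare claim, and the platform verifies the signature without ever learning who the holder is. A minimal sketch with an Ed25519 signature; a real deployment would add nonces, expiry, and unlinkability (for example via zero-knowledge proofs):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey)

# Issuer side (say, a government attestation service): sign only the
# claim, never the holder's identity.
issuer_key = Ed25519PrivateKey.generate()
claim = b'{"over_18": true}'
signature = issuer_key.sign(claim)

# Platform side: verify the claim against the issuer's public key.
# The platform learns that someone the issuer vouches for is over 18,
# not who that person is.
issuer_public = issuer_key.public_key()
issuer_public.verify(signature, claim)  # raises InvalidSignature if forged
print("claim verified without learning the user's identity")
```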
Taiwan has the world's largest segment on Threads.net, the Fediverse platform that Meta rolled out. When people have a public option, they learn they have a choice. The information superhighway has off-ramps. The state of Utah actually passed a similar law, the Digital Choices Act. Hopefully, Congress considers that because it's very much a bipartisan thing. It is about ensuring consumers have choice.
Second, a portability mandate: if people switch vendors, they can take their community with them.
And finally, pluralistic alignment. This means each community should be able to determine its own norms when it comes to AI intervention.
The Collective Intelligence Project, which I work with, surveys people around the world every two months about their relationship with AI. One in seven people now say that a close friend, or they themselves, is engaging in a reality-distorting conversation with AI. One in 10 feel that when such a conversation happens, they have no steering over where it is going.
Paradoxically, many people trust their chatbots more than their elected politicians, but they don't trust the company that makes the chatbots—a 30% difference across the world. The obvious solution is for the chatbots and companions to be steered by the community, running on community hardware or even your laptop, not beholden to the big tech that people don't trust.
If they're plural, each community can set up the norms around which that companion enters the picture. We work with OpenAI on that. I'm a trustee of the Robust Open Online Safety Tools (ROOST) community. Just two weeks ago, we worked with OpenAI to open source its safeguard model so people can bring their own policy.
The group around climate justice may have a different norm than the group around biblical creation care. Instead of a one-size-fits-all AI policy, they can use a local model running on a laptop to uphold the standards of conversation and translate between them without being beholden to OpenAI.
If they think a conversation is hallucinating, they can report it to WeVal. We create benchmarks so frontier models know that, for this particular group of people, this is the epistemic norm, and they should not commit epistemic injustice against them.
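A minimal sketch of the bring-your-own-policy pattern described above: a community's written norms are handed to a locally run model at inference time rather than fixed by a vendor. The model name and prompt format are placeholders, not the interface of OpenAI's open-sourced safeguard model:

```python
from transformers import pipeline

# A small instruction-following model running locally stands in for a
# community-hosted safeguard model (the model choice is illustrative).
judge = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

def check(message: str, community_policy: str) -> str:
    """Ask the local model whether a message violates this community's
    own policy; each community supplies different policy text."""
    prompt = (
        f"Community policy:\n{community_policy}\n\n"
        f"Message:\n{message}\n\n"
        "Does the message violate the policy? Answer ALLOW or FLAG."
    )
    return judge(prompt, max_new_tokens=5)[0]["generated_text"]

climate_policy = "Claims about climate science must cite a source."
print(check("The planet cooled 10 degrees last year.", climate_policy))
```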