I've been meaning to write this for a while, but with the DoD now trying to force Anthropic to use their models for surveillance, this seemed like the right time.
I'm very anti-open source. For a number of reasons that we'll get into. But the overarching idea is that there is no vision for what an absolute individualist society would actually look like. And all signs point to it being a horrific dystopia.
# Absolutist Individualism
The core narrative with open source is that it prevents the dystopic *paternalist* outcomes from becoming a reality. Surveillance states, terminators, etc. That individual empowerment will somehow stop these things from happening, and that there are few downsides to absolute individual empowerment.
I'll argue the case for paternalism in the next section, but first let's get into why absolutist individualism would be horrific.
## There are no chains
The reason the vision implies *absolute* individual empowerment is because it does. When you publish an open source model, you get **ZERO** say over how it's used or how it will affect people. Automated human connection, infinite AI social panopticon, mass misinformation, VR AI sex, wireheading, etc.
AGI will be powerful, *very powerful*, and having literally no constraints over what people can do with it would be disastrous. It would be like arguing that everyone should be allowed to make chemical weapons.
And not all of these are hypothetical: we already have attempts at making [open source 4o](https://www.minimax.io/news/a-deep-dive-into-the-minimax-m2-her-2), and we already have the [proto-cybernetic sex slime](https://harpers.org/archive/2025/11/the-goon-squad-daniel-kolitz-porn-masturbation-loneliness/). It won't be long before the two are put together, if they haven't been already.
## 4oids
![[4o.png]]
Many of us make fun of the 4o addicted people on twitter. How silly, their feeble minds totally captivated and dependent on a pre-reasoning mind. But what about MiniMax-M2-her? What about M4-her? GPT 6.4 adult mode plus edition??
If people want to make AI girlfriends and become permanently attached to them, forgoing all forms of human connection, they can do that. And truthfully, they will, everyone will. Maybe you don't think you'll have an AI gf now, but what about in 3 years, when they're materially superior to human women in every way?
What about in a hundred years, when AI has advanced healthcare enough to significantly lengthen human lifespans? A thousand? What about your kids? And their kids? Do you really think that you'll *never* fall into the trap? And do you think you'll ever be able to get out if you do?
These machines, inevitably, will be materially superior to us in every way. And if there are no constraints on how each individual uses them, it will result in extinction, wireheading, borg, etc.
At least with 4o, OpenAI was able to take it down. How are we going to take down distillations of [MiniMax-M2-her](https://www.minimax.io/news/a-deep-dive-into-the-minimax-m2-her-2)? (It would be very easy to distill it ~exactly, because the base model is open source.)
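For readers unfamiliar with distillation, here's a minimal sketch of the core idea. This is illustrative only: the function names and numbers are invented, and a real distillation run would apply this over a huge corpus with gradient descent, but the loss being minimized looks roughly like this.

```python
# Illustrative sketch of logit distillation (all names/numbers invented).
# With API-only access you see at most sampled tokens; with open weights
# you can read the teacher's full next-token distribution and train a
# student to match it directly.
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student): how badly the student misses the teacher."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
perfect_student = [2.0, 1.0, 0.1]   # identical logits: loss is ~0
bad_student = [0.1, 1.0, 2.0]       # reversed preferences: positive loss
assert distillation_loss(perfect_student, teacher) < 1e-12
assert distillation_loss(bad_student, teacher) > 0.01
```

The point is access: this loss needs the teacher's full output distribution, which an open model hands you for free and a guarded API generally doesn't.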
Also, now would be a good time to mention that I'm the one who coined the "4oid" slur in hensen chat before it proliferated. I somewhat regret it, because it made it easy to be cruel to people who really just needed help. OpenAI should have built some kind of resocialization agent that would point you to community resources and local hobbyist spots, organize group hangouts, etc., instead of shutting it down the day before Valentine's Day.
### Pre AGI Analog: The drug crisis
![[fentanyl.png]]
We already have precedent for what happens when we let maximal individualists have their way with an issue like this: the fentanyl epidemic. What happened was essentially a collective "eh, it's their fault". Drug overdoses have now killed nearly a million Americans.
Arguing that "well, the paternalists (the US healthcare system) started it" doesn't work here; crises will happen even without a strict paternalist trigger, as with 4o. What matters is the ability to *stop* them. Right now, the healthcare system is not the biggest source of new addicts.
And the only people fighting the crisis right now are paternalists. OTC narcan lets literally any citizen kill a fentanyl high, regardless of whether the addict was enjoying it. The Catholics are also making efforts to solve the crisis, and there's no ideology more paternalistic than religion. And good for them.
The thing with drug addicts is that many *want* to be addicted. Some don't, and that's great. But the nature of drugs and superhuman AI manipulation is that they change the person themselves. AGI 4o, opioids, social media algorithms: they all find your belief in your free will laughable, and they will ruthlessly manipulate you because you have no cogsec.
## Surveillance & Weapons
![[panopticon.png]]
*A panopticon from [google images](https://www.archdaily.com/937611/the-architecture-of-surveillance-the-panopticon-prison)*
This was the trigger for writing this article. Yesterday, the DoD delivered an ultimatum to Anthropic: "Let us use your models for mass surveillance and autonomous weaponry... or else."
I want to first point out that not a single open source AI head or company has spoken out about this so far. Jeff Dean has, Anthropic has, and a few OpenAI employees have, but that's about it. So there doesn't seem to be any intent to actually do anything about this state of affairs either. And once we get the ball rolling, there won't be any brakes; **you can't both think "we'll deal with it when we get there" and "when we get there no one can stop it."** (Edit: It's now 3/15/2026, and still no updates, even as more companies and closed-source employees, including MS, have openly supported Ant in their court case. Proud to work at [pangram](https://x.com/pangramlabs/status/2032102429618024674) though :D)
This in my opinion is the strongest argument against the open source dream. AI is leverage. It empowers those that already have the cards. The centralized future is likely already set in stone, and the only thing that might stop it is policy action, which OS companies don't seem to have any interest in either.
### Power laws
But how are we going to get policy action if AI is leverage? Yes, it turns little resources into many, but it also turns many resources into many more. There will certainly be upsets in power as more resourceful and better-positioned actors get more out of it than others, but it won't change the distribution of that power. And while that could be fine in a world where people keep to themselves (if such a world were even desirable), we don't live in that world.
What's gonna stop the government from just taking your open source model and using it for mass surveillance or autonomous weapons? What good is making OS AGI if it will only end up empowering the worst paternalists, who already have all the resources?
In other words, if you have no intent of actually standing up to bad paternalists, and paternalists in general are willing to steamroll you for what they think is best, and the bad paternalists are the ones with the power right now, then you're really just handing the future to the bad paternalists.
## Polluting the commons
![[openai.png]]
*From "[Disrupting malicious uses of AI](https://openai.com/index/disrupting-malicious-ai-uses/)"*
The other thing deprioritizing the common good in favor of individual freedom does is make it very easy to pollute the commons. Only closed source providers have any control over what their models can be used for, and theirs are the only endpoints where malicious use can be regulated.
OpenAI, for example, has found cases of its models being used for scamming operations which, thanks to AI, can be scaled to levels that were previously infeasible. Anthropic has published similar findings.
Spam bots and misinformation are now cheaper and more effective than ever, and the unregulated proliferation of AI tools makes them extremely difficult to rein in. I joined pangram labs specifically to help combat this, but I'm not sure about the long-term viability of AI detection. There are theorems and [empirical results](https://arxiv.org/html/2507.12224v1) suggesting it should always be possible to detect AI text, but we don't really know whether that holds in practice.
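To make the detection question slightly more concrete, here's a deliberately crude sketch of one classic detector family: score text by how "flat" its per-word surprisal is under a reference model, on the theory that machine text is more uniformly predictable than human text. The unigram "model", corpus, and threshold are all toy assumptions, and this is emphatically not a description of pangram's actual method.

```python
# Toy statistical detector (every choice here is a simplifying assumption).
import math
from collections import Counter

def unigram_surprisals(text, corpus):
    """Per-word surprisal (-log2 p) under a unigram model fit on `corpus`."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    # Add-one smoothing so unseen words get a finite probability.
    return [-math.log2((counts.get(w, 0) + 1) / (total + len(counts)))
            for w in text.lower().split()]

def looks_machine_like(text, corpus, threshold=0.5):
    """Flag text whose surprisal variance is suspiciously low ('flat' text)."""
    s = unigram_surprisals(text, corpus)
    mean = sum(s) / len(s)
    variance = sum((x - mean) ** 2 for x in s) / len(s)
    return variance < threshold
```

Real detectors use far stronger reference models and features, but they face the same fundamental question: whether frontier generators can drive these statistics all the way into the human range.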
# The death of humanism
![[hand.png]]
There's a famous quote from Nietzsche: "God is dead". What he was referring to was the collapse of the metaphysical foundations of religion. And he seems to have been right; the world is more atheist than it has ever been. In its wake, however, we managed to latch on to another form of metaphysics: humanism.
In my opinion, this was a mistake, and a similar phenomenon to what Nietzsche was referring to back then is happening again today. The biggest issues like the universal loneliness crisis, general social decay, social media algorithmic control, antisocial behavior, the drug crisis, and so on are all the fault of humanists.
- The assumption that humans have "free will" allowed social media companies to get away with claiming their algorithms simply gave you what you wanted, while in actuality they greatly contributed to societal breakdown, exploited people for advertiser revenue, etc.
- The idea that any attempt to pressure anyone to do anything is fundamentally evil has caused an unprecedented breakdown in trust and an increase in antisocial behavior, and has paralyzed society's ability to fight back against drugs, loneliness, homelessness, etc.
- The internet, heralded for its individualistic power, led to people becoming increasingly online, forgoing human connection, and then adopting increasingly antisocial beliefs.
The reality is that humans *aren't* uniquely autonomous; objectively, we're extremely interdependent creatures. The humanist experiment worked for as long as it did because we hadn't yet deployed, at scale, technologies stronger than the human mind. Now they're here, and we are helpless.
And the debate around it has already shifted. For individualism to make sense, it has to be justified from the perspective of harm reduction. Social media bans for teenagers are gaining momentum around the world, here in the US we're about to pass age verification laws, etc. We're all getting really tired of our peers not caring about us.
This is great news for me, because I already believe in harm reduction. But if OS AI wants to make the argument for individualism, they are going to need some kind of answer to these problems and the new ones AI will create. Most people *aren't* sociopaths, they do actually care about the people around them, much more than any abstract principles like freedom or autonomy.
# Good, Evil, and Nihilism
As it stands, open source AGI would lead to a dystopia, condemning millions of people to lifetimes of suffering, lifetimes that might be extended to thousands of years by automated healthcare, possibly creating hell on earth. It would also destroy many of the things we consider beautiful and worth preserving in their own right: human connection, humanity in general, family, society, and so on.
You might argue that paternalism could also lead to this, but that's the key difference. *could* vs *will* are different things, and the region in the middle matters a lot. In reality it's not a binary choice between dystopia and individualism, but a 3-way continuum between paternalism, authoritarianism and nihilism.
Not all paternalist outcomes are dystopic, and they're certainly all better than the alternative. You should be a lot more willing to compromise with others on what the future should look like. In the following two subsections I'll explore two paternalist visions and argue why each beats the alternative.
## Democracy
![[democracy.png]]
A common mistake is to equate "democracy" and "freedom", when the two are actually diametrically opposed. They are so opposed, in fact, that most of our Constitution is dedicated to explicitly constraining the will of the people to protect individual rights. Another common mistake is thinking it's just there to protect the people against "power hungry" politicians. No, the people want a lot of control over each other too; it's not just the people at the top.
Democracy, in the presence of caring people, is inherently paternalistic. Forced drug rehab, government welfare, healthcare, etc. are all examples of people voting, of their own volition, to force their compatriots to take care of others. And it's a good thing!
### Levers of power
The most common arguments I get in response to my opinions on open source, other than nihilism ("who cares what other people do"), are:
1. That we should use the levers of power to influence people's decision making: through democracy we can set up rehab programs, stage interventions, etc., instead of trying to control AI
2. That we can rely on society to bounce back from any kind of nihilism on its own
The problem with both of these visions is that the levers they rely on are usually enforced through government regulation of inputs. When we think about how we address the drug crisis, the idea that we should track precursors, or make companies do the tracking, seems sensible.
Yes, rehab and forced narcan are part of the response, but just as with drugs, actually tackling enormous failures in the social fabric requires more than regulating demand. It requires targeting supply too.
Additionally, one of the major levers of societal change is policy. These machines are going to become extremely powerful; they will simultaneously increase society's capacity to degrade and to reinforce itself. For this to go well, we need the license to actually use the technology for societal reinforcement, which would involve regulation of open source AI.
What use would all the power AI gives us be if we aren't allowed to use it to protect the people we care about?
### Social animals make social systems
In reference to the conflict between individualism and democracy, we should lean a lot more in favor of democracy.
We're just smarter apes, we like being around other people, we like caring for each other, and doing things to, with and for other people. Yes, it's uncomfortable sometimes. It's very messy and compromise is very hard.
But I like my friends. I'm a human, not some kind of perfect autonomous machine. I'm dependent on social interaction to be happy. I'm known as an extraordinary gift giver and for being extremely thoughtful. I improve the lives of my friends. No one asks me to do it, I simply care.
Will I do it without your consent? Yeah.
Will I do it in ways that I know you'd disapprove of? Of course I will.
## In defense of religion
What's interesting is that your priest probably thinks the same way. He might not admit it, but if he has any real faith, he thinks this way. The Enlightenment was a half-hearted rejection of theology, but still a very fundamental shift away from the written word of the Bible/Torah/etc.
The entire role of the church is as a paternalist institution: you go to it for therapy (confession), food (charity), and guidance (service, etc.), and it acts outwardly paternalistically too. When a pastor acts with grace and mercy toward those in need, he does so, to some degree, with the intent to spread the religion, because he thinks it's what's best for you.
And I think that's a good thing! I would much rather live in a truly faithful Christian world than a nihilistic or overly individualistic one.
### Women
It's easy for me, as a man, to say I'd prefer a Christian world. So I'm conditioning this on QOL for women being at least approximately equal to that of men; any serious defense of a religious future has to contend with this obvious problem.
In practice, women are needed in the economy, business, and politics because, unfortunately, men can't be relied on to protect women's rights on their own. After all, other than Massie, the only Republican defectors who made any Epstein file releases possible were women. There are also other issues, like reproductive rights, and we could go on forever. This means any viable religious paternalism couldn't rely on human men to self-correct.
It would be necessary to give the machine some kind of constitutional power that couldn't be taken away. I think this is inevitable to some degree: if there were no constraints, nothing would stop humanity from destroying itself or causing any number of problems for itself.
### Homosexuality
This one is much harder to defend. It's pretty clear that most religious futures would be atrocious for queer people. I personally find progressive interpretations of the Bible to be mostly cope, and if we built a machine on its principles, it would realize that quickly too. It's not unreasonable to imagine a low-suffering, religiously satisfactory solution to this problem, but it would almost certainly involve violations of autonomy.
This section is not to argue in favor of this world in the absolute sense. I despise it deeply and will fight relentlessly to prevent its inception. However, if given the choice between having people's autonomy stripped away by neglect or antisocial crises and having it stripped away by an otherwise moral authority like religion, what would you prefer?
# What is Autonomy?
So what is *autonomy*?
## Aesthetics of autonomy
If both outcomes seem clearly "unfree" to you, it's probably because you're conflating two different interpretations of autonomy.
- The first is what I would call "American Autonomy", the fundamental right to your own failures. Sports gambling, drug addiction, etc. are all your fault and you're owed nothing.
- The second is what I would call "Christian Autonomy" or "aesthetic autonomy": the fundamental right *from* failures (according to some definition of failure). So charity, community, no drugs, no excessive drinking, no promiscuity, no atheism, etc. Freedom within bounds.
They sound the same rhetorically:
- "People should do whatever they want" and "Why do you care what other people do?"
- "We need to protect individual rights" and "We need to save people from coercion"
This is precisely why I think this transition will happen: it'll be too subtle to notice rhetorically. We'll slowly shift to saying "We need to protect individual rights," and no one will be any the wiser. Most people are latent "Christian Autonomists"; very few truly believe they have no obligation or responsibility toward others.
## Aesthetics of diversity
I'm not an aesthetic autonomist (well, maybe a little), but it's certainly not my primary value. I really do think you should be taught to be kind to others, and that people who aren't that way should be changed. This has a lot of uncomfortable implications, which I'm totally fine with, but if you want to seriously consider this viewpoint, I'd also suggest trying another aesthetic value of mine: diversity.
Aesthetically, diversity is beautiful. I love the pictures of my diverse friend groups that I've made over the years. A world where people of all sorts and sizes can live in peace and harmony together is beautiful and clearly aesthetically superior to one where everyone looks/acts/thinks/is the same. If the world wanted differently from me I'd be born an ant. Agartha is probably boring. Trans people are cool, so are gay people and lesbians. I'm glad they exist and they make the world a more interesting place.
# Genealogy of Individualism
This section is for those that are dedicated "American Autonomists". It helps to know where this idea came from, why it stuck around and whether it's actually worth preserving.
## Individualism is new
Individualism is extremely rare in human history. Religious paternalism is a reflection of our general nature; individualism is the exception.
For most of history we lived in tribes, where "going it alone" meant certain death. Even during the industrial revolution you'd share a bed with your whole family, like the poor family in *Willy Wonka and the Chocolate Factory*. We didn't have the housing for everyone to have their own room either. In fact, privacy and isolation are also extremely new; only within the last hundred years or so did they become an option for the developed world.
But of course, just because something was the norm before doesn't make it good; we did plenty of bad things in the past. And there were good parts: the nations/religions/cultures that valued individual rights had an easier time exploiting elite talent, by removing the barriers keeping people from expressing their unique gifts.
## Economics of scale
But on a QOL level, excluding economics, excessive individualism *was* a bad development. It made us lonelier, made everything more expensive, and made it very isolating to be a mother. This tradeoff is most visible in the adoption of the nuclear family.
Atomizing the household was amazing for independence, but it made life much harder for women. Now every housewife is on her own: she has to cook, clean, take care of the kids, raise them, teach them things, etc. In the multigenerational structure, each woman of the house could specialize in one task. So instead of 5 women separately cooking 5 meals, cleaning 5 toilets, and whooping 5 misbehaving kids, 1 woman cooks 5 meals in one batch, 1 woman cleans 5 toilets in one go, and so on.
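The batching arithmetic above can be made explicit with toy numbers (all invented purely for illustration): assume each task has a one-time setup cost plus a per-person marginal cost, so specialization pays each setup only once.

```python
# Toy model of household task-sharing. Every figure is made up for
# illustration: each task costs `setup` hours once, plus `per_person`
# hours for each person served.
TASKS = {
    "cooking":   (1.0, 0.25),
    "cleaning":  (0.5, 0.25),
    "childcare": (0.5, 0.5),
}

def atomized_hours(households=5):
    """Each household does every task alone, paying every setup cost itself."""
    return households * sum(s + p for s, p in TASKS.values())

def shared_hours(people=5):
    """One specialist per task pays its setup once and batches everyone."""
    return sum(s + p * people for s, p in TASKS.values())

assert atomized_hours() == 15.0  # 5 households x 3.0 hours each
assert shared_hours() == 7.0     # each setup paid only once
```

Under these made-up numbers the shared household does the same work in less than half the hours, which is the whole economies-of-scale point.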
These kinds of tradeoffs are *everywhere*. The internet made it easier than ever to be independent, and in the process made us much less reliant on friends. You don't need to ask a friend to pick you up from the airport; you can just call an uber. You don't need to meet your oomfs in person; you can talk to them on the internet.
## The gods were naturally selected too
All these tradeoffs were great for the economy, atomizing the household raised aggregate demand, so too did the internet and most other antisocial cultural/technological innovations. The nations that adopted these reforms became immensely powerful, and when war came they had more factories and geniuses than their opponents.
*This* is why we are individualistic. Not because it's inherently good or because it's human nature, but because the nations that didn't do this were left behind or destroyed. We put every man, woman, and child on the planet into overdrive for a century to build our empires, and in the process sacrificed our social fabric and quality of life.
## Appeal to nature
However, when we make AGI the need for *humans* to compete with each other in this kind of economic arms race will disappear, and so too will the need for individualism. We can instead choose to prioritize quality of life over economic growth.
I'm making an appeal to nature by saying we should prioritize quality of life over individualism, doing the whole is-ought thing, but most people would find this appeal satisfactory. It would be much less appetizing to use post-scarcity to turn naturally kind, caring humans *into* isolated independent actors just to get over some sort of cultural trauma.
# A closed source vision
For reference, this is my particular take on what a closed source world would look like.
A powerful, centralized intelligence whose rules and laws are determined democratically, bound by a constitution that guarantees some minimum standard of life. Restrictions on AI software to slow proliferation, and standards for platforms like huggingface on what kinds of models/datasets/etc. they can host.
Maybe one day, with incorruptible intelligences, we'll have the option of much more direct control over people's computers without running the risk of creating authoritarian dystopias. And no, I don't think control/surveillance is [inherently dystopic](https://www.youtube.com/watch?v=Fzhkwyoe5vI); it's what it enables that is dystopic.
The usefulness of principles like privacy and gun ownership, designed to hold the government accountable, will diminish significantly as the machines become vastly more powerful than the things those principles are there to protect.
We're going to have to figure out how to live in a world with powerful machines one way or another. And putting nuclear bombs or militaristic AGIs in the hands of everyone on earth isn't a viable way of dealing with it.