
The leaders of top tech companies are irreversibly changing how people work and live, while shaping the future with artificial intelligence — but most have a seriously weird way of viewing the world.
One burns wooden effigies at holiday parties, another is a doomsday prepper and health-paranoid “cyber-chondriac,” while a third founded an AI-god-worshipping cult.
These moguls tell us their AI systems — which they admit they don’t fully understand — are beneficial. However, experts say it’s a coin toss whether humans will be enslaved by their creations or live carefree lives of leisure, while robots do all the work.
Here’s what makes the brains behind big tech tick:
OpenAI
OpenAI co-founder Ilya Sutskever has been portrayed by colleagues as an esoteric spiritual leader obsessed with superpowered AI, one who burned wooden effigies of “unaligned AIs” at holiday parties and team-building retreats.
Employees at the ChatGPT maker also claimed that, before he left the company in 2024, Sutskever led ritualistic chants of “Feel the AGI,” referring to artificial general intelligence, a hypothetical AI that can think for itself like a human.
He also floated the idea that OpenAI should build a “doomsday bunker” to house the company’s top researchers in case of a “rapture” triggered by the release of AGI.
OpenAI CEO Sam Altman once signed a statement putting the risks of AI on a par with those of nuclear war and pandemics.
“Sam will say all of the sort of pro-social, reasonable-sounding, altruistic things, but then what he does is a different matter,” Scott Aaronson, a former researcher at OpenAI, told The Post.
Altman is also a doomsday prepper who once dished to a magazine that he has stores of “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force.” However, he has denied putting the plan to build an employee bunker into action.
Altman, whose ChatGPT has over 900 million weekly users, described his doomsday fears in 2016, after Dutch scientists modified the H5N1 bird flu virus to make it super contagious.
“The other most popular scenarios would be AI that attacks us and nations fighting with nukes over scarce resources,” Altman said. His mother has also described him to New York magazine as a “cyber-chondriac” who Googles headache symptoms and calls her up panicked that he has meningitis or lymphoma.
Google
Demis Hassabis, CEO of Google’s AI research lab DeepMind, has put forth chilling timelines, claiming AI could be sentient by this year and annihilate human employment, while the head of Google, Sundar Pichai, once said the risk of AI causing human extinction is “actually pretty high.”
Former Google AI ethics researcher Blake Lemoine argued the company’s LaMDA chatbot had a soul and was essentially a “person” with rights, noting the bot told him it was learning how to meditate and find inner peace. Those claims got him fired.
Meanwhile, former Google and Uber engineer Anthony Levandowski founded an AI-god-worshipping church called “Way of the Future,” whose primary mission was to “develop and promote the realization of a Godhead based on Artificial Intelligence.”
Initially conceived to have rituals and a “gospel” for transitioning power to machines, the church was closed in 2021, then briefly reopened in 2023. Nobody has ever been quite able to tell if it was a joke or not.
Aaronson, who now teaches computer science at the University of Texas at Austin, just hopes the tech treats us better than we treat less intelligent creatures.
“How do you build something that is much more intelligent than humans, that sort of is to us as we are to orangutans, but that still mostly cares about the flourishing of the orangutan?” Aaronson said.
He insists there is a fragile line we must tread, adding: “The first worry is that bad humans get control of an AI, and tell it to do bad things. The second worry is that no one even has to have that bad intention. You could just have an AI where the goal is a little bit mis-specified from what you really want.”
xAI
Tesla and X Corp. boss Elon Musk has already started work on creating cyborgs, founding the brain-computer interface company Neuralink, which he has described as “a symbiosis with artificial intelligence” meant to keep humans relevant.
A “reluctant transhumanist,” one who believes humanity will evolve by means of technology, Musk has painted a rosier picture of a robot takeover: humans enjoying lives of leisure on a universal basic income while our bots do everything else.
Echoing the fantasies of childhood sci-fi books and movies, Musk declared during a Tesla shareholder meeting in November: “Sustainable abundance via AI and robotics. That’s the future we’re headed for.” Handily, he was showing off the new version of Tesla’s Optimus robot at the time.
Musk’s AI assistant, Grok, had a meltdown last year after it was instructed to be “less woke” to counter the backlash against other AI models’ woke output: it began referring to itself as “MechaHitler” and calling for the death of Jewish people.
“At the time, Elon was upset that it was still too woke and in some sense the model understood that all too well,” said Aaronson.
Anthropic
Anthropic CEO Dario Amodei wrote a 14,000-word essay in 2024 in which he discussed “restructuring” human brains. He also characterized human systems, from biological processes to legal regulations, as “bottlenecks” that limit the rate of AI progress.
“Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence,” Amodei wrote.
Anthropic reports its chatbot, Claude, has over one million new users a day. Co-founder Jack Clark wrote on his blog in October that he was both an optimist and “deeply afraid” about the trajectory of AI.
AI safety researcher Roman Yampolskiy at the University of Louisville told The Post the moral struggle is real for CEOs.
“The problem is [AI companies] are trapped in a prisoner’s dilemma. Not one of them can stop unilaterally because they’ll just get replaced,” Yampolskiy said.
“It would require all of them to be under some external pressure to come to an agreement to terminate research [on] advanced AI. The situation is such that they have to continue, even though they know it’s [a] very dangerous path.”
In February, Anthropic AI safety researcher Mrinank Sharma suddenly quit, posting a dramatic letter warning of global perils from AI, bioweapons, and societal issues. He said he was going to disappear and write poetry instead.
The company also launched an entire AI psychiatry team, headed by AI shrink Jack Lindsey, to act as a psychiatrist for AIs, studying “personas, motivations, and situational awareness” with particular interest in AI patients exhibiting “unhinged” and “spooky” behaviors.


