A quick guide to decoding Silicon Valley’s strange but powerful AI subcultures
Analysis by Nitasha Tiku
Updated April 10, 2023 at 1:18 p.m. EDT|Published April 9, 2023 at 7:00 a.m. EDT
Inside Silicon Valley’s AI sector, fierce divisions are growing over the impact of a new wave of artificial intelligence: While some argue it’s imperative to race ahead, others say the technology presents an existential risk.
Those tensions took center stage late last month, when Elon Musk, along with other tech executives and academics, signed an open letter calling for a six-month pause on developing “human-competitive” AI, citing “profound risks to society and humanity.” Self-described decision theorist Eliezer Yudkowsky, co-founder of the nonprofit Machine Intelligence Research Institute (MIRI), went further: AI development needs to be shut down worldwide, he wrote in a Time magazine op-ed, arguing that governments should be willing to destroy rogue data centers by airstrike if necessary.
The policy world didn’t seem to know how seriously to heed these warnings. Asked if AI is dangerous, President Biden said Tuesday, “It remains to be seen. Could be.”
The dystopian visions are familiar to many inside Silicon Valley’s insular AI sector, where a handful of strange but influential subcultures have clashed in recent months. One sect is certain AI could kill us all. Another says this technology will empower humanity to flourish if deployed correctly. Others suggest the six-month pause proposed by Musk, who will reportedly launch his own AI lab, was designed to help him catch up.
The subgroups can be fairly fluid, even when they appear contradictory, and insiders sometimes disagree on basic definitions.
But these once-fringe worldviews could shape pivotal debates on AI. Here is a quick guide to decoding the ideologies (and financial incentives) behind the factions:
AI SAFETY
The argument: The phrase “AI safety” used to refer to practical problems, like making sure self-driving cars don’t crash. In recent years, the term — sometimes used interchangeably with “AI alignment” — has also been adopted to describe a new field of research to ensure AI systems obey their programmers’ intentions and to prevent the kind of power-seeking AI that might harm humans just to avoid being turned off.
Many researchers in this field have ties to communities like effective altruism, a philosophical movement focused on maximizing good in the world. EA, as it’s known, began by prioritizing causes like global poverty but has pivoted to concerns about the risks from advanced AI. Online forums like Lesswrong.com and the AI Alignment Forum host heated debates on these issues.
Some adherents also subscribe to a philosophy called longtermism, which focuses on maximizing good over millions of years. They cite a thought experiment from Nick Bostrom’s book “Superintelligence,” which imagines that a safe superhuman AI could enable humanity to colonize the stars and create trillions of future people. On that view, building safe artificial intelligence is crucial to securing those eventual lives.
Who is behind it?: In recent years, EA-affiliated donors like Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and former hedge funder Holden Karnofsky, have helped seed a number of centers, research labs and community-building efforts focused on AI safety and AI alignment. FTX Future Fund, started by crypto executive Sam Bankman-Fried, was another major player until FTX went bankrupt and Bankman-Fried and other executives were indicted on fraud charges.
How much influence do they have?: Some adherents work at top AI labs like OpenAI, DeepMind and Anthropic, where this worldview has led to some practical ways of making AI safer for users. A tightknit network of organizations produces research and studies that can be shared more widely, including a 2022 survey that asked machine learning researchers to estimate the probability that human inability to control AI could end humanity. The median response was 10 percent.
AI Impacts, which conducted the survey, has received support from four EA-affiliated organizations, including the Future of Life Institute, which hosted Musk’s open letter and received its biggest donation from Musk. Center for Humane Technology co-founder Tristan Harris, who once warned about the dangers of social media and has now turned his focus to AI, cited the survey prominently.
AGI BELIEVERS
The argument: It’s not that this group doesn’t care about safety. They’re just extremely excited about building software that reaches artificial general intelligence, or AGI, a term for AI that is as smart and as capable as a human. Some are hopeful that tools like GPT-4, which OpenAI says has developed skills like writing and responding in foreign languages without being instructed to do so, mean they are on the path to AGI. Experts explain that GPT-4 developed these capabilities by ingesting massive amounts of data, and most say these tools do not have a humanlike understanding of the meaning behind the text.
Who is behind it?: Two leading AI labs have cited building AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Still, the concept might have stayed on the margins if not for the interest of some of the same wealthy tech investors drawn to the outer limits of AI. According to Cade Metz’s book, “Genius Makers,” Peter Thiel donated $1.6 million to Yudkowsky’s AI nonprofit, and Yudkowsky introduced Thiel to DeepMind. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk later brought the concept of AGI to OpenAI’s other co-founders, like CEO Sam Altman.
How much influence do they have?: OpenAI’s dominance in the market has flung open the Overton window. The leaders of the most valuable companies in the world, including Microsoft CEO Satya Nadella and Google CEO Sundar Pichai, now get asked about and discuss AGI in interviews. Bill Gates blogs about it. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever,” Altman wrote in February.
AI DOOMERS
The argument: Doomers share many beliefs with people in the AI safety world, and they frequent the same online forums, but this crowd has concluded that if a sufficiently powerful AI is plugged in, it will wipe out human life.
Who is behind it?: Yudkowsky has been the leading voice warning about this doomsday scenario. He is also the author of a popular fan fiction series, “Harry Potter and the Methods of Rationality,” an entry point for many young people into these online spheres and ideas around AI.
His nonprofit, MIRI, received a boost of $1.6 million in donations in its early years from tech investor Thiel, who has since distanced himself from the group’s views. The EA-aligned Open Philanthropy donated about $14.8 million across five grants from 2016 to 2020. More recently, MIRI received funds from crypto’s nouveau riche, including ethereum co-founder Vitalik Buterin.
How much influence do they have?: While Yudkowsky’s theories are credited by some inside this world as prescient, his writings have also been critiqued as not applicable to modern machine learning. Still, his views on AI have influenced more high-profile voices on these topics, such as noted computer scientist Stuart Russell, who signed the open letter.
In recent months, Altman and others have raised Yudkowsky’s profile. Altman recently tweeted that “it is possible at some point [Yudkowsky] will deserve the nobel peace prize” for accelerating AGI, later also tweeting a picture of the two of them at a party hosted by OpenAI.
AI ETHICISTS
The argument: For years, ethicists have warned about problems with larger AI models, including outputs that reflect racial and gender bias, an explosion of synthetic media that may damage the information ecosystem, and the impact of AI that sounds deceptively human. Many argue that the apocalypse narrative overstates AI’s capabilities, helping companies market the technology as part of a sci-fi fantasy.
Some in this camp argue that the technology is not inevitable and could be created without harming vulnerable communities. They say critiques that fixate on hypothetical capabilities ignore the decisions made by people, allowing companies to eschew accountability for bad medical advice or privacy violations caused by their models.
Who is behind it?: The co-authors of a farsighted research paper warning about the harms of large language models, including Timnit Gebru, former co-lead of Google’s Ethical AI team and founder of the Distributed AI Research Institute, are often cited as leading voices. Crucial research demonstrating the failures of this type of AI, as well as ways to mitigate the problems, “are often made by scholars of color — many of them Black women,” and underfunded junior scholars, researchers Abeba Birhane and Deborah Raji wrote in an op-ed for Wired in December.
How much influence do they have?: In the midst of the AI boom, tech firms like Microsoft, Twitch and Twitter have been laying off their AI ethics teams. But policymakers and the public have been listening.
Former White House policy adviser Suresh Venkatasubramanian, who helped develop the Blueprint for an AI Bill of Rights, told VentureBeat that recent exaggerated claims about ChatGPT’s capabilities were part of an “organized campaign of fearmongering” around generative AI that detracted from work on real AI issues. Gebru has spoken before the European Parliament about the need for a slow AI movement, one that eases the pace of the industry so society’s safety comes first.
Correction
A previous version of this article incorrectly construed the results of a survey asking machine learning researchers to estimate the probability that AI could end humanity. The median response was 10 percent, not 10 percent of respondents agreeing with the premise. This article has been corrected.