As AI anxiety sweeps through Silicon Valley, why does a16z choose to "remain indifferent"?

In response to the widespread attention surrounding “A Major Event Is Happening” and the resulting AI panic, blogger David Oks recently wrote a rebuttal.

The article points out that the current panic over AI causing a “massive unemployment avalanche” is seriously exaggerated.

David Oks believes that labor substitution depends on “comparative advantage” rather than “absolute ability.” As long as the overall output of “humans + AI” remains better than AI working alone, humans will not be quickly replaced.

The real world is full of “bottlenecks” created by institutions, organizations, and human nature. These factors determine that technological diffusion is gradual rather than explosive.

At the same time, demand is elastic, and efficiency improvements often lead to more, not less, labor demand.

“AI will profoundly change society, but the process will be slow and uneven. Ordinary people don’t need to panic,” he says.

David Oks is an American blogger and researcher, and a research partner at the venture capital firm a16z.

Below is the full article—

Two days ago, a person named Matt Shumer posted an article titled “A Major Event Is Happening” on Twitter.

Almost immediately, the article went viral. As of now, it has nearly 100 million views and continues to grow.

What’s more notable is that it was widely shared by people with very different viewpoints, such as conservative commentator Matt Walsh (“This is a very good article”) and liberal commentator Mehdi Hasan (“Perhaps the most worth-reading article today, this week, or even this month”).

I also heard countless people say that this article was actively forwarded to them by parents, siblings, and friends.

I predict that Shumer’s article will ultimately become the most-read long-form piece of the year.

Its resonance is easy to understand.

For most ordinary users, “artificial intelligence” is just a free version of ChatGPT, used to answer questions, write emails, and so on.

But now, people are beginning to realize that AI will become a powerful force in reality.

This year is the year ordinary people start seriously thinking about how it will change human life. And their first concern is naturally whether AI will take their jobs, make their skills worthless, and worsen their lives.

Panic is spreading. The Atlantic discusses AI-induced unemployment, Bernie Sanders talks about AI job losses, and Matt Walsh states: “AI will destroy millions of jobs.”

This is already happening. Everything is changing. The avalanche has arrived.

Most of what we are debating now will soon become irrelevant. We are entering a moment of panic.

So it was fitting that, at this moment, someone claiming to be from the “AI industry” would write an article saying we are in a situation similar to February 2020, just as COVID-19 cases were rising exponentially.

His point is that, like the pandemic, AI is about to enter ordinary people’s lives with incredible impact; and the only way for ordinary people to prepare in advance is to subscribe to AI products, save more money, spend an hour a day experimenting with AI, or even follow Matt Shumer to “stay updated on which model is best right now.”

This isn’t really a good article—much of it is obviously AI-generated, and Shumer himself admits this—but in the dissemination of any viewpoint, timing and positioning are often more important than content quality. And Shumer’s timing and positioning are perfect.

I believe no other article will influence ordinary people’s perceptions of AI more deeply. It will become a landmark text of this era.

And that’s very bad. The problem isn’t that it was written by AI; it’s that its judgment about AI’s impact is fundamentally wrong.

I don’t think we are in a situation similar to the eve of the 2020 pandemic. I don’t believe ordinary people need to worry too much about AI. And I don’t think the conclusions drawn from that article—massive upcoming unemployment, world upheaval within months, “the avalanche has begun”—are based on reality.

I fear these misunderstandings could lead to disastrous consequences.

I say this not because I don’t believe in AI. On the contrary, I think AI will be extremely important, and its ultimate impact will be at least comparable to the invention of electricity or the steam engine, and may even become one of the most significant inventions in human history. The future will be fundamentally different from the past.

But that doesn’t mean we are in a “February 2020” type of world. I really don’t think we will see mass unemployment, a sudden end to human intellectual labor, or any scenario resembling an “avalanche.”

The next few years may seem strange, especially if you keep track of AI’s latest developments. But the impact of AI in the real world will be slower and more uneven than Shumer imagines. Human labor will not disappear quickly. And whether ordinary people spend an hour a day using AI tools or not, they will generally be fine.

Real-world labor replacement is much harder than people think.

AI will become extremely powerful: it will continually amaze us, its capabilities will keep improving, and the rate of improvement will accelerate. AI already performs many tasks at a level comparable to a qualified human, and the number of such tasks will only grow.

But this does not mean human labor will be replaced on a large scale.

The key to understanding labor substitution is: it depends on “comparative advantage,” not “absolute advantage.”

The issue isn’t whether AI can perform a human task, but whether—when humans are involved—the overall output is better than AI working alone.

In other words, does human involvement still enhance productivity? That’s a completely different question. Even if AI outperforms humans on each individual task, as long as the combined “humans + AI” output is higher, there is still economic justification to keep humans involved.
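The comparative-advantage point can be made concrete with a toy allocation. In the sketch below, every number is invented purely for illustration: the AI is faster than the human at both tasks (absolute advantage), yet keeping the human involved still raises total output, because the human's relative disadvantage is smallest at review work and AI capacity is finite.

```python
# Toy illustration of comparative advantage vs. absolute advantage.
# All speeds and hours are hypothetical, chosen only to show the mechanism.

ai_speed = {"code": 10, "review": 8}    # tasks/hour; AI is better at BOTH tasks
human_speed = {"code": 2, "review": 6}  # human is relatively less bad at review

hours = 8  # each agent has 8 hours of capacity

# AI working alone: splits its hours evenly between coding and review
ai_alone = ai_speed["code"] * 4 + ai_speed["review"] * 4  # 40 + 32 = 72 tasks

# Human + AI: AI specializes in code, human specializes in review
combined = ai_speed["code"] * hours + human_speed["review"] * hours  # 80 + 48 = 128

# Even though the AI beats the human at every individual task,
# total output is higher with the human involved.
assert combined > ai_alone
```

The design point is that the comparison is not "human vs. AI at each task" but "total system output with and without the human," which is exactly the distinction the essay draws.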

Take software engineering as an example: even if AI capabilities are very strong, current “human-AI collaboration” (the “cyborg” model) still outperforms AI alone—because you still need to tell AI your preferences, company requirements, and client needs.

This is good news for workers because their productivity increases. As long as demand is elastic, the outlook for human labor remains optimistic. (This might also explain why, within a year of Claude Code’s release, the number of software engineering jobs actually increased.)

As AI capabilities improve, human complementarity may gradually decline, but this “cyborg era” will last longer than people expect.

A world with no human complementarity is an extreme assumption: AI would dominate any task under any condition, with no scenario requiring human involvement. That’s unrealistic.

The problem isn’t that models aren’t good enough; it’s that the real world is full of “human bottlenecks.”

The world is managed by humans, and humans are inherently inefficient, emotional, conservative, competitive, and easily frightened. As long as these bottlenecks exist, humans will be needed to handle them.

Bottlenecks determine everything.

Almost all inefficiencies in various fields stem from human factors: laws and regulations, corporate culture, tacit knowledge, personal conflicts, industry norms, office politics, national politics, rigid hierarchies, bureaucracies, reliance on relationships, preferences for narratives and brands, changing tastes, limited understanding, and most importantly—resistance to change.

In the long run, technology will gradually erode these bottlenecks, like water slowly smoothing rocks. But it takes time. General-purpose technologies like electricity took decades to significantly boost productivity. AI’s diffusion may be faster, but bottlenecks still exist.

This also explains why, despite models being so powerful, real-world job displacement remains limited.

GPT-3 has been around for six years, GPT-4 for three, and even in industries like customer service outsourcing—one of the easiest to automate—large-scale layoffs have not occurred.

Change is gradual, more like diffusion than a tsunami.

Intelligence isn’t the limiting factor; organizational and institutional structures are.

Demand for human labor might even increase.

Why, in a scenario where AI has absolute advantage, could human labor still grow? Because demand elasticity is far greater than we imagine. This is the “Jevons paradox”: efficiency gains can lead to increased total demand.

Software is a typical example.

Every time programming efficiency improves—through higher-level languages, frameworks, tools—it ultimately leads to more software demand and more engineering jobs. If AI significantly boosts productivity, software demand could further explode.
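The Jevons-style effect above can be sketched with back-of-the-envelope arithmetic, assuming a constant-elasticity demand curve; the elasticity and productivity numbers are made up for illustration:

```python
# Toy Jevons-paradox arithmetic: with elastic demand, a productivity gain
# can INCREASE total demand for labor. All parameters are hypothetical.

elasticity = 1.5         # price elasticity of demand for software (> 1 = elastic)
productivity_gain = 2.0  # AI doubles output per engineer-hour

# Cost (price) per unit of software falls in proportion to the gain
price_ratio = 1 / productivity_gain             # 0.5

# Constant-elasticity demand: quantity scales as price**(-elasticity)
quantity_ratio = price_ratio ** (-elasticity)   # 2**1.5 ≈ 2.83x more software demanded

# Engineer-hours needed = quantity demanded / output per hour
labor_ratio = quantity_ratio / productivity_gain  # ≈ 1.41x

assert labor_ratio > 1  # demand for engineering labor rises, not falls
```

With elasticity above 1, the demand increase outpaces the efficiency gain, so labor demand grows; with inelastic demand (elasticity below 1) the same arithmetic would show labor demand shrinking, which is why the essay's conclusion hinges on demand being elastic.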

As long as humans and AI remain in a complementary phase, this is generally good for workers.

Even if no work is needed, humans will invent new jobs.

In the long run, human complementarity might approach zero. But this process will be very long, and before that, we may already have entered a highly prosperous society.

Historically, every increase in productivity has led humans to allocate the surplus resources to new careers and activities.

From agricultural surpluses to today’s baristas, yoga instructors, podcasters, streamers—more and more strange and interesting professions will emerge in the future.

Ordinary people will be fine.

My judgment is that the overall impact of AI will be much milder than people imagine.

Yes, some will lose jobs, some will need to transition, and some will struggle to adapt. But the overall transition will be gradual.

The pandemic isn’t a good analogy. An average office worker—someone who doesn’t care about Anthropic, who invests in index funds monthly—probably won’t be in trouble because of AI.

Many things will gradually improve, some will worsen, and many will stay the same. They just need to gradually adjust their work methods without panic.

In the next few years, there will be uncertainty and chaos, but the real risk may not come from the technology itself, but from social and political backlash.

If the public is told that “AI will bring about a collapse of employment,” the result might not be more people learning AI, but a bipartisan populist movement demanding strict restrictions on AI, banning data centers, guaranteeing lifelong jobs, or even legislating to block technological progress.

If AI can bring higher productivity, faster medical and scientific advances, and a more glorious human civilization, such backlash would be a huge societal loss.

Perhaps making the public aware that AI is powerful and advancing rapidly is a good thing.

Shumer is right: a major event is indeed happening. But we don’t need to scare ordinary people because of it.

They will be fine.
