10 Survival Rules for Ordinary People in the AI Era

In the room: about sixty people. Founders, engineers, product managers, investors, new graduates, and a few who describe themselves as "here to listen first and figure it out as we go."

Main speaker: Alan Walker, a Silicon Valley serial entrepreneur who has lived through three cycles firsthand. These days he drinks only black coffee and deals only in statements, no question marks.

Time: April 2026, one week after the release of Project Glasswing.

Not a methodology, not a workplace skill.

It's about how, at a species-level turning point, to stay alive, and then to live well.

Opening · ALAN WALKER

"Someone messaged me before coming and asked, 'AI has arrived. Do ordinary people still have a chance?' I didn't reply, because the question itself was wrong.

In 1440, before Gutenberg's printing press appeared, what was the most valuable profession in Europe? Scribes. In the monasteries, a senior scribe held the status of today's senior engineers: they controlled the production and circulation of knowledge. After the printing press appeared, some of them disappeared. The rest became editors, publishers, authors, and teachers. They didn't vanish; they migrated.

Every person in this room today is a descendant of that batch of scribes. Your ancestors weren’t wiped out by the printing press, which is why you can sit here and ask this question today. People who are able to sit here and ask this question are already among the luckiest in human history. The issue isn’t ‘whether there’s a chance.’ The issue is ‘whether you’re willing to see clearly where the opportunity lies.’

Today I'll give you ten rules. No fluff. Each one is something I've thought all the way through."

LAW I · Your opponent isn't AI—it's the person who knows how to use AI

What gets eliminated isn’t a profession. It’s the people who believe, “This has nothing to do with me.”

First, a counterintuitive fact: in any technological revolution, what's destroyed isn't jobs; it's the people who refuse to learn. This isn't inspirational talk; it's recorded history. In 1900, the U.S. had roughly twenty million horses doing transportation work. When cars arrived, horse trainers disappeared, but mechanics, gas station workers, highway engineers, auto insurance actuaries, and traffic cops all emerged. A net increase in jobs, not a net decrease.

In 1997, Deep Blue beat Kasparov, and everyone thought chess was finished as a profession. Then, in 2005, a "centaur chess" tournament showed amateur players with ordinary PCs beating teams that paired top grandmasters with powerful engines. The winner wasn't the strongest human or the strongest machine; it was whoever knew best how to work with the machine. That conclusion applies to every industry in 2026. Not a single word needs to change.

ALAN · On site

Your competition today isn’t Claude, isn’t GPT, isn’t Gemini. It’s the person sitting next to you, already using these tools to work, while you’re still stuck on “Is this thing legit?”

Technology adoption curves never treat everyone the same. After the printing press arrived, those who mastered it in the first five years defined the knowledge-production landscape for the next two hundred years. Today's window may be much shorter than five years.

It's not AI replacing you. It's people who know how to use AI replacing you. The two sentences sound almost the same, but they dictate radically different strategies.

LAW II · AI can't steal the traps you've stepped into

Large language models can learn from every piece of knowledge that has ever been written down. They can't learn the part you never wrote down, and that part is where your real value lies.

In 1966, the philosopher Michael Polanyi published a slim book of barely a hundred pages, The Tacit Dimension (Polanyi 1966). Its core proposition fits in one sentence: "We can know more than we can tell." His example: you can recognize a face, but you can't tell me how you recognize it. The ability lives in your nervous system; it can't be turned into language, and therefore can't be taught and can't be copied.

The essence of large language models is an extreme compression and retrieval of knowledge humans have already expressed. They ingest everything that’s been written: textbooks, papers, code, conversations. But there’s one kind of knowledge they can’t touch: the judgment you accumulated through eighteen failed projects; the intuition you get after seeing a certain situation three times; the sense of human nature you develop after clawing your way through an industry. These things were never written into any document. They exist in the form of neural circuits in your brain; they can only be triggered by experience, and can’t be transmitted through language.

So the experiences you think are useless are your real moat in the AI era. The detours you've taken, the mines you've stepped on, the judgments you got wrong: these are becoming a scarce asset AI can't reach. The precondition is that you consciously systematize them: write them down, say them out loud, teach them to others.

ALAN · On site

I know someone who's been in the catering business for eighteen years. He can't use Excel, can't write code, and his Mandarin is a bit clunky. But in the thirty minutes before the shop opens, he can walk the floor once and tell you which dish will cause problems today, which employee is in the wrong state, and roughly what tonight's table-turn rate will be.

How does he know? He can't explain it. But that "can't explain it" is worth millions. AI can generate a complete restaurant management manual, but it doesn't have the eighteen years of traps he stepped into himself.

Systematize the traps you’ve stepped into. Turn your failure cases into language. This isn’t writing a memoir—it’s forging the most underestimated moat in the AI era.

LAW III · Depth is proof; crossing into other fields is the weapon

AI can make “good enough” work in any single domain. What it can’t do is stack the underlying logics of two domains together and see a third possibility.

Economics has a concept called comparative advantage (Ricardo 1817). It means you don't need to be better than others at everything; you only need to be more efficient in some combination. In today's context, the source of comparative advantage has shifted from a single skill to a cross-domain combination: your biology background plus your financial instincts plus your product thinking form a perspective that AI can't reproduce from single-source training data.

In human history, the innovations that truly change the landscape almost never happen inside a discipline; they happen at the boundaries. Mendel was a monk; he studied peas with statistics and laid the foundation for genetics. Shannon was a mathematician; he used the concept of entropy from thermodynamics to understand communication and created information theory. Jobs was a practitioner of Zen and an aesthetic thinker; he welded the humanities to engineering and defined consumer technology. In an era when AI can rapidly cover any single domain, the ability to connect across fields is one of humanity’s last cognitive advantages.

› Find your deepest domain—this is the anchor. Without it, everything else is floating duckweed.

› Deliberately build enough knowledge in two or three adjacent or even opposing domains; you don’t need mastery.

› Train “connection intuition”: can the underlying logic of this domain explain the phenomena in that domain?

› AI helps you retrieve; you make the connections—that’s division of labor, not competition.

ALAN · On site

The most formidable investor I've seen isn't the one with the strongest finance. It's the one whose finance is "good enough," who also has genuine technical feel, insight into human nature, and a memory of history. When those four dimensions combine, AI can't reproduce it today, because the core of insight is integration. Integration requires having been hit by different systems in the real world, not pattern-matched from training data. Your messy, complex experience is the territory AI hasn't been able to colonize. Yet.

Depth without breadth makes you a well. Add cross-domain range and you become a net. AI is water; it will flow into every well. But the net is something you weave yourself.

LAW IV · Attention is the only truly scarce thing in the AI era

AI drives the cost of producing information toward zero, which means information itself approaches worthlessness, while its scarce complement, focused attention, becomes the hardest currency of this era.

Herbert Simon wrote a line in 1971 that predicted today (Simon 1971): "A wealth of information creates a poverty of attention." He said it before the birth of the internet, using only the most basic economic logic: once something becomes extremely abundant, its own value drops, while the value of its scarce complement rises.

Today, AI produces more content every day than humanity produced in the previous several centuries combined. Your brain hasn't upgraded; your total attention is fixed. Wherever you spend attention is a vote for which capability you cultivate. Someone who drifts through fragmented information three hours a day isn't wasting time; they're actively downgrading their cognitive system into a consumption terminal: able only to receive, not produce; only to react, not think.

Here’s a counterintuitive conclusion: in the AI era, the ability to read deeply is scarcer and more valuable than programming ability. AI can write code, retrieve information, and generate reports. But it can’t replace you in truly understanding a book and integrating it into your own judgment framework. A person who can focus for a long time, think independently, and judge autonomously is a collaborator in front of AI. A person who only consumes fragments is an AI consumption terminal. A terminal doesn’t need to think; it only needs to receive.

ALAN · On site

I have a test: take a book you think is important, sit down and read it for two hours without touching your phone. If you can’t do that, your attention has already been colonized. This isn’t a moral judgment; it’s an assessment of cognitive ability. In an age when AI levels everyone’s production efficiency, people who can maintain deep focus are cognitive nobles—not because they’re smarter, but because they protect what most people have already given up.

Protect your attention, and you protect your cognitive sovereignty. Give up attention, and you voluntarily downgrade into an AI consumption terminal—not a collaborator.

LAW V · Credibility is the one thing AI can’t mass-produce

AI can generate your resume, imitate your writing style, and fake your voice. It can’t fake the trust that accumulates after you keep your word in real relationships, again and again.

What is trust, at its core? From the perspective of game theory, trust is the result of repeated games (Axelrod 1984). In sufficiently many interactions, two people verify that the probability of “doing what you say” is high enough, so they’re willing to lower defensive costs and move into a more efficient cooperative state. This process can’t be compressed, can’t be forged, and can’t be mass-produced. Because its essence is a record of commitments fulfilled over time.

When AI can generate any content and simulate any style, real interpersonal credibility will paradoxically appreciate. The more AI floods the world, the more scarce and valuable it becomes for something to be “a real person, and reliable.” Your reputation is the only anti-counterfeit label you have in the AI era.

One step deeper: credibility isn't just "you do what you say." Credibility is "others are willing to place their uncertainty on you." When someone hands you something with an unknown outcome, it's not because they're certain you can pull it off; it's because they believe you'll give it your all, report back honestly, and not disappear. That kind of trust relationship is a private contract AI can't enter: offline, emotional, accumulated through history.

ALAN · On site

I know someone with no elite-school background and no big-company experience, and his English is awkward. The only thing he has is this: in the past fifteen years, every commitment he made has been fulfilled. Not one failed. Now, every time he posts a message, fifty people respond first. In the AI era, that's called signal penetration. In a world where AI creates infinite noise, his signal is clean. None of those fifty people treat him that way because his resume is pretty.

Every time you keep a promise is making the most valuable investment in the AI era. Every time you break a promise is destroying assets that AI can’t help you rebuild.

LAW VI · Answers depreciate. Good questions appreciate

AI can answer any question within three seconds. It doesn’t know which questions are worth asking. That “doesn’t know” is where your position is.

For three hundred years, the entire human education system has trained one thing: answering standardized questions. Exams test answers; interviews test problem-solving; performance reviews test output. The system's underlying assumption is that questions are fixed and answers are scarce. AI completely overturns that assumption: answers are no longer scarce, and good questions become the scarce commodity.

Einstein said that if given an hour to solve a life-and-death problem, he would spend fifty-five minutes defining the problem and five minutes finding the solution (Einstein, attributed). In 2026, the meaning of that changes: those five minutes can be outsourced to AI. Those fifty-five minutes can only be done by you.

What is a good question? Good questions have three traits: they make you see what you couldn't see before; they make the other party re-examine their assumptions; and they open a new space of possibilities rather than narrowing the boundary of an existing answer. Building this capability takes heavy reading, heavy dialogue, and constant switching between different systems, until you develop a native distrust of whatever you once took for granted.

ALAN · On site

In the AI era, the most competitive way to work looks like this: you kick off AI with a good question. AI generates ten answers. Then, with an even better question, you mine out the eleventh: the direction AI itself didn't think of. In this loop, you are the director and AI is the actor. If all you can do is receive AI's output, you're the audience. The audience doesn't get paid; the director does. The world is always short of good directors and never short of audiences.

Learning to ask questions is more valuable than learning to answer. Because AI can answer everything but doesn’t know what to ask. That “doesn’t know” is your territory.

LAW VII · Find where “because there are people, it’s valuable”

Not all efficiency is worth optimizing. There's a kind of value that exists precisely because it's inefficient, because it requires real people, and it's getting more and more expensive.

In 1899, Veblen described a peculiar type of good (Veblen 1899): the higher the price, the greater the demand—because a high price itself is part of the value. Today, human participation is becoming a Veblen attribute of certain kinds of services: because there are real people, it’s valuable; the rarer it is, the more valuable it is.

Think about it: the judgment of a doctor who truly understands your situation versus an AI-generated diagnostic report, how large is the gap? The friend sitting across from you at your hardest moment versus any AI companion app, how replaceable is that? A decision-maker who can make the call face-to-face and take responsibility on the spot versus an AI-optimized proposal, what's the essential difference? The common trait across these scenarios: the presence of a human is itself part of the value, and an inseparable part.

From an evolutionary perspective, this isn't strange. Humans are intensely social animals; our nervous systems are built to respond to the presence of real human beings. Oxytocin, mirror neurons, facial-expression recognition: these mechanisms don't fire for AI. When an AI tells you, "I understand how you feel," the rest of your nervous system knows it's fake, even if your rational mind is temporarily convinced. Humans have a biological need for humans that no digital simulation can replace.

ALAN · On site

I predict one industry will surge against the trend in the AI era: hospice care. Not because AI can't provide information or companionship, but because no one wants to spend the last moments of their life facing a screen. That's an extreme case of the "human premium," but it points to a general rule: find the fields that grow more automated and more hollow; your opportunity is there. The more efficient and colder a place becomes, the more valuable human warmth is.

Ask yourself: if everything in this were done by AI, what would the customer lose? That “lost thing” is your permanent moat.

LAW VIII · Uncertainty isn’t your enemy—it’s your last advantage

Evolution never rewards the strongest; it rewards those who survive longest through change. People who can maintain initiative amid high uncertainty are the true strong ones in the AI era.

Nassim Taleb proposed in Antifragile a framework that changed my worldview (Taleb 2012): there are three types of systems in the world. Fragile systems collapse under pressure; robust systems maintain under pressure; antifragile systems become stronger under pressure. He said nature doesn’t reward the robust—it rewards the antifragile. Muscles grow under stress; immune systems strengthen in infection; economies advance through creative destruction.

In the AI era, uncertainty is structural, and it won't go away. Every few months new models appear, capability boundaries shift, and industries get reshaped. This isn't temporary chaos; it's the new steady state. You can't predict the next card. What you can do is train yourself to act, learn, and hold direction without knowing what the next card is.

A deeper truth: uncertainty is the last weapon ordinary people have against big institutions. In a world of certainty, big companies, big governments, and big capital hold absolute advantages: resources, scale, moats. But in fast-changing, uncertain environments, their scale becomes a burden, their processes become shackles, and their history becomes a liability. Meanwhile you, a person who can decide within 72 hours and pivot fully within a week, have a flexibility big institutions can never replicate.

ALAN · On site

More concretely: make small bets, iterate quickly, and don’t go all-in on any single judgment. Build a life structure that can absorb mistakes, rather than a life structure that must be correct forever. Keep the cost of failure within what you can bear, and raise your learning speed to the highest level you can maintain. You can’t predict which industry AI will disrupt next. But you can train yourself so that when the day comes and AI disrupts it, you feel excitement—not panic. Big institutions fear uncertainty because they’re too heavy to move. You’re light—you can turn. This is your last structural advantage. Don’t waste it with anxiety.

Uncertainty is ordinary people’s only structural advantage against big institutions. Big institutions fear it—you should love it.

LAW IX · Keep producing. Turn your cognition into a public asset

AI lets everyone "produce content." But content and viewpoints are two different things. People who hold unique viewpoints and keep expressing them will see their visibility compound amid the AI noise.

Economics has a concept called network effects, often stated as Metcalfe's law: the value of a network grows roughly with the square of the number of its nodes. Your public expression is your node in the network of human knowledge. Every article, every talk, every viewpoint increases your connection count. And a node's value comes from its uniqueness, not its quantity.

Before AI drove the cost of producing content toward zero, the scarce thing was production capacity. Now the scarce thing is a trustworthy, unique viewpoint. Anyone can use AI to generate an "AI survival guide," but not everyone can write an article that leaves readers feeling, "This person has seen the real world." The latter requires real experience, independent judgment, and continuous thinking, and those three things AI can't do for you.

More fundamentally: if you don’t output, you don’t exist. In the digital age, to exist is to be seen, and to be seen is the possibility for value to flow. A person who has many good ideas in their head but never expresses them is equivalent to someone who doesn’t know anything in the information stream of the world—they’re both transparent. Turning your cognition into a public asset is the most underestimated compounding behavior in the AI era.

ALAN · On site

I know someone who works in factory management in a second-tier city. No elite-school background, no flashy resume. Three years ago, he started writing online about his real experience in factory operations—not a methodology, but bloody failure cases and the conclusions he drew from them. Today he has 200,000 readers, three factories proactively consult him, and publishers want him to write a book. He didn’t get smarter; he simply moved what used to live only in his head into the world. Once the world sees it, value flows to him. If you don’t output, the world doesn’t know you exist.

Put what’s in your head into the world. Not to perform, but so the world knows you exist—and so value knows where to find you.

LAW X · Manage your energy, not your time

Time management is the logic of the industrial age—factories need stable output, so you trade time for product. The AI era needs bursts of creative cognition, so what you need to manage is energy, not time.

The core assumption of the industrial age is that output is a function of time: work eight hours, produce eight hours' worth of value. That logic holds on the assembly line, because assembly-line work is linear, stackable, and doesn't require peak states. Creative work isn't linear. Two hours in a peak state can produce what twenty hours in a fatigued state can't.

Neuroscience backs this up (Kahneman 2011): humanity's high-level cognitive functions (deep analysis, creative connection, complex judgment) depend on a highly active prefrontal cortex. That state is extremely energy-hungry, and each day offers only a limited window of it. Most people spend that most expensive window handling email, scrolling social media, and sitting in low-quality meetings, then use whatever fatigue leaves behind for the work that requires deep thought, and then complain that they're inefficient and uncreative.

In the AI era, this mistake becomes deadlier, because AI can already handle every low-cognitive-cost task: information retrieval, formatting, data summarization, standardized writing. What it can't replace are the judgments, insights, connections, and creative leaps produced in your peak cognitive state. If you give your peak hours to low-value tasks, you're spending the most expensive resource on the cheapest work and leaving the work that needs you most to your worst state.

ALAN · Full venue wrap-up

About three hours of my day, every morning, are peak state. During those three hours I don't check messages, don't take meetings, and don't answer email. I do one thing: think about the day's most important questions. Everything else, including a lot of real work, I hand to AI or push to the afternoon. That isn't laziness; it's rational allocation. What your three most expensive hours are worth depends on what you spend them on, and after AI, the answer gets more extreme: use them right and your peak output is ten times an average person's; use them wrong and your low point is indistinguishable from AI. Asimov wrote the Three Laws of Robotics to set boundaries for machines. These ten laws are meant to help people reclaim their place. Your place is at your peak, not on the assembly line.

You don’t need more time. You need to protect your best time and use it for things only you can do.

“AI isn’t your ceiling—it’s your lever.

Your place is at your peak, not on the assembly line."

I Your opponent isn't AI—it's the person who knows how to use AI

II AI can’t steal the traps you’ve stepped into

III Depth is proof; crossing into other fields is the weapon

IV Attention is the only truly scarce thing in the AI era

V Credibility is the one thing AI can’t mass-produce

VI Answers depreciate. Good questions appreciate

VII Find where “because there are people, it’s valuable”

VIII Uncertainty isn’t your enemy—it’s your last advantage

IX Keep producing. Turn your cognition into a public asset

X Manage your energy, not your time

-Melly
