On February 22, an article titled “The 2028 Global Intelligence Crisis” went viral in financial circles. The author is Citrini Research, a macro research firm. The article is framed as a “Memo from the Future,” written as if in June 2028, looking back at how an AI-triggered economic crisis gradually evolved into systemic collapse.
There’s a line in the article: “Early 2026, the first wave of layoffs began due to human intelligence being replaced. Profits expanded, earnings exceeded expectations, and stock prices hit record highs.”
Four days later, this was no longer just a thought experiment.
On February 26, Jack Dorsey posted on X: “we’re making @blocks smaller today.”
Block, the fintech company that owns Square and Cash App, released its Q4 earnings report that day. Gross profit grew 24% year-over-year, and earnings per share beat analyst expectations. Meanwhile, Dorsey announced layoffs of over 4,000 employees, accounting for 46% of the company’s total staff.
After the announcement, Block’s stock rose 24% in after-hours trading.
Gross profit up 24%, stock up 24%, and 4,000 people received termination notices.
Citrini’s “Nightmare 2028” didn’t wait until 2028; it began its first act this Thursday.
It’s not because we’re in trouble
Historically, every large-scale layoff has a standard CEO open letter: tough market conditions, strategic adjustments, difficult decisions made, gratitude to colleagues for their contributions.
Dorsey’s letter was different.
“We’re not laying off because we’re in trouble. Our business is strong… but something has changed. Internally, we’ve seen that with the intelligent tools we’re building and using, smaller teams can do more and do better. And these tools’ capabilities are growing exponentially every week.”
No mention of a market downturn; the company is doing well, but you no longer need as many people. This honesty is more unsettling.
In past layoff narratives, there was always an implicit promise: once the market recovers, we will rehire. This time, Dorsey didn’t even make that promise. Instead, he offered a different logic: a small team plus AI can do as much as a large team, or more. And if that’s the case, why keep so many people?
Investors fully agreed with this logic, voting with a 24% jump in the stock price.
And there is perhaps one overlooked detail.
To promote an “AI-first” work culture, Dorsey had previously required every employee to send him a weekly email listing five recent accomplishments. Thousands of emails flooded in. His approach was to have AI summarize them, and then read the summaries.
Using AI to determine who can prove they won’t be replaced by AI, and letting AI analyze who will be laid off—this detail is the most precise metaphor for the entire story.
A timeline, an acceleration
Block is not an isolated case; it is part of a trend that has been ongoing for two years.
Looking back, the acceleration of this trajectory is dizzying.
In 2024, Klarna CEO Sebastian Siemiatkowski proudly announced that the company’s AI customer service assistant handled the workload of roughly 700 full-time employees. Most people saw this as a tech stunt: a headline-grabbing number, a story to persuade investors.
In April 2025, an internal memo from Shopify CEO Tobi Lütke leaked. It contained a phrase that was repeatedly cited later: “Before applying for new hires, the team must first prove that AI cannot do the job.”
That same year, Duolingo announced an “AI-first” strategy, ending many outsourced content creation contracts. IBM admitted to replacing 8,000 HR positions with AI, with CEO Arvind Krishna openly naming the departments and number of people involved in interviews. Salesforce cut 4,000 customer support roles, with CEO Marc Benioff stating: “AI can now handle about half of our work.”
By the end of 2025, US employment tracking agency Challenger, Gray & Christmas reported that over 55,000 layoffs that year could be directly attributed to AI.
In early 2026, Amazon announced two rounds of layoffs totaling about 30,000 corporate jobs. Law firm Baker McKenzie followed, cutting 600 to 1,000 research, marketing, and administrative support positions—in an industry once considered one of the least penetrated by AI.
On February 26, 2026, Block—a profitable company—laid off 46% of its staff in one go.
But layoffs are just the most visible cut.
A more hidden number was revealed by a Harvard study: after AI became widespread, tech companies’ hiring of entry-level employees fell by more than 50% on average. No announcements, no press releases—positions quietly disappeared from job boards, new graduates’ resumes vanished into the void, and the reasons were never stated in rejection letters.
Citrini’s Spiral
Returning to that viral article.
Citrini’s projection is unsettling not only because it depicts AI sweeping through the job market in dystopian fashion, but also because it describes a logically consistent, fully rational death spiral.
The spiral works like this:
AI enables companies to expand profits. The profits are reinvested into AI, which enhances AI capabilities. Stronger AI makes more jobs replaceable. More unemployment leads to less consumption. Reduced consumption pressures more companies, forcing them to further cut costs with AI. AI’s capabilities improve again.
Citrini named this cycle the “Intelligence Displacement Spiral.”
They wrote in the article: “Every company’s individual decision is rational; the collective result is catastrophic.”
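The loop described above can be sketched as a toy simulation. Every coefficient below is invented purely for illustration (this is not Citrini's actual model); the point is only that a chain of individually rational cost-cutting decisions can compound into the collective outcome the article describes.

```python
# Toy simulation of the displacement spiral described above.
# Every coefficient here is an invented illustration, NOT Citrini's model:
# the point is only that a loop of locally rational cost-cutting compounds.

def simulate(years=6, adoption_rate=0.10):
    employment = 1.00   # employed share of the white-collar workforce
    ai_power = 1.00     # arbitrary capability index
    for year in range(1, years + 1):
        # Stronger AI makes a larger share of remaining jobs replaceable.
        displaced = adoption_rate * ai_power * employment
        employment -= displaced
        # Fewer paychecks mean less aggregate consumption.
        consumption = 0.5 + 0.5 * employment
        # Profits saved on headcount are reinvested into better tools.
        ai_power *= 1.25
        print(f"year {year}: employment {employment:.2f}, "
              f"consumption {consumption:.2f}, AI index {ai_power:.2f}")
    return employment

simulate()
```

Under these made-up parameters, employment declines faster each year even though no single step is irrational, which is the shape of the spiral Citrini describes.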
Now, compare this to what happened at Block that day. Gross profit up 24%, stock up 24%, 4,000 layoffs, and the saved money reinvested into AI tools. From Dorsey’s perspective, this was a perfectly rational decision—he even explained in the open letter why he chose a one-time large-scale layoff instead of multiple gradual cuts: because the latter would continuously damage morale and trust.
From a corporate governance perspective, this is textbook execution. From the perspective of those 4,000 individuals, it is a life rupture.
In Citrini’s projection, there’s a real person (presented anonymously): a senior product manager at Salesforce, earning $180,000 a year, who lost their job in the third round of layoffs in 2025. Six months of job hunting yielded no comparable position. Eventually, they started driving Uber, with annual income dropping to $45,000.
This is not just one person’s story.
Citrini’s article includes a simple calculation: multiplying this individual’s trajectory by the tens of thousands of white-collar workers experiencing similar fates in major cities. The contraction of the consumer side is no longer abstract macro data but a foreseeable, calculable reality.
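That "simple calculation" can be reproduced as a back-of-the-envelope sketch. The income figures come from the article's own Salesforce example; the 50,000 headcount is an assumed placeholder for "tens of thousands" and is not a sourced number.

```python
# Back-of-the-envelope version of Citrini's calculation.
# Income figures come from the article's Salesforce example;
# the headcount of 50,000 is an ASSUMED placeholder, not a sourced figure.
income_before = 180_000   # senior PM salary before the layoff
income_after = 45_000     # income after switching to driving Uber
assumed_workers = 50_000  # stand-in for "tens of thousands" of similar cases

loss_per_worker = income_before - income_after
aggregate_loss = loss_per_worker * assumed_workers

print(f"Lost income per worker: ${loss_per_worker:,}/yr")   # $135,000/yr
print(f"Aggregate annual loss:  ${aggregate_loss:,}/yr")    # $6,750,000,000/yr
```

Even under this assumed headcount, the hit to annual consumer spending runs into the billions of dollars, which is what makes the contraction "calculable" rather than abstract.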
This story is playing out globally, perhaps right around us.
No villains to find
Citrini’s article states:
“Historically, disruptive models show that existing companies resist new technologies, only to be eventually eroded by agile newcomers, leading to decline. Kodak, Blockbuster, and BlackBerry are classic examples. But the situation in 2026 is entirely different: existing companies didn’t resist because they couldn’t afford to.”
This is the key to understanding the entire situation.
Klarna was hit by AI, then used AI to cut costs and lay off staff. Salesforce’s software was challenged by AI, leading to 4,000 support roles cut. Block was impacted by the wave of AI in fintech, then announced a complete organizational overhaul with nearly half the staff laid off.
They are not victims defeated by AI. They are the most active adopters of AI, and what was defeated were their own employees.
This is the hardest part to reconcile within moral frameworks.
After the 2008 financial crisis, people knew whom to blame: Wall Street bankers, traders selling junk bonds, regulators lacking oversight. Anger had clear targets, even addresses, leading to Occupy Wall Street.
This time, it’s different.
It’s hard to say Dorsey did anything wrong; the market’s reaction shows how investors judged the decision. The 4,000 laid-off people didn’t do anything wrong—they simply worked in roles that are being restructured. AI itself isn’t inherently evil; it’s just a tool that is becoming more useful at unprecedented speed.
Responsibility is diffused throughout the system, like salt dissolving in water—you can taste the salt, but can’t find the grain.
Two sentences from Citrini’s article, not widely quoted, may be the most profound:
“This is the first time in history that the most productive assets in the economy are creating fewer rather than more jobs. No existing framework fits because they weren’t designed for a world where scarce production factors become abundant.”
Every previous technological revolution saw humans find new roles. The steam engine replaced manual weavers but created railway workers, factory managers, and urban planners. The internet eliminated travel agencies, brick-and-mortar record stores, and classified ads but gave rise to product managers, data analysts, and content creators. Each time, the “jobs of the future” were initially hard to describe, but they appeared eventually, in sufficient numbers.
This comforting pattern is now facing its first real challenge.
Because this time, those “jobs of the future”—like AI trainers, prompt engineers, AI product managers—are themselves being learned by AI. Workers being replaced can’t simply “upgrade skills” to shift into AI-related roles because those roles are also being compressed.
Harvard researchers observed a phenomenon: after AI’s proliferation, recruitment for entry-level positions in tech companies dropped by over 50%. Not because these jobs disappeared, but because they were never created in the first place.
An entire generation was trained to enter an industry that, just as they were about to graduate, quietly decided it no longer needed entry-level humans.
We don’t have the luxury of time to think this through slowly.
Citrini concludes that the canary is still alive, but the real question isn’t whether the canary is dead; it’s whether, when it starts to tremble, you have an exit.
When Block laid off half the company, there were no villains in the AI unemployment wave
Written by: Yellow Lobster, Deep Tide TechFlow