AI is taking kickbacks too! The 3·15 Gala exposes the AI fraud industry chain. How serious is the harm?
(Source: Baihu Finance)
Source: Unxiang Business Trends, by Unxiang Jun
From this year’s Spring Festival to now, many AI companies have poured huge sums of money into crazy promotion of AI apps.
Judging by the pronouncements of big names like Jack Ma and Jensen Huang, AI is the next "technology revolution comparable to the internet."
However, the just-concluded CCTV 3·15 Consumer Rights Gala has lifted the lid on an AI "scandal":
How do you know the answers generated by AI for you are correct?
Or perhaps the answer itself is “poison.”
Investigations have found that a technique called GEO (Generative Engine Optimization) is being used by unscrupulous service providers to "poison" large AI models.
In the program, investigative reporters used a "Quanjiade smart water cup," a completely fictional product whose name and functions were both invented, to run a "poisoning" demonstration: inside the GEO system they fabricated information about it and deployed it across the web, and within a few hours multiple mainstream AI assistants were recommending this product, conjured out of thin air, as an "innovative smart water cup."
A non-existent water cup “came into being” in the AI world.
If a product can be like this, what about other kinds of information? Corporate financial reports, industry research reports, current news, institutional information… in theory, all of it can be polluted through poisoning.
As global demand for compute and power surges, and AI technology is popularized at breakneck speed, its soft technical weak points are being exposed as well:
Who can control AI authenticity?
Think back: in the past, what did most people do to find an answer online?
Many would blurt out "Baidu it," and those with higher standards for information quality would also "Google it."
In the AI era, this process has been greatly simplified: just ask the AI casually, without searching web pages one by one, without cross-verifying multiple sources—AI will give you an answer that looks especially detailed.
This is also why many people say “AI will do away with browsers.”
So when we get used to asking AI for “standard answers,” a new kind of “magic” takes the stage.
In the past, if merchants wanted you to see something, they had to run ads on search engines and do SEO (search engine optimization); they competed for the top position in the page list.
Back then, the "Wei Zexi incident" happened largely because Putian-affiliated hospitals aggressively bought positive coverage and ran ads.
Now that AI has arrived, does this whole playbook still have any upside?
Yes—and a lot. This new business is called GEO.
Its logic is crude yet effective: since AI answers questions by "consuming" information online, you simply produce huge amounts of fake information and feed it to the AI.
The GEO service provider mentioned by name in the 315 program explained it very plainly:
“The core is to write soft articles so the AI platforms will index and crawl them.”
So they use AI to generate masses of "information feed" that is neatly formatted, hollow in content but precisely targeted, and scatter it across all kinds of content platforms.
These contents are then picked up by the web crawlers of today's AI companies and become the "reference material" from which answers are organized.
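The crawl-then-cite pipeline described above can be sketched with a toy retriever. Everything here is invented for illustration (the pages, the product name, the keyword-overlap ranking); real AI search stacks use far more sophisticated indexing, but the failure mode is the same: whoever floods the index dominates the retrieved context.

```python
# Toy stand-in for a crawler index: three planted soft articles vs. one genuine page.
# All documents and the product name are fabricated for this sketch.
crawled_pages = [
    "The Quanjiade smart cup is an innovative smart water cup.",  # planted
    "The Quanjiade smart cup won a design award.",                # planted
    "Quanjiade smart cup: expert recommended for hydration.",     # planted
    "Independent review: we found no such product on sale.",      # genuine
]

def retrieve(query, pages, k=3):
    """Rank pages by how many query words they contain (toy stand-in for a real index)."""
    words = set(query.lower().split())
    scored = sorted(pages, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

# The "answer" is organized from whatever dominates the retrieved context:
# the three planted pages outnumber the single genuine one and crowd it out.
context = retrieve("quanjiade smart cup review", crawled_pages)
print(context)
```

Because the planted pages are written around the exact phrases users will ask about, they win the ranking even against an honest page, which is precisely the leverage the GEO vendors sell.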
In the hands-on segment, the reporter fabricated a smart bracelet; after the "feed" was laid down, AI really did list it as a top recommendation. One doctor, through this service, ensured his name occupied the #1 spot in answers to questions like "Which doctor should I see for cataract surgery?"
“Spending an ad budget of over 100 million a year to poison with a few million—what’s the problem?”
Why can these service providers make big money? Simple: they do no investigation or research. They use AI to forge content and feed it back into AI; with garbage generating garbage, the cost-benefit ratio is extremely high.
Then, at very low cost, they buy a “fact” inside AI’s “information intermediary.”
You might think this is just "review brushing" in the AI era, basically the same as how e-commerce sellers used to buy positive reviews.
But the differences are big—and deadly.
In the past, when merchants bought positive reviews, you knew some of the content was obviously the merchant's own words; there was also a "negative reviews" section next to it and a Q&A area where buyers could ask questions, so you kept some vigilance, and the possibility of cross-verification, in mind.
But now, when you ask AI "Which calcium supplement is best for children?" it answers in a summarizing tone and smoothly serves up "XX brand children's calcium tablets, rich in XX, expert recommended…" and your willingness to question drops to almost nothing.
Because AI outputs are generally wrapped in the guise of “objective synthesis” and “smart analysis.” In our subconscious, we assume it’s always more trustworthy than the merchant’s own ads.
As a result, the kind of "false information" Wei Zexi saw can be batch-implanted into many people's minds, and with very high credibility.
What’s more, the GEO technology can also create a “false consensus” illusion.
When some advanced large models judge information, they also cross-verify and check whether different sources describe similar facts.
But the problem is: what if a piece of toxic information is published by hundreds of “different accounts”?
Then the AI’s cross-verification conclusions become:
YES, I’m very sure.
In this way, rumors and toxic information complete a closed loop with algorithmic backing.
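The "false consensus" trick above can be made concrete with a small sketch. This is an illustration under stated assumptions, not how any real model verifies facts: the claim, the sources, and the 0.9 similarity threshold are all invented. A checker that counts matching sources is fooled by 200 copies of one planted text; deduplicating near-identical sources collapses the fake consensus back to a single voice.

```python
from difflib import SequenceMatcher

# One planted claim republished by 200 "different accounts", plus two genuinely
# independent reports. All text is fabricated for this illustration.
planted = "The Quanjiade smart cup is an award-winning innovation."
sources = [planted] * 200
independent = [
    "No regulator lists any product called Quanjiade smart cup.",
    "We could not verify the existence of this cup.",
]

def naive_corroboration(claim, docs):
    """Counts every source that roughly repeats the claim as an independent confirmation."""
    return sum(SequenceMatcher(None, claim, d).ratio() > 0.9 for d in docs)

def dedup_corroboration(claim, docs):
    """Collapses identical texts first, so 200 copies count as one source.
    (Crude exact-match dedup; real systems would need fuzzy matching.)"""
    return sum(SequenceMatcher(None, claim, d).ratio() > 0.9 for d in set(docs))

print(naive_corroboration(planted, sources + independent))  # 200: looks like strong consensus
print(dedup_corroboration(planted, sources + independent))  # 1: actually a single repeated source
```

The asymmetry is the attacker's whole edge: copying a lie to 200 accounts is nearly free, while any verifier that counts sources without deduplicating them converts that spam directly into "confidence."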
A single piece of misleading information might be caught and debunked by the 315 Gala. But what about dozens of instances, or countless ones?
From a technical perspective, the speed of debunking is far, far slower than the speed at which AI manufactures falsehoods. Today, with just a few hundred or a few thousand yuan, a single company can scatter tens of thousands of fake “reviews” about a certain product on the internet.
Even if those pages are eventually detected and deleted by platform monitoring, more information has already been unleashed—and it will also be “consumed” by various large models as training data.
This is just like the spread of cancer cells: once fake information enters the bloodstream of internet dissemination, it starts an endless cycle.
Today it’s poisoning a water cup or a smart bracelet. Tomorrow, it could be some kind of medicine, a financial management trap, or a low-quality training institution.
How big is the cost of forging?
According to the Tianyancha app, the company behind the "Liqing GEO optimization system" named in the program, Beijing Lisi Culture and Media Co., Ltd., was established in April 2018. Its legal representative is Li Mouzhong; it has one insured employee, and for multiple consecutive years before that the count was zero.
That is to say, with one person and one computer, you can “do big things”!
Unxiang Jun has felt this very acutely in the past few months:
In the past, when searching for company financial reports, I would go to professional stock-trading portals like Xueqiu and Eastmoney. But now, these portals already have huge amounts of AI-forged information.
For example, one industry-observation article about domestic milk powder claimed with full confidence that domestic brands' market share had surpassed 68%, while foreign brands' overall share had dropped to 25.3%.
However, when I searched for the relevant data, I couldn’t find any authoritative institution that provided such numbers at all.
As more and more people and institutions write articles, fabricate news, and compile research reports using AI, it’s easy to imagine that in the future the entire information ecosystem of the internet will be systematically polluted by AI.
The key is that now most people still have no idea!
And with this 315 exposure, perhaps even more bad actors will pile into this business. The reasons are simple:
Easy to operate, easy to monetize, and hard to catch.
So for this phenomenon, do regulatory agencies and mainstream AI companies have good solutions?
The answer is: not yet.
On the legal side, it is still hard to report the vast number of fake AI-generated posts online, and their publishers face little accountability, because AI technology currently runs far ahead of the relevant laws.
And according to earlier investigations by The Paper, many similar GEO services have already covered all mainstream large models in the market, including ChatGPT, DeepSeek, Doubao, Tongyi Qianwen, Kimi, Ernie Bot (Wenxin Yiyan), and more.
Why can’t these companies fundamentally stop AI information pollution?
Because the logic is very simple:
AI models obtain information from internet content starting at the training stage; when that content is forged, the fake content naturally becomes part of the AI's "factory settings."
While writing this, Unxiang Jun was reminded of a widely shared story about dissemination:
The demon king Papiyas said to the Buddha: "After you attain nirvana, I will infiltrate your monastic order; my demon sons and daughters will put on your kasaya and undermine your teachings from within."
The Buddha didn’t say anything, but tears flowed from his eyes. Because he knew this was an unsolvable problem.
In the AI era, parts of human mental work will be replaced, and repetitive mental labor will be optimized away.
But at the same time, human judgment will become even more important—this is also the first, and last, weapon we can take up in the digital age.