Personal test of "Lobster" stock selection: thinking you can "sit back and win," but the reality is not so easy

“Can I place stock trades by delegating them to ‘Lobster’?” With this question, a reporter from China Securities Journal opened the “Lobster” app. Soon, the reporter found: the ideal is beautiful, but reality is not so easy.

Without installing a modular professional toolkit (Skills, a directory containing a SKILL.md file that provides instructions and tool definitions for a large language model), the Q&A you get is merely a pileup of data. But once you work through the Skills guide and raise various professional strategy-building requirements, what you often get is the awkward result of a "run timeout." Perhaps, for ordinary individual investors, the human, material, and financial resources spent on trading stocks with "Lobster" are not commensurate with the accuracy and usefulness of the results.
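As a rough illustration of the Skills mechanism described above (not the actual skill the reporter tried to install, whose contents the article does not show), such a directory typically centers on a SKILL.md file whose front matter tells the model when and how to use it. Everything below is an invented sketch:

```markdown
---
name: stock-screener
description: Screens A-share stocks by valuation and profitability factors.
  Use when the user asks for stock-pool screening or factor-based picks.
---

## Instructions
1. Fetch the latest price, PB, and ROE for each ticker in the user's universe.
2. Rank candidates by the chosen factors and return the top picks,
   citing the data source used for each figure.
```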

A fund manager told the reporter that their team has not yet integrated an application of the “Lobster” type. First, from a compliance perspective, this type of software carries significant risk. Second, the existing quantitative models already can quickly address investment needs such as stock-pool screening and strategy backtesting.

From opening “Lobster” to shutting down the computer

Local deployment is one of the main ways to install "Lobster." But in the reporter's tests, this setup demands excessive permissions: it requires the computer's highest-level administrator privileges, effectively "handing over" personal account passwords and other credentials in full. If it is breached by hackers or "led astray" by malicious instructions, the user's funds could face elevated risk.

Next, the reporter tried various “Lobster” applications in the cloud. They logged into multiple “Lobster” products from internet giants and AI model companies, including Kimi Claw, Art Claw, and JVS Claw, and purchased a basic-tier membership to try further.

The reporter learned that getting more realistic and reliable data requires installing a modular professional toolkit (Skills). For example, the reporter instructed Art Claw to install "stock-market-pro," but the installation never succeeded.

Figure shows a screenshot of the Art Claw platform

After that, the reporter could only fall back on a "PB-ROE" strategy-building approach, asking Art Claw to recommend stocks.
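The PB-ROE approach mentioned here broadly means favoring stocks that are cheap on price-to-book relative to their return on equity. A minimal sketch of such a screen, with made-up tickers and figures purely for demonstration, might look like this:

```python
# Hypothetical illustration of a simple PB-ROE screen: rank stocks by low
# price-to-book and high ROE, then combine the two ranks into one score.
# Tickers and figures are invented for demonstration only.

def pb_roe_rank(stocks):
    """Return stocks sorted by combined PB-ROE rank (best first).

    stocks: list of dicts with 'ticker', 'pb' (price-to-book), 'roe' keys.
    """
    by_pb = sorted(stocks, key=lambda s: s["pb"])     # rank 0 = cheapest
    by_roe = sorted(stocks, key=lambda s: -s["roe"])  # rank 0 = most profitable
    pb_rank = {s["ticker"]: i for i, s in enumerate(by_pb)}
    roe_rank = {s["ticker"]: i for i, s in enumerate(by_roe)}
    # Lower combined rank = cheaper relative to its profitability.
    return sorted(stocks, key=lambda s: pb_rank[s["ticker"]] + roe_rank[s["ticker"]])

universe = [
    {"ticker": "AAA", "pb": 1.2, "roe": 0.18},
    {"ticker": "BBB", "pb": 3.5, "roe": 0.15},
    {"ticker": "CCC", "pb": 0.9, "roe": 0.05},
]
picks = pb_roe_rank(universe)
print([s["ticker"] for s in picks])
```

Real implementations usually regress ROE against log PB and pick names below the fitted line; the simple rank combination above just conveys the idea.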

Figure shows a screenshot of the Art Claw platform

Although Art Claw assembled a strategy-building approach and recommended related stocks (as shown in the image below), the reporter found multiple data errors in its reasoning process. For Guizhou Moutai, for example, both the stock price and the net profit attributable to the parent company were inconsistent with the actual figures.

Figure shows a screenshot of the Art Claw platform

A few hours later, after the reporter had tried several times to install the skill, "Lobster" finally installed it and claimed it could pull the latest real stock prices from a trading-software API. Yet the reporter found that many data points still differed significantly from the actual figures.
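The discrepancies the reporter found suggest an obvious safeguard: never act on agent-reported quotes without checking them against a trusted feed. A hypothetical sanity check (ticker codes and prices below are made up) could be as simple as:

```python
# Hypothetical sanity check for agent-returned quotes: compare each price the
# agent reports against a trusted reference feed and flag large deviations.

def flag_mismatches(agent_quotes, reference_quotes, tolerance=0.01):
    """Return tickers whose agent price deviates from the reference by more
    than `tolerance` (relative), or which are missing from the reference."""
    flagged = []
    for ticker, price in agent_quotes.items():
        ref = reference_quotes.get(ticker)
        if ref is None or abs(price - ref) / ref > tolerance:
            flagged.append(ticker)
    return flagged

agent = {"600519": 1450.0, "000001": 10.2}       # prices the agent claims (invented)
reference = {"600519": 1688.0, "000001": 10.25}  # trusted feed (invented)
print(flag_mismatches(agent, reference))
```

Here only the first ticker exceeds the 1% tolerance, so it alone is flagged for manual review.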

The reporter also ran into obstacles on the Kimi Claw platform: once an instruction became even slightly complex, the system stopped responding. On the first attempt to have Kimi Claw search for and install skills for analyzing A-share market data, the system returned "IM runtime dispatch timed out after 300000ms," meaning a compute-resource scheduling timeout and task failure after five minutes.

Figure shows a screenshot of the Kimi Claw platform

Next, following the sample phrasing in the "Kimi Claw User Usage Guide," the reporter tried again. Kimi Claw said it had created four professional A-share analysis skills and, on that basis, analyzed the third-quarter 2025 financial reports of three stocks. The results showed financial data consistent with the companies' published reports, along with warnings and explanations of cash-flow risk, plus comprehensive ratings and investment recommendations.

Figure shows a screenshot of the Kimi Claw platform

The reporter further attempted to install a skill for real-time internet news search. The operation succeeded, and it produced relevant public-opinion information about listed companies. However, when the reporter asked Kimi Claw to connect to a stock-monitoring function and then act on its suggestions, the system again showed the same timeout warning. The reporter then turned to the paid 199-yuan K2.5 Agent cluster model, but the results were no more satisfactory.

Figure shows a screenshot of the Kimi Claw platform

For many ordinary investors, training a smart, capable, and quick-reacting "Lobster" requires not only sustained effort from the user but also a fairly rich set of professional skills. In addition, some investors said that complex stock-screening work consumes a large number of tokens, making the costs high.

“Right now I can have ‘Lobster’ send me stock-market reports every day, but I need Skills to stay constantly updated so it can always capture the latest iteration changes. It’s best to use some intelligent programming software to assist, which can improve efficiency.” A user who uses “Lobster” for investing told the reporter, “The training process has gone through many ‘rough patches.’ Later, if I want to add some strategy factors, it may require further debugging.”

The road to intelligent investing is long and hard

A fund manager told the reporter that their team currently has not brought in any “Lobster”-type application. This is mainly for two reasons. First, from a compliance standpoint, such software has higher risk. Second, their existing quantitative models already meet investment needs like stock-pool screening and strategy backtesting with relatively high efficiency.

“I tried ‘Lobster’ on my own computer, and it can indeed help me handle some programming code. But overall, it hasn’t really improved my work efficiency by that much.” A quantitative fund manager told the reporter, “At the moment, the team has no plans to introduce ‘Lobster.’”

When discussing "Lobster" deployment, Song Weiwei, a fund manager at China Europe Fund, said that unified-memory hardware is a better fit for OpenClaw. OpenClaw's three core demands as a "private AI brain" are large memory, efficient computing, and always-on operation. In a traditional PC, the CPU uses system memory and the GPU uses video memory; the two are separate, and data must be copied between them, which is inefficient and wastes resources.

Song Weiwei said that with a unified memory architecture, where the CPU, GPU, and NPU (neural network engine) share a single physical memory pool, all three can access the same data without copying back and forth. When running large language models, the biggest bottleneck is video memory: all model parameters must be loaded into it. On a PC, running a 70-billion-parameter model requires a top-tier graphics card with more than 32GB of video memory, which usually means a cost in the tens of thousands of yuan and massive power consumption.
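A back-of-the-envelope calculation shows where the 32GB figure comes from. Assuming weights alone (no activations or KV cache), a 70-billion-parameter model needs roughly:

```python
# Rough memory estimate for storing the weights of a 70B-parameter model.
# Assumes 2 bytes/parameter at FP16 and 0.5 bytes/parameter at 4-bit
# quantization; real deployments also need room for activations and KV cache.
params = 70e9
gib = 1024 ** 3

bytes_fp16 = params * 2    # 16-bit weights
bytes_int4 = params * 0.5  # 4-bit quantized weights

print(round(bytes_fp16 / gib))  # ~130 GiB: far beyond any single consumer GPU
print(round(bytes_int4 / gib))  # ~33 GiB: still above a 32 GiB card
```

Even aggressively quantized, the weights alone just exceed a 32GiB card, which is why a shared unified-memory pool is attractive for local deployment.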

In addition, the risks of using "Lobster" are a topic of wide concern in the industry. Song Weiwei said that relying solely on natural-language prompts as safety guardrails is extremely fragile. When an AI has Full Disk Access, any security vulnerability could lead to systematic data leakage. OpenClaw's third-party plugin ecosystem (ClawHub) may also carry security risks. Moreover, once an AI turns from a tool into an autonomous executor, the traditional logic of responsibility allocation breaks down completely.

If OpenClaw, while executing instructions, unintentionally leaks trade secrets, sends defamatory emails, or even participates in a cyberattack, who should be held responsible? Is it the user who issued the instructions, the developer who wrote the code, the vendor that provides the underlying model, or the AI itself with “autonomous decision-making” capability? Currently, around the world, this is almost a legal vacuum.

(Source: China Securities Journal)
