Cursor announces Warp Decode, an MoE inference optimization that delivers a 1.84× throughput increase on Blackwell GPUs.
According to 1M AI News monitoring, the AI programming tool Cursor published a technical blog post introducing its in-house MoE (Mixture of Experts) inference acceleration method, Warp Decode. The method targets small-batch token generation on NVIDIA Blackwell GPUs and flips the traditional expert-centric parallelization strategy into an output-centric one: each warp (a group of 32 threads, the GPU's smallest scheduling unit) is responsible for computing exactly one output value, independently iterating over all experts the token has been routed to and accumulating the result in registers, with no cross-warp synchronization and no intermediate buffers.
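The output-centric idea can be sketched in plain NumPy. This is a minimal illustration, not Cursor's implementation: the sizes, the router output, and the function `warp_compute_output` are all hypothetical, and each call to it stands in for the work one warp would do on the GPU.

```python
import numpy as np

# Hypothetical sizes for illustration (not from Cursor's blog)
HIDDEN = 64        # model hidden dimension
NUM_EXPERTS = 8    # experts in the MoE layer

rng = np.random.default_rng(0)
x = rng.standard_normal(HIDDEN).astype(np.float32)  # one decoded token
experts = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN)).astype(np.float32)
routed = [1, 5]                                 # expert ids chosen by the router
gates = np.array([0.7, 0.3], dtype=np.float32)  # router gate weights

def warp_compute_output(out_idx):
    """Mimic one warp: produce a single output element by iterating over
    the token's routed experts and accumulating in a local variable (the
    'register'), with no shared intermediate buffers."""
    acc = np.float32(0.0)
    for gate, e in zip(gates, routed):
        # dot product of the input with one row of this expert's weight matrix
        acc += gate * np.dot(experts[e, out_idx], x)
    return acc

# One independent "warp" per output element
y = np.array([warp_compute_output(i) for i in range(HIDDEN)])

# Reference: the same MoE layer computed expert-by-expert
y_ref = sum(g * (experts[e] @ x) for g, e in zip(gates, routed))
assert np.allclose(y, y_ref, atol=1e-4)
```

The point of the restructuring is that every output element is computed start-to-finish by one unit of work, so the per-expert partial results never need to be materialized in memory.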
The traditional MoE inference pipeline has 8 stages, 5 of which exist only to rearrange data into and out of the expert view and perform no actual computation. Warp Decode compresses the entire MoE compute layer into 2 CUDA kernels, eliminating intermediate steps such as padding, scatter, and gather; as a result, each token incurs more than 32 KB less intermediate-buffer read/write traffic.
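The data movement being eliminated can be made concrete with a toy comparison, again with purely illustrative sizes and a made-up routing table. The expert-centric path below gathers tokens into per-expert buffers and scatters results back, which is exactly the intermediate traffic the fused approach avoids; both paths must produce identical outputs.

```python
import numpy as np

# Toy sizes, purely illustrative
B, H, E, K = 4, 16, 4, 2   # batch, hidden dim, num experts, top-k
rng = np.random.default_rng(1)
X = rng.standard_normal((B, H)).astype(np.float32)
W = rng.standard_normal((E, H, H)).astype(np.float32)
route = np.array([[0, 2], [1, 3], [0, 1], [2, 3]])  # top-k expert ids per token
gate = np.full((B, K), 0.5, dtype=np.float32)

# Expert-centric pipeline: gather tokens per expert, batch-compute, scatter back.
# The gather/scatter buffers are the intermediate read/write Warp Decode removes.
Y_scatter = np.zeros_like(X)
for e in range(E):
    tok, slot = np.where(route == e)   # which (token, k-slot) pairs hit expert e
    gathered = X[tok]                  # gather into a per-expert buffer
    out = gathered @ W[e].T            # batched expert matmul
    Y_scatter[tok] += gate[tok, slot][:, None] * out  # scatter-add back

# Output-centric computation: each token walks its own routed experts directly.
Y_direct = np.zeros_like(X)
for b in range(B):
    for k in range(K):
        Y_direct[b] += gate[b, k] * (W[route[b, k]] @ X[b])

assert np.allclose(Y_scatter, Y_direct, atol=1e-4)
```

In the NumPy toy both paths cost about the same; on a GPU, the gather and scatter-add lines become separate kernel launches with their own global-memory round trips, which is where the claimed per-token savings come from.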
Measured on NVIDIA B200 GPUs with a Qwen-3-style model, Warp Decode achieves a 1.84× end-to-end decoding throughput improvement. Because it computes in BF16/FP32 throughout and avoids the accuracy loss of intermediate quantization, output accuracy stays close to the FP32 baseline at 1.4×. On hardware bandwidth utilization: at batch size 32, sustained throughput reaches 3.95 TB/s, about 58% of the B200's quoted peak bandwidth of 6.8 TB/s. The optimization directly accelerates Cursor's development iteration and the release cadence of its in-house coding model, Composer.
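The utilization figure follows directly from the two quoted numbers:

```python
# Sanity check of the utilization figure quoted above
sustained_tbps = 3.95   # measured sustained throughput, TB/s
peak_tbps = 6.8         # peak bandwidth figure quoted in the article, TB/s
utilization = sustained_tbps / peak_tbps
print(f"{utilization:.0%}")   # → 58%
```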