Luma Rolls Out Uni-1, Its First Model Integrating Reasoning and Image Generation
In Brief
Luma unveiled Uni-1, its first model to combine reasoning and image generation in one architecture, a major shift from the video-focused startup’s roots.
AI video-generation startup Luma introduced Uni-1, its first model that integrates reasoning and image generation within a single architecture, marking a strategic shift from the company’s previous focus on video content.
According to the company, over the past three years, Luma’s work has evolved from scene reconstruction to 3D generation and the scaling of video diffusion, but visual media alone has limitations without integrated understanding. Uni-1 is positioned as the firm’s first unified model designed to combine reasoning and generative capabilities, aiming to advance multimodal general intelligence.
Luma describes general intelligence as the ability to reason, imagine, manipulate symbols, and simulate environments. While existing AI systems can perform these functions separately, Uni-1 seeks to combine them within a single framework, modeling time, space, and logic together to enable problem-solving that traditional, segmented pipelines cannot achieve.
The model is built as a decoder-only autoregressive transformer that represents text and images in a single interleaved sequence, which serves as both input and output. Uni-1 performs structured internal reasoning: it breaks down instructions, resolves constraints, plans composition, and then renders images accordingly.
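To make the interleaved-sequence idea concrete, here is a minimal, illustrative sketch (not Luma’s actual code; the `<boi>`/`<eoi>` marker tokens and token names are hypothetical) of how a decoder-only model can treat text tokens and image tokens as one sequence and predict each position autoregressively from its prefix:

```python
def interleave(text_tokens, image_tokens):
    """Flatten text and image tokens into one sequence, bracketing the
    image tokens with marker tokens so the model can switch modality.
    BOI/EOI ("begin/end of image") are hypothetical special tokens."""
    BOI, EOI = "<boi>", "<eoi>"
    return text_tokens + [BOI] + image_tokens + [EOI]


def causal_contexts(seq):
    """Autoregressive (decoder-only) factorization: each position is
    predicted from the prefix before it, under a causal mask."""
    return [(seq[:i], seq[i]) for i in range(len(seq))]


# A text instruction followed by image tokens in the same sequence.
seq = interleave(["draw", "a", "red", "cube"], ["img_17", "img_42"])
pairs = causal_contexts(seq)
```

Because the image tokens come after the text in the same sequence, every image token is conditioned on the full textual prefix, which is what lets a single architecture “think in language” before rendering pixels.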
Uni-1 demonstrates the ability to “think in language and imagine and render in pixels,” a capability described by Luma as “intelligence in pixels.” Additional outputs, including audio and video generation, are expected in subsequent releases. The model is intended not just as a tool but as a platform that transforms how businesses operate by integrating reasoning directly into creative workflows.
Luma Agents Extend Unified Intelligence
Building on Uni-1, Luma recently launched the Luma Agents, a suite of AI-driven tools designed to handle end-to-end creative production across text, image, video, and audio. The agents operate using Luma’s Unified Intelligence family of models, which are trained on a single multimodal reasoning system. Luma positions the agents as a solution for advertising agencies, marketing teams, design studios, and enterprise clients, offering coordinated creative generation across multiple modalities.
The Luma Agents are compatible with other AI models, including Luma’s Ray 3.14, Google’s Veo 3 and Nano Banana Pro, ByteDance’s Seedream, and ElevenLabs’ voice-generation tools. According to Amit Jain, Luma’s CEO and co-founder, the agents leverage Uni-1’s integrated architecture, which has been trained across audio, video, image, language, and spatial reasoning, allowing them to plan, execute, and generate content in a coordinated, intelligent workflow.