Beijing Auto Show Observation: Is the Automotive Industry Collectively Abandoning “Vase AI”?
If 2023 was the "Year One" of automotive large models, then 2026 is the year of "demystification". Car manufacturers are no longer content to have in-car AI write poetry and tell jokes; instead, they are asking a more practical question: what exactly can this "brain" in the car accomplish for the user?
Behind this inquiry lies a paradigm shift—from “generative AI” to “agentic AI.”
At this auto show, companies such as Lenovo, Volcano Engine, Tencent, SenseTime Jueying, and MediaTek have unveiled a flurry of intelligent agent solutions, collectively pointing to a shared conclusion: cars are evolving from passive "voice assistants" into "human-like intelligent agents" capable of proactive perception, autonomous decision-making, and end-to-end task execution.
This transformation is far more profound than simply replacing a screen with a larger one or adding a more powerful chip.
Will AI start getting practical things done?
Stepping into the main pavilion of this year's auto show, "intelligent agents" have become the most frequently mentioned keyword at nearly every booth of major automakers and technology suppliers.
But upon closer inspection, it becomes clear that this round of AI integration into vehicles is fundamentally different from the previous two or three years.
In the past, in-vehicle large models more often played the role of a "knowledgeable but powerless" passenger: users ask, it answers; users give commands, it executes. Although speech recognition has improved and conversations have become more natural, the underlying logic is essentially unchanged: interaction is still "command-based" or mere "chatting".
Industry observers have pointed out incisively that these features often become "white elephants" in real-world driving scenarios.
The watershed of 2026 is that agents are finally given "limbs": they must possess the full-chain capability to understand intentions, break down tasks, schedule resources, execute operations, and close the loop.
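The full chain described above can be sketched as a minimal task loop. This is an illustrative sketch only, with invented intents, step names, and canned results to show the shape of "understand, decompose, schedule, execute, close the loop"; no real in-vehicle API is implied.

```python
# Illustrative sketch of an agent's full-chain task loop.
# All intents, step names, and results are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], str]  # stand-in for a real vehicle/service call

def handle_request(utterance: str) -> list[str]:
    """Turn a user intention into an executed sequence of steps."""
    # 1. Understand the intention (stand-in for an LLM intent parser).
    intent = "plan_rest_stop" if "tired" in utterance else "chat"

    # 2. Decompose into tasks and schedule concrete resources.
    if intent == "plan_rest_stop":
        steps = [
            Step("find_service_area", lambda: "service area 12 km ahead"),
            Step("reroute_navigation", lambda: "route updated"),
            Step("confirm_with_driver", lambda: "driver notified"),
        ]
    else:
        steps = [Step("reply", lambda: "chatting")]

    # 3. Execute each step and collect results, closing the loop.
    return [f"{s.name}: {s.run()}" for s in steps]

results = handle_request("I'm tired, find a place to rest")
```

The point of the sketch is the contrast with "command-based" interaction: one utterance fans out into several scheduled, executed, and reported steps rather than a single reply.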
Zhong Xuedan, Vice President of Tencent Smart Mobility, offered a succinct assessment of this situation: “In the second half of the automotive intelligence race, competition is no longer about how many AI features a vehicle offers, but rather about who can first integrate large models, vehicle capabilities, and service ecosystems into an intelligent central hub that is perceivable, planable, executable, and continuously evolvable.”
But Zhong Xuedan also made a sharper judgment: the industry's "hype" about large models should cool down.
He openly stated in an interview with Gasgoo Auto before the auto show: "In the past year, we may have promoted large models a lot, but in fact, they have not solved any problems. It's better to ask Yuanbao on the phone, which can solve more problems — there's no need to migrate large models to vehicles. In cars, it's more about agents that are integrated with the vehicle's sensor information, vehicle functions, and car-related scenarios, which can bring more differentiation."
This statement highlights an industry pain point: Simply embedding large models into vehicles is meaningless; the key is to use agents to genuinely solve problems in specific scenarios.

Image source: Tencent
What is the technological foundation behind this transformation? At this year's auto show, multiple players offered their own answers.
On the opening day of the auto show, Volcano Engine unveiled a new-generation automotive AI solution built on an Agentic AI architecture. Its core breakthrough lies in employing a unified “AI brain” to deeply integrate key functional domains—including vehicle control, navigation, and intelligent driving—thereby achieving a complete closed loop of “perception, reasoning, execution, memory, and learning.”
Unlike traditional "question and answer" systems, this system has self-driven capabilities. During long trips, it can automatically switch to modes such as singing, telling stories, or playing cartoons based on the front seat passenger's condition, without the driver needing to repeatedly give instructions.
Even more notable is the market-penetration data. At the launch event, a Volcano Engine executive disclosed the latest figures: more than 50 car brands and 145 models have adopted the Doubao large model, totaling over 7 million vehicles, with a daily average of more than 30 million completed in-cabin interactions and service loops.
MediaTek unveiled a more ambitious concept at this year's auto show—"AI-Defined Vehicle" (AIDV).
MediaTek Vice President and General Manager of the Automotive Platform Business Unit, Yutai Zhang, explained: “AI-defined vehicles are not merely about using AI to implement functions; more importantly, the models must be capable of rapid iteration, ensuring that the vehicle’s ‘intelligent brain’ remains constantly updated and online.”
This means that AI is no longer an additional feature of the car, but the core soul of the entire vehicle architecture.
MediaTek’s automotive cockpit platform C-X1 uses a 3nm process and provides up to 400 TOPS of full-modal AI computing power, serving as the on-device computing foundation for this "active intelligent cockpit."

Image source: MediaTek
Lenovo's vehicle-computing path is more distinctive.
Its “Vehicle Computing 2.0” strategy, unveiled on the first day of the auto show, centers on two core products: the Auto AI Box and the OneAI Automotive Intelligence Platform. Built upon the NVIDIA DRIVE AGX Thor-Z platform, this solution delivers 360 TOPS of AI computing power (FP8) for cockpit intelligence applications and supports on-device deployment of multimodal large language models up to 30 billion parameters, reducing interaction latency from seconds to milliseconds.
Lenovo Vice President Xu Liang described the 2.0-stage strategy as an upgrade from a "computing platform" to an "AI agent platform," positioning the car as the "personal mobile AI intelligent hub" after the smartphone and the PC.
From these releases, a clear industry consensus emerges: 2026 is viewed as the watershed year when in-vehicle AI transitions from generative to agentic.
Large models are no longer just "ornaments" in the cockpit, but are beginning to truly take on the role of "handling tasks."
However, to truly understand where the tipping point of this change lies, we need to examine whether two conditions have matured.
Zhong Xuedan provided a precise breakdown in the interview: “From dialogue to execution, it relies on two things. First is the inherent capability of the technical foundation—when large models are deployed in vehicles, the initial priority is optimizing dialogue and improving user experience; however, to transition from dialogue to actionable execution, model capabilities must evolve to ensure safety and controllability. Second is the recent evolution of engineering paradigms over the past six months—paradigms such as Manus, which impose constraints and governance on intelligent agents, yield more stable outputs.”
He further pointed out that the combination of these two conditions makes “from saying to doing” possible: “On the one hand, models have advanced in engineering; on the other hand, strong ecosystem integration capability is required—if we possess the capability but find nothing usable when attempting to execute, it still won’t work.”
This breakdown reveals a key logic: the arrival of Agents in 2026 is not the result of a single technological breakthrough, but rather the simultaneous alignment of model capabilities, engineering paradigms, and ecosystem connections.
"One Brain, Multiple Forms" and the Ecosystem Race: The Diverging Paths of Automotive Intelligent Agents
If one word were to summarize the technical route competition of AI Agents at this auto show, it would be "One Brain, Multiple Forms" — using a single unified intelligent agent to drive diverse capabilities across different scenarios and terminals.
However, Gasgoo Auto has observed that, under this overarching direction, automakers have adopted distinctly divergent approaches.
Edge AI: Computing Power in a "Box"
SenseTime Jueying is a typical representative of this school. At the auto show, it debuted the Sage Box (Qianji Smart Box), which features a three-layer architecture comprising the Sage on-device model, the Qianji System, and the New Member native intelligent agent. Its core selling points are “zero-cost tokens,” “always-on responsiveness,” and “one brain, multiple forms.”
SenseTime Jueying's New Member agent has made the core leap from "being able to chat" to "being able to get things done": it supports navigation from vague intentions, plans personalized itineraries by combining user memory with scene information, and can simultaneously recognize and process commands from multiple occupants and execute tasks in one step.
In the longer term, SenseTime Jueying has launched the SenseAuto Go Robotaxi cabin-and-driving integration solution and partnered with T3 Travel to launch trial operations within this year, extending intelligent agent capabilities from the cabin to full-stack autonomous driving.
The logic of edge-side deployment is that data privacy, network dependency, and response latency are the three major weaknesses of cloud-based large models. Moving computing power into the vehicle, whether via Lenovo's Auto AI Box or SenseTime's Sage Box, essentially replaces the "cloud brain" with a "local brain."
This route places extremely high demands on chip computing power, which also explains why chip manufacturers and computing platforms have had an unprecedented presence at this auto show.
Regarding the division of computing power between the edge and the cloud, Zhong Xuedan also provided an assessment: the edge side is responsible for immediate response and basic security, while the cloud side handles complex scenarios. "The computing power at the edge side will become stronger and stronger, and the corresponding capabilities of the edge-side models will also become stronger, but more complex scenarios will definitely still be handled by the cloud. Complex scenarios must rely on the cloud, as the edge side is incapable of handling them."
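The edge/cloud split Zhong describes can be pictured as a simple routing rule: latency-critical and safety-related requests stay on the edge model, while complex multi-step requests escalate to the cloud. The sketch below is a toy illustration; the command labels and the step-count threshold are invented for the example.

```python
# Toy router illustrating the edge/cloud division of labor.
# SAFETY_CRITICAL set and the step threshold are hypothetical.
SAFETY_CRITICAL = {"open_window", "wipers_on", "emergency_call"}

def route(command: str, estimated_steps: int) -> str:
    if command in SAFETY_CRITICAL:
        return "edge"   # immediate response, no network dependency
    if estimated_steps <= 1:
        return "edge"   # simple single-turn commands run locally
    return "cloud"      # complex multi-step planning goes to the cloud model

assert route("open_window", 1) == "edge"
assert route("plan_weekend_trip", 4) == "cloud"
```

The design point matches Zhong's assessment: as edge compute grows, the threshold for what runs locally shifts upward, but the escalation path to the cloud remains part of the architecture rather than a stopgap.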
This means that the edge-cloud hybrid architecture is not a transitional solution, but a long-term coexisting technical framework.
Ecosystem Platform School: Let Agents Grow on Services
In contrast stands Tencent's "ecosystem city-building" strategy.
On the eve of the auto show, at the 2026 TIME DAY event, Tencent officially launched the "Mobility Full-Scenario Agent Open Platform," comprehensively upgrading its seven in-cabin intelligent agent products.
Tencent's approach isn't about building a "universal AI brain," but rather integrating its WeChat Mini Program ecosystem, payment capabilities, map services, and more into the vehicle cockpit, enabling AI agents with practical "task-completion abilities"—such as ordering food, booking restaurants, paying bills, and navigation—all within a seamless, closed-loop experience.
In a sense, Tencent is addressing an industry pain point: even after large models were widely integrated into vehicles, the cockpit functions that truly count as "usable" never advanced beyond "instruction-based" control and "companion chat."
Tencent Group’s Senior Executive Vice President Tang Daosheng articulated the underlying logic: “As an intelligent carrier that integrates software and hardware and maintains strong connections with the physical world, the automobile is naturally suited for Agent scenario deployment.”
In other words, intelligent agents are not merely about “conversational” capability—they are fundamentally about “connectivity.” Only those who can integrate real-world external services into the vehicle can enable agents to truly “get things done.”
Whole-Vehicle Unified Dispatch: One Agent Governs the Entire Vehicle
Geely showcased a more aggressive whole-vehicle approach at this auto show. Its "1+2+N" multi-agent framework features the whole-vehicle-level super agent Eva, which coordinates intelligent driving and the cockpit while linking subsystems such as chassis and energy management to achieve millisecond-level synergy, extending AI agent control from the cockpit to the physical operation of the entire vehicle.
Qiu Xiaoshen, CEO of Axera, further validates this trend: “As edge-side AI agents evolve, the interior of future vehicles will no longer feature isolated cabin and ADAS systems operating independently. Instead, a unified ‘Agent entity’ will emerge to coordinate the vehicle’s various intelligent capabilities, delivering a more cohesive and integrated user experience.”
In other words, the current stage—where the cabin AI agent and the autonomous driving AI agent are developing separately—may just be a transitional phase; the endgame is a single "brain" managing all in-vehicle intelligence.
The Underlying Logic Behind Route Specialization
What’s worth pondering is that the outcome of this route dispute may not necessarily be “winner takes all.”
The edge-side approach addresses the issue of “intelligence without internet,” the ecosystem approach tackles “how much can be accomplished after becoming intelligent,” and the all-domain approach resolves “a unified experience from the cockpit to intelligent driving.” Logically, these three approaches are complementary.
What truly determines the outcome of competition may not be whose technical solution is flashier, but who can first achieve a scalable, sustainably evolvable, and low-cost replicable business loop.
As Zhang Junyi, CFO of SenseTime Jueying, pointed out: traditional auto-parts companies face valuation ceilings when entering AI + automotive businesses, while AI-native companies entering the automotive industry find it easier to win capital-market recognition. This difference in industry attributes will profoundly shape different players' long-term investment capacity.

Image source: SenseAuto
Stock prices, financing capabilities, and R&D talent costs are becoming hidden factors that determine the outcome of the intelligent agent competition.
China's Home Field in the Era of Intelligent Agents: The Restructuring and Anxiety of the Global Industrial Chain
If the story of AI agents at this auto show is told only as Chinese companies taking the lead, a deeper dimension is missed: multinational giants are following up at astonishing speed, and their logic has fundamentally shifted.
Volkswagen Group unveiled its “Omnidirectional Intelligent Agent AI” product and technology roadmap at this auto show, announcing that the technology will be applied to mass-produced vehicles by the end of 2026, comprehensively empowering new CEA-architecture vehicles with intelligent agent AI capabilities.

BMW, based on Alibaba's Qwen large language model, has launched three AI agents specifically tailored for the Chinese market – "Car Expert," "Travel Companion," and "Knowledge Master," covering three major scenarios: car usage, travel, and knowledge Q&A.
Mercedes-Benz also showcased its AI-powered intelligent cockpit and advanced driver-assistance systems built on the MB.OS platform at the auto show.
A journalist from Germany’s ZDF television station remarked meaningfully on-site at the auto show: “Chinese electric vehicles are getting better and better, their performance is stronger, and their designs are increasingly bold—reshaping the global automotive market at an unprecedented pace.”
And the observation from Qatar's Al Jazeera was more straightforward: "The competitiveness of Chinese domestic car manufacturers is no longer just about price, but also competing in innovative concepts. This auto show sometimes resembles more of a technology expo."
However, beyond the narrative of "China's home ground," anxiety is equally palpable.
Amid this noise, a deeper challenge is emerging: as AI Agents move from concept to mass production, and from technical demonstrations into users' daily driving, the issue they face is no longer "whether it can be done" but "whether it is worth using."