NVIDIA and MediaTek have entered into a deep partnership, teaming up to build an efficient central computing architecture. By integrating MediaTek’s Dimensity AX cockpit chips with the NVIDIA DRIVE AGX computing platform, in-vehicle AI is transformed from a single-instruction receiver into an AI agent with reasoning capabilities. By combining edge computing with cloud resources, vehicles can deliver a low-latency, high-privacy, personalized AI agent experience.
In-vehicle AI agent hybrid architecture handles complex compute needs
In-vehicle AI agents adopt a hybrid architecture to meet diverse task requirements. On the edge, the NVIDIA DRIVE AGX platform handles tasks that are extremely latency-sensitive (requiring reaction times under 500 milliseconds) or involve private data, such as voice control, image recognition, and analysis of vehicle telematics data. Even when network reception is unstable or interrupted, the edge can still run large language models and vision-language models (VLMs) with parameter counts of 7B or more, ensuring core functions continue operating. The cloud, by contrast, acts as an “AI factory”: it handles high-compute tasks such as web search and complex itinerary planning, continuously performs model training, fine-tuning, and verification, and then deploys the optimized results back to the vehicle to balance performance dynamically.
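The edge-versus-cloud split described above can be sketched as a simple routing rule. This is an illustrative sketch only: the 500 ms latency budget and the example tasks come from the article, but the `Task` class, field names, and `route` function are assumptions, not part of any NVIDIA or MediaTek API.

```python
from dataclasses import dataclass

# Article's stated budget: edge handles tasks needing sub-500 ms reactions.
EDGE_LATENCY_BUDGET_MS = 500

@dataclass
class Task:
    name: str
    max_latency_ms: int      # how quickly a response is needed
    privacy_sensitive: bool  # involves personal or vehicle-local data
    needs_web_data: bool     # requires live internet access

def route(task: Task) -> str:
    """Decide whether a task runs on the in-vehicle edge or in the cloud."""
    if task.privacy_sensitive or task.max_latency_ms <= EDGE_LATENCY_BUDGET_MS:
        return "edge"   # e.g. voice control, image recognition, telematics
    if task.needs_web_data:
        return "cloud"  # e.g. web search, complex itinerary planning
    return "edge"       # default: keep core functions available offline

print(route(Task("voice_control", 300, True, False)))  # edge
print(route(Task("web_search", 5000, False, True)))    # cloud
```

The default branch reflects the article's point that core functions must keep working even when the network is interrupted.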
How does AI agent orchestration optimize the user experience?
To give users a seamless experience when switching between edge and cloud, the system introduces an agent orchestration mechanism. When the driver makes a complex request, the system identifies the intent from the current context and routes the task to the appropriate local or cloud agents for collaboration. For example, when the driver discusses an itinerary, the system summons the local navigation agent and the cloud search agent to work together. The key is context sharing: the system synchronizes relevant background information across platforms, so users never have to repeat commands and cloud results are correctly fed back to the local system. This transparent interaction logic (UX transparency) tracks the status of asynchronous tasks, so the system maintains stable, coherent service even when the network switches or drops, minimizing disruption to the driver.
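The orchestration and context-sharing pattern above can be sketched as a small dispatcher. This is a hypothetical illustration: the `Orchestrator` class, the agent names, and the lambda handlers are assumptions for demonstration, not the actual system's API.

```python
class Orchestrator:
    """Routes one utterance to registered agents, sharing a common context."""

    def __init__(self):
        self.context = {}  # background info shared across edge and cloud
        self.agents = {}   # agent name -> handler(utterance, context)

    def register(self, name, handler):
        self.agents[name] = handler

    def handle(self, utterance: str) -> dict:
        # Record the request in the shared context so no agent needs it repeated.
        self.context["last_utterance"] = utterance
        # Naive collaboration: every registered agent contributes a result.
        return {name: handler(utterance, self.context)
                for name, handler in self.agents.items()}

orc = Orchestrator()
orc.register("local_navigation",
             lambda text, ctx: f"route planned near {ctx.get('city', 'here')}")
orc.register("cloud_search",
             lambda text, ctx: f"searched web for '{text}'")
orc.context["city"] = "Taipei"  # shared background both agents can read
print(orc.handle("find a lunch spot on my itinerary"))
```

A production orchestrator would also do real intent detection and track asynchronous task status, as the article notes; the sketch only shows how a shared context lets local and cloud agents cooperate on one request.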
Why is the software stack’s cross-platform deployment performance important?
NVIDIA provides a unified software architecture through NeMo and TensorRT, narrowing the gap between research and real-world deployment. Developers can use TensorRT-LLM for large-scale inference in the cloud, then seamlessly migrate models to the in-vehicle edge with TensorRT Edge-LLM. This consistency not only ensures model performance and reliability across environments, but also establishes a hybrid edge-cloud feedback loop: by accumulating real vehicle usage data, the assistant can continuously iterate and evolve, making its understanding and responses more precise, while automakers can update in-vehicle features more flexibly. Vehicles gain the ability to self-optimize as they are used, greatly extending the technical lifespan of in-vehicle systems.
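The feedback loop described above can be sketched as three stages: collect opted-in usage data on the vehicle, fine-tune in the cloud, and deploy the new version back to the edge. All function names here are hypothetical placeholders; a real pipeline would use NeMo and TensorRT tooling rather than these stubs.

```python
def collect_edge_data(vehicle_logs: list) -> list:
    # Edge side: gather only usage data the user has opted in to share.
    return [log for log in vehicle_logs if log.get("opt_in")]

def fine_tune(model_version: int, samples: list) -> int:
    # Cloud side ("AI factory"): fine-tuning on new samples yields a new version.
    return model_version + 1 if samples else model_version

def deploy_to_edge(version: int) -> str:
    # Optimized result is pushed back to the in-vehicle platform.
    return f"edge running model v{version}"

version = 1
logs = [{"opt_in": True, "text": "navigate home"}, {"opt_in": False}]
samples = collect_edge_data(logs)   # one opted-in sample survives
version = fine_tune(version, samples)
print(deploy_to_edge(version))      # edge running model v2
```

The point of the sketch is the loop's shape: each pass through collect, fine-tune, and deploy is one iteration of the self-optimization the article describes.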
NVIDIA teams up with MediaTek to realize the future AI-native car
NVIDIA and MediaTek’s central computer architecture combines MediaTek’s Dimensity AX series cockpit chips (SoCs) with NVIDIA’s DRIVE AGX platform, using Orin or Thor. In this architecture, Dimensity AX handles high-end in-car gaming workloads, multimedia, and traditional in-vehicle infotainment (IVI) systems, while NVIDIA DRIVE AGX focuses on AI compute, supporting multimodal applications and autonomous driving functions.
Here are the key details of this collaboration architecture:
Core components pair MediaTek Dimensity AX C-X1 (or C-series) cockpit SoC with NVIDIA’s DRIVE AGX (such as Orin or Thor).
MediaTek Dimensity AX handles high-end in-car gaming, multimedia, and traditional in-vehicle infotainment (IVI) workloads.
NVIDIA DRIVE AGX offloads AI workloads, supports a wide range of AI models, and enables rich multimodal applications and autonomous driving.
MediaTek’s Dimensity platform and NVIDIA’s DRIVE AGX share the DriveOS software environment.
They are connected via PCIe and use the DriveOS NvStreams API to achieve seamless sharing of high-bandwidth data such as video and audio.
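The division of labor between the two chips follows a producer/consumer shape: the cockpit SoC produces media frames, and DRIVE AGX consumes them for AI processing. The sketch below only illustrates that shape with a Python queue; the real system uses the DriveOS NvStreams API over PCIe for high-bandwidth, zero-copy sharing, and all names here are illustrative.

```python
import queue
import threading

# Stand-in for the shared stream channel between the two SoCs.
stream = queue.Queue(maxsize=8)

def dimensity_producer():
    # Cockpit SoC side: emits video frames into the shared stream.
    for frame_id in range(3):
        stream.put({"frame": frame_id, "kind": "video"})
    stream.put(None)  # end-of-stream sentinel

def drive_agx_consumer(results: list):
    # DRIVE AGX side: pulls frames and runs AI workloads on them.
    while True:
        item = stream.get()
        if item is None:
            break
        results.append(f"AI processed frame {item['frame']}")

results = []
producer = threading.Thread(target=dimensity_producer)
producer.start()
drive_agx_consumer(results)
producer.join()
print(results)  # ['AI processed frame 0', 'AI processed frame 1', 'AI processed frame 2']
```

A bounded queue also hints at the flow control a real stream interface provides: the producer blocks rather than overrunning the consumer.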
The cooperation between NVIDIA and MediaTek gives automakers a highly scalable option: they can upgrade to “AI-native” future vehicle models while maintaining their existing cockpit experience.
This article “NVIDIA and MediaTek’s dual-leader collaboration to build an AI-native assistant future car” was first published on Chain News ABMedia.