Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit
https://www.edge-ai-vision.com/2026/02/accelerating-next-generation-automotive-designs-with-the-tda5-virtualizer-development-kit/
February 10, 2026

This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments.

Introduction

Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles.

As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the TDA5 high-performance compute SoC family, which includes the TDA54-Q1. The TDA5 VDK enables developers to evaluate, develop and test devices in the TDA5 family ahead of initial silicon samples, providing a seamless development cycle with one software development kit (SDK) for both physical and virtual SoCs. Each device in the TDA5 family has a corresponding VDK to enable a common virtualization design and consistent user experience.

Along with the VDK, TI and Synopsys are providing additional components to create the full virtual development environment. Figure 1 provides an overview of available resources, which include:

  • The virtual prototype, which is the simulated model of a TDA5 SoC.
  • Deployment services from Synopsys, which are add-ons and interfaces that enable developers to integrate the VDK with other virtual components or tools.
  • Documentation for the TDA5 and the TDA54-Q1 software development kit.
  • Reference software examples for each TDA5 VDK and SDK to help developers get started.

Figure 1 Block diagram showing components provided by TI and Synopsys to get started with development on the VDK.

Why virtualization matters

Virtualization designs greatly reduce automotive development cycles by enabling software development without physical hardware. This allows developers to accelerate or "shift-left" development by starting software development earlier and then migrating to physical hardware once it is available (as shown in Figure 2). Additionally, this earlier software development extends to ecosystem partners, enabling key third-party software components to be available sooner.

Figure 2 Visualization of how software can be migrated from VDK to SoC.

Accelerating development with virtualization

The TDA5 VDK helps software developers work more effectively and efficiently by enabling software-in-the-loop testing, so they can test and validate virtually without needing costly on-the-road testing.

Developers can use the TDA5 VDK to enhance debugging capabilities with deeper insights into internal device operations than what is typically exposed through the physical SoC pins. The TDA5 VDK also provides fault injection capabilities, enabling developers to simulate failures inside the device to get better information on how the software behaves when something goes wrong.

Scalability of virtualization

Scalability is another key benefit of the TDA5 VDK because virtualization platforms don't require shipping, allowing development teams to ramp faster and be more responsive with resource allocation for ongoing projects. The TDA5 VDK also enables automated test environments, since development teams can replace traditional "board farms" with virtual environments running on remote computers. This helps automakers streamline continuous integration/continuous deployment (CI/CD) workflows and accomplish testing more efficiently and effectively.

Since the TDA5 VDK is also available for future TDA5 SoCs, developers can scale work across multiple projects. If a developer is using the VDK for a specific TDA5 device (for example, TDA54), they can explore other products in the TDA5 family in a virtual environment without needing to change hardware configurations.

System integration

Virtualization designs such as the TDA5 VDK serve as the foundation for developers to build complete digital twins for their designs. By virtualizing the SoC, it can be integrated with other virtual components and tools to create larger simulated systems such as full ECU networks. Figure 3 shows how developers can leverage the capabilities of the Synopsys platform to integrate the VDK with other virtual components and simulate complete designs.


Figure 3 Diagram showing how the VDK can integrate with other virtual components and simulate complete designs.

 

Digital environment simulation tools can also be integrated with the TDA5 VDK to enable virtual testing in simulated driving scenarios, allowing developers to quickly perform reproducible testing. The TDA5 VDK also allows developers to leverage the broad ecosystem of tools and partners from Synopsys to get the most out of their virtual development experience.

Getting started with the TDA54 VDK

The TDA54 SDK is now available on TI.com to help engineers get started with the TDA54 virtual development kit. The TDA54-Q1 SoC, the first device in the TDA5 family, will begin sampling to select automotive customers by the end of 2026. Contact TI for more information about the TDA5 VDK and how to get started.

Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems
https://www.edge-ai-vision.com/2026/02/into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems/
February 9, 2026

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

NVIDIA Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

New NVIDIA safety frameworks and technologies are advancing how developers build safe physical AI.

Physical AI is moving from research labs into the real world, powering intelligent robots and autonomous vehicles (AVs) — such as robotaxis — that must reliably sense, reason and act amid unpredictable conditions.

To safely scale these systems, developers need workflows that connect real-world data, high-fidelity simulation and robust AI models atop the common foundation provided by the OpenUSD framework.

With the recently published OpenUSD Core Specification 1.0, OpenUSD — aka Universal Scene Description — now defines standard data types, file formats and composition behaviors, giving developers predictable, interoperable USD pipelines as they scale autonomous systems.

Powered by OpenUSD, NVIDIA Omniverse libraries combine NVIDIA RTX rendering, physics simulation and efficient runtimes to create digital twins and simulation-ready (SimReady) assets that accurately reflect real-world environments for synthetic data generation and testing.

NVIDIA Cosmos world foundation models can run on top of these simulations to amplify data variation, generating new weather, lighting and terrain conditions from the same scenes so teams can safely cover rare and challenging edge cases.

 

In addition, advancements in synthetic data generation, multimodal datasets and SimReady workflows are now converging with the NVIDIA Halos framework for AV safety, creating a standards-based path to safer, faster, more cost-effective deployment of next-generation autonomous machines.

Building the Foundation for Safe Physical AI

Open Standards and SimReady Assets

The OpenUSD Core Specification 1.0 establishes the standard data models and behaviors that underpin SimReady assets, enabling developers to build interoperable simulation pipelines for AI factories and robotics on OpenUSD.

Built on this foundation, SimReady 3D assets can be reused across tools and teams and loaded directly into NVIDIA Isaac Sim, where USDPhysics colliders, rigid body dynamics and composition-arc–based variants let teams test robots in virtual facilities that closely mirror real operations.
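To make the workflow above concrete, the short OpenUSD Python sketch below opens a SimReady-style asset, applies the standard UsdPhysics rigid-body and collision schemas, and selects a composition-arc-based variant. The file path, prim path, and variant names are hypothetical placeholders for illustration rather than assets from any particular library, and real SimReady assets typically ship with these schemas already authored.

from pxr import Usd, UsdPhysics

# Open a SimReady-style asset (hypothetical path used for illustration).
stage = Usd.Stage.Open("warehouse_rack.usd")
prim = stage.GetPrimAtPath("/World/Rack")

# Apply the standard USD physics schemas so simulators such as Isaac Sim
# treat the asset as a collidable rigid body.
UsdPhysics.RigidBodyAPI.Apply(prim)
UsdPhysics.CollisionAPI.Apply(prim)

# Select a composition-arc-based variant (hypothetical variant set and name).
variant_sets = prim.GetVariantSets()
if variant_sets.HasVariantSet("payload"):
    variant_sets.GetVariantSet("payload").SetVariantSelection("loaded")

stage.GetRootLayer().Save()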

Open-Source Learning 

The Learn OpenUSD curriculum is now open source and available on GitHub, enabling contributors to localize and adapt templates, exercises and content for different audiences, languages and use cases. This gives educators a ready-made foundation to onboard new teams into OpenUSD-centric simulation workflows.​

Generative Worlds as Safety Multiplier

Gaussian splatting — a technique that uses editable 3D elements to render environments quickly and with high fidelity — and world models are accelerating simulation pipelines for safe robotics testing and validation.

At SIGGRAPH Asia, the NVIDIA Research team introduced Play4D, a streaming pipeline that enables 4D Gaussian splatting to accurately render dynamic scenes and improve realism.

Spatial intelligence company World Labs is using its Marble generative world model with NVIDIA Isaac Sim and Omniverse NuRec so researchers can turn text prompts and sample images into photorealistic, Gaussian-based physics-ready 3D environments in hours instead of weeks.

Those worlds can then be used for physical AI training, testing and sim-to-real transfer. This high-fidelity simulation workflow expands the range of scenarios robots can practice in while keeping experimentation safely in simulation.

Lightwheel Helps Teams Scale Robot Training With SimReady Assets

Powered by OpenUSD, Lightwheel’s SimReady asset library includes a common scene description layer, making it easy to assemble high-fidelity digital twins for robots. The SimReady assets are embedded with precise geometry, materials and validated physical properties, which can be loaded directly into NVIDIA Isaac Sim and Isaac Lab for robot training. This allows robots to experience realistic contacts, dynamics and sensor feedback as they learn.

End-to-End Autonomous Vehicle Safety

End-to-end autonomous vehicle safety advancements are accelerating with new research, open frameworks and inspection services that make validation more rigorous and scalable.

NVIDIA researchers, with collaborators at Harvard University and Stanford University, recently introduced the Sim2Val framework to statistically combine real-world and simulated test results, reducing AV developers’ need for costly physical mileage while demonstrating how robotaxis and AVs can behave safely across rare and safety-critical scenarios.

Learn more by watching NVIDIA’s “Safety in the Loop” livestream:

 

These innovations are complemented by a new, open-source NVIDIA Omniverse NuRec Fixer, a Cosmos-based model trained on AV data that removes artifacts in neural reconstructions to produce higher-quality SimReady assets.

To align these advances with rigorous global standards, the NVIDIA Halos AI Systems Inspection Lab — accredited by ANAB — provides impartial inspection and certification of Halos elements across robotaxi fleets, AV stacks, sensors and manufacturer platforms through the Halos Certification Program.

AV Ecosystem Leaders Putting Physical AI Safety to Work

Bosch, Nuro and Wayve are among the first participants in the NVIDIA Halos AI Systems Inspection Lab, which aims to accelerate the safe, large-scale deployment of robotaxi fleets. Onsemi, which makes sensor systems for AVs, industrial automation and medical applications, has recently become the first company to pass inspection for the NVIDIA Halos AI Systems Inspection Lab.

 

The open-source CARLA simulator integrates NVIDIA NuRec and Cosmos Transfer to generate reconstructed drives and diverse scenario variations, while Voxel51’s FiftyOne engine, linked to Cosmos Dataset Search, NuRec and Cosmos Transfer, helps teams curate, annotate and evaluate multimodal datasets across the AV pipeline.​

 

Mcity at the University of Michigan is enhancing the digital twin of its 32-acre AV test facility using Omniverse libraries and technologies. The team is integrating the NVIDIA Blueprint for AV simulation and Omniverse Sensor RTX application programming interfaces to create physics-based models of camera, lidar, radar and ultrasonic sensors.

By aligning real sensor recordings with high-fidelity simulated data and sharing assets openly, Mcity enables safe, repeatable testing of rare and hazardous driving scenarios before vehicles operate on public roads.

Get Plugged Into the World of OpenUSD and Physical AI Safety

Learn more about OpenUSD, NVIDIA Halos and physical AI safety by exploring these resources:

 

Katie Washabaugh, Product Marketing Manager for Autonomous Vehicle Simulation, NVIDIA

What Sensor Fusion Architecture Offers for NVIDIA Orin NX-Based Autonomous Vision Systems
https://www.edge-ai-vision.com/2026/02/what-sensor-fusion-architecture-offers-for-nvidia-orin-nx-based-autonomous-vision-systems/
February 6, 2026

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

Key Takeaways

  • Why multi-sensor timing drift weakens edge AI perception
  • How GNSS-disciplined clocks align cameras, LiDAR, radar, and IMUs
  • Role of Orin NX as a central timing authority for sensor fusion
  • Operational gains from unified time-stamping in autonomous vision systems

Autonomous vision systems deployed at the edge depend on seamless fusion of multiple sensor streams (cameras, LiDAR, radar, IMU, and GNSS) to interpret dynamic environments in real time. For NVIDIA Orin NX-based platforms, the challenge lies in merging all of these data streams within microseconds to maintain spatial awareness and decision accuracy.

Latency from unsynchronized sensors can break perception continuity in edge AI vision deployments. For instance, a camera might capture a frame before LiDAR delivers its scan, or the IMU might record motion slightly out of phase. Such mismatches produce misaligned depth maps, unreliable object tracking, and degraded AI inference performance. A sensor fusion system anchored on the Orin NX mitigates this issue through GNSS-disciplined synchronization.

In this blog, you’ll learn everything you need to know about the sensor fusion architecture, why the unified time base matters, and how it boosts edge AI vision deployments.

What are the Different Types of Sensors and Interfaces?

Each sensor connects to the Orin NX through its own interface and synchronization mechanism:

  • GNSS Receiver: interface UART + PPS; sync mechanism PPS (1 Hz) + NMEA; timing reference UTC (GPS time). Provides absolute time and the PPS signal for system clock discipline.
  • Cameras (GMSL): interface GMSL (CSI); sync mechanism trigger derived from PPS; timing reference PPS-aligned frame start. Frames are precisely aligned to GNSS time.
  • LiDAR: interface Ethernet (USB NIC); sync mechanism IEEE 1588 PTP; timing reference PTP synchronized to the Orin NX. Delivers time-stamped point clouds.
  • Radar: interface Ethernet (USB NIC); sync mechanism IEEE 1588 PTP; timing reference PTP synchronized to the Orin NX. Delivers time-stamped detections.
  • IMU: interface I²C; sync mechanism polled with software time stamps; timing reference Orin NX system clock (GNSS-disciplined). Short-range sensor connected directly to the Orin NX.

Coordinating Multi-Sensor Timing with Orin NX

Edge AI systems rely on timing discipline as much as compute power. The NVIDIA Orin NX acts as the central clock, aligning every connected sensor to a single reference point through GNSS time discipline.

The GNSS receiver sends a Pulse Per Second (PPS) signal and UTC data via NMEA to the Orin NX, which aligns its internal clock with global GPS time. This disciplined clock becomes the authority across all interfaces. From there, synchronization extends through three precise routes:

  1. PTP over Ethernet: The Orin NX functions as a PTP Grandmaster through its USB NIC. LiDAR and radar units operate as PTP slaves, delivering time-stamped point clouds and detections that stay aligned to the GNSS time domain.
  2. PPS-derived camera triggers: Cameras linked via GMSL or MIPI CSI receive frame triggers generated from the PPS signal. This ensures frame start alignment to GNSS time with zero drift between captures.
  3. Timed IMU polling: The IMU connects over I²C and is polled at a consistent rate, typically between 500 Hz and 1 kHz. Software time stamps are derived from the same GNSS-disciplined clock, keeping IMU data in sync with all other sensors.

Importance of a Unified Time Base

All sensors share the same GNSS-aligned time domain, enabling precise fusion of LiDAR, radar, camera, and IMU data.
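As a simplified illustration of what that shared time base enables, the Python sketch below pairs each camera frame with the LiDAR scan and IMU sample whose GNSS-disciplined timestamps fall closest to the frame's capture time. It assumes every stream already carries timestamps in the same UTC-aligned clock described above and that streams are sorted by time; it is an illustrative sketch, not production fusion code.

import bisect
from dataclasses import dataclass

@dataclass
class Sample:
    t_utc: float   # timestamp in the shared GNSS-disciplined time base (seconds)
    data: object   # image, point cloud, or IMU reading

def nearest(samples, t):
    """Return the sample whose timestamp is closest to t (samples sorted by t_utc)."""
    times = [s.t_utc for s in samples]
    i = bisect.bisect_left(times, t)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s.t_utc - t))

def fuse(camera_frames, lidar_scans, imu_samples):
    """Pair each camera frame with the LiDAR scan and IMU reading captured
    closest to the frame's PPS-aligned start time."""
    fused = []
    for frame in camera_frames:
        fused.append({
            "t_utc": frame.t_utc,
            "image": frame.data,
            "points": nearest(lidar_scans, frame.t_utc).data,
            "imu": nearest(imu_samples, frame.t_utc).data,
        })
    return fused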

 

Implementation Guidelines for Stable Sensor Fusion

  • USB NIC and PTP configuration: Enable hardware time-stamping on the Ethernet interface (ethtool -T ethX reports the NIC's time-stamping capabilities) so Ethernet sensors maintain nanosecond alignment.
  • Camera trigger setup: Use a hardware timer or GPIO to generate PPS-derived triggers for consistent frame alignment.
  • IMU polling: Maintain fixed-rate polling within Orin NX to align IMU data with the GNSS-disciplined clock (a minimal polling sketch follows this list).
  • Clock discipline: Use both PPS and NMEA inputs to keep the Orin NX clock aligned to UTC for accurate fusion timing.
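The sketch below illustrates the fixed-rate IMU polling guideline: a 1 kHz loop that schedules reads against a monotonic deadline (so jitter does not accumulate) and stamps each sample from the system clock, which is assumed to be GNSS-disciplined via PPS and NMEA. The read_imu() function and the polling rate are placeholders; a real deployment would call the vendor's I²C driver.

import time

POLL_HZ = 1000              # assumed rate; the guidelines suggest 500 Hz to 1 kHz
PERIOD = 1.0 / POLL_HZ

def read_imu():
    # Placeholder for the vendor's I2C read; returns dummy accel/gyro values.
    return (0.0, 0.0, 9.81, 0.0, 0.0, 0.0)

def poll_imu(duration_s=1.0):
    samples = []
    next_deadline = time.monotonic()
    end = next_deadline + duration_s
    while next_deadline < end:
        # time.time() follows the Orin NX system clock, assumed here to be
        # GNSS-disciplined, so this stamp is in the shared UTC time base.
        samples.append((time.time(), read_imu()))
        next_deadline += PERIOD
        # Sleep until the next fixed-rate deadline instead of a fixed interval,
        # so scheduling jitter does not accumulate into timestamp drift.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return samples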

Strengths of Leveraging Sensor Fusion-Based Autonomous Vision

Direct synchronization control

Removing the intermediate MCU lets Orin NX handle timing internally, cutting latency and eliminating cross-processor jitter.

Unified global time-stamping

All sensors operate on GNSS time, ensuring every frame, scan, and motion reading aligns to a single reference.

Sub-microsecond Ethernet alignment

PTP synchronization keeps LiDAR and radar feeds locked to the same temporal window, maintaining accuracy across fast-moving scenes.

Deterministic frame capture

PPS-triggered cameras guarantee frame starts occur exactly on the GNSS second, preventing drift between visual and depth data.

Consistent IMU data

High-frequency IMU polling stays aligned with the master clock, preserving accurate motion tracking for fusion and localization.

e-con Systems Offers Custom Edge AI Vision Boxes

e-con Systems has been designing, developing, and manufacturing OEM camera solutions since 2003. We offer customizable Edge AI Vision Boxes powered by NVIDIA Orin NX and Orin Nano. Each box brings together multi-camera interfaces, hardware-level synchronization, and AI-ready processing into one cohesive unit for real-time vision tasks.

Our Edge AI Vision Box – Darsi simplifies the adoption of GNSS-disciplined fusion in robotics, autonomous mobility, and industrial vision. It comes with support for PPS-triggered cameras, PTP-synced Ethernet sensors, and flexible connectivity options. It also provides an end-to-end framework where developers can plug in sensors, train models, and run inference directly at the edge (without external synchronization hardware).

Learn more: e-con Systems' Orin NX/Nano-based Edge AI Vision Box

Use our Camera Selector to find other best-fit cameras for your edge AI vision applications.

If you need expert guidance for selecting the right imaging setup, please reach out to camerasolutions@e-consystems.com.

FAQs

  1. What role does sensor fusion play in edge AI vision systems?
    Sensor fusion aligns data from cameras, LiDAR, radar, and IMU sensors to a common GNSS-disciplined time base. It ensures every frame and data point corresponds to the same moment, thereby improving object detection, 3D reconstruction, and navigation accuracy in edge AI systems.
  2. How does NVIDIA Orin NX handle synchronization across sensors?
    The Orin NX functions as both the compute core and timing master. It receives a PPS signal and UTC data from the GNSS receiver, disciplines its internal clock, and distributes synchronization through PTP for Ethernet sensors, PPS triggers for cameras, and fixed-rate polling for IMUs.
  3. Why is a unified time base critical for reliable fusion?
    When all sensors share a single GNSS-aligned clock, the system eliminates time-stamp drift and timing mismatches. So, fusion algorithms can process coherent multi-sensor data streams, which enable the AI stack to operate with consistent depth, motion, and spatial context.
  4. What are the implementation steps for achieving stable sensor fusion?
    Developers should enable hardware time-stamping for PTP sensors, use PPS-based hardware triggers for cameras, poll IMUs at fixed intervals, and feed both PPS and NMEA inputs into the Orin NX clock. These steps maintain accurate UTC alignment through long runtime cycles.
  5. How does e-con Systems support developers building with Orin NX?
    e-con Systems provides customizable Edge AI Vision Boxes powered by NVIDIA Orin NX and Orin Nano. They are equipped with synchronized camera interfaces, AI-ready processing, and GNSS-disciplined timing. Hence, product developers can deploy real-time vision solutions quickly and with full temporal accuracy.

Prabu Kumar
Chief Technology Officer and Head of Camera Products, e-con Systems

Driving the Future of Automotive AI: Meet RoX AI Studio
https://www.edge-ai-vision.com/2026/02/driving-the-future-of-automotive-ai-meet-rox-ai-studio/
February 4, 2026

This blog post was originally published at Renesas’ website. It is reprinted here with the permission of Renesas.

In today's automotive industry, onboard AI inference engines drive numerous safety-critical Advanced Driver Assistance Systems (ADAS) features, all of which require consistent, high-performance processing. Given that AI model engineering is inherently iterative (numerous cycles of 'train, validate, and deploy'), it is crucial to assess model performance on actual silicon at every step of product development. This hardware-based validation not only strengthens confidence in model engineering decisions but also ensures that AI solutions are reliable and meet the target KPIs for deployment into in-vehicle AI applications throughout the product lifecycle.

Meet RoX AI Studio, designed specifically for today’s innovative automotive teams. With RoX AI Studio, you can remotely benchmark and evaluate your AI models on Renesas R-Car SoCs within your internet browser (Figure 1), all while leveraging a secure MLOps infrastructure that puts your engineering team in the fast lane toward production-ready solutions.

This platform is a cornerstone of the Renesas Open Access (RoX) Software-Defined Vehicle (SDV) platform, offering an integrated suite of hardware, software, and infrastructure for customers designing state-of-the-art automotive systems powered by AI. We’re dedicated to empowering products with advanced intelligence, high-performance, and an accelerated product lifecycle. RoX AI Studio enables you to unlock the full potential of next-generation vehicles by embracing a shift-left approach.

Transforming Product Engineering with RoX AI Studio

The modern vehicle is evolving into a powerful, intelligent platform, requiring automotive companies to accelerate development, testing, and optimization of AI models that enhance safety, efficiency, and in-vehicle experiences. Are you ready to take your automotive AI development to the next level? Meet RoX AI Studio, our cloud-native MLOps platform that revolutionizes this process by bringing the hardware lab directly to your browser. This virtual lab environment enables teams to concentrate on unlocking innovative capabilities, eliminating delays and expenses often associated with traditional infrastructure setup and maintenance. With RoX AI Studio, you can begin your AI model journey immediately, ensuring that your development process starts on day one.

RoX AI Studio Platform Architecture

This section delves into the platform architecture of RoX AI Studio (Figure 2), mapping each component to customer-ready solutions.

User Experience (UX) with Web UI and API

The RoX AI Studio Web UI serves as a web-native graphical user interface that streamlines management and benchmarking/evaluation of AI models on Renesas R-Car SoC hardware.

Web UI

Through this front-end product, users can register new AI models, configure hardware-in-the-loop (HIL) inference experiments, and conduct benchmarking and performance evaluations of their models, all within a browser environment.

API

The API bridges the Web UI with the MLOps backend, facilitating robust communication and data exchange. It is designed to ensure high performance and strong security. The API consists of a broad set of endpoints that collectively enable a wide range of functions, including user management, model operations, dataset management, experiment orchestration, and HIL model benchmarking/evaluation. By decoupling the client from backend complexity, the client API enables rapid integration of new features and workflows, supporting continuous improvement and innovation for evolving customer needs.
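To make the API description more tangible, here is a deliberately hypothetical sketch of a scripted workflow against such an API: registering a bring-your-own model and launching a hardware-in-the-loop benchmarking experiment over REST. The endpoint paths, field names, and authentication scheme are invented for illustration only and are not the documented RoX AI Studio API; consult Renesas' reference material for the real interface.

import requests

BASE_URL = "https://rox-ai-studio.example.com/api/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <access-token>"}    # hypothetical auth scheme

# Register a BYOM model (hypothetical endpoint and payload).
with open("my_detector.onnx", "rb") as f:
    model = requests.post(f"{BASE_URL}/models", headers=HEADERS,
                          files={"file": f},
                          data={"name": "my_detector", "framework": "onnx"}).json()

# Launch a hardware-in-the-loop benchmarking experiment on an R-Car board.
experiment = requests.post(f"{BASE_URL}/experiments", headers=HEADERS, json={
    "model_id": model["id"],
    "dataset_id": "validation-set-001",
    "target": "r-car-gen4",
    "metrics": ["latency", "throughput", "accuracy"],
}).json()

print("Experiment queued:", experiment["id"])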

The streamlined architecture of the RoX AI Studio Web UI and API empowers users to quickly engage with their tasks, leveraging their preferred browser for immediate access (Figure 3). This approach eliminates barriers to entry, enabling each user to start working on model registration, experiment setup, and evaluation instantly, without delays or the need for specialized client software.

UX Overview

MLOps with Workflows and HyCo Toolchain

The API endpoints in RoX AI Studio are underpinned by robust MLOps business logic, which ensures reliable execution for every incoming API request. Each experiment initiated through the platform follows a systematic and predefined sequence of steps. These steps are organized as Directed Acyclic Graphs (DAGs) and orchestrated using Apache Airflow, a proven workflow management tool.

MLOps Overview

Workflows

Apache Airflow manages the queuing, scheduling, and concurrency of experiment tasks automatically, allowing the system to efficiently handle multiple simultaneous user requests with finite computational resources in the cloud. The backend architecture leverages a suite of MLOps and third-party microservices, each deployed as a Docker container or coupled through third-party APIs. This design separates the execution of individual intermediate steps from the overarching control plane, which is governed by the DAG workflows. Such separation provides greater flexibility, enabling the platform to scale dynamically across distributed cloud computing environments and adapt to fluctuating user demands.

Moreover, this approach promotes more granular product development for each microservice. By supporting out-of-the-box (OOB) execution for individual components, RoX AI Studio enables rapid iteration and targeted enhancements, aligning with evolving platform requirements and user needs. Each workflow incorporates model management, data management, and experiment management, powered by Model Registry, Managed DB, and Board Manager.
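Because Apache Airflow is an open-source tool, a small sketch can show what an experiment organized as a DAG looks like in practice. The DAG below strings together hypothetical compile, board-allocation, inference, and metrics-collection steps; the task names and callables are placeholders and do not reflect RoX AI Studio's actual workflow definitions (assumes Airflow 2.4 or later).

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def compile_model(**context):
    pass  # placeholder: invoke the AI toolchain / compiler microservice

def allocate_board(**context):
    pass  # placeholder: request a free R-Car board from the Board Manager

def run_hil_inference(**context):
    pass  # placeholder: execute inference on the allocated board

def collect_metrics(**context):
    pass  # placeholder: gather KPIs and persist results

with DAG(
    dag_id="hil_benchmark_experiment",    # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule=None,                        # triggered per user request, not on a schedule
    catchup=False,
) as dag:
    compile_task = PythonOperator(task_id="compile_model", python_callable=compile_model)
    board_task = PythonOperator(task_id="allocate_board", python_callable=allocate_board)
    infer_task = PythonOperator(task_id="run_hil_inference", python_callable=run_hil_inference)
    metrics_task = PythonOperator(task_id="collect_metrics", python_callable=collect_metrics)

    # Directed acyclic ordering of the experiment steps.
    compile_task >> board_task >> infer_task >> metrics_task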

HyCo Toolchain

Custom layers and operators are increasingly prevalent as AI model architecture continues to evolve. To address this opportunity, a high-performance custom compiler known as HyCo (Hybrid Compiler) is offered specifically for the R-Car Gen4 product line. HyCo has a hybrid compiler architecture, comprising both front-end and back-end compiler components, to ensure scalability and adaptability for custom implementations. At the core of this approach, TVM functions as a unifying backbone, enabling seamless integration of customizations in the front-end compiler with accelerator-specific back-end compilers. This design supports efficient compilation and optimization tailored to heterogeneous hardware accelerators within the SoC.

HyCo is seamlessly integrated into a developer-oriented HyCo toolchain, also referred to as AI Toolchain. Beyond the compiler itself, AI Toolchain provides interfaces for ingesting open-source model zoo assets as well as BYOM assets, encompassing both pre-processing and post-processing software components. This approach demonstrates how an AI toolchain can integrate with customer-specific model zoos, enhancing flexibility in deploying diverse AI workloads. Within the MLOps framework, various configurations of the AI toolchain are containerized into independent microservices. This modular approach emphasizes robust integration within MLOps workflows, allowing for the deployment of standalone AI toolchain components that can dynamically scale in cloud environments.
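HyCo itself is proprietary, but since TVM is named as the unifying backbone, a generic Apache TVM sketch can illustrate the ingest-then-compile flow described here: a front end imports a model into Relay and a back end builds it for a target. The model file, input shape, and generic llvm target are illustrative stand-ins; HyCo's accelerator-specific back ends and options are documented in the Renesas AI Toolchain.

import onnx
import tvm
from tvm import relay

# Front end: ingest a model (e.g., a model zoo or BYOM asset); the file name
# and input shape below are placeholders.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Back end: compile for a target. A generic CPU target is used here; a
# HyCo-style flow would select an accelerator-specific back end instead.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("compiled_model.so")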

Infrastructure with MLOps Cloud and Device Farm

The hybrid infrastructure enables comprehensive end-to-end MLOps workflows, seamlessly delegating HIL inference tasks to Renesas Device Farm. Currently, the MLOps cloud platform is hosted on Azure, but its architecture is designed to support flexible deployment across other public or private cloud environments in the future.

Infrastructure Overview

MLOps Cloud

By utilizing a workflow-based MLOps architecture, we can securely enable multiple users within a single tenant to share computational resources, optimizing capital expenditure. This approach empowers customers to develop AI products without the need for significant individual investment for each developer. The architecture is also built to support seamless integration with private customer clouds, accommodating custom hardware configurations (such as CPU and GPU servers and shared bulk storage) alongside robust on-premises security infrastructure.

Renesas Device Farm

A secure on-premises device farm hosts multiple R-Car SoC development boards, providing the foundation for hardware-in-the-loop (HIL) inference experiments essential for AI model benchmarking and evaluation. The cloud-based Board Manager microservice efficiently handles board allocation, setup, and release, streamlining resource management and eliminating the need for direct developer involvement. The MLOps workflow leverages the device farm to execute HIL inference experiments without common delays associated with traditional board provisioning, updating, and maintenance. A robust networking architecture ensures secure HIL inference sessions for users, maintaining the integrity and confidentiality of both data and AI models.

What Advantages Does RoX AI Studio Bring to Customers?

  • Faster Time-to-Market: Shift-left your AI product lifecycle. Start model evaluation and iteration early, long before our silicon gets delivered to your labs!
  • Managed, Scalable Infrastructure: Forget about maintaining costly labs. RoX AI Studio delivers scale, security, redundancy, and automation out of the box.
  • Effortless Experimentation: Register your own models (BYOM), spin up inference experiments, and compare results easily—all through a simple dashboard.
  • Collaborate with Confidence: Centralized, cloud-based access lets distributed global teams work together seamlessly on model benchmarking and evaluations.

Imagine a world where your AI engineers are instantly productive, your teams collaborate without boundaries, and your prototypes move from idea to reality faster than ever before. With RoX AI Studio, that world is already here!

Sign up for a hands-on demo of RoX AI Studio on your journey to intelligent, efficient, and safe software-defined vehicles.

Shashank Bangalore Lakshman
SoC MLOps Engineering Manager

Why Scalable High-Performance SoCs are the Future of Autonomous Vehicles
https://www.edge-ai-vision.com/2026/01/why-scalable-high-performance-socs-are-the-future-of-autonomous-vehicles/
January 22, 2026

This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments.

Summary

The automotive industry is ascending to higher levels of vehicle autonomy with the help of central computing platforms. SoCs like the TDA5 family offer safe, efficient AI performance through an integrated C7™ NPU and chiplet-ready design. These SoCs enable automakers to more easily implement ADAS capabilities, bringing premium features to all types of vehicles, from base models to luxury cars.

Figure 1 Visualization of ADAS features for autonomous driving in a software-defined vehicle analyzing environmental data.

Introduction

How long have advanced driver assistance systems (ADAS) and autonomous driving been trendy topics? For the last decade or so, automakers at trade shows have shown consumers visions of a future with roads full of intelligent, autonomous vehicles.

We are finally closer to that vision. You likely have driven in or may even own a vehicle with features that existed only conceptually 10 years ago.

In terms of broad availability and the adoption of intelligent ADAS features and artificial intelligence (AI) capabilities, the industry is progressing through the Society of Automotive Engineers’ levels of vehicle autonomy from Level 1 to Level 2 and Level 3. This proliferation of autonomous features is currently occurring in both domain-based and central computing vehicle architectures. The next, biggest steps toward vehicle autonomy will occur in the latter, with software-defined vehicles (SDVs), as visualized in Figure 1, poised to become the standard vehicle configuration.

This emerging vehicle architecture consolidates traditional distributed electronic control units (ECUs) into powerful central computing platforms, enabling over-the-air updates, feature additions and enhanced functionality throughout a vehicle’s lifetime. SDVs use hardware as a platform and software for iterative updates, giving automakers the flexibility to continuously improve a vehicle’s capabilities and deliver new autonomous driving features without hardware changes.

SoCs for the next generation of automotive designs

At the core of central computing architectures (Figure 2) are heterogeneous SoCs that integrate a variety of IP blocks and support advanced software, such as the TDA54-Q1, the first device in the TDA5 family of SoCs.


Figure 2 Simplified overview of the central computing architecture and connected systems in a software-defined vehicle.

 

While there are multiple types of high-performance SoCs on the market, SoCs that employ a variety of computing components are more power-efficient and deliver higher performance in a central computing ECU than SoCs based primarily on a single type of computing element (such as graphics processing units). SoCs with a variety of computing elements simplify development, deployment and execution of software for advanced autonomous driving features because they can offload specific tasks to their specialized IP blocks, including high-performance neural processing units (NPUs) and vision processors, supported by dedicated onboard memory.

Heterogeneous SoCs such as the TDA54-Q1 bring more autonomous driving capabilities and design flexibility to more vehicles through:

  • Scalable AI performance. In terms of edge AI capabilities, TDA5 SoCs were designed using the latest automotive-qualified 5nm process technology and feature integrated NPUs based on TI's proprietary C7™ digital signal processing architecture. These technologies help deliver an efficient power envelope and scalable AI performance from 10 to 1,200 trillion operations per second (TOPS). Engineers can leverage the AI resources of these SoCs to increase vehicle responsiveness through support for multibillion-parameter large language models, vision language models and advanced transformer networks. This level of AI performance is scalable over time to meet the evolving needs of different application requirements, from supporting Level 1 features such as adaptive cruise control all the way up to Level 3 autonomy, which covers conditional driving automation or self-driving under specified conditions.
  • Safety-first architecture. TDA5 SoCs deliver a higher level of specialized performance and efficiency through a cross-domain hardware safety architecture that provides deterministic, real-time monitoring that software cannot achieve alone. Such performance enables OEMs to meet Automotive Safety Integrity Level D, the highest risk classification in the ISO 26262 standard. Using the latest Armv9 cores from Arm®, TDA5 SoCs feature lockstep capabilities in their application and microcontroller cores.
  • Chiplet-ready architecture. The scalability of the TDA5 SoC family isn’t limited to its processing performance; these devices also have a chiplet-ready architecture. Chiplets are an emerging semiconductor architectural design approach where individual integrated circuits serve a similar role as IP blocks in a heterogeneous SoC, allowing for the modular design of specialized chips. Built-in support for the Universal Chiplet Interconnect Express interface open technology standard enables greater scalability and adaptability of TDA5 SoCs through future chiplet extensions, offering developers a future-proof platform that can evolve with their needs.

Conclusion

Over the next decade, ADAS features will become standard and potentially even mandatory. Premium driving features will become mainstream and available for all vehicles, from entry-level base models to luxury cars. With devices like TDA5 SoCs, it’s only a matter of time.

How to Enhance 3D Gaussian Reconstruction Quality for Simulation
https://www.edge-ai-vision.com/2026/01/how-to-enhance-3d-gaussian-reconstruction-quality-for-simulation/
January 15, 2026

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

Building truly photorealistic 3D environments for simulation is challenging. Even with advanced neural reconstruction methods such as 3D Gaussian Splatting (3DGS) and 3D Gaussian with Unscented Transform (3DGUT), rendered views can still contain artifacts such as blurriness, holes, or spurious geometry—especially from novel viewpoints. These artifacts significantly reduce visual quality and can impede downstream tasks.

NVIDIA Omniverse NuRec brings real-world sensor data into simulation and includes a generative model, known as Fixer, to tackle this problem. Fixer is a diffusion-based model built on the NVIDIA Cosmos Predict world foundation model (WFM) that removes rendering artifacts and restores detail in under-constrained regions of a scene.

This post walks you through how to use Fixer to transform a noisy 3D scene into a crisp, artifact-free environment ready for autonomous vehicle (AV) simulation. It covers using Fixer both offline during scene reconstruction and online during rendering, using a sample scene from the NVIDIA Physical AI open datasets on Hugging Face.

Step 1: Download a reconstructed scene 

To get started, find a reconstructed 3D scene that exhibits some artifacts. The PhysicalAI-Autonomous-Vehicles-NuRec dataset on Hugging Face provides over 900 reconstructed scenes captured from real-world drives. First log in to Hugging Face and agree to the dataset license. Then download a sample scene, provided as a USDZ file containing the 3D environment. For example, using the Hugging Face CLI:

pip install "huggingface_hub[cli]"  # install HF CLI if needed (quotes avoid shell globbing)
hf auth login
# (After huggingface-cli login and accepting the dataset license)
hf download nvidia/PhysicalAI-Autonomous-Vehicles-NuRec \
  --repo-type dataset \
  --include "sample_set/25.07_release/Batch0005/7ae6bec8-ccf1-4397-9180-83164840fbae/camera_front_wide_120fov.mp4" \
  --local-dir ./nurec-sample

This command downloads the scene’s preview video (camera_front_wide_120fov.mp4) to your local machine. Fixer operates on images, not USD or USDZ files directly, so using the video frames provides a convenient set of images to work with.

Next, extract frames with FFmpeg and use those images as input for Fixer:

# Create an input folder for Fixer
mkdir -p nurec-sample/frames-to-fix
# Extract frames
ffmpeg -i "nurec-sample/sample_set/25.07_release/Batch0005/7ae6bec8-ccf1-4397-9180-83164840fbae/camera_front_wide_120fov.mp4" \
  -vf "fps=30" \
  -qscale:v 2 \
  "nurec-sample/frames-to-fix/frame_%06d.jpeg"

Video 1 is the preview video showcasing the reconstructed scene and its artifacts. In this case, some surfaces have holes or blurred textures due to limited camera coverage. These artifacts are exactly what Fixer is designed to address.

Video 1. Preview of the sample reconstructed scene downloaded from Hugging Face

Step 2: Set up the Fixer environment 

Next, set up the environment to run Fixer.

Before proceeding, make sure you have Docker installed and GPU access enabled. Then complete the following steps to prepare the environment.

Clone the Fixer repository

This obtains the necessary scripts used in the subsequent steps.

Download the pretrained Fixer checkpoint

The pretrained Fixer model is hosted on Hugging Face. To fetch this, use the Hugging Face CLI:

# Create directory for the model
mkdir -p models/
# Download only the pre-trained model to models/
hf download nvidia/Fixer --local-dir models

This will save the required files needed for inference in Step 3 to the models/ folder.

Step 3: Use online mode for real-time inference with Fixer

Online mode refers to using Fixer as a neural enhancer during rendering, fixing each frame as the simulation runs. Use the pretrained Fixer model for inference, which can run inside the Cosmos Predict-based Docker container built from Dockerfile.cosmos below.

Note that Fixer enhances rendered images from your scene. Make sure your frames are exported (for example, into nurec-sample/frames-to-fix/) and pass that folder to --input.

To run Fixer on all images in a directory, run the following commands:

# Build the container
docker build -t fixer-cosmos-env -f Dockerfile.cosmos .
# Run inference with the container
docker run -it --gpus=all --ipc=host \
  -v $(pwd):/work \
  -v /path/to/nurec-sample/frames-to-fix:/input \
  --entrypoint python \
  fixer-cosmos-env \
  /work/src/inference_pretrained_model.py \
  --model /work/models/pretrained/pretrained_fixer.pkl \
  --input /input \
  --output /work/output \
  --timestep 250

Details about this command include the following:

  • The current directory is mounted into the container at /work, allowing the container to access the files
  • The frames-to-fix directory, containing the frames extracted from the sample video with FFmpeg, is mounted into the container at /input
  • The script inference_pretrained_model.py (from the cloned Fixer repo src/ folder) loads the pre-trained Fixer model from the given path
  • --input is the folder of input images (here, the mounted /input directory of extracted frames; the repo's examples/ folder also contains some rendered frames with artifacts)
  • --output is the folder where enhanced images will be saved (here, /work/output, which maps to the output/ folder in the current directory)
  • --timestep 250 represents the noise level the model uses for the denoising process

After running this command, the output/ directory will contain the fixed images. Note that the first few images may process more slowly as the model initializes, but inference will speed up for subsequent frames once the model is running.

Video 2. Comparing a NuRec scene enhanced with Fixer online mode to the sample reconstructed scene

Step 4: Evaluate the output

After applying Fixer to your images, you can evaluate how much it improved your reconstruction quality. This post reports Peak Signal-to-Noise Ratio (PSNR), a common metric for measuring pixel-level accuracy. Table 1 provides an example before/after comparison of the sample scene.

PSNR ↑ (accuracy): 16.5809 without Fixer vs. 16.6147 with Fixer
Table 1. Example PSNR improvement after applying Fixer (↑ means higher is better)

Note that if you try using other NuRec scenes from the Physical AI Open Datasets, or your own neural reconstructions, you can measure Fixer's quality improvement with the same metrics. Refer to the metrics documentation for instructions on how to compute these values.
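If you want to reproduce a per-image PSNR comparison yourself, a small NumPy sketch like the one below is sufficient. It assumes you have a matching ground-truth image and a rendered image of the same resolution; it is not the metrics script referenced in the documentation, and the file paths are placeholders.

import numpy as np
from PIL import Image

def psnr(reference_path, test_path, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two same-sized RGB images, in dB."""
    ref = np.asarray(Image.open(reference_path), dtype=np.float64)
    test = np.asarray(Image.open(test_path), dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Example (placeholder paths): compare a frame rendered without and with Fixer
# against the same held-out ground-truth capture.
print(psnr("gt/frame_000001.jpeg", "render_no_fixer/frame_000001.jpeg"))
print(psnr("gt/frame_000001.jpeg", "output/frame_000001.jpeg"))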

In qualitative terms, scenes processed with Fixer look significantly more realistic. Surfaces that were previously smeared are now reconstructed with plausible details, fine textures such as road markings become sharper, and the improvements remain consistent across frames without introducing noticeable flicker.

Additionally, Fixer is effective at correcting artifacts when novel view synthesis is introduced. Video 3 shows the application of Fixer to a NuRec scene rendered from a novel viewpoint obtained by shifting the camera 3 meters to the left. When run on top of the novel view synthesis output, Fixer reduces view-dependent artifacts and improves the perceptual quality of the reconstructed scene.

Video 3. Comparing a NuRec scene enhanced with Fixer to the original NuRec scene from a viewpoint 3 meters to the left

Summary

This post walked you through downloading a reconstructed scene, setting up Fixer, and running inference to clean rendered frames. The outcome is a sharper scene with fewer reconstruction artifacts, enabling more reliable AV development.

To use Fixer with Robotics NuRec scenes, download a reconstructed scene from the PhysicalAI-Robotics-NuRec dataset on Hugging Face and follow the steps presented in this post.

Ready for more? Learn how Fixer can be post-trained to match specific operational design domains (ODDs) and sensor configurations. For information about how Fixer can be used during reconstruction (offline mode), see Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models.

Authors

, Senior Product Manager, NVIDIA Autonomous Vehicle Group
, Senior Systems Software Engineer, NVIDIA AV Applied Simulation Team
, Senior Product Manager, NVIDIA Neural Reconstruction (NuRec) and World Foundation Model Products for Autonomous Vehicle Simulation
, Product Marketing Manager, NVIDIA Autonomous Vehicle Simulation

Quadric's SDK Selected by TIER IV for AI Processing Evaluation and Optimization, Supporting Autoware Deployment in Next-Generation Autonomous Vehicles
https://www.edge-ai-vision.com/2026/01/quadrics-sdk-selected-by-tier-iv-for-ai-processing-evaluation-and-optimization-supporting-autoware-deployment-in-next-generation-autonomous-vehicles/
January 14, 2026

Burlingame, CA, January 14, 2026 – Quadric today announced that TIER IV, Inc., of Japan has signed a license to use the Chimera AI processor SDK to evaluate and optimize future iterations of Autoware*, open-source software for autonomous driving pioneered by TIER IV.

“We are thankful that TIER IV has chosen Quadric technology as a development tool for automotive network optimization,” noted Veerbhan Kheterpal, CEO of Quadric.

*Autoware is a registered trademark of the Autoware Foundation.


About Quadric

Quadric, Inc. is the leading licensor of fully programmable AI acceleration IP for smart devices. The Chimera processor runs both AI inference workloads and classic DSP and control algorithms. Quadric Chimera GPNPU architecture is optimized for on-device AI inference, providing up to 840 TOPS, including automotive-grade safety-enhanced versions. Learn more at www.quadric.ai.

Media Contacts
Steve Roddy – Chief Marketing Officer – hello@quadric.ai – 1-844-GPNPU00
Elaine Gonzalez – Director of Marketing Communications – hello@quadric.ai – 1-844-GPNPU00

About TIER IV
TIER IV stands at the forefront of deep tech innovation, pioneering Autoware open-source software for autonomous driving. Harnessing Autoware, we build scalable platforms and deliver comprehensive solutions across software development, vehicle manufacturing, and service operations. As a founding member of the Autoware Foundation, we are committed to reshaping the future of intelligent vehicles with open-source software, enabling individuals and organizations to thrive in the evolving field of autonomous driving.

Qualcomm Drives the Future of Mobility with Strong Snapdragon Digital Chassis Momentum and Agentic AI for Major Global Automakers Worldwide
https://www.edge-ai-vision.com/2026/01/qualcomm-drives-the-future-of-mobility-with-strong-snapdragon-digital-chassis-momentum-and-agentic-ai-for-major-global-automakers-worldwide/
January 7, 2026

Key Takeaways:
  • Qualcomm extends its automotive leadership with new collaborations, including Google, to power next‑gen software‑defined vehicles and agentic AI‑driven personalization.
  • Snapdragon Ride and Cockpit Elite Platforms, and Snapdragon Ride Flex SoC, see rapid adoption, adding new design-wins and delivering the industry’s first commercialized mixed‑criticality platform that integrates cockpit, advanced driver‑assistance systems, and end‑to‑end AI.
  • Decade of in-vehicle infotainment and digital cockpit leadership now powers 75M+ vehicles with edge AI, supported by the company's launch of the industry's first automotive 5G RedCap modem for mission-critical connectivity.

At CES 2026, Qualcomm Technologies, Inc. reaffirmed its leadership with growing global adoption of the Snapdragon® Digital Chassis™ solutions, while bringing agentic AI and high‑performance compute across vehicles to power the software‑defined era. With new and expanded collaborations across its automotive portfolio, Qualcomm Technologies remains the partner of choice for the automotive industry, bringing intelligent, immersive, and connected experiences to all vehicles. These collaborations further strengthen Qualcomm Technologies’ central role in the industry’s transition to AI‑driven mobility.

“Qualcomm Technologies is leading the industry forward by enabling automakers to deliver more intelligent, personalized, and safer driving experiences through the power of AI and software‑defined architectures,” said Nakul Duggal, executive vice president and group general manager, automotive, industrial and embedded IoT, and robotics, Qualcomm Technologies, Inc. “With the unmatched scale of our Snapdragon Digital Chassis footprint and deep collaborations across the global automotive ecosystem, we are well-positioned to accelerate innovation and help the industry advance toward the next generation of connected, automated mobility.”

CES highlights include:

Qualcomm and Google Lead the Next Era of Intelligent Mobility Powered by Agentic AI

Qualcomm Technologies and Google announced today the companies are expanding their long‑standing collaboration to combine Snapdragon Digital Chassis solutions with Google’s automotive software, helping carmakers bring new AI features to market faster. Building on more than a decade of joint work, the companies aim to simplify deployment of next‑generation AI experiences, helping to make vehicles smarter, more personalized, and easier to interact with through voice, touch, and visuals.

Snapdragon Cockpit Elite and Snapdragon Ride Elite Platforms Gain Strong Global Traction

Snapdragon Cockpit and Ride Elite — Qualcomm Technologies' flagship central compute solutions for AI-defined vehicles — are engineered to deliver high-performance compute and accelerated AI to power embodied and agentic intelligence across cockpit and advanced driver assistance systems (ADAS). Qualcomm Technologies today highlighted new and expanded collaborations with Li Auto, Leapmotor, Zeekr, Great Wall Motor, NIO, and Chery, bringing total design wins to 10 programs. Leapmotor also introduced its high-performance automotive central computer powered by Cockpit and Ride Elite. The computer is the world's first controller built on dual Snapdragon Elite (SA8797P) platforms. Garmin also announced today its selection of the Snapdragon Elite automotive platform to power its Nexus high-performance computing platform.

Snapdragon Ride Flex Accelerates Global Shift to Central Compute and Intelligent Mobility

Snapdragon Ride Flex is accelerating the shift to centralized compute as the first commercialized SoC to unify digital cockpit and ADAS workloads. Already deployed in mass‑produced vehicles across eight global programs, Ride Flex is driving rapid adoption of AI‑powered cockpit and driver‑assistance features. Leading Chinese Tier-1 suppliers, including Autolink, Desay SV, Hangsheng, and ZYT, today highlighted their mass‑production plans for integrated cockpit and driver‑assistance solutions based on Ride Flex.

Advancing the Path to End-to-End Automated Driving (AD) with Snapdragon Ride

Qualcomm Technologies is aiming to accelerate the deployment of safer, more intelligent AD systems by advancing its own end‑to‑end AI algorithms and enabling leading automakers across the industry. Building on 20 design wins for the Snapdragon Ride Platform, the company announced today it is working with top AD stack providers, including DeepRoute.ai, Momenta, QCraft, WeRide, and ZYT, to build a broad, competitive ecosystem spanning multiple AI approaches and product tiers. This momentum is driven by the Snapdragon Ride platform’s open, scalable AD architecture, which allows partners to integrate advanced stacks into production‑ready systems optimized for real‑world performance. With new collaborations, such as ZF and Epec, and nearly one million Snapdragon Ride SoCs shipped, Snapdragon Ride continues to strengthen its position as one of the industry’s most trusted and efficient AD platforms.

Redefining Driving Experiences Through a Decade of In-Vehicle Infotainment and Digital Cockpit Leadership

With a decade of leadership in in-vehicle infotainment (IVI) and digital cockpit innovation, Qualcomm Technologies continues to set the standard for in‑vehicle experiences, now elevated by agentic AI for smarter, more personalized, and more proactive cabins. As of June 2025, Snapdragon Cockpit Platforms with integrated AI power more than 75 million vehicles worldwide. Building on this growth, Qualcomm Technologies announced a collaboration with Toyota, which selected the next‑generation Snapdragon Cockpit Platform for the new RAV4 to deliver advanced AI-driven features that anticipate driver and passenger needs, adapt in real time, and provide intelligent in-vehicle assistance.

Driving Safer Roads with Connectivity

Qualcomm Technologies is advancing safety-rich, more connected roads with new 5G and V2X innovations. To help automakers scale 5G across more vehicles, the company introduced today the Qualcomm® A10 5G Modem-RF—its first 5G Reduced Capability (RedCap) modem—delivering low-power, low-cost, and global LTE/5G support for mission‑critical services as cars transition to advanced networks. Hyundai Mobis and Qualcomm Technologies also demonstrated progress in V2X technology with a new solution designed to enhance detection of non‑line‑of‑sight hazards—such as obstructed vehicles and two‑wheelers—enabling earlier warnings, soft braking in high‑speed scenarios, and lower‑speed emergency braking. This V2X solution is designed to improve crash avoidance and reduce harsh braking where traditional sensors have limited visibility.
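
As a rough illustration of the graduated response behavior described above (earlier warnings, soft braking at high speed, and lower‑speed emergency braking), the sketch below shows a hypothetical tiered policy keyed to time to collision. The hazard fields, thresholds, and tier names are assumptions for illustration only; they are not drawn from the Qualcomm Technologies and Hyundai Mobis solution.

```python
# Minimal sketch of a tiered V2X hazard response. All names and thresholds
# are hypothetical and do not represent any vendor's implementation.
from dataclasses import dataclass

@dataclass
class V2XHazard:
    distance_m: float         # range to the obstructed vehicle or two-wheeler
    closing_speed_mps: float  # relative closing speed reported via V2X

def choose_response(hazard: V2XHazard, ego_speed_mps: float) -> str:
    """Map a non-line-of-sight hazard report to a graduated response tier."""
    if hazard.closing_speed_mps <= 0:
        return "monitor"  # not converging with the hazard
    ttc_s = hazard.distance_m / hazard.closing_speed_mps  # time to collision, seconds
    if ttc_s > 6.0:
        return "early_warning"       # alert the driver well in advance
    if ttc_s > 3.0 and ego_speed_mps > 20.0:
        return "soft_braking"        # gentle deceleration in high-speed scenarios
    return "emergency_braking"       # last-resort braking at lower speeds

# Example: a hazard 60 m away closing at 15 m/s while travelling at 25 m/s
print(choose_response(V2XHazard(distance_m=60.0, closing_speed_mps=15.0), 25.0))
```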

Demonstrations of our Snapdragon Digital Chassis solutions are available by appointment only at the Qualcomm booth (located in the West Hall, booth #5001) throughout CES 2026 from Tuesday, January 6 through Friday, January 9. For more information, read our OnQ blog. To learn more about the Snapdragon Digital Chassis Platform, please visit us here.

About Qualcomm

Qualcomm relentlessly innovates to deliver intelligent computing everywhere, helping the world tackle some of its most important challenges. Building on our 40 years of technology leadership in creating era-defining breakthroughs, we deliver a broad portfolio of solutions built with our leading-edge AI, high-performance, low-power computing, and unrivaled connectivity. Our Snapdragon® platforms power extraordinary consumer experiences, and our Qualcomm Dragonwing™ products empower businesses and industries to scale to new heights. Together with our ecosystem partners, we enable next-generation digital transformation to enrich lives, improve businesses, and advance societies. At Qualcomm, we are engineering human progress.

Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of our engineering and research and development functions and substantially all of our products and services businesses, including our QCT semiconductor business. Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patents are licensed by Qualcomm Incorporated.

The post Qualcomm Drives the Future of Mobility with Strong Snapdragon Digital Chassis Momentum and Agentic AI for Major Global Automakers Worldwide appeared first on Edge AI and Vision Alliance.

]]>
SiMa.ai Announces First Integrated Capability with Synopsys to Accelerate Automotive Physical AI Development https://www.edge-ai-vision.com/2026/01/sima-ai-announces-first-integrated-capability-with-synopsys-to-accelerate-automotive-physical-ai-development/ Wed, 07 Jan 2026 21:00:37 +0000 https://www.edge-ai-vision.com/?p=56423 San Jose, California – January 6, 2026 – SiMa.ai today announced the first integrated capability resulting from its strategic collaboration with Synopsys. The joint solution provides a blueprint to accelerate architecture exploration and early virtual software development for AI- ready, next-generation automotive SoCs that support applications such as Advanced Driver Assistance Systems (ADAS) and In-vehicle-Infotainment […]

The post SiMa.ai Announces First Integrated Capability with Synopsys to Accelerate Automotive Physical AI Development appeared first on Edge AI and Vision Alliance.

]]>
San Jose, California – January 6, 2026 – SiMa.ai today announced the first integrated capability resulting from its strategic collaboration with Synopsys. The joint solution provides a blueprint to accelerate architecture exploration and early virtual software development for AI-ready, next-generation automotive SoCs that support applications such as Advanced Driver Assistance Systems (ADAS) and In-Vehicle Infotainment (IVI).

As previously announced, SiMa.ai and Synopsys are collaborating to deliver machine-learning optimized, workload-verified, power-efficient SoC architectures required for software-defined vehicles with increasing levels of autonomy and more intelligent in-cabin experiences. Resulting from this strategic collaboration, the blueprint announced today enables customers to confidently jumpstart the design and validation of custom automotive AI SoCs, as well as “shift left” software development pre-silicon, helping reduce development costs, improve software quality, derisk start of production, and accelerate vehicle time-to-market.

“We are pleased with how well the two teams have worked together to quickly create a joint solution uniquely focused on unlocking physical AI capabilities for today’s software-defined vehicles,” said Krishna Rangasayee, President & CEO at SiMa.ai. “Our best-in-class ML platform, combined with Synopsys’ industry-leading automotive-grade IP and design automation software, creates a powerful foundation for innovation across OEMs in autonomous driving and in-vehicle experiences.”

“Automotive OEMs need to deliver software-defined, AI-enabled vehicles faster to market to drive differentiation, which requires early power optimization and validation of the compute platform to reduce total cost of development and time to SOP,” said Ravi Subramanian, Chief Product Management Officer at Synopsys. “Our collaboration with SiMa.ai, delivering an ML-enabled architecture exploration and software development blueprint supported by a comprehensive integrated suite of tools, significantly jumpstarts these activities and enables our automotive customers to bring next-generation ADAS and IVI features to market faster.”

The new blueprint provides pre-integrated SoC virtual prototypes, as well as an integrated tool workflow, leveraging leading SiMa.ai and Synopsys solutions, including:

From early architectural exploration:

  • The SiMa.ai MLA Performance and Power Estimator™ (MPPE) tool enables automotive customers to right-size their ML accelerator design for their workloads. Customers can quickly iterate across a wide variety of accelerator configurations to identify the optimal option (a simplified sketch of this kind of sweep follows the list below).
  • Synopsys Platform Architect™ is trusted by automotive companies to model automotive workloads and analyze performance, power, memory, and interconnect trade-offs at the system level before RTL, enabling informed architecture decisions early in the design cycle.

To early verification and validation:

  • With Synopsys’ Virtualizer™ Development Kit (VDK), customers can begin software development using a virtual SoC prototype before silicon is available, enabling full system bring-up within days of silicon availability and accelerating vehicle time to market by up to 12 months.
  • SiMa.ai Palette SDK simplifies deployment of complex edge AI applications, supporting any ML workflow without compromising performance or ease of use, and providing a complete ML stack for automotive edge solutions.
  • Synopsys ZeBu® emulation delivers comprehensive pre-silicon hardware/software performance and power validation to ensure a system architecture meets the needs of expected workloads.

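To make the architecture-exploration step above more concrete, the sketch below shows the general pattern such an estimator enables: sweep candidate accelerator configurations, estimate performance and power for each, and keep the lowest-power option that still meets a throughput target. The configuration fields, cost model, and thresholds are hypothetical placeholders and do not reflect the actual SiMa.ai MPPE or Synopsys Platform Architect interfaces.

```python
# Hypothetical configuration sweep illustrating ML-accelerator right-sizing.
# The estimator below is a toy stand-in, not the SiMa.ai MPPE tool.
from itertools import product

def estimate(cfg: dict) -> dict:
    """Toy performance/power model for a candidate accelerator configuration."""
    fps = cfg["mac_units"] * cfg["clock_mhz"] / 4000.0         # toy throughput model
    watts = 0.5 + cfg["mac_units"] * cfg["clock_mhz"] * 1e-6   # toy power model
    return {"fps": fps, "watts": watts}

# Candidate design points to explore (hypothetical knobs).
candidates = [{"mac_units": m, "clock_mhz": c}
              for m, c in product([1024, 2048, 4096], [800, 1000, 1200])]

# Keep only configurations that meet the workload's frame-rate target,
# then pick the lowest-power one: a simple "right-sizing" criterion.
results = [(cfg, estimate(cfg)) for cfg in candidates]
feasible = [(cfg, est) for cfg, est in results if est["fps"] >= 1000.0]
best_cfg, best_est = min(feasible, key=lambda item: item[1]["watts"])
print(best_cfg, best_est)
```
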
The new joint blueprint, including supporting tool workflow, is available for early customer engagement. Join SiMa.ai and Synopsys at CES 2026 for a demonstration of the new solution.

To request a meeting, visit: https://www.synopsys.com/events/ces.html.

About SiMa.ai

SiMa.ai is a leader in Physical AI, delivering a purpose-built, software-centric platform that brings best-in-class performance, power efficiency, and ease of use to Physical AI applications. Focused on scaling Physical AI across robotics, automotive, industrial automation, aerospace & defense, smart vision, and healthcare, SiMa.ai is led by seasoned technologists and backed by top-tier investors. The company is headquartered in San Jose, California. Learn more at www.sima.ai.

Media Contacts
SiMa.ai
sima.ai@hoffman.com

Note: All trademarks and registered trademarks are the property of their respective owners.

The post SiMa.ai Announces First Integrated Capability with Synopsys to Accelerate Automotive Physical AI Development appeared first on Edge AI and Vision Alliance.

]]>
D3 Embedded Showcases Camera/Radar Fusion, ADAS Cameras, Driver Monitoring, and LWIR solutions at CES https://www.edge-ai-vision.com/2026/01/d3-embedded-showcases-camera-radar-fusion-adas-cameras-driver-monitoring-and-lwir-solutions-at-ces/ Wed, 07 Jan 2026 20:09:29 +0000 https://www.edge-ai-vision.com/?p=56417 Las Vegas, NV, January 7, 2026 — D3 Embedded is showcasing a suite of technology solutions in partnership with fellow Edge AI and Vision Alliance Members HTEC, STMicroelectronics and Texas Instruments at CES 2026. Solutions include driver and in-cabin monitoring, ADAS, surveillance, targeting and human tracking – and will be viewable at different locations within […]

The post D3 Embedded Showcases Camera/Radar Fusion, ADAS Cameras, Driver Monitoring, and LWIR solutions at CES appeared first on Edge AI and Vision Alliance.

]]>
Las Vegas, NV, January 7, 2026 — D3 Embedded is showcasing a suite of technology solutions in partnership with fellow Edge AI and Vision Alliance Members HTEC, STMicroelectronics and Texas Instruments at CES 2026. Solutions include driver and in-cabin monitoring, ADAS, surveillance, targeting and human tracking – and will be viewable at different locations within CES 2026.

Single-Camera and Radar Interior Sensor Fusion for In-Cabin Sensing

Location: LVCC North Hall, N116

At CES, we are excited to showcase the first fusion of single-camera and radar technologies on Texas Instruments’ platform for full interior sensing. Its key capabilities are listed below, followed by a simplified illustration of this kind of fusion.

  • Performs Fusion-Based DMS and OMS in One Package
  • Enhances Child Presence Detection and Intruder Detection
  • Reduces Sensor, Camera, and Processor Redundancy and Cost
  • Addresses Euro NCAP and Regulatory Requirements in a Single System
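
For intuition, here is a minimal, hypothetical sketch of the kind of rule-based camera/radar fusion such a child presence feature might use, assuming a camera classifier confidence and a radar presence flag. The field names and thresholds are illustrative assumptions and are not taken from the D3 Embedded or Texas Instruments implementation.

```python
# Minimal sketch of rule-based camera/radar fusion for child presence detection.
# All fields and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class CameraObservation:
    child_seat_occupied_conf: float  # 0..1 confidence from the interior camera

@dataclass
class RadarObservation:
    presence_detected: bool  # micro-motion / breathing energy above threshold

def child_presence_alert(cam: CameraObservation, radar: RadarObservation) -> bool:
    """Raise an alert on a confident camera detection, or on a moderate camera
    cue corroborated by radar presence (e.g., an occupant hidden from view)."""
    if cam.child_seat_occupied_conf >= 0.9:
        return True
    if cam.child_seat_occupied_conf >= 0.5 and radar.presence_detected:
        return True
    return False

print(child_presence_alert(CameraObservation(0.6), RadarObservation(True)))  # True
```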

ADAS 8MP Front Camera

Location: LVCC North Hall, N116

We are also showcasing our ADAS 8MP Front Camera Reference Design, powered by the Texas Instruments TDA4A Entry processor.

  • Full L2 Vision Perception and ASIL Safety
  • Sony IMX728 8MP Camera with High Dynamic Range (HDR), LED Flicker Mitigation (LFM), and an AEC-Q100 Grade 2 Image Sensor
  • Fully Customizable and Production-Ready

Driver Monitoring System / Occupant Monitoring System RGB-IR Imager

Location: Appointment Only – Bandol 1

Introducing the DesignCore® Chroma Series Camera with a 5MP RGB-IR, dual mode (Global and Rolling Shutter), 100 dB+ HDR imager specifically designed for Driver Monitoring and Occupant Monitoring applications. Paired with D3 Embedded’s TDA4 Rugged Vision Processor, this end-to-end system leverages the RGB-IR sensor to capture infrared light, ensuring drivers are not distracted by visible illumination.

D3 Embedded is also excited to introduce compact, automotive-grade long-wave infrared (LWIR) thermal camera solutions in partnership with Teledyne FLIR.

  • Automotive-grade ASIL-B LWIR
  • VGA with no mechanical shutter
  • GMSL2 connectivity
  • Small & rugged (IP69K)

These cameras are well suited for electro-optical/infrared (EO/IR) imaging solutions that combine visible light and thermal imaging to provide enhanced surveillance, targeting, and tracking capabilities.
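
As a simple illustration of this EO/IR fusion concept, the sketch below blends a registered visible frame with a thermal frame using a fixed weight. It is a minimal, generic example, assuming pre-registered images, and is not the D3 Embedded or Teledyne FLIR processing pipeline.

```python
# Minimal visible + thermal (LWIR) overlay blend; a generic illustration only.
import numpy as np

def fuse_eo_ir(visible_rgb: np.ndarray, thermal: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Blend an HxWx3 uint8 visible frame with an HxW uint8 thermal frame
    that has already been registered (aligned) to the visible image."""
    thermal_rgb = np.repeat(thermal[:, :, None], 3, axis=2)  # grayscale -> 3 channels
    fused = alpha * visible_rgb.astype(np.float32) + (1.0 - alpha) * thermal_rgb.astype(np.float32)
    return fused.clip(0, 255).astype(np.uint8)

# Tiny synthetic example; a real pipeline would use calibrated, registered camera frames.
visible = np.full((4, 4, 3), 128, dtype=np.uint8)
thermal = np.full((4, 4), 255, dtype=np.uint8)
print(fuse_eo_ir(visible, thermal).shape)  # (4, 4, 3)
```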

About D3 Embedded

D3 Embedded is a 100% U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development. D3 Embedded is an Elite member of the NVIDIA Partner Network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing, and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing, and in-house manufacturing services. Learn more at www.D3Embedded.com.

The post D3 Embedded Showcases Camera/Radar Fusion, ADAS Cameras, Driver Monitoring, and LWIR solutions at CES appeared first on Edge AI and Vision Alliance.

]]>