Processors - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/technologies/processors/
Designing machines that perceive and understand.
Wed, 04 Feb 2026 19:16:50 +0000

Right Sizing AI for Embedded Applications
https://www.edge-ai-vision.com/2026/02/right-sizing-ai-for-embedded-applications/
Tue, 03 Feb 2026 09:00:51 +0000

The post Right Sizing AI for Embedded Applications appeared first on Edge AI and Vision Alliance.

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip.

We all know the AI revolution train is heading straight for the Embedded Station. Some of us are already in the driver’s seat, while others are waiting for the first movers to pave the way so we can become fast adopters. No matter where you are on this journey, one thing becomes clear: AI must adapt to the embedded application sandbox—not the other way around.

Embedded applications typically operate within a power envelope ranging from milliwatts to around 10 watts. For AI to be effective in many embedded markets, it must respect the power-performance boundaries of the application. Imagine your favorite device that you charge once a day. If adding embedded AI to a product means you now need to charge it every four hours, you are likely to stop using the product altogether.
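The charging example above can be made concrete with a little arithmetic. The sketch below uses assumed figures (a 1 Wh wearable battery and a 250 mW inference load are illustrative numbers, not from the article) to show how quickly an oversized AI workload destroys battery life:

```python
def battery_runtime_hours(capacity_wh, draw_w):
    """Runtime of an idealized battery at a constant average power draw."""
    return capacity_wh / draw_w

# Assumed figures: a 1 Wh wearable battery that lasts 24 h implies
# an average baseline draw of ~42 mW.
capacity_wh = 1.0
base_draw_w = capacity_wh / 24            # ~0.042 W baseline
ai_draw_w = 0.25                          # hypothetical always-on AI load

runtime = battery_runtime_hours(capacity_wh, base_draw_w + ai_draw_w)
print(f"{runtime:.1f} h")                 # ~3.4 h: the charge-every-four-hours problem
```

Holding the AI load to tens of milliwatts instead of hundreds keeps the runtime close to the original full day, which is exactly the right-sizing argument being made here.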

This is where embedded AI fundamentally differs from cloud AI. In the cloud, adding more computations is often the default solution. But in embedded systems, the level of AI compute must be dictated by what the overall power and performance constraints allow. You can’t just throw more compute silicon at the problem.

There are two key approaches to scaling AI effectively for embedded applications:

1. Process Technology

At the foundational level, advanced process technologies like GlobalFoundries’ 22FDX+ with Adaptive Body Biasing offer a compelling solution. These transistors can deliver high performance during compute-intensive tasks while maintaining low leakage during idle or always-on modes. This dynamic adaptability ensures that the overall power-performance integrity of the application is preserved.

2. Alternative Compute Architectures

Emerging architectures like neuromorphic computing are gaining attention for their ability to run inference at a fraction of the power—and with lower latency—compared to traditional models. These ultra-low-power solutions are particularly promising for applications where energy efficiency is paramount and real-time response is also important.

BrainChip’s AKD1500 Edge AI co-processor, built on the GlobalFoundries 22FDX platform, demonstrates how neuromorphic design can make AI practical for the smallest and most power-sensitive devices. Powered by the company’s Akida™ technology, the chip uses an event-based approach, processing only when there is information to process, thereby avoiding the constant compute cycles (reads and writes to on-chip SRAM or off-chip DRAM) that waste energy in traditional AI systems. The co-processor performs event-based convolutions that exploit sparsity in both activation maps and kernels throughout the network, significantly reducing computation power and latency by running as many layers as possible on the Akida™ fabric. The original post includes a diagram showing all the interfaces, with the 8-node Akida IP as the centerpiece of the AI co-processor.
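To illustrate the general idea of event-based, sparsity-aware computation (a minimal 1-D sketch, not BrainChip's actual implementation): each nonzero activation "event" scatters its contribution into the output, zero activations cost nothing, and compute therefore scales with the number of events rather than the input size.

```python
import numpy as np

def event_based_conv(x, k):
    """Sparsity-aware 1-D correlation: only nonzero input 'events' trigger
    multiply-accumulates; zero activations are skipped entirely."""
    out = np.zeros(len(x) - len(k) + 1)
    macs = 0
    for i, v in enumerate(x):
        if v == 0:                 # no event -> no compute
            continue
        for j, w in enumerate(k):  # scatter the event into the output
            o = i - j
            if 0 <= o < len(out):
                out[o] += v * w
                macs += 1
    return out, macs

x = np.array([0, 0, 3, 0, 0, 0, 1, 0, 0, 0], float)   # 80% sparse input
k = np.array([1.0, 2.0, 1.0])
out, macs = event_based_conv(x, k)
dense_macs = (len(x) - len(k) + 1) * len(k)            # naive dense cost

assert np.allclose(out, np.correlate(x, k, mode="valid"))
print(macs, dense_macs)  # 6 vs 24: compute tracks events, not input size
```

With 80% of the activations at zero, the event-driven version does a quarter of the work of the dense loop while producing an identical result, which is the mechanism behind the power and latency savings described above.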

The design further improves efficiency by handling data locally and using operations that cut power consumption dramatically. The result is a chip that delivers real-time intelligence while operating within just a few hundred milliwatts, making it possible to add AI features to wearables, sensors, and other AIoT devices that previously relied on the cloud for such capability.

The Akida low-cost, low-power AI co-processor solution offers a silicon-proven design that has already demonstrated critical performance metrics, substantially reducing risk for developers. With fully functional interfaces tested at operational speeds and proven interoperability across multiple MCU and MPU boards, the platform ensures seamless integration. The AKD1500 co-processor supports both power-conscious MCUs via SPI4 and high-performance MPUs through M.2 and PCIe interfaces, providing flexibility across many configurations. Enabling software development early with silicon prototypes accelerates time to market. Several customers have already advanced to prototype stages, validating the design’s maturity and readiness for deployment. As an example, Onsor Technologies’ Nexa smart glasses utilize the AKD1500 for low-power inference to predict epileptic seizures, providing quality-of-life benefits for people with epilepsy.

The best part is that the AKD1500 can be paired with any existing low-cost MCU that has an SPI interface, or with an applications processor that offers a PCIe connection for higher performance. With suitable MCUs available today, adding the AKD1500 AI co-processor keeps time to market very short.

Final Thoughts

As AI sweeps across the length and breadth of the embedded space, right sizing becomes not just a technical necessity but a strategic imperative. The goal isn’t to fit the biggest model into the smallest device; it’s to fit the right model into the right device, with the right balance of performance, power, and user experience.

 

Anand Rangarajan
Director, End Markets, GlobalFoundries

Todd Vierra
Vice President, Customer Engagement, BrainChip

Robotics Builders Forum offers Hardware, Know-How and Networking to Developers
https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/
Thu, 29 Jan 2026 14:00:56 +0000

The post Robotics Builders Forum offers Hardware, Know-How and Networking to Developers appeared first on Edge AI and Vision Alliance.

On February 25, 2026 from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present the Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.
————————————————————————————————–

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – Break 

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions
https://www.edge-ai-vision.com/2026/01/nanoxplore-and-stmicroelectronics-deliver-european-fpga-for-space-missions/
Wed, 28 Jan 2026 17:00:04 +0000

The post NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions appeared first on Edge AI and Vision Alliance.

Key Takeaways:
  • NanoXplore’s NG-ULTRA FPGA becomes the first product qualified to new European ESCC 9030 standard for space applications
  • The product leverages a supply chain fully based in the European Union, from design to manufacturing and test, and delivered by ST
  • Its advanced digital capability enables European customers to develop higher performance, more competitive satellites and space missions

NanoXplore, the European leader in the design of SoC FPGA and radiation-hardened FPGA technologies, and STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, announce the qualification of NG-ULTRA for space applications. This radiation-hardened SoC FPGA has been designed specifically for space applications, including low- and medium-earth orbit constellations, and is set to be used in numerous satellite equipment systems, including flagship missions such as Galileo, Copernicus, and potentially IRIS².

First product certified to ESCC 9030 for the European New Space industry

This qualification marks a major industrial and technological milestone for the European space ecosystem: NG-ULTRA is the first product qualified to ESCC 9030, a new European standard dedicated to high-performance microcircuits in flip-chip packages on organic substrates or in plastic packages. This standard delivers the reliability required for space applications while enabling a transition away from traditional ceramic-packaged solutions (well suited for deep space, but heavier and more expensive), marking a key step forward for constellations and higher-volume missions.

The “new space” dynamic (constellations, Low and Medium Earth Orbits, higher volumes) is transforming requirements for onboard digital equipment and driving a shift in scale: there is a simultaneous need for greater computing power, controlled power consumption, and contained costs compatible with large-scale deployments. NG-ULTRA addresses this challenge by enabling more data to be processed directly in orbit (edge computing), thereby limiting transmission bottlenecks between space and ground.

NG-ULTRA targets strategic functions such as on-board computers, data management and routing between sub-systems, image and video processing (real-time compression and encoding), Software Defined Radio (SDR) – enabling remote evolution of communication modes, and onboard autonomy (detection, recognition, supervision).

A secure, European supply chain

Beyond performance, this program embodies a strategic ambition to secure a sovereign and sustainable European supply chain for long-duration missions by reducing critical dependencies. For NG-ULTRA, the industrial framework combines design, manufacturing, assembly, and testing capabilities across European sites, with the aim of reconciling competitiveness, volume production, and space-grade reliability.

In addition to its own R&D and design centers in Paris, Grenoble, and Montpellier, NanoXplore leverages various STMicroelectronics facilities in Europe, including the Grenoble R&D and design center, the 300mm digital fab in Crolles, the space-specialist packaging facility in Rennes (France), the test and reliability sites in Grenoble (France) and Agrate (Italy), and additional redundant qualified sites in Europe.

Technical specifications

With an “all-in-one” SoC (System on Chip) architecture designed specifically for platform and onboard computing applications, NG-ULTRA combines a multi-core processor with programmable hardware on a single chip. This architecture allows for greater design agility, reduces electronic board complexity and component count, and optimizes latency, mass, and power consumption.

NG-ULTRA is built on STMicroelectronics’ 28nm FD-SOI digital technology platform, recognized for its advantages in energy efficiency, resistance to space radiation, and advanced architecture features. Combined with a unique advanced radiation-hardening technology, the NG-ULTRA is built to survive the thermal cycles, shocks, and vibrations of launch and long-term orbital life, ensuring best-in-class performance and durability in the harsh space environment throughout the mission lifetime.

The NG-ULTRA has been designed to operate reliably in harsh radiation environments, offering a Total Ionizing Dose (TID) tolerance of up to 50 krad (Si) to ensure long-term performance. It also demonstrates strong resilience to single-event effects, with Single Event Latch-up (SEL) immunity tested up to 65 MeV·cm²/mg and Single Event Upset (SEU) immunity validated for Linear Energy Transfer (LET) levels exceeding 60 MeV·cm²/mg.

NG-ULTRA integrates a full SoC based on a quad-core Arm® Cortex®-R52 and provides high computational capability (537k LUTs + 32 Mb RAM) to address the most complex onboard computer requirements.

Its streamlined architecture drastically reduces PCB complexity and system mass—two of the most critical constraints in space design. By minimizing the component count, the NG-ULTRA simultaneously lowers total power consumption and project costs while increasing overall system reliability.

In addition, the SRAM-based architecture of the NG-ULTRA enables an adaptive hardware approach, allowing for unlimited on-orbit reconfiguration. This “hardware-as-software” flexibility allows operators to update functionality post-launch, adapt to evolving communication standards, or optimize the chip for different mission phases. The NG-ULTRA thus provides a future-proof platform that extends the operational relevance of assets long after they leave the launchpad.

To facilitate adoption, NG-ULTRA is also available as an evaluation kit: a complete prototyping platform that allows teams to rapidly validate performance and interfaces, reduce integration risks, and accelerate software and onboard logic development prior to flight-board production.

About NanoXplore

NanoXplore is a French fabless company designing radiation-hardened FPGA components for high-reliability environments, specifically space and avionics. The company recently launched the NG-ULTRA, the world’s most advanced radiation-hardened FPGA SoC. With an international presence, NanoXplore is the European leader in the design and development of SoC FPGA technologies and a key partner to the major players in the aerospace sector.

About STMicroelectronics

At ST, we are 50,000 creators and makers of semiconductor technologies mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027. Further information can be found at www.st.com.

Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support
https://www.edge-ai-vision.com/2026/01/voyager-sdk-v1-5-3-is-live-and-that-means-ultralytics-yolo26-support/
Tue, 27 Jan 2026 21:32:41 +0000

The post Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support appeared first on Edge AI and Vision Alliance.

Voyager v1.5.3 dropped, and Ultralytics YOLO26 support is the big headline here. If you’ve been following Ultralytics’ releases, you’ll know Ultralytics YOLO26 is specifically engineered for edge devices like Axelera’s Metis hardware.

Why Ultralytics YOLO26 matters for your projects:

The architecture is designed end-to-end, which means no more NMS (non-maximum suppression) post-processing. That translates to simpler deployment and genuinely faster inference. Ultralytics reports up to 43% speed improvements on CPUs compared to previous versions. For anyone running projects on Orange Pi, Raspberry Pi, or similar setups, that’s a nice boost.
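For context, here is what the classic NMS post-processing step looks like: a greedy suppression loop that an end-to-end detection head removes from the deployment pipeline. This is a generic textbook sketch, not Ultralytics' code:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the best-scoring box,
    drop rivals that overlap it too much, then repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        mask = np.array([iou(boxes[i], boxes[j]) < iou_thresh for j in rest], bool)
        order = rest[mask]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```

The loop is cheap on a desktop but data-dependent and branchy, which is awkward on embedded accelerators; dropping it entirely is part of why an NMS-free head simplifies edge deployment.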

Small object detection also gets a nice bump thanks to ProgLoss and STAL improvements. If you’re working on anything that needs to catch smaller details (maybe retail analytics, inspection systems, drone footage analysis), this should be super interesting.

Ultralytics YOLO26 comes in n/s/m/l flavours across all the usual tasks: detection, segmentation, pose estimation, oriented bounding boxes, and classification. Good options for the speed vs. accuracy tradeoff based on your hardware and use case.

Bug fixes and stability improvements:

Beyond Ultralytics YOLO26, this release cleans up several issues from v1.5.2. Resource leaks in GStreamer and AxInferenceNet pipelines are fixed, segmentation faults when recreating pipelines with trackers are sorted, and there’s better performance for cascaded pipelines with secondary models.

If you’ve got systems with multiple Metis devices, there’s also a deadlock fix for setups with more than eight of them.

Get it now:

Head over to the usual spots to grab v1.5.3. If you’re already running projects on earlier versions, the stability fixes alone make this a welcome update.

Free Webinar Highlights Compelling Advantages of FPGAs
https://www.edge-ai-vision.com/2026/01/free-webinar-highlights-compelling-advantages-of-fpgas/
Mon, 26 Jan 2026 22:36:11 +0000

The post Free Webinar Highlights Compelling Advantages of FPGAs appeared first on Edge AI and Vision Alliance.

On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free one-hour webinar “Why your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page:

Edge AI system developers often assume that AI workloads require a GPU or NPU. But when cost, latency, complex I/O or tight power budgets dominate, FPGAs offer compelling advantages.

In this talk we’ll explore how FPGAs serve not just as a compute block, but as a system-integration and acceleration platform that can combine tailored sensor I/O, signal processing, pre/post-processing, and neural inference on one device.

We’ll also show how to map AI models onto FPGAs without doing custom hardware design, using two practical on-ramps: (1) a software-first flow that generates custom instructions callable from C, and (2) a turnkey CNN acceleration block.

Using representative embedded-vision workloads, we’ll show apples-to-apples benchmarks. Attendees will leave with a decision checklist and a concrete “first experiment” plan.

Mark Oliver is an industry veteran with extensive experience in engineering, applications, and marketing. A native of the UK, Mark gained a degree in Electrical and Electronic Engineering from the University of Leeds. During a ten-year tenure with Hewlett Packard, he managed Engineering and Manufacturing functions in HP divisions in both Europe and the US before heading up Product Marketing and Applications Engineering at a series of video-related startups. Prior to joining Efinix, Mark was Director of Worldwide Storage Accounts at Marvell, heading up Marketing and Business Development activities.

To register for this free webinar, please see the event page. For more information, please email webinars@edge-ai-vision.com.

Meet MIPS S8200: Real-Time, On-Device AI for the Physical World
https://www.edge-ai-vision.com/2026/01/meet-mips-s8200-real-time-on-device-ai-for-the-physical-world/
Mon, 26 Jan 2026 14:00:17 +0000

The post Meet MIPS S8200: Real-Time, On-Device AI for the Physical World appeared first on Edge AI and Vision Alliance.

This blog post was originally published at MIPS’s website. It is reprinted here with the permission of MIPS.

Physical AI is the ability for machines to sense their environment, think locally, act safely, and communicate quickly without waiting on the cloud. In safety-critical scenarios like driver assistance or industrial robotics, milliseconds matter. That’s why MIPS’ edge-first approach focuses on ultra-low latency, low power, and cost-efficient inference delivered by its Atlas portfolio—and specifically the S8200 “Think” subsystem.

What is the MIPS S8200 software-first neural processing unit?

MIPS S8200 is a scalable, RISC-V–based NPU designed for autonomous edge platforms. It combines tightly coupled AI engines with RISC-V application cores to accelerate both vector and matrix workloads. It supports modern frameworks (PyTorch, TensorFlow) and scales from tens to hundreds of TOPS via coherent cluster tiling, while targeting higher TOPS/W efficiency than legacy architectures for edge deployments. In the MIPS Atlas portfolio, the S8200 is the decision engine that enables multi-modal inference on device. MIPS positions the S8200 under the “Think” pillar of its “Sense, Think, Act, Communicate” workflow, so customers can build complete physical-AI stacks with predictable latency and safety.

Why on-device AI at the edge?

Sending sensor data to the cloud and waiting for inference increases latency, risks privacy, and consumes power, which is unacceptable when a vehicle must brake now, or a robot must intercept a falling object with human-like (or better) reflexes. On-device AI lets platforms react in milliseconds under tight thermal and battery constraints. From a systems perspective, dedicated NPUs deliver inference far more power-efficiently than GPUs while freeing general purpose processors for other tasks, ideal for battery or thermally-limited endpoints.
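A rough back-of-the-envelope calculation shows why round-trip latency matters for a vehicle. The 150 ms cloud and 15 ms on-device figures below are illustrative assumptions, not MIPS measurements:

```python
def distance_traveled_m(speed_kmh, latency_ms):
    """Distance a vehicle covers while an inference result is still in flight."""
    return speed_kmh / 3.6 * latency_ms / 1000.0

cloud_ms, edge_ms = 150.0, 15.0   # assumed cloud round-trip vs on-device latency
print(f"{distance_traveled_m(100, cloud_ms):.2f} m")  # ~4.17 m at 100 km/h
print(f"{distance_traveled_m(100, edge_ms):.2f} m")   # ~0.42 m
```

At highway speed, a cloud round-trip can mean several car-lengths of blind travel before the system can even begin to react, which is the core argument for local inference.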

Key Use Cases Enabled by MIPS S8200

1) Automotive ADAS & Autonomous Perception (Front Camera + 360°)

Modern vehicles aggregate feeds from multiple cameras to build a bird’s-eye view (BEV) around the car. Leading models like BEVFormer [1] fuse spatial and temporal cues with transformer architectures, enabling robust perception for lane structures, vehicles, and pedestrians—even in low visibility. S8200’s transformer-friendly design and vector/matrix acceleration help run BEVFormer-class workloads and concurrent tasks (e.g., drive policy) in parallel, meeting stringent latency budgets.

  • Front-camera ADAS: rapid detection/classification for forward collision warning, lane keeping, and traffic-signal understanding.
  • Full-surround perception: camera fusion to detect adjacent vehicles/pedestrians with faster-than-human reaction times.
  • Concurrent decision-making: drive policy modules run alongside perception to determine acceleration, braking, and lane changes.

2) Industrial Robotics & AMRs

Factories, warehouses, and mobile robots are evolving beyond fixed paths to human-interactive, task-adaptive behavior. These systems use vision-language-action (VLA) models: listening to natural language, understanding intent, locating the target, safely manipulating it with appropriate force or speed, and planning paths in real time. MIPS S8200 brings multi-modal inference to the edge so robots can operate autonomously without cloud round-trips, preserving privacy and uptime.

3) Healthcare, Agriculture, and Smart Manufacturing

MIPS S8200’s multi-modal capabilities enable diverse edge scenarios: predictive maintenance & quality control in smart factories; medical imaging assistance and monitoring at the point of care; precision farming (pest detection, crop monitoring) and autonomous implements. These are among the target verticals MIPS highlights for physical AI at the edge.

Open & Modular: Built for Any Model, Past, Present, and Future

Teams need freedom to optimize their models, and MIPS’ open approach leans on RISC-V (an open, extensible instruction set architecture) so implementers can add custom instructions that benefit the workload (e.g., accelerating softmax in transformer attention) and co-design the software and hardware together. On the software side, MIPS embraces MLIR and the IREE ecosystem to modularize the compiler/runtime via dialects, making it easier to plug in optimizations, target diverse accelerators, and keep the toolchain transparent. MIPS Atlas Explorer lets teams model workloads, predict performance, and identify bottlenecks before the hardware is fixed, allowing designers to prioritize use-case performance over raw TOPS.
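As a concrete picture of the softmax hotspot mentioned above, here is scaled dot-product attention in plain NumPy. The softmax chain (max, exp, sum, divide) sits on the critical path of every attention layer, which is why it is an attractive target for a fused custom instruction. This is a generic reference sketch, not MIPS code:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax: the max/exp/sum/divide chain that a
    custom RISC-V instruction could fuse into one operation."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention; softmax runs once per query row."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((4, 8))   # 4 tokens, 8-dim heads (toy sizes)
out = attention(q, k, v)
print(out.shape)  # (4, 8); each softmax row sums to 1
```

Because softmax mixes reductions with elementwise transcendentals, it maps poorly onto a pure matrix engine, so fusing it in an instruction or accelerator block removes a recurring bottleneck in transformer inference.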

Why S8200 for Product & Engineering Teams

  • Edge-first performance: deterministic latency for safety-critical actions in vehicles and robots.
  • Scalable efficiency: coherent cluster tiling from 10 TOPS to 100s of TOPS
  • Future-proof: designed to run convolutional and transformer workloads, including BEVFormer-class perception and VLA models without locking into proprietary stacks.
  • Open ecosystem: RISC-V + MLIR/IREE for customizable, transparent optimization pipelines.
  • Faster decisions: Atlas Explorer to de-risk design choices before tape-out and/or platform freeze.

The Bottom Line

As AI moves from cloud demos to real machines that navigate streets and factory floors, the winners will be platforms that sense-think-act at the edge. MIPS S8200 gives teams a practical path to deploy multi-modal, transformer-class AI locally, with the open tooling and simulation-first workflow engineers need to hit their latency, power, and safety targets. This shift also addresses a looming labor gap: U.S. manufacturing could face ~2.0–2.1M unfilled jobs [2] by ~2030, increasing the need for automation that is safe, flexible, and easy to deploy – the autonomous edge with physical AI built on MIPS.

Footnotes

1 – BEVFormer (ECCV 2022) arXiv: https://arxiv.org/abs/2203.17270

2 – Manufacturing labor gap (NAM/Deloitte): https://nam.org/2-1-million-manufacturing-jobs-could-go-unfilled-by-2030-13743/

The Next Platform Shift: Physical and Edge AI, Powered by Arm
https://www.edge-ai-vision.com/2026/01/the-next-platform-shift-physical-and-edge-ai-powered-by-arm/
Mon, 26 Jan 2026 09:00:15 +0000

The post The Next Platform Shift: Physical and Edge AI, Powered by Arm appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm.

The Arm ecosystem is taking AI beyond the cloud and into the real world

As CES 2026 opens, a common thread quickly emerges across the show floor: most of what people are seeing, touching, and experiencing is already built on Arm. Arm-based platforms power the devices and systems behind the product and technology demos, including intelligent vehicles navigating complex environments, robots interacting with humans, and immersive XR devices blending the digital and physical worlds.

These demos mark a broader inflection point for AI as it becomes increasingly sophisticated, moving from perception to action in the real world. As NVIDIA CEO Jensen Huang put it in his CES 2026 keynote, “the ChatGPT moment for physical AI is here.” And it’s happening on Arm.

Built for the real world: Edge-first design and proven software ecosystem

As AI moves into the physical world, it must operate under real-world constraints. This next phase is defined by systems that respond instantly, run efficiently, and operate reliably. That transition demands compute designed for predictable, low-latency performance, extreme power and thermal efficiency, and continuous local inference. Just as critical, safety and security must be foundational, not layered on after deployment.

This is where edge-first platforms become essential, with Arm uniquely positioned. Arm delivers both unmatched energy efficiency and the world’s largest software developer base, making it the natural platform for building and scaling physical and edge AI systems globally. From operating systems and middleware to AI frameworks and developer tools, partners like NVIDIA and Qualcomm have developed their technologies on Arm over decades. That maturity means innovation can move faster, scale more broadly, and deploy more safely as AI transitions from digital intelligence to physical intelligence in the real world.

The next frontier: AI that moves

At CES 2026, NVIDIA outlined its vision for robotics, with on-stage demos of robots powered by its new physical AI stack. NVIDIA unveiled open robot foundation models, simulation tools, and edge hardware – including Jetson Thor, built on Arm Neoverse – to accelerate AI that can reason, plan, and adapt in dynamic environments. Partners including Boston Dynamics, Caterpillar, LG Electronics, and NEURA Robotics showcased robots trained on NVIDIA’s full physical AI stack, which leverages the Arm compute platform and its deeply established software ecosystem spanning automotive, autonomous systems, and robotics.

Qualcomm is further advancing its robotics portfolio with the new Dragonwing IQ10 robotics processor for advanced use cases like industrial robots, autonomous mobile robots (AMRs), and humanoid systems. Qualcomm’s robotics portfolio runs on the Arm compute platform, delivering energy-efficient robots and physical AI at the edge.

These robotics announcements build on technologies pioneered in automotive, an industry that Arm has enabled for decades. Much like robots, AI systems in vehicles already sense their environment, make split-second decisions, and act safely in the physical world. As robotics evolves, it will increasingly mirror the complexity, safety requirements, and system architecture of modern vehicles. Many of the companies shaping the future of automotive, such as Rivian, will also design the robots of tomorrow. With the entire automotive industry already building on Arm, the transition from cars to robots is a natural one.

In automotive at CES 2026, NVIDIA debuted its DRIVE AV software in the all-new Mercedes-Benz CLA. The AV stack’s in-vehicle compute and Hyperion architecture are powered by the Arm Neoverse-based NVIDIA DRIVE AGX Thor. Meanwhile, Qualcomm’s Snapdragon Digital Chassis continues to expand and is now adopted by global automakers transitioning to AI-defined vehicles. These platforms are built on Arm’s compute efficiency and consistent software ecosystem across infotainment, advanced driver assistance systems (ADAS), and in-vehicle AI.

Scaling intelligence from edge to cloud

Beyond robotics and automotive, we’re continuing to see momentum for Arm-based platforms both in the cloud and at the edge.

NVIDIA’s new Vera Rubin AI platform includes six new chips, two of which – Vera and Bluefield-4 – are built on Arm. Bluefield-4, a DPU powered by the Arm Neoverse V2-based Grace CPU, delivers up to six times the compute performance of its predecessor, transforming the DPU’s role in rack-scale inference and enabling new optimizations such as a new AI inference specific storage solution.

At the developer level, NVIDIA is pushing the frontier with powerful local AI systems. Developers can take advantage of the latest open and frontier AI models on a local deskside system, from 100-billion-parameter models on DGX Spark to 1-trillion-parameter models on DGX Station. Both platforms are powered by the Arm-based Grace Blackwell architecture, delivering petaflop-class performance and enabling seamless development that can scale from desk to data center.

On the personal computing front, the Windows on Arm AI PC portfolio is expanding into the mainstream, enabling OEMs to scale solutions to the mass market, extend battery life, and close the gap with legacy x86 systems.

Arm is the compute foundation powering CES 2026

What connects NVIDIA, Qualcomm, and a global ecosystem of innovators? Arm’s scalable, energy-efficient architecture.

CES 2026 is already demonstrating that the Arm compute platform powers data centers, robots, vehicles and countless edge devices, including:

  • NVIDIA’s accelerated platforms, from cloud to edge;
  • Qualcomm’s mobile, AI PC, XR/Wearables, and automotive systems; and
  • Nuro’s driverless fleets and Uber’s cloud infrastructure.

A prime example is the Nuro-Lucid-Uber partnership. Nuro’s latest driverless platform, built on the Arm Neoverse platform, enables efficient, real-time edge AI in autonomous Lucid Gravity SUVs. These vehicles, featuring NVIDIA DRIVE Thor and Arm Neoverse V3AE, deliver Level 4 autonomy with safety-critical reliability. Uber, meanwhile, is scaling on Arm-based Ampere servers to lower power use while increasing cloud density, illustrating Arm’s pivotal role from cloud to car.

Why ecosystem scale wins

CES 2026 sends a clear message: AI is now becoming embedded in the world around us. Making the physical and edge AI era a reality isn’t about individual chips or product launches; it requires full-stack ecosystem scale. This means:

  • Software portability across devices;
  • Developer familiarity and productivity;
  • Long product lifecycles with stable platforms; and
  • Standards-based innovation across industries.

The next platform shift isn’t defined by model size, but by intelligence that can operate autonomously, adapt in real time, and scale efficiently from cloud to edge. It’s about systems that are designed from day one to learn continuously, distribute decision-making, and perform within real-world constraints.

Arm provides the common compute foundation that makes this possible – trusted, scalable, and optimized for efficiency. That’s why Arm shows up everywhere at CES 2026 and wherever physical AI is taking shape.

The post The Next Platform Shift: Physical and Edge AI, Powered by Arm appeared first on Edge AI and Vision Alliance.

]]>
STM32MP21x: It’s Never Been More Cost-effective or More Straightforward to Create Industrial Applications with Cameras https://www.edge-ai-vision.com/2026/01/stm32mp21x-its-never-been-more-cost-effective-or-more-straightforward-to-create-industrial-applications-with-cameras/ Fri, 23 Jan 2026 09:00:03 +0000 https://www.edge-ai-vision.com/?p=56583 This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. ST is launching today the STM32MP21x product line, the most affordable STM32MP2, comprising a single-core Cortex-A35 running at 1.5 GHz and a Cortex-M33 at 300 MHz. It thus completes the STM32MP2 series announced in 2023, which became our first 64-bit MPUs. After the […]

The post STM32MP21x: It’s Never Been More Cost-effective or More Straightforward to Create Industrial Applications with Cameras appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics.

ST is launching today the STM32MP21x product line, the most affordable STM32MP2, comprising a single-core Cortex-A35 running at 1.5 GHz and a Cortex-M33 at 300 MHz. It thus completes the STM32MP2 series announced in 2023, which became our first 64-bit MPUs. After the STM32MP25x and its 1.35 TOPS NPU, and the STM32MP23x, which targeted industrial AI applications, the new STM32MP21x lowers the barrier to entry by still offering DDR4/LPDDR4 alongside DDR3L and the same Ethernet controllers with time-sensitive networking as the other members of the series. Consequently, teams looking to use an MPU in an industrial setting can now do it while keeping their costs even lower, whether with Linux or bare-metal software.

The contradictions pulling MPU designs apart

Power vs. efficiency

The world of embedded Linux is complex because it operates under very tight constraints. On the one hand, teams choose Linux because they need something far more powerful and extensive than a traditional real-time operating system can provide. On the other hand, the same application can significantly benefit from running some of its operations on a bare-metal system, which is why the ability to run an RTOS on ST MPUs since the STM32MP13 has been so successful. Similarly, while teams need the computational power of an MPU, they face power-consumption and cost constraints that can make designing systems challenging.

Computational throughput vs. ease of transition

Engineers face a significant gap when transitioning to the MPU world. Usually, that happens when they have reached the limits of what’s reasonable to run on a microcontroller and must adopt a significantly more powerful device and embedded Linux. Unfortunately, the industry doesn’t always provide an MPU that makes this move easy, as it forces designers to deal with a massive bill of materials and development costs. That’s why the STM32MP21x sets a new standard for affordability, as its bare-metal capabilities mean that teams can port some of their existing applications for an even smoother transition. Moreover, they even get a modern DDR4/LPDDR4 controller with DDR3L backward compatibility to future-proof their system.

The modern solutions to make MPU designs more accessible

A flexible memory controller

The new STM32MP21x comes with a memory controller supporting 16-bit DDR4/LPDDR4 and DDR3L. Teams wishing to replace their STM32MP13x while keeping their legacy DDR3L can swap the MPU with minimal adjustments. Conversely, teams looking to adopt a more modern architecture without substantially increasing their costs now have an alternative that will serve them for years to come. The controller also gives teams much more flexibility to weather the volatility of the memory market, since engineers can work with a broader range of memory types. And since the STM32MP21x operates with all memory generations at the same frequency, and industrial applications are rarely limited by RAM bandwidth, the performance difference remains minimal or even imperceptible.
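A quick back-of-the-envelope check of that claim: for a fixed 16-bit bus at a fixed data rate, theoretical peak bandwidth works out the same whichever DDR generation is fitted. The data rate below is an assumed example for illustration, not an STM32MP21x specification.

```python
# Peak DRAM bandwidth depends only on bus width and data rate, not on the
# DDR generation. Figures here are illustrative, not STM32MP21x specs.

def peak_bandwidth_mb_s(bus_width_bits: int, data_rate_mt_s: int) -> float:
    """Theoretical peak bandwidth in MB/s: transfers per second times
    bytes moved per transfer."""
    return data_rate_mt_s * (bus_width_bits / 8)

# 16-bit bus at an assumed 1066 MT/s, identical across memory types
for mem in ("DDR3L", "DDR4", "LPDDR4"):
    print(mem, peak_bandwidth_mb_s(16, 1066), "MB/s")
```

In practice, controller efficiency and access patterns matter far more than the nominal generation, which is the point the paragraph above makes for industrial workloads.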


A resourceful architecture

To make the STM32MP21x even more practical, we made it pin-to-pin compatible with the STM32MP23x and the STM32MP25x using a 10 mm x 10 mm package. It also uses the same Cortex-M33 as the other STM32MP2 devices, making it nearly effortless to use our M33-TD implementation in our OpenSTLinux distribution across all STM32MP2s. The new STM32MP21x also handles the same wide junction temperature range (-40 °C to 125 °C) and targets the same SESIP Level 3 certification. In addition, it comes with dual Gigabit Ethernet ports with time-sensitive networking and multiple interfaces, including CSI-2 for camera pipelines. Put simply, offering a cost-effective solution didn’t mean sacrificing important features for industrial markets.

The next steps to jump on the bandwagon

More cost-effective image processing

Thanks to its architecture, engineers can use the STM32MP21x in an application that captures data from an image sensor and cleans it up before sending it to another MPU with a neural processing unit. It helps spread the computational load while reusing a lot of the work that goes into these microprocessors. Similarly, thanks to its peripherals and security features, teams can use the STM32MP21x for processing sensor data at the edge while meeting the ever-increasing requirements imposed by governments and other regulatory bodies. Put simply, it allows many engineers to create applications that were previously too costly to conceive or lacked the proper hardware support on an MCU or competing MPU.
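A minimal sketch of that kind of clean-up stage, assuming a NumPy array as a stand-in for a captured CSI-2 frame; the crop, resize, and normalization choices are illustrative assumptions, not an ST pipeline:

```python
import numpy as np

def preprocess(frame: np.ndarray, out_hw=(224, 224)) -> np.ndarray:
    """Crude clean-up before hand-off to an NPU-equipped MPU:
    center-crop to a square, nearest-neighbour resize via integer
    index maps, then scale pixel values to [0, 1]."""
    h, w, _ = frame.shape
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = frame[y0:y0 + s, x0:x0 + s]
    ys = np.arange(out_hw[0]) * s // out_hw[0]   # row index map
    xs = np.arange(out_hw[1]) * s // out_hw[1]   # column index map
    resized = crop[ys][:, xs]
    return resized.astype(np.float32) / 255.0

# Stand-in for a 640x480 RGB frame from an image sensor
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
out = preprocess(frame)
print(out.shape, out.dtype)
```

On a real system the capture itself would come through V4L2 or a similar camera stack, and the cleaned frame would be shipped over Ethernet or another link to the MPU hosting the NPU.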

A Discovery Kit to get started

The best way to get started is to grab the STM32MP215F-DK Discovery Kit. It comes with a MIPI CSI-2 two-lane camera interface, one Gigabit Ethernet port with TSN support, 2 GB of LPDDR4, an M.2 connector for accessories or storage (like a Wi-Fi / BT module), and an LCD-TFT display controller for projects that require a UI. The board receives power via a USB-C 2.0 port that also carries data for debugging and programming with ST-LINK, among other things, and a microSD card slot helps with overall storage.

In a nutshell, the STM32MP215F-DK Discovery Kit is the quickest way to experiment with capturing image or inertial sensor data and see how the STM32MP21x can impact a design. Once they move to a custom design, engineers will have the widest selection of packages, from 14 mm x 14 mm to 11 mm x 11 mm, 10 mm x 10 mm, and 8 mm x 8 mm. Once teams choose their device and configuration, they will get access to a wide range of layout examples available on ST.com to help them start with their preferred package, the PMIC (more news to come soon), and selected DRAM.

The post STM32MP21x: It’s Never Been More Cost-effective or More Straightforward to Create Industrial Applications with Cameras appeared first on Edge AI and Vision Alliance.

]]>
Upcoming Webinar on Last Mile Logistics https://www.edge-ai-vision.com/2026/01/upcoming-webinar-on-last-mile-logistics/ Thu, 22 Jan 2026 23:21:47 +0000 https://www.edge-ai-vision.com/?p=56615 On January 28, 2026, at 11:00 am PST (2:00 pm EST) Alliance Member company STMicroelectronics will deliver a webinar “Transforming last mile logistics with STMicroelectronics and Point One” From the event page: Precision navigation is rapidly becoming the standard for last mile delivery vehicles of all types. But what does it truly take to keep […]

The post Upcoming Webinar on Last Mile Logistics appeared first on Edge AI and Vision Alliance.

]]>
On January 28, 2026, at 11:00 am PST (2:00 pm EST), Alliance Member company STMicroelectronics will deliver the webinar “Transforming last mile logistics with STMicroelectronics and Point One.” From the event page:

Precision navigation is rapidly becoming the standard for last mile delivery vehicles of all types. But what does it truly take to keep these machines on track, delivery after delivery, in challenging urban environments?

Join industry leaders from Point One Navigation and STMicroelectronics as we explore the unique challenges faced by engineers designing these specialized delivery robots and vehicles. Learn about the critical technologies, from microcomputing hardware and GNSS receivers to precision corrections and advanced sensor fusion, that ensure your vehicles navigate safely through complex urban terrain, GPS-denied areas, and high-density environments.

Packed with proven tips, tricks, and lessons learned from working with dozens of engineering teams in the last mile delivery world, this webinar is essential for OEMs ready to accelerate their autonomous logistics solutions.

Register Now »

Featured Speakers:

Mike Slade, GNSS Marketing Lead, Americas, STMicroelectronics

Mike is ST’s GNSS Marketing Lead for the Americas and holds a BS in EE & Mathematics and an MBA in Global Marketing. He started developing GNSS software and algorithms in 2000 for the Motorola Mobile Devices Lab’s GAM GNSS chipset designed for cellular E911 compliance. He joined the ST Teseo GNSS team in 2007, where he has worked on product software development, applications, strategic technical marketing, and program management.

Gabe Amancio, Head of Application Engineering, Point One Navigation

Gabe is Point One’s Head of Application Engineering, with deep expertise in precision GNSS spanning technical applications, corrections, position engine integration (both hardware and software), API integration, and the critical phases of proof-of-concept scoping and testing. Prior to Point One, Gabe earned his Bachelor’s in Electrical Engineering from Cal Poly SLO and honed his skills in the semiconductor industry, focusing on sales and application engineering.

What You Will Learn:

- How to achieve continuous, centimeter-accurate positioning in challenging urban environments (e.g., urban canyons, under structures, in parking garages).
- The crucial role of STMicroelectronics’ Teseo VI GNSS technology and advanced IMUs in maintaining position accuracy.
- Leveraging Point One’s robust Polaris RTK network for reliable corrections without a local base station.
- Strategies for sensor fusion (GNSS, RTK, IMU, odometry, vision) to ensure continuity and safety in GPS-denied areas.
- Real-world examples and practical insights from successful last mile delivery OEM deployments.
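As a toy illustration of the fallback idea behind that sensor-fusion topic (entirely hypothetical, and not Point One’s or ST’s algorithm), a one-dimensional position estimator might blend GNSS fixes with odometry and fall back to pure dead reckoning whenever the fix drops out:

```python
# Toy 1-D fusion: trust GNSS when a fix is available, dead-reckon on wheel
# odometry when it is not (e.g., in a parking garage). Real systems use a
# Kalman or factor-graph estimator over many sensors; this shows only the
# control flow in miniature. All numbers are made up for illustration.

def fuse(position, gnss_fix, odom_delta, gnss_weight=0.8):
    if gnss_fix is None:                  # GPS-denied: dead-reckon only
        return position + odom_delta
    predicted = position + odom_delta     # blend prediction with the fix
    return gnss_weight * gnss_fix + (1 - gnss_weight) * predicted

pos = 0.0
# (odometry delta in metres, GNSS fix or None) per time step
track = [(1.0, 1.1), (1.0, 2.0), (1.0, None), (1.0, None), (1.0, 5.2)]
for odom, fix in track:
    pos = fuse(pos, fix, odom)
    print(round(pos, 3))
```

Note how the estimate keeps advancing through the two fix-less steps and snaps back toward the GNSS solution once a fix returns.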

For more information and to register, visit the event page.

The post Upcoming Webinar on Last Mile Logistics appeared first on Edge AI and Vision Alliance.

]]>
Why Scalable High-Performance SoCs are the Future of Autonomous Vehicles https://www.edge-ai-vision.com/2026/01/why-scalable-high-performance-socs-are-the-future-of-autonomous-vehicles/ Thu, 22 Jan 2026 09:00:22 +0000 https://www.edge-ai-vision.com/?p=56574 This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments. Summary The automotive industry is ascending to higher levels of vehicle autonomy with the help of central computing platforms. SoCs like the TDA5 family offer safe, efficient AI performance through an integrated C7™ NPU and […]

The post Why Scalable High-Performance SoCs are the Future of Autonomous Vehicles appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments.

Summary

The automotive industry is ascending to higher levels of vehicle autonomy with the help of central computing platforms. SoCs like the TDA5 family offer safe, efficient AI performance through an integrated C7™ NPU and chiplet-ready design. These SoCs enable automakers to more easily implement ADAS capabilities, bringing premium features to all types of vehicles, from base models to luxury cars.

Figure 1. Visualization of ADAS features for autonomous driving in a software-defined vehicle analyzing environmental data.

Introduction

How long have advanced driver assistance systems (ADAS) and autonomous driving been trendy topics? For the last decade or so, automakers at trade shows have shown consumers visions of a future with roads full of intelligent, autonomous vehicles.

We are finally closer to that vision. You likely have driven in or may even own a vehicle with features that existed only conceptually 10 years ago.

In terms of broad availability and the adoption of intelligent ADAS features and artificial intelligence (AI) capabilities, the industry is progressing through the Society of Automotive Engineers’ levels of vehicle autonomy from Level 1 to Level 2 and Level 3. This proliferation of autonomous features is currently occurring in both domain-based and central computing vehicle architectures. The next, biggest steps toward vehicle autonomy will occur in the latter, with software-defined vehicles (SDVs), as visualized in Figure 1, poised to become the standard vehicle configuration.

This emerging vehicle architecture consolidates traditional distributed electronic control units (ECUs) into powerful central computing platforms, enabling over-the-air updates, feature additions and enhanced functionality throughout a vehicle’s lifetime. SDVs use hardware as a platform and software for iterative updates, giving automakers the flexibility to continuously improve a vehicle’s capabilities and deliver new autonomous driving features without hardware changes.

SoCs for the next generation of automotive designs

At the core of central computing architectures (Figure 2) are heterogeneous SoCs that integrate a variety of IP blocks and support advanced software, such as the TDA54-Q1, the first device in the TDA5 family of SoCs.


Figure 2. Simplified overview of the central computing architecture and connected systems in a software-defined vehicle.


While there are multiple types of high-performance SoCs on the market, SoCs that combine a variety of computing components are more power-efficient and deliver higher performance in a central computing ECU than SoCs built primarily around a single type of computing element (such as a graphics processing unit). Heterogeneous SoCs also simplify development, deployment, and execution of software for advanced autonomous driving features because they can offload specific tasks to specialized IP blocks, including high-performance neural processing units (NPUs) and vision processors, supported by dedicated onboard memory.

Heterogeneous SoCs such as the TDA54-Q1 bring more autonomous driving capabilities and design flexibility to more vehicles through:

  • Scalable AI performance. In terms of edge AI capabilities, TDA5 SoCs were designed using the latest automotive-qualified 5nm process technology and feature integrated NPUs based on TI’s proprietary C7™ digital signal processing architecture. These technologies help deliver an efficient power envelope and scalable AI performance from 10 to 1,200 trillion operations per second (TOPS). Engineers can leverage the AI resources of these SoCs to increase vehicle responsiveness through support for multibillion-parameter large language models, vision language models and advanced transformer networks. This level of AI performance is scalable over time to meet the evolving needs of different application requirements, from supporting Level 1 features such as adaptive cruise control all the way up to Level 3 autonomy, which covers conditional driving automation or self-driving under specified conditions.
  • Safety-first architecture. TDA5 SoCs deliver a higher level of specialized performance and efficiency through a cross-domain hardware safety architecture that provides deterministic, real-time monitoring that software cannot achieve alone. Such performance enables OEMs to meet Automotive Safety Integrity Level D, the highest risk classification in the International Organization for Standardization 26262 standard. Using the latest Armv9 cores from Arm®, TDA5 SoCs feature lockstep capabilities in their application and microcontroller cores.
  • Chiplet-ready architecture. The scalability of the TDA5 SoC family isn’t limited to its processing performance; these devices also have a chiplet-ready architecture. Chiplets are an emerging semiconductor architectural design approach where individual integrated circuits serve a similar role as IP blocks in a heterogeneous SoC, allowing for the modular design of specialized chips. Built-in support for the Universal Chiplet Interconnect Express interface open technology standard enables greater scalability and adaptability of TDA5 SoCs through future chiplet extensions, offering developers a future-proof platform that can evolve with their needs.
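As a rough illustration of what a 10 to 1,200 TOPS range means in practice, one can estimate achievable frame rate from a model's per-frame compute cost. The model cost and utilization figures below are assumed examples for illustration, not TDA5 data:

```python
# Rough feasibility check when sizing an NPU: given a model's compute cost
# per frame and a TOPS budget, estimate the achievable frame rate.
# Utilization reflects that real workloads rarely hit peak TOPS.

def max_fps(model_tops_per_frame: float, npu_tops: float,
            utilization: float = 0.4) -> float:
    """Frames per second = usable ops per second / ops needed per frame."""
    return (npu_tops * utilization) / model_tops_per_frame

# e.g., a hypothetical perception net costing 0.05 TOPS (50 GOPS) per frame
for budget in (10, 100, 1200):   # spanning the quoted TOPS range
    print(budget, "TOPS ->", max_fps(0.05, budget), "fps")
```

The same arithmetic run in reverse (target fps and sensor count to required TOPS) is how a scalable family lets one software stack span base models and luxury vehicles.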

Conclusion

Over the next decade, ADAS features will become standard and potentially even mandatory. Premium driving features will become mainstream and available for all vehicles, from entry-level base models to luxury cars. With devices like TDA5 SoCs, it’s only a matter of time.

Additional resources

The post Why Scalable High-Performance SoCs are the Future of Autonomous Vehicles appeared first on Edge AI and Vision Alliance.

]]>