News - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/news/
Designing machines that perceive and understand.

Upcoming Webinar on CSI-2 over D-PHY & C-PHY
https://www.edge-ai-vision.com/2026/02/upcoming-webinar-on-csi-2-over-d-phy-c-phy/
Wed, 11 Feb 2026 20:54:05 +0000

On February 24, 2026, at 9:00 am PST (12:00 pm EST), MIPI Alliance will deliver the webinar “MIPI CSI-2 over D-PHY & C-PHY: Advancing Imaging Conduit Solutions.” From the event page:

MIPI CSI-2®, together with the MIPI D-PHY™ and C-PHY™ physical layers, forms the foundation of image sensor solutions across a wide range of markets, including smartphones, computing, automotive, robotics and beyond. This webinar will explore the latest CSI-2 feature developments and the continued evolution of MIPI’s low-energy, high-performance physical layer transport solutions, D-PHY and C-PHY, which leverage differential and ternary signaling, respectively.

Attendees will gain insight into recently adopted capabilities such as event-based sensing and processing, as well as D‑PHY embedded clock mode. The session will also cover near-term enhancements, including dual-PHY macro support and multi-drop bus capability, along with a forward-looking view of longer-term feature developments. By the close of the webinar, attendees will understand how MIPI imaging solutions are enabling next-generation computer and machine vision applications across a wide range of product ecosystems.
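For context on the differential-versus-ternary distinction above: a D-PHY lane carries one bit per unit interval over a differential pair, while a C-PHY lane encodes 16 bits into 7 ternary symbols over a three-wire trio (roughly 2.28 bits per symbol, per the MIPI C-PHY specification). A back-of-envelope sketch, with illustrative rates that are not taken from the webinar announcement:

```python
# Rough lane-throughput comparison of MIPI D-PHY vs C-PHY.
# Assumption (from the MIPI C-PHY spec, not this announcement):
# C-PHY maps 16 bits onto 7 ternary symbols, i.e. 16/7 ~ 2.286 bits/symbol.

def dphy_lane_gbps(bit_rate_gbps: float) -> float:
    """D-PHY: one differential lane moves one bit per unit interval."""
    return bit_rate_gbps

def cphy_lane_gbps(symbol_rate_gsps: float) -> float:
    """C-PHY: each symbol on a 3-wire trio carries 16/7 bits."""
    return symbol_rate_gsps * 16 / 7

# Illustrative rates only:
print(f"D-PHY lane at 2.5 Gbps   -> {dphy_lane_gbps(2.5):.2f} Gbps payload")
print(f"C-PHY trio at 2.5 Gsym/s -> {cphy_lane_gbps(2.5):.2f} Gbps payload")
```

At equal line rates, the ternary encoding is why a C-PHY trio delivers more than twice the payload of a D-PHY lane.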

Register Now »

Featured Speakers:

Haran Thanigasalam, Chair of the MIPI Camera Working Group and Camera Interest Group

Raj Kumar Nagpal, Chair of the MIPI D-PHY Working Group

George Wiley, Chair of the MIPI C-PHY Working Group

For more information and to register, visit the event page.

Production-Ready, Full-Stack Edge AI Solutions Turn Microchip’s MCUs and MPUs Into Catalysts for Intelligent Real-Time Decision-Making
https://www.edge-ai-vision.com/2026/02/production-ready-full-stack-edge-ai-solutions-turn-microchips-mcus-and-mpus-into-catalysts-for-intelligent-real-time-decision-making/
Tue, 10 Feb 2026 20:15:25 +0000

Chandler, Ariz., February 10, 2026 — A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology (Nasdaq: MCHP) has extended its edge AI offering with full-stack solutions that streamline development of production-ready applications using its microcontrollers (MCUs) and microprocessors (MPUs), the devices located closest to the edge’s many sensors, where they gather sensor data, control motors, trigger alarms and actuators, and more.

Microchip’s products are long-time embedded-design workhorses, and the new solutions turn its MCUs and MPUs into complete platforms for bringing secure, efficient and scalable intelligence to the edge. The company has rapidly built and expanded its growing, full-stack portfolio of silicon, software and tools that solve edge AI performance, power consumption and security challenges while simplifying implementation.

“AI at the edge is no longer experimental—it’s expected, because of its many advantages over cloud implementations,” said Mark Reiten, corporate vice president of Microchip’s Edge AI business unit. “We created our Edge AI business unit to combine our MCUs, MPUs and FPGAs with optimized ML models plus model acceleration and robust development tools. Now, the addition of the first in our planned family of application solutions accelerates the design of secure and efficient intelligent systems that are ready to deploy in demanding markets.”

Microchip’s new full-stack application solutions for its MCUs and MPUs encompass pre-trained and deployable models as well as application code that can be modified, enhanced and applied to different environments. This can be done either through Microchip’s embedded software and ML development tools or those from Microchip partners. The new solutions include:

  • Detection and classification of dangerous electrical arc faults using AI-based signal analysis
  • Condition monitoring and equipment health assessment for predictive maintenance
  • Facial recognition with liveness detection supporting secure, on-device identity verification
  • Keyword spotting for consumer, industrial and automotive command-and-control interfaces

Development Tools for AI at the Edge
Engineers can leverage familiar Microchip development platforms to rapidly prototype and deploy AI models, reducing complexity and accelerating design cycles. The company’s MPLAB® X Integrated Development Environment (IDE) with its MPLAB Harmony software framework and MPLAB ML Development Suite plug-in provides a unified and scalable approach for supporting embedded AI model integration through optimized libraries. Developers can, for example, start with simple proof-of-concept tasks on 8-bit MCUs and move them to production-ready high-performance applications on Microchip’s 16- or 32-bit MCUs.

For its FPGAs, Microchip’s VectorBlox™ Accelerator SDK 2.0 AI/ML inference platform accelerates vision, Human-Machine Interface (HMI), sensor analytics and other computationally intensive workloads at the edge while also enabling training, simulation and model optimization within a consistent workflow.

Other support includes training and enablement tools like the company’s motor control reference design featuring its dsPIC® DSCs for data extraction in a real-time edge AI data pipeline, and others for load disaggregation in smart e-metering, object detection and counting, and motion surveillance. Microchip also helps solve edge AI challenges through complementary components that are required for product design and development. These include PCIe® devices that connect embedded compute at the edge and high-density power modules that enable edge AI in industrial automation and data center applications.

The analyst firm IoT Analytics stated in its October 2025 market report that embedding edge AI capabilities directly into MCUs is among the top four industry trends, enabling AI-driven applications “…that reduce latency, enhance data privacy, and lower dependency on cloud infrastructure.” Microchip’s AI initiative reinforces this trend with its MCU and MPU platform, as well as its FPGAs. Edge AI ecosystems increasingly require support for both software AI accelerators and integrated hardware acceleration on multiple devices across a range of memory configurations.

Availability
Microchip is actively working with customers of its full-stack application solutions, providing a variety of model training and other workflow support. The company is also working with multiple partners whose software provides developers with additional deployment-ready options. To learn more about Microchip’s edge AI offering and new full-stack solutions, visit www.microchip.com/EdgeAI. Additional information on each solution can be found at Microchip’s on-demand Edge AI Webinar Series, starting February 17.

About Microchip
Microchip Technology Inc. is a broadline supplier of semiconductors committed to making innovative design easier through total system solutions that address critical challenges at the intersection of emerging technologies and durable end markets. Its easy-to-use development tools and comprehensive product portfolio support customers throughout the design process, from concept to completion. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support and delivers solutions across the industrial, automotive, consumer, aerospace and defense, communications and computing markets. For more information, visit the Microchip website at www.microchip.com.

Upcoming Webinar on Industrial 3D Vision with iToF Technology
https://www.edge-ai-vision.com/2026/02/upcoming-webinar-on-industrial-3d-vision-with-itof-technology/
Tue, 03 Feb 2026 18:46:13 +0000

On February 18, 2026, at 9:00 am PST (12:00 pm EST), and on February 19, 2026, at 11:00 am CET, Alliance Member company e-con Systems, in partnership with onsemi, will deliver the webinar “Enabling Reliable Industrial 3D Vision with iToF Technology.” From the event page:

Join e-con Systems and onsemi for an exclusive joint webinar on how indirect Time-of-Flight (iToF)-based 3D vision is enabling reliable perception for modern robotic applications and for industrial and warehouse automation workflows.

Vision experts will discuss how industrial teams can translate iToF sensor capabilities into deployable 3D vision solutions while addressing the perception challenges commonly faced in complex industrial environments.

Attendees will gain insights from proven customer success stories in field deployments, including parcel box dimensioning, autonomous pallet handling, obstacle detection, and collision avoidance in warehouse environments.

Register Now »

Featured Speakers:

Radhika S, Senior Project Lead, e-con Systems

Aidan Browne, Product Marketing Manager – Depth Sensing, onsemi

Key insights you’ll gain:

  • Key industrial applications driving the adoption of iToF-based 3D vision
  • Common perception challenges in industrial environments
  • Translating sensor capability into deployable robotics vision solutions
  • Proven customer success stories from field deployments

For more information and to register, visit the event page.

Google Adds “Agentic Vision” to Gemini 3 Flash
https://www.edge-ai-vision.com/2026/01/google-adds-agentic-vision-to-gemini-3-flash/
Fri, 30 Jan 2026 20:06:46 +0000

Jan. 30, 2026 — Google has announced Agentic Vision, a new capability in Gemini 3 Flash that turns image understanding into an active, tool-using workflow rather than a single “static glance.”

Agentic Vision pairs visual reasoning with code execution (Python) so the model can iteratively zoom in, crop, annotate, and otherwise manipulate an image to verify details before responding—helping reduce guesswork on fine-grained elements like serial numbers or distant text.

According to Google DeepMind, this approach follows a “Think, Act, Observe” loop: the model forms a multi-step plan, executes Python to transform or analyze the image, then appends the transformed output back into its context window to support a more grounded final answer.
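The “Think, Act, Observe” loop can be sketched in plain Python. Everything below is an illustrative assumption: the `zoom_crop` helper and the list-based image stand in for whatever tooling the model actually invokes, and are not Google’s implementation.

```python
# Illustrative sketch of the "Think, Act, Observe" loop; zoom_crop and the
# list-of-lists "image" are stand-ins, not Google's actual tooling.

def zoom_crop(img, top, left, h, w, scale=4):
    """'Act' step: crop a region of interest, then enlarge it by
    nearest-neighbour scaling so fine detail is easier to read."""
    region = [row[left:left + w] for row in img[top:top + h]]
    return [[region[r // scale][c // scale] for c in range(w * scale)]
            for r in range(h * scale)]

# Fake 64x48 grayscale image standing in for a high-resolution photo.
image = [[(r + c) % 256 for c in range(64)] for r in range(48)]

# Think: the model plans to inspect the region where, say, a serial number sits.
# Act: it executes code to crop and zoom that region.
patch = zoom_crop(image, top=10, left=20, h=8, w=8)

# Observe: the enlarged patch is appended back into the context window,
# grounding the final answer in the verified detail.
print(len(patch), len(patch[0]))  # 32 32
```

The point of the sketch is the control flow: the model's answer is conditioned on transformed pixels it produced itself, not on a single pass over the original image.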

Google reports that enabling code execution with Gemini 3 Flash delivers a consistent 5–10% quality boost across most vision benchmarks. The company also highlights early developer use cases, including iterative inspection of high-resolution documents (e.g., building-plan validation) and “visual scratchpad” style annotation to reduce counting and localization errors.

Beyond inspection and annotation, Agentic Vision can offload multi-step visual arithmetic to a deterministic Python environment—parsing dense visual tables, normalizing values, and generating charts (e.g., with Matplotlib) rather than relying on probabilistic reasoning alone.

Availability and next steps
Agentic Vision is available now via the Gemini API in Google AI Studio and Vertex AI, and is beginning to roll out in the Gemini app (via the “Thinking” model selection). Google says it plans to make more code-driven behaviors implicit over time, expand tooling (including ideas like web and reverse image search), and bring the capability to additional model sizes beyond Flash.

Original announcement (with full details and examples): Google’s blog post.

Robotics Builders Forum Offers Hardware, Know-How and Networking to Developers
https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/
Thu, 29 Jan 2026 14:00:56 +0000

On February 25, 2026, from 8:30 am to 5:30 pm ET, Advantech, Qualcomm and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present the Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Day / Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – Break 

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions
https://www.edge-ai-vision.com/2026/01/nanoxplore-and-stmicroelectronics-deliver-european-fpga-for-space-missions/
Wed, 28 Jan 2026 17:00:04 +0000

Key Takeaways:
  • NanoXplore’s NG-ULTRA FPGA becomes the first product qualified to new European ESCC 9030 standard for space applications
  • The product leverages a supply chain fully based in the European Union, from design to manufacturing and test, and delivered by ST
  • Its advanced digital capability enables European customers to develop higher performance, more competitive satellites and space missions

NanoXplore, the European leader in the design of SoC FPGA and radiation-hardened FPGA technologies, and STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, announce the qualification of NG-ULTRA for space applications. This radiation-hardened SoC FPGA has been designed specifically for space applications, including low- and medium-earth orbit constellations, and is set to be used in numerous satellite equipment systems, including flagship missions such as Galileo, Copernicus, and potentially IRIS².

First product certified to ESCC 9030 for the European New Space industry

This qualification marks a major industrial and technological milestone for the European space ecosystem: NG-ULTRA is the first product qualified to ESCC 9030, a new European standard dedicated to high-performance microcircuits in flip-chip-on-organic-substrate or plastic packages. This standard delivers the reliability required for space applications while enabling a transition away from traditional ceramic-packaged solutions, which are well suited for deep space but heavier and more expensive, marking a key step forward for constellations and higher-volume missions.

The “new space” dynamic (constellations, Low and Medium Earth Orbits, higher volumes) is transforming requirements for onboard digital equipment and driving a shift in scale: there is a simultaneous need for greater computing power, controlled power consumption, and contained costs compatible with large-scale deployments. NG-ULTRA addresses this challenge by enabling more data to be processed directly in orbit (edge computing), thereby limiting transmission bottlenecks between space and ground.

NG-ULTRA targets strategic functions such as on-board computers, data management and routing between sub-systems, image and video processing (real-time compression and encoding), Software Defined Radio (SDR) – enabling remote evolution of communication modes, and onboard autonomy (detection, recognition, supervision).

A secure, European supply chain

Beyond performance, this program embodies a strategic ambition to secure a sovereign and sustainable European supply chain for long-duration missions by reducing critical dependencies. For NG-ULTRA, the industrial framework combines design, manufacturing, assembly, and testing capabilities across European sites, with the aim of reconciling competitiveness, volume production, and space-grade reliability.

In addition to its own R&D and design centers in Paris, Grenoble and Montpellier, NanoXplore leverages various STMicroelectronics facilities in Europe, including the Grenoble R&D and design center, the 300mm digital fab in Crolles, the space-specialist packaging facility in Rennes (France), the test and reliability sites in Grenoble (France) and Agrate (Italy), and additional redundant qualified sites in Europe.

Technical specifications

With an “all-in-one” SoC (System on Chip) architecture designed specifically for platform and onboard computing applications, NG-ULTRA combines a multi-core processor with programmable hardware on a single chip. This architecture allows for greater design agility, reduces electronic board complexity and component count, and optimizes latency, mass, and power consumption.

NG-ULTRA is built on STMicroelectronics’ 28nm FD-SOI digital technology platform, recognized for its advantages in energy efficiency, resistance to space radiation and advanced architecture features. Combined with a unique advanced radiation-hardening technology, the NG-ULTRA is built to survive the thermal cycles, shocks, and vibrations of launch and long-term orbital life, ensuring best-in-class performance and durability in the harsh space environment throughout the mission lifetime.

The NG-ULTRA has been designed to operate reliably in harsh radiation environments, offering a Total Ionizing Dose (TID) tolerance of up to 50 krad (Si) to ensure long-term performance. It also demonstrates strong resilience to single-event effects, with Single Event Latch-up (SEL) immunity tested up to 65 MeV·cm²/mg and Single Event Upset (SEU) immunity validated for Linear Energy Transfer (LET) levels exceeding 60 MeV·cm²/mg.

NG-ULTRA integrates a full SoC based on a quad-core Arm® Cortex®-R52 and provides high computational capability (537k LUTs + 32 Mb RAM) to address the most complex onboard computer requirements.

Its streamlined architecture drastically reduces PCB complexity and system mass—two of the most critical constraints in space design. By minimizing the component count, the NG-ULTRA simultaneously lowers total power consumption and project costs while increasing overall system reliability.

In addition, the SRAM-based architecture of the NG-ULTRA enables an adaptive hardware approach, allowing for unlimited on-orbit reconfiguration. This “hardware-as-software” flexibility allows operators to update functionality post-launch, adapt to evolving communication standards, or optimize the chip for different mission phases. The NG-ULTRA thus provides a future-proof platform that extends the operational relevance of assets long after they leave the launchpad.

To facilitate adoption, NG-ULTRA is also available as an evaluation kit, a complete prototyping platform that allows developers to rapidly validate performance and interfaces, reduce integration risks, and accelerate software and onboard logic development prior to flight-board production.

About NanoXplore

NanoXplore is a French fabless company designing radiation-hardened FPGA components for high-reliability environments, specifically space and avionics. The company recently launched the NG-ULTRA, the world’s most advanced radiation-hardened FPGA SoC. With an international presence, NanoXplore is the European leader in the design and development of SoC FPGA technologies and a key partner to the major players in the aerospace sector.

About STMicroelectronics

At ST, we are 50,000 creators and makers of semiconductor technologies mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027. Further information can be found at www.st.com.

Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support
https://www.edge-ai-vision.com/2026/01/voyager-sdk-v1-5-3-is-live-and-that-means-ultralytics-yolo26-support/
Tue, 27 Jan 2026 21:32:41 +0000

Voyager v1.5.3 dropped, and Ultralytics YOLO26 support is the big headline here. If you’ve been following Ultralytics’ releases, you’ll know Ultralytics YOLO26 is specifically engineered for edge devices like Axelera’s Metis hardware.

Why Ultralytics YOLO26 matters for your projects:

The architecture is designed end-to-end, which means no more NMS (non-maximum suppression) post-processing. That translates to simpler deployment and genuinely faster inference. Ultralytics reports up to 43% speed improvements on CPUs compared to previous versions. For anyone running projects on Orange Pi, Raspberry Pi, or similar setups, that’s a nice boost.
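For context on what an NMS-free architecture removes, here is a minimal greedy non-maximum suppression pass in Python, the post-processing step that end-to-end detectors like YOLO26 are said to make unnecessary. This is a generic textbook sketch, not Axelera's or Ultralytics' code:

```python
# Minimal greedy non-maximum suppression (NMS), the post-processing step
# that NMS-free end-to-end detectors eliminate. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop heavily overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```

Dropping this step matters on small CPUs because the sort-and-compare loop runs on the host for every frame, and its data-dependent branching is awkward to accelerate.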

Small object detection also gets a nice bump thanks to ProgLoss and STAL improvements. If you’re working on anything that needs to catch smaller details (maybe retail analytics, inspection systems, drone footage analysis), this should be super interesting.

Ultralytics YOLO26 comes in n/s/m/l flavours across all the usual tasks: detection, segmentation, pose estimation, oriented bounding boxes, and classification. Good options for the speed vs. accuracy tradeoff based on your hardware and use case.

Bug fixes and stability improvements:

Beyond Ultralytics YOLO26, this release cleans up several issues from v1.5.2. Resource leaks in GStreamer and AxInferenceNet pipelines are fixed, segmentation faults when recreating pipelines with trackers are sorted, and there’s better performance for cascaded pipelines with secondary models.

If you’ve got systems with multiple Metis devices, there’s also a deadlock fix for setups with more than eight of them.

Get it now:

Head over to the usual spots to grab v1.5.3. If you’re already running projects on earlier versions, the stability fixes alone make this a welcome update.

Upcoming Webinar on Challenges of Depth of Field (DoF) in Macro Imaging
https://www.edge-ai-vision.com/2026/01/upcoming-webinar-on-challenges-of-depth-of-field-dof-in-macro-imaging/
Tue, 27 Jan 2026 20:33:58 +0000

On January 29, 2026, at 9:00 am PST (12:00 pm EST), Alliance Member company e-con Systems will deliver the webinar “Challenges of Depth of Field (DoF) in Macro Imaging.” From the event page:

We’re excited to invite you to an exclusive webinar hosted by e-con Systems: Challenges of DoF in Macro Imaging. In this session, our vision experts will discuss the common challenges associated with DoF in medical imaging and explain how camera design choices directly impact it.


Register Now »

Featured Speakers:

Bharathkumar R, Market Manager – Medical Cameras, e-con Systems

Vigneshkumar R, Senior Camera Expert, e-con Systems

Key insights you’ll gain:

  • How limited DoF impacts certain medical applications
  • Key design considerations that influence DoF
  • Insights from a real-world intraoral imaging case study

For more information and to register, visit the event page.

The post Upcoming Webinar on Challenges of Depth of Field (DoF) in Macro Imaging appeared first on Edge AI and Vision Alliance.

]]>
Free Webinar Highlights Compelling Advantages of FPGAs https://www.edge-ai-vision.com/2026/01/free-webinar-highlights-compelling-advantages-of-fpgas/ Mon, 26 Jan 2026 22:36:11 +0000 https://www.edge-ai-vision.com/?p=56570 On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free hour webinar “Why your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page: Edge AI system developers often […]

The post Free Webinar Highlights Compelling Advantages of FPGAs appeared first on Edge AI and Vision Alliance.

]]>
On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free one-hour webinar “Why Your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page:

Edge AI system developers often assume that AI workloads require a GPU or NPU. But when cost, latency, complex I/O or tight power budgets dominate, FPGAs offer compelling advantages.

In this talk we’ll explore how FPGAs serve not just as a compute block, but as a system-integration and acceleration platform that can combine tailored sensor I/O, signal processing, pre/post-processing and neural inference on one device.

We’ll also show how to map AI models onto FPGAs without doing custom hardware design, using two practical on-ramps—(1) a software-first flow that generates custom instructions callable from C, and (2) a turnkey CNN acceleration block.

Using representative embedded-vision workloads, we’ll show apples-to-apples benchmarks. Attendees will leave with a decision checklist and a concrete “first experiment” plan.

Mark Oliver is an industry veteran with extensive experience in engineering, applications, and marketing. A native of the UK, Mark gained a degree in Electrical and Electronic Engineering from the University of Leeds. During a ten-year tenure with Hewlett Packard, he managed Engineering and Manufacturing functions in HP Divisions both in Europe and the US before heading up Product Marketing and Applications Engineering at a series of video-related startups. Prior to joining Efinix, Mark was Director of Worldwide Storage Accounts at Marvell, heading up Marketing and Business Development activities.

To register for this free webinar, please see the event page. For more information, please email webinars@edge-ai-vision.com.

The post Free Webinar Highlights Compelling Advantages of FPGAs appeared first on Edge AI and Vision Alliance.

]]>
Upcoming Webinar on Last Mile Logistics https://www.edge-ai-vision.com/2026/01/upcoming-webinar-on-last-mile-logistics/ Thu, 22 Jan 2026 23:21:47 +0000 https://www.edge-ai-vision.com/?p=56615 On January 28, 2026, at 11:00 am PST (2:00 pm EST) Alliance Member company STMicroelectronics will deliver a webinar “Transforming last mile logistics with STMicroelectronics and Point One” From the event page: Precision navigation is rapidly becoming the standard for last mile delivery vehicles of all types. But what does it truly take to keep […]

The post Upcoming Webinar on Last Mile Logistics appeared first on Edge AI and Vision Alliance.

]]>
On January 28, 2026, at 11:00 am PST (2:00 pm EST), Alliance Member company STMicroelectronics will deliver a webinar, “Transforming Last Mile Logistics with STMicroelectronics and Point One.” From the event page:

Precision navigation is rapidly becoming the standard for last mile delivery vehicles of all types. But what does it truly take to keep these machines on track, delivery after delivery, in challenging urban environments?

Join industry leaders from Point One Navigation and STMicroelectronics as we explore the unique challenges faced by engineers designing these specialized delivery robots and vehicles. Learn about the critical technologies, from microcomputing hardware and GNSS receivers to precision corrections and advanced sensor fusion, that ensure your vehicles navigate safely through complex urban terrain, GPS-denied areas, and high-density environments.

Packed with proven tips, tricks, and lessons learned from working with dozens of engineering teams in the last mile delivery world, this webinar is essential for OEMs ready to accelerate their autonomous logistics solutions.

Register Now »

Featured Speakers:

Mike Slade, GNSS Marketing Lead, Americas, STMicroelectronics

Mike, ST’s GNSS Marketing Lead for the Americas, holds a BS in EE & Mathematics and an MBA in Global Marketing. He started developing GNSS software and algorithms in 2000 for the Motorola Mobile Devices Lab’s GAM GNSS chipset designed for cellular E911 compliance. He joined the ST Teseo GNSS team in 2007, where he has done product software development, applications, strategic technical marketing, and program management.

Gabe Amancio, Head of Application Engineering, Point One Navigation

Gabe is Point One’s Head of Application Engineering and has deep expertise in precision GNSS. His expertise spans technical applications, corrections, position engine integration (both hardware and software), API integration, and the critical phases of proof-of-concept scoping and testing. Prior to Point One, Gabe earned his Bachelor’s in Electrical Engineering from Cal Poly SLO and honed his skills in the semiconductor industry, focusing on sales and application engineering.

What You Will Learn:

- How to achieve continuous, centimeter-accurate positioning in challenging urban environments (e.g., urban canyons, under structures, in parking garages).
- The crucial role of STMicroelectronics’ Teseo VI GNSS technology and advanced IMUs in maintaining position accuracy.
- Leveraging Point One’s robust Polaris RTK network for reliable corrections without a local base station.
- Strategies for sensor fusion (GNSS, RTK, IMU, odometry, vision) to ensure continuity and safety in GPS-denied areas.
- Real-world examples and practical insights from successful last mile delivery OEM deployments.

For more information and to register, visit the event page.

The post Upcoming Webinar on Last Mile Logistics appeared first on Edge AI and Vision Alliance.

]]>