Namuga Vision Connectivity – Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/namuga-vision-connectivity/
Designing machines that perceive and understand.

NAMUGA Successfully Concludes CES Participation, Official Launch of Next-Generation 3D LiDAR Sensor ‘Stella-2’
https://www.edge-ai-vision.com/2026/01/namuga-successfully-concludes-ces-participation-official-launch-of-next-generation-3d-lidar-sensor-stella-2/
January 22, 2026

Las Vegas, NV, Jan 15 — NAMUGA announced that it successfully completed the launch of its new product, Stella-2, at CES 2026, the world’s largest IT and consumer electronics exhibition, held in Las Vegas, USA, from January 6 to 9.

The newly unveiled Stella-2 is a solid-state LiDAR jointly developed by NAMUGA and Lumotive. Compared with its predecessor, it significantly improves sensing distance and frame rate, enabling more precise and proactive responses in outdoor environments. Beyond meetings with existing partners such as Infineon, LIPS, and PMD, NAMUGA also received a series of new collaboration proposals.

The key themes of this year’s CES were undoubtedly Physical AI and robotics. As demand for next-generation sensors surged across industries including robotics, smart infrastructure, and autonomous driving, NAMUGA’s 3D sensing technology and large-scale mass production experience drew significant attention as key competitive strengths. Notably, NAMUGA was recently selected as a supplier of 3D sensing modules for a global automotive robot platform.

Tangible outcomes were also achieved. At CES 2026, NAMUGA finalized an initial supply of Stella-2 samples to a North American e-commerce big tech partner. This achievement demonstrates NAMUGA’s competitiveness, as the samples passed the partner’s stringent technical and quality standards. Building on this supply, NAMUGA plans to explore expanding its 3D sensing-based solutions to the partner’s logistics robots.

Meanwhile, Hyundai Motor Group Executive Chair Euisun Chung’s visit to the Samsung Electronics booth, where he proposed combining MobED with robot vacuum cleaners, drew considerable attention. The 3D sensing camera that NAMUGA supplies for AI robot vacuum cleaners is a core component and a high-value-added technology essential for distance measurement.

NAMUGA CEO Lee Dong-ho stated, “Through CES 2026, we were able to confirm the high level of interest and potential surrounding 3D sensing technologies among IT companies,” adding, “As NAMUGA’s 3D sensing technology continues to be adopted by global automotive and e-commerce companies, we are keeping pace with global trends in line with the advent of the Physical AI era.”

NAMUGA CEO Lee Dong-ho discussing 3D robot sensor strategies at CES 2026

NAMUGA CEO Lee Dong-ho introducing Stella-2 with Lumotive CEO Sam Heidari at CES 2026

Namuga Vision Connectivity Demonstration of Compact Solid-state LiDAR for Automotive and Robotics Applications
https://www.edge-ai-vision.com/2025/07/namuga-vision-connectivity-demonstration-of-compact-solid-state-lidar-for-automotive-and-robotics-applications/
July 17, 2025

Min Lee, Business Development Team Leader at Namuga Vision Connectivity, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates a compact solid-state LiDAR solution tailored for the automotive and robotics industries.

This solid-state LiDAR features high precision, fast response time, and no moving parts—ideal for autonomous driving, obstacle detection, and robotic navigation. Its compact form factor enables easy integration into various system architectures, while its low power consumption makes it well-suited for energy-efficient applications.

Namuga Vision Connectivity Demonstration of an AI-powered Total Camera System for an Automotive Bus Solution
https://www.edge-ai-vision.com/2025/07/namuga-vision-connectivity-demonstration-of-an-ai-powered-total-camera-system-for-an-automotive-bus-solution/
July 16, 2025

Min Lee, Business Development Team Leader at Namuga Vision Connectivity, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates his company’s AI-powered total camera system.

The system is designed for integration into public transportation, especially buses, enhancing safety and automation. It includes front-view, side-view, and in-cabin cameras powered by AI to detect objects, monitor driver behavior, and assist with smart fleet management. This cutting-edge solution supports a wide range of automotive applications including ADAS, passenger monitoring, and city transport innovation.

Namuga Vision Connectivity Demonstration of a Real-time Eye-tracking Camera Solution with a Glasses-free 3D Display
https://www.edge-ai-vision.com/2025/07/namuga-vision-connectivity-demonstration-of-a-real-time-eye-tracking-camera-solution-with-a-glasses-free-3d-display/
July 16, 2025

Min Lee, Business Development Team Leader at Namuga Vision Connectivity, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Lee demonstrates a real-time eye-tracking camera solution that accurately detects the viewer’s eye position and angle.

This data enables a glasses-free 3D display experience using an advanced IR camera system. Ideal for applications in retail, product design, interactive advertising, and entertainment, this innovative solution enhances user engagement through immersive visual interaction.

NAMUGA Unveils Stella-2: Compact, Solid-state Lidar Powered by Lumotive, at Embedded Vision Summit
https://www.edge-ai-vision.com/2025/05/namuga-unveils-stella-2-compact-solid-state-lidar-powered-by-lumotive-at-embedded-vision-summit/
May 24, 2025

REDMOND, Wash., May 21, 2025 /PRNewswire/ — Lumotive, a leader in programmable optical semiconductor technology, today announced that NAMUGA Co., Ltd., a leading manufacturer of advanced camera modules, will debut its first solid-state 3D lidar sensor—Stella-2, powered by Lumotive’s Light Control Metasurface (LCM) technology—at the upcoming Embedded Vision Summit in California.

Stella-2 brings software-defined intelligence, a compact form factor, and robust outdoor performance to commercial robotics and industrial automation. Applications include autonomous vacuum cleaners, lawnmowers, warehouse robots, and industrial equipment.

“NAMUGA’s launch of Stella-2 is a powerful example of how Lumotive’s Light Control Metasurface technology is enabling the next generation of intelligent, adaptable 3D sensing solutions,” said Dr. Sam Heidari, CEO of Lumotive. “We’re proud to support NAMUGA in delivering a versatile LiDAR platform that meets the real-world needs of robotics and industrial automation, and we’re excited to see our technology drive broader adoption of solid-state LiDAR across new markets.”

Stella-2: Designed for Robotics and Autonomy

  • Software-Defined Sensing: One sensor, many roles—adaptive scanning customizes field-of-view, range, and frame rates (illustrated in the hypothetical sketch below).
  • Compact and Power-Efficient: Small form factor, MIPI interface, and <15W power consumption for easy integration.
  • All-in-One Perception: Replaces multiple sensors with a single hardware unit configurable for cliff detection, obstacle avoidance, SLAM, and more.
  • Indoor and Outdoor Ready: 30m typical range, 80m max, 120° x 90° field-of-view, and HDR mode for reliable navigation in varied environments.

Available both as an integratable sensor module and in a sealed enclosure format, Stella-2 simplifies 3D perception for a wide range of robotic platforms.
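
To make "software-defined sensing" concrete, here is a purely hypothetical configuration sketch in Python. NAMUGA and Lumotive have not published a public API, so every name and field below is invented for illustration, with ranges loosely drawn from the spec list above: the point is that one sensor is reprogrammed per task rather than swapped for different hardware.

```python
from dataclasses import dataclass

@dataclass
class ScanProfile:
    """Hypothetical per-task scan settings for a software-defined LiDAR."""
    fov_h_deg: float
    fov_v_deg: float
    max_range_m: float
    frame_rate_hz: float

# One sensor, many roles: the same hardware is reconfigured per task.
PROFILES = {
    "cliff_detection": ScanProfile(120.0, 20.0, 5.0, 30.0),      # narrow, fast, short range
    "obstacle_avoidance": ScanProfile(120.0, 90.0, 30.0, 15.0),  # full FOV, mid range
    "slam": ScanProfile(120.0, 90.0, 30.0, 10.0),                # full FOV, typical range
}

def configure(task: str) -> ScanProfile:
    """Select the scan profile for the current task (illustrative only)."""
    return PROFILES[task]

print(configure("slam"))
```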

“With Stella-2, we’re delivering a versatile and compact 3D perception solution that is ready for the diverse demands of modern robotics,” said Don Lee, CEO at NAMUGA. “Lumotive’s technology enabled us to rapidly develop a feature-rich lidar that meets the cost, power, and performance needs of commercial and industrial platforms, all with a flexible software-defined architecture.”

The launch of Stella-2 reflects the deepening collaboration between Lumotive and NAMUGA, driven by a shared mission to democratize high-performance 3D sensing through scalable, software-defined solutions. Embedded Vision Summit marks a major milestone in bringing this vision to market.

About Lumotive

Lumotive’s award-winning programmable optical semiconductors improve perception, increase computing power, and enable reliable high-speed communication in various industries. The Light Control Metasurface (LCM™) chip is a patented, software-defined photonic beamforming solid-state technology. As the first of its kind, it meets essential needs in various sectors, including 3D sensing and AI computing. Lumotive was named Fast Company’s Next Big Thing in Tech and won three CES Innovation Awards. Headquartered in Redmond, WA, with offices in San Jose, CA, and Vancouver, Canada, Lumotive is backed by notable investors, including Gates Frontier, MetaVC Partners, Quan Funds, Samsung Ventures, and Swisscom Ventures. For more information, please visit www.lumotive.com.

About NAMUGA

Founded in 2004 and based in Pangyo, South Korea’s Silicon Valley, NAMUGA has been at the forefront of innovation in the 3D sensing camera module industry. The company specializes in high-performance camera modules and 3D technology solutions, catering to top-tier markets worldwide. NAMUGA’s products play a critical role in various sectors, including mobile, micro actuator, AR/VR, mobility, security, and biomedical industries.

NAMUGA operates a state-of-the-art, 58,000-square-meter factory in Vietnam, which allows it to manage a fully integrated development and manufacturing process. This facility enables NAMUGA to meet diverse global customer needs effectively, ensuring high standards of precision, quality, and reliability. The company also holds multiple certificates in module technology R&D, highlighting its capability to deliver advanced solutions that meet the evolving demands of modern technology markets. For more information, please visit NAMUGA.com.

Key Drone Terminology: A Quick Guide for Beginners
https://www.edge-ai-vision.com/2025/05/key-drone-terminology-a-quick-guide-for-beginners/
May 23, 2025

This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity.

As drone technology becomes more accessible and widespread, it’s important to get familiar with the basic terms that define how drones work and how we control them. Whether you’re a hobbyist, a content creator, or someone working in industrial drone applications, understanding these concepts will help you better navigate the drone ecosystem.

In this blog, we break down essential drone terminology into intuitive categories and explain each term, supported by easy-to-follow infographics. Let’s get started!

Control-Related Terms

This group of terms refers to how we communicate with and control a drone.

  • Bind: The process of linking a drone to its controller.
  • Controller: The handheld device or app used to fly and navigate the drone.
  • First Person View (FPV): A perspective that lets the pilot see from the drone’s viewpoint, often via live video.
  • Return to Home (RTH): A safety feature where the drone automatically returns to its take-off location when the signal is lost or battery is low.

Control

Navigation & Positioning

Drones rely on various sensors and systems to understand their surroundings and maintain their position.

  • GPS: Global Positioning System – helps the drone understand where it is on the map.
  • Altitude: The height of the drone above ground level.
  • Yaw: Rotation of the drone left or right around its vertical axis.
  • Throttle: Controls how much power is sent to the motors, affecting height and speed.

Flight Controls

Geospatial & Sensor Technology

These terms are common in industrial, agricultural, or mapping-related drone applications.

  • Geofencing: A virtual boundary that restricts drone flight to a predefined area (see the sketch after this list).
  • Ground Control Station (GCS): A computer system or tablet that manages drone flight remotely.
  • Inertial Navigation System (INS): A navigation method using internal motion sensors when GPS is unavailable.
  • LiDAR: Light Detection and Ranging – a sensor that maps surroundings using laser pulses.

Navigation Systems
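
To make the geofencing idea concrete, here is a minimal Python sketch of a circular geofence check. The coordinates, radius, and the hand-off to Return to Home (RTH) are illustrative assumptions, not the API of any real flight stack.

```python
from dataclasses import dataclass
import math

@dataclass
class CircularGeofence:
    """A circular boundary around the home point, in local meters."""
    center_x: float
    center_y: float
    radius_m: float

    def contains(self, x: float, y: float) -> bool:
        # Inside the fence when the distance to the center is within the radius.
        return math.hypot(x - self.center_x, y - self.center_y) <= self.radius_m

fence = CircularGeofence(0.0, 0.0, 100.0)
drone_x, drone_y = 40.0, 95.0  # example telemetry: about 103 m from home

if not fence.contains(drone_x, drone_y):
    # A real flight controller would typically trigger RTH at this point.
    print("Outside geofence: initiating Return to Home")
```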

Final Thoughts

We hope this guide gives you a clearer picture of the drone landscape. At NAMUGA, we specialize in developing advanced camera modules—including RGB, IR, 3D ToF, and LiDAR—optimized for drone and gimbal integration. Follow us for more insights and solutions in smart imaging technology.

A Complete Guide to SAE Autonomous Driving Levels 0–5 and Market Outlook
https://www.edge-ai-vision.com/2025/05/a-complete-guide-to-sae-autonomous-driving-levels-0-5-and-market-outlook/
May 2, 2025

This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity.

As autonomous driving technology becomes increasingly commercialized, SAE (Society of Automotive Engineers) has classified driving automation levels from Level 0 to Level 5. These standards play a key role not only in guiding technological development but also in shaping laws and regulations and helping consumers better understand the technology.

In this article, we explain what each SAE autonomous driving level means, highlight the differences between them, explore the current and future trends of the market, and show how NAMUGA is preparing for the autonomous driving era with its cutting-edge technology.

SAE Autonomous Driving Levels 0–5 at a Glance

The SAE levels of driving automation (IDTechEx research)
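
Since the original infographic is not reproduced in this text version, the following compact Python mapping paraphrases the same at-a-glance summary. The wording is an informal condensation of the level descriptions below, not the official SAE J3016 text.

```python
# Level -> (name, informal one-line summary)
SAE_LEVELS = {
    0: ("No Automation", "Driver performs all driving tasks; warnings only"),
    1: ("Driver Assistance", "One automated function, e.g. ACC or steering assist"),
    2: ("Partial Automation", "Steering and speed automated; driver must monitor"),
    3: ("Conditional Automation", "System drives in defined conditions; driver on standby"),
    4: ("High Automation", "Driverless within its Operational Design Domain (ODD)"),
    5: ("Full Automation", "Driverless in all environments and scenarios"),
}

for level, (name, summary) in SAE_LEVELS.items():
    print(f"Level {level} – {name}: {summary}")
```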

Detailed Breakdown of SAE Autonomous Driving Levels

Level 0 – No Automation

All driving tasks are performed by the human driver. Some warning systems (e.g., lane departure warning) may be present, but there are no features that actively control the vehicle.

Level 1 – Driver Assistance

Only one function is automated. For instance, adaptive cruise control (ACC) or steering assistance may be active, but the driver must keep their hands on the wheel and remain aware of the driving environment.

Level 2 – Partial Automation

The vehicle can simultaneously manage acceleration, braking, and steering. This includes systems like Tesla Autopilot or Hyundai HDA. However, the driver must always monitor the road and be ready to intervene immediately if needed.

Level 3 – Conditional Automation

In certain conditions (such as highways), the vehicle can handle all driving tasks. The driver can take their eyes off the road, but must remain on standby and respond promptly when the system requests intervention.

Level 4 – High Automation

In designated environments (e.g., specific urban routes), the vehicle can operate autonomously without a driver. Full self-driving is possible within the ODD (Operational Design Domain). Companies like Waymo (U.S.) and Baidu (China) have already commercialized such services. However, performance may be limited under adverse conditions like severe weather.

Level 5 – Full Automation

No driver seat is needed — the vehicle can fully drive itself in all environments and scenarios. It must detect and respond to all conditions (snow, rain, crash risks, etc.) without any human involvement.

Companies like Zoox are developing fully autonomous vehicles designed for Level 5 use and are currently conducting road tests.

Autonomous Driving Market Outlook

As of 2025, Level 2 (Partial Automation) dominates the autonomous vehicle market in terms of adoption and demand. This level has become the mainstream choice due to several key advantages:

  • Minimal Regulatory Hurdles: Since drivers remain actively involved, Level 2 systems are permissible in most countries without complex legal constraints.
  • Strong Cost-to-Benefit Ratio: Compared to higher-level autonomous systems, Level 2 solutions are more cost-effective while still delivering significant practical benefits.
  • Consumer Trust and Familiarity: These systems offer a smooth transition for users, allowing them to gradually adapt to autonomous features without fully relinquishing control.

Looking ahead, Level 4 and 5 vehicles are expected to enter the market in earnest around 2030, enabling fully autonomous driving under specific or even all conditions. While camera-only setups and camera + LiDAR + RADAR fusion are both common at Level 2, multi-sensor fusion is projected to be a key component of higher levels of autonomy as well.

NAMUGA: Preparing for the Era of Autonomous Driving

NAMUGA owns key technologies for autonomous driving, including high-performance camera modules and 3D sensing (Time-of-Flight) solutions.

NAMUGA’s Strengths

  • Experience in developing and mass-producing in-vehicle camera modules for Google Waymo
  • In partnership discussions with global OEMs and Tier 1 suppliers
  • Product lineup that covers everything from ADAS to full autonomous driving
  • Solutions with high reliability and performance

Through its innovative vision systems, NAMUGA is shaping the future of autonomous driving together with its partners.

Understanding 3D Camera Technologies: Stereo Vision, Structured Light and Time-of-flight
https://www.edge-ai-vision.com/2025/04/understanding-3d-camera-technologies-stereo-vision-structured-light-and-time-of-flight/
April 25, 2025

This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity.

In the rapidly evolving field of 3D imaging, three primary technologies stand out: Structured Light, Time-of-Flight (ToF) and Stereo Vision. Each offers unique advantages and is suited for specific applications. Let’s explore each technology in detail, examine their applications and see how NAMUGA integrates them into its innovative products.

Stereo Vision

How it works

Stereo Vision works by placing two or more cameras at slightly different angles to capture the same scene. The system then analyzes the disparity between the images to calculate depth information, mimicking the way human binocular vision works.
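
As a rough illustration of the disparity-to-depth relationship, here is a minimal Python sketch using OpenCV’s classic block matcher. The image paths, focal length, and baseline are placeholder example values, not parameters of any particular product.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulated depth: Z = f * B / d
f_px = 700.0        # focal length in pixels (assumed)
baseline_m = 0.06   # distance between the two cameras (assumed)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]
```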

Strengths

Stereo Vision is known for its simple architecture and cost-efficiency. According to ScienceDirect, these systems can operate under ambient light without the need for a laser line or projection pattern, making them less susceptible to interference from sunlight.

In fact, stereo vision has even been used in space exploration, such as NASA’s Mars Pathfinder mission, to generate highly accurate 3D terrain maps.

Limitations

However, stereo vision systems may suffer in low-light conditions. Miniaturization is more difficult, and complex algorithms are required to match image features precisely between views, making processing more computationally intensive.

Structured Light

How it works

Structured Light systems project a known pattern (e.g., grids or dots) onto a surface. A camera captures the deformation of this pattern as it hits the object, and depth is calculated via triangulation.
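
The triangulation step reduces to the same geometry as stereo, with the projector standing in for the second camera. A minimal sketch, using assumed example numbers:

```python
def depth_from_pattern_shift(shift_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a projected feature from the shift it shows in the camera image.

    The projector plays the role of a second camera, so the familiar
    triangulation relation applies: Z = f * B / shift.
    """
    if shift_px <= 0:
        raise ValueError("a positive shift is required for triangulation")
    return focal_px * baseline_m / shift_px

# A dot shifted 12 px, with an 800 px focal length and an 8 cm
# camera-projector baseline, triangulates to roughly 5.3 m.
print(depth_from_pattern_shift(12.0, 800.0, 0.08))
```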

Strengths

Structured Light is highly precise and offers excellent resolution. A study published on ScienceDirect highlights its capability for “accurate, rapid measurement and active control,” making it ideal for both industrial and research settings.

It performs particularly well in controlled lighting environments and can capture fine detail even on textureless surfaces.

Limitations

Performance can degrade in bright outdoor settings due to light interference, and highly reflective or transparent surfaces can distort the projected pattern.

Time-of-Flight (ToF)

How it works

ToF cameras emit infrared (IR) light and measure the time it takes for the light to bounce back from objects. This time delay is converted into depth information to generate a 3D depth map of the scene.
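
The underlying arithmetic for direct ToF is simple: distance is half the round trip of light. A minimal sketch with an assumed example pulse time:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_s: float) -> float:
    """Direct ToF: the pulse travels out and back, so halve the total path."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse that returns after 200 ns corresponds to roughly 30 m.
print(tof_distance_m(200e-9))  # ~29.98
```

A ToF camera applies this per pixel, producing a full depth map per frame.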

Strengths

ToF systems can capture high-speed 3D images in real time while simultaneously providing intensity data. This makes them ideal for robotics, industrial automation, and other applications requiring fast response times.

Recent research has shown that ToF systems can record up to 75 depth frames per second, allowing for smooth and accurate tracking of moving objects.

Limitations

Compared to structured light systems, ToF may offer lower depth resolution. Surface reflectivity and color can also affect measurement accuracy.

NAMUGA’s development milestone in 3D camera module technology

A well-known example of stereo-based 3D sensing is Intel’s RealSense camera. It captures depth by comparing two infrared images taken from slightly different viewpoints. Some models also include an IR dot projector to improve performance on low-texture surfaces — making it a hybrid approach that combines stereo vision and structured light.

Intel RealSense
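
For readers who want to try this, a minimal depth capture with Intel’s pyrealsense2 SDK looks roughly like the following; it requires a connected RealSense device, and the stream parameters are typical example values.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 depth stream, 16-bit depth format, 30 frames per second.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel.
    print(depth.get_distance(320, 240))
finally:
    pipeline.stop()
```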

At NAMUGA, we bring extensive development experience across all three 3D sensing technologies.

Our Titan series ToF camera modules are optimized for AR/VR glasses and robot vacuum cleaners, offering compact size and accurate depth perception for spatial understanding and gesture control. In particular, Titan is currently in mass production for Samsung’s robot vacuum cleaner, providing reliable and precise navigation through real-time 3D sensing. Meanwhile, our Pinocchio series is designed for homecare robots and portable projectors, enabling smart environmental awareness and automated functionality in compact consumer electronics.

Titan 100 & Titan 120 for wearable glasses, robot vacuum

Pinocchio for homecare robot and portable projector product specification

In addition to short-range ToF modules, NAMUGA also develops LiDAR solutions based on ToF technology.

Our Stella series is engineered for automotive and security applications, offering robust 3D perception for autonomous navigation and perimeter detection. Notably, Stella modules are designed to support both short-range and long-range sensing, making them highly adaptable for indoor and outdoor environments across industries such as smart logistics, factory automation, industrial robotics, and spatial monitoring.

Stella series for automotive and security applications

As the demand for 3D vision continues to rise across industries like augmented reality, robotics, and autonomous mobility, the importance of choosing the right sensing technology grows as well. Whether it’s the precision of structured light, the flexibility of stereo vision, or the real-time performance of ToF, each solution offers distinct value.

With a comprehensive portfolio of 3D camera modules and a track record of successful integration in mass-production environments, NAMUGA is your trusted partner for next-generation depth-sensing solutions. We remain committed to advancing imaging technology that powers smarter, safer, and more immersive experiences.

How Does Augmented Reality (AR) Work?
https://www.edge-ai-vision.com/2025/04/how-does-augmented-reality-ar-work/
April 18, 2025

This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity.

As immersive technologies evolve, Augmented Reality (AR) is stepping into the spotlight. While Virtual Reality (VR) once captured the imagination with fully simulated worlds, AR is proving to be the more practical and scalable choice — blending digital information seamlessly into the real world.

In recent years, AR has gained momentum due to two major factors:

  • Persistent connection to the physical world, allowing users to stay grounded while interacting with digital content.
  • Advancements in lightweight, wearable AR hardware, including compact glasses and sensor-packed devices — far more comfortable and versatile than bulky VR headsets.

Let’s explore how AR works, where it’s used, and how NAMUGA is helping drive the next wave of innovation.

What is Augmented Reality (AR)?

Augmented Reality (AR) is a technology that superimposes digital content — such as 3D images, text, animations, or audio — onto the real-world environment in real time. Unlike VR, which immerses users in a completely virtual space, AR enhances the physical world by adding interactive digital layers.

Because AR operates in the user’s actual surroundings, it allows for more natural interaction and can be integrated into everyday activities — from shopping and navigation to training and diagnostics.

How Does AR Work?

AR systems function through four core steps:

  1. Data Collection – Sensors and Cameras
    AR devices begin by capturing environmental data. Using built-in cameras and sensors (GPS, gyroscopes, accelerometers), the system detects the user’s position, movement, and surroundings.
  2. Environmental Mapping – SLAM Technology
    With SLAM (Simultaneous Localization and Mapping), the AR system maps the physical environment in 3D and tracks the user’s location. This enables digital objects to be precisely aligned with the real world — a key for delivering believable and accurate AR experiences.
  3. Rendering – Digital Content Generation and Placement
    Based on real-world context, the system renders digital content such as 3D models, labels, or instructions. The content dynamically adjusts to the user’s perspective, making it appear anchored in the real environment.
  4. Display – Visual Output Through Screens or Glasses
    The final AR experience is displayed through smartphones, tablets, or AR glasses. Users see both the physical world and digital enhancements overlaid, creating an interactive and immersive experience.
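
These four steps map naturally onto a per-frame loop. The following minimal, runnable Python sketch uses invented stand-in stubs; the capture, SLAM-update, render, and display functions are placeholders for illustration, not the API of any real AR SDK.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Simplified camera pose; real SLAM tracks full 6-DoF position and orientation."""
    x: float = 0.0
    y: float = 0.0
    yaw_deg: float = 0.0

def capture_frame(t: int) -> dict:
    # 1. Data collection: camera image plus IMU/GPS readings would arrive here.
    return {"t": t}

def update_slam(frame: dict, pose: Pose) -> Pose:
    # 2. Environmental mapping: SLAM refines the pose and 3D map each frame.
    return Pose(pose.x + 0.01, pose.y, pose.yaw_deg)  # pretend the user drifted forward

def render_overlay(pose: Pose) -> str:
    # 3. Rendering: re-project a fixed world anchor into the current view so
    #    the digital label appears glued to the real scene.
    anchor_view_x = 1.0 - pose.x
    return f"label anchored at view x={anchor_view_x:.2f}"

def display(frame: dict, overlay: str) -> None:
    # 4. Display: composite the overlay onto the live camera view.
    print(f"frame {frame['t']}: {overlay}")

pose = Pose()
for t in range(3):  # three iterations of the AR loop
    frame = capture_frame(t)
    pose = update_slam(frame, pose)
    display(frame, render_overlay(pose))
```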

Real-World Applications of AR

AR is now being deployed across a wide range of industries:

Manufacturing

In manufacturing, AR delivers real-time guidance to assembly line workers, improving accuracy and productivity. When it comes to equipment maintenance, AR-based manuals overlay digital instructions directly onto machinery, enhancing both the speed and precision of repair tasks. In remote collaboration scenarios, AR enables technicians and engineers to share visual 3D models and receive expert feedback instantly, streamlining problem-solving and reducing downtime.

Healthcare

The healthcare sector is adopting AR for both medical training and patient care. Surgeons and medical staff use AR-based 3D simulations to plan procedures with greater accuracy, while anatomy visualizations aid in education and diagnostics. For patients, AR helps improve communication by visually illustrating treatment options or procedures, leading to better understanding and informed decision-making.

Retail

Retailers are using AR to elevate the shopping experience by offering virtual try-ons and previews. Consumers can see how furniture fits in their living room, try on glasses, or apply makeup virtually — all through their smartphone or AR glasses. These immersive interactions help customers make more confident purchasing decisions and significantly increase conversion rates.

Automotive

In the automotive industry, AR enhances both safety and user experience. Head-Up Displays (HUDs) project essential driving data — such as speed and navigation — directly into the driver’s line of sight, reducing distraction. Additionally, AR-powered navigation systems overlay route directions onto the real road, providing intuitive, real-time guidance that improves situational awareness behind the wheel.

Why AR is Gaining More Ground Than VR

While VR creates entirely new worlds, AR enhances the one we live in — making it more usable in daily life and work. Add to that the hardware advantage: AR glasses and devices are getting smaller, lighter and smarter, making them suitable for longer wear and real-world applications.

As industries shift toward spatial computing and immersive tech, AR is fast becoming the go-to solution for both consumer and enterprise applications.

NAMUGA: Building the Future of AR with Smarter Camera Modules

At NAMUGA, we are proud to be at the heart of the AR hardware ecosystem.

We began our journey developing advanced camera modules for smartphones, and today, we’re expanding into AR and VR with solutions that power immersive experiences across sectors.

Our camera modules:

  • Enable precise environmental mapping for SLAM
  • Support high-resolution image capture in lightweight, compact designs
  • Deliver low-latency performance ideal for real-time AR rendering

As AR hardware demands faster, smarter, and more compact imaging systems, NAMUGA is meeting the challenge — enabling everything from smart glasses to industrial headsets and consumer wearables.

NAMUGA to Supply 3D Camera Module for Award-winning Intel RealSense-based Product at ISC West 2025
https://www.edge-ai-vision.com/2025/04/namuga-to-supply-3d-camera-module-for-award-winning-intel-realsense-based-product-at-isc-west-2025/
April 14, 2025

  • NAMUGA to provide 3D stereo camera for next-generation smart access authentication system

  • NAMUGA reaffirms its leadership in 3D sensing camera technology, built over more than a decade

April 10, 2025 – KOSDAQ-listed company NAMUGA Co., Ltd. (190510) announced that it will supply stereo cameras, a key component of Intel’s RealSense module, for the next-generation biometric authentication gate system “BioAffix Gate Vision,” co-developed by global semiconductor company Intel and Ones Technology. The RealSense module performs core functions in this system, and NAMUGA is responsible for the high-precision 3D camera component used in the module.

“BioAffix Gate Vision” was officially recognized for its technological excellence by winning in the Biometric category at the New Products and Solutions (NPS) Awards, hosted by the Security Industry Association (SIA), during ISC West 2025—the world’s largest security exhibition held in Las Vegas, USA.

As a prestigious global event leading security technology trends, ISC West draws numerous security industry experts and companies from around the world.

The award-winning BioAffix Gate Vision is a smart access authentication system that uses AI-powered, high-precision 3D facial recognition technology to rapidly and accurately identify users. It integrates with various authentication methods such as BLE and smart cards, and is praised for its security and convenience, operating reliably in both indoor and outdoor environments.

The integrated Intel RealSense stereo camera module combines real-time 3D depth sensing, anti-spoofing, and AI-based object recognition features into a high-performance sensor optimized for biometric security systems.

This supply agreement further validates NAMUGA’s 3D sensing technology and manufacturing expertise, which it has developed over more than 10 years in the field of 3D camera module research and development.

NAMUGA’s 3D camera solutions are widely applicable to advanced AI devices such as robots, facial recognition systems, and autonomous vehicles, with high potential for future expansion.

Currently, NAMUGA is working closely with Intel on the RealSense project, collaborating on optical design, precise sensor alignment, and mass production optimization, while also progressing through pre-verification and production readiness for the high-precision camera modules.

A company representative stated, “It is especially meaningful that this award-winning product is directly connected to the RealSense project in which we are participating,” and added, “We plan to accelerate the commercialization of our technology and entry into the global security and biometric authentication supply chain.”

Meanwhile, 3D sensing technology is rapidly expanding into fields such as autonomous driving, industrial robotics, access security, medical diagnostics, and XR (extended reality). NAMUGA aims to continue strengthening its global competitiveness in smart camera and AI vision solutions in line with these trends.
