The global market for humanoid robots, widely categorized as “embodied AI”, is projected to grow rapidly over the next 10 years, with IDTechEx forecasting a ‘14-fold market expansion in five years’.
This surge is fueled by major players such as Tesla and BYD, which plan to expand humanoid deployment in their factories more than tenfold between 2025 and 2026 while targeting a cost reduction of more than 25 percent per humanoid robot.
With 2025 seen as the industry’s take-off year, IDTechEx anticipates that the market for humanoid robot sensory components, including LiDAR, encoders, torque sensors, 6-axis sensors, IMUs, MEMS sensors, and cameras, will reach approximately $10 billion within 10 years.
This growth presents major opportunities for component suppliers. IDTechEx’s recent research report, Humanoid Robots 2025-2035: Technologies, Markets and Opportunities, details the market opportunities and pain points for these components, along with their manufacturing, technical, commercial, and regulatory challenges.
Sensors are essential to the functionality of humanoid robots, serving a wide range of purposes. They enable navigation and object detection (for example, LiDAR and cameras), force control (for example, torque and tactile sensors), and position and stability management (for example, IMUs).
This article focuses primarily on tactile sensors and navigation sensors, particularly LiDAR and cameras.
Tactile sensors
Tactile sensors are critical components in humanoid robots, especially in the hands, where they enable tasks such as picking and placing objects.
These sensors are complex systems that integrate inputs such as force, slip, pressure, and torque to generate data that guides the robot’s hand movements based on an object’s shape, stiffness, and softness.
As one of the most valuable subsystems in humanoid robots, tactile sensors significantly enhance grip control and object manipulation. They allow for real-time adjustment of grip forces to prevent slippage and enable object handling without relying on visual input.
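To make that feedback loop concrete, the sketch below shows a simplified slip-reactive grip controller of the kind tactile sensing enables: when the sensor reports incipient slip, the commanded grip force is nudged upward. All names, forces, and thresholds here are illustrative assumptions, not any vendor’s API or a method from the report.

```python
from dataclasses import dataclass

@dataclass
class TactileReading:
    """Illustrative tactile sample: normal force (N) and a slip flag."""
    normal_force: float
    slip_detected: bool

def adjust_grip(reading: TactileReading, commanded_force: float,
                step: float = 0.2, max_force: float = 15.0) -> float:
    """Minimal slip-reactive grip controller (hypothetical values).

    If the tactile sensor reports incipient slip, increase the grip
    force by a small step, capped at a safe maximum; otherwise hold
    the current command unchanged.
    """
    if reading.slip_detected:
        commanded_force = min(commanded_force + step, max_force)
    return commanded_force

# Example: a slip event nudges the commanded force from 5.0 N to 5.2 N.
reading = TactileReading(normal_force=4.8, slip_detected=True)
print(adjust_grip(reading, commanded_force=5.0))  # 5.2
```

Note that this loop runs entirely on tactile data, which is what lets a robot keep hold of an object even when its cameras cannot see the contact point.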
Looking ahead, tactile sensors will also play a role in object identification by detecting surface properties and determining material characteristics. Several technologies underpin tactile sensing, including capacitive, optical, and magnetic sensors.
Among these, optical tactile sensors typically offer the highest precision, but such accuracy is often unnecessary for current humanoid applications. Capacitive and magnetic sensors generally provide sufficient resolution for most tasks.
IDTechEx sees flexible capacitive tactile sensors as a promising direction due to their adaptability, though their sensitivity to humidity and temperature limits their use in dynamic environments.
For more complex measurements, such as 6D force sensing, magnetic and optical solutions may still be required. Tactile sensors typically range in cost from US$0.4/mm² to US$1.2/mm², depending on factors such as volume, supplier, and technology type.
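As a rough illustration of what that cost band implies, the short calculation below applies the article’s per-area prices to an assumed fingertip pad of 300 mm²; the pad area and pad count are assumptions for illustration, not figures from the report.

```python
# Rough cost illustration for tactile sensor coverage.
# The US$/mm^2 band comes from the article; the areas are assumptions.
cost_low, cost_high = 0.4, 1.2   # US$/mm^2
fingertip_area = 300             # mm^2, assumed area of one fingertip pad
hand_pads = 5                    # assumed fingertip pads per hand

per_fingertip = (fingertip_area * cost_low, fingertip_area * cost_high)
per_hand = (per_fingertip[0] * hand_pads, per_fingertip[1] * hand_pads)

print(f"Per fingertip: US${per_fingertip[0]:.0f}-{per_fingertip[1]:.0f}")
print(f"Per hand (fingertips only): US${per_hand[0]:.0f}-{per_hand[1]:.0f}")
# Per fingertip: US$120-360; per hand: US$600-1800
```

Even under these assumptions, fingertip coverage alone can run to hundreds of dollars per hand, which is why cost-effective integration is a recurring theme for suppliers.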
More details on tactile sensors and their commercial applications in humanoids can be found in the report Humanoid Robots 2025-2035: Technologies, Markets and Opportunities.
LiDAR and cameras
LiDAR and cameras are two essential sensory technologies that enable humanoid robots to perform navigation, collision avoidance, and object detection.
As of 2025, most humanoid robots use a combination of LiDAR and cameras, with Tesla’s Optimus being a notable exception, relying solely on cameras. The integration of both technologies is largely driven by the growing complexity of the environments these robots must navigate.
Vision alone often falls short in the unpredictable, dynamic conditions of real-world applications.
According to Hesai, a leading LiDAR supplier, vision-based systems may be adequate in controlled environments like production lines, where tasks and lighting conditions are stable.
However, humanoid robots are increasingly expected to operate in diverse and unstructured settings, interacting with people, adapting to changing light levels, and navigating physically complex spaces.
In scenarios with bright sunlight, low light, or rapidly shifting conditions, camera-only systems can struggle with reliability and raise safety concerns in human-robot interactions.
LiDAR offers crucial complementary capabilities. It enables precise path planning, real-time 3D environmental mapping, and reliable obstacle detection – even in poorly lit environments.
These features are essential for humanoids performing tasks like placing objects into confined spaces, avoiding unexpected obstacles in dim conditions, or navigating areas where lighting is limited, such as after factory hours.
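A minimal sketch of the kind of processing LiDAR makes possible is shown below: filtering a robot-centered 3D point cloud for returns that obstruct a straight-ahead corridor. The geometry, coordinate convention, and thresholds are invented for illustration; the point is that this check depends only on measured geometry, not on lighting.

```python
import numpy as np

def obstacles_in_corridor(points: np.ndarray,
                          corridor_halfwidth: float = 0.4,
                          max_range: float = 2.0,
                          min_height: float = 0.05) -> np.ndarray:
    """Return LiDAR points that obstruct a straight-ahead corridor.

    `points` is an (N, 3) array of x (forward), y (lateral), z (up)
    coordinates in meters, centered on the robot. All thresholds are
    illustrative assumptions.
    """
    ahead = (points[:, 0] > 0) & (points[:, 0] < max_range)
    inside = np.abs(points[:, 1]) < corridor_halfwidth
    above_floor = points[:, 2] > min_height
    return points[ahead & inside & above_floor]

# Example: one synthetic return sits in the corridor, one is off to the side.
cloud = np.array([[1.2, 0.1, 0.3],    # in the path -> reported as obstacle
                  [1.5, 1.0, 0.3]])   # off to the side -> ignored
print(obstacles_in_corridor(cloud))
```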
In high-risk settings like mining or tunnel exploration, LiDAR further enhances safety and accuracy, allowing robots to scan and navigate hazardous terrain with confidence.
Looking ahead, the rapid scale-up in humanoid robot deployment is expected to drive substantial growth in demand for sensory components, with the market projected to exceed $10 billion by 2035.
As demand for dexterous capabilities increases, tactile sensors will need to be integrated cost-effectively, enabling fine manipulation without significantly raising overall system expenses.
For LiDAR and cameras, the evolving requirement for full 360-degree spatial awareness in humanoids will influence future design trends.
Unlike autonomous vehicles, which typically rely on long-range, forward-facing LiDAR for lane detection and obstacle avoidance, humanoid robots must operate in diverse environments that require comprehensive spatial coverage.
As a result, next-generation LiDAR systems will likely prioritize wide field-of-view (FOV) designs with minimal blind spots, especially suited for indoor applications where precise, all-directional awareness is critical and operational speeds are lower.