The advancement of artificial intelligence (AI) has ushered in a new era of automated robots that adapt to their environments.
The field of robotics has made remarkable strides over the past few decades, yet it continues to face challenges that prevent it from realizing its full potential. Traditional robots often rely on pre-programmed instructions and restricted configurations, limiting their ability to respond to unforeseen circumstances. AI technologies, which encompass cognition, analysis, inference, and decision-making, enable robots to operate intelligently and significantly enhance their capacity to assist and support humans.
By augmenting robots with AI technologies within engineering systems, we can expect ever more widespread applications in industry, agriculture, logistics, medicine, and beyond, allowing robots to perform complex tasks with greater autonomy and efficiency. This technological enhancement unleashes the potential of robotics in real-world applications, offering solutions to pressing medical and environmental problems and facilitating a paradigm shift towards intelligent manufacturing in the context of Industry 4.0.
With the application of AI, a research team led by Prof. Dan Zhang, Chair Professor of Intelligent Robotics and Automation in the Department of Mechanical Engineering, and Director of the PolyU-Nanjing Technology and Innovation Research Institute at the Hong Kong Polytechnic University (PolyU), has fabricated a number of novel robotic systems with high dynamic performance.
Prof. Zhang's research team has recently proposed a grasp pose detection framework that applies deep neural networks to generate a rich set of omnidirectional (six degrees of freedom, 6-DoF) grasp poses with high precision. To detect the objects to be grasped, convolutional neural networks (CNNs) are applied over multi-scale cylindrical regions with varying radii, providing detailed geometric information about each object's location and size.
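To make the idea concrete, the following is a minimal sketch, in PyTorch, of grouping scene points inside cylinders of several radii around a candidate grasp centre before they are passed to a learned encoder. The radii, sample counts, and the `cylinder_group` helper are illustrative assumptions, not the team's published implementation.

```python
# Minimal sketch (hypothetical radii, sample counts, and helper name; not the
# authors' architecture) of multi-scale cylindrical grouping around a grasp centre.
import torch

def cylinder_group(points, centre, axis, radius, half_height, num_samples=64):
    """Return up to num_samples points lying inside a cylinder of the given
    radius and half-height, centred at `centre` and aligned with `axis`."""
    rel = points - centre                                     # offsets from the centre
    axis = axis / axis.norm()
    along = rel @ axis                                        # signed distance along the axis
    radial = (rel - along.unsqueeze(1) * axis).norm(dim=1)    # distance from the axis
    mask = (radial <= radius) & (along.abs() <= half_height)
    idx = mask.nonzero(as_tuple=True)[0][:num_samples]
    return points[idx]

# Multi-scale grouping: the same centre is described at several radii, so a
# downstream network sees both fine local geometry and wider context.
points = torch.rand(2048, 3)                  # toy scene point cloud
centre = points[0]
axis = torch.tensor([0.0, 0.0, 1.0])          # assumed approach direction
groups = [cylinder_group(points, centre, axis, r, 0.05) for r in (0.02, 0.04, 0.08)]
```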
Multiple multi-layer perceptrons (MLPs) then predict the grasp parameters of the robotic manipulator, including the gripper width, the grasp score (for specific in-plane rotation angles and gripper depths), and collision detection. These parameters are fed into an algorithm within the framework that extends grasps beyond pre-set configurations to generate comprehensive grasp poses tailored to the scene.
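The sketch below illustrates, under assumed dimensions and names (`GraspHeads`, `num_angles`, and `num_depths` are hypothetical), how separate MLP heads could map a per-candidate geometric feature to a gripper width, a grasp score over discretised in-plane rotations and depths, and a collision estimate. It is a sketch of the general technique, not the authors' exact network.

```python
# Minimal sketch of MLP heads over a per-candidate feature vector (hypothetical sizes).
import torch
import torch.nn as nn

class GraspHeads(nn.Module):
    def __init__(self, feat_dim=256, num_angles=12, num_depths=4):
        super().__init__()
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
        self.width = mlp(num_angles * num_depths)       # gripper opening width
        self.score = mlp(num_angles * num_depths)       # grasp quality per (angle, depth)
        self.collision = mlp(num_angles * num_depths)   # collision logit per (angle, depth)

    def forward(self, feat):                            # feat: (batch, feat_dim)
        return {"width": self.width(feat),
                "score": self.score(feat),
                "collision": self.collision(feat)}

heads = GraspHeads()
out = heads(torch.rand(8, 256))   # the best (angle, depth) per candidate is selected downstream
```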
Experiments show that the proposed method consistently outperforms the benchmark method in laboratory simulations and achieves an average success rate of 84.46% in real-world experiments, compared with 78.31% for the benchmark.
In addition, the research team leverages AI technologies to enhance the functionality and user experience of a novel robotic knee exoskeleton for the gait rehabilitation of patients with knee joint impairment. The exoskeleton comprises an actuator powered by an electric motor that actively assists knee flexion and extension, an ankle joint that transfers the weight of the exoskeleton to the ground, and a stiffness adjustment mechanism driven by a second electric motor.
A machine learning algorithm built around a long short-term memory (LSTM) network provides real-time nonlinear stiffness and torque adjustments that mimic the biomechanical characteristics of the human knee joint. The network is trained on a large dataset of electromyography (EMG) signals and knee joint movement data, enabling the exoskeleton's stiffness and torque to be adjusted in real time based on the user's physiological signals and movement conditions. By predicting the necessary adjustments, the system adapts to various gait requirements, enhancing the user's walking stability and comfort.
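As a rough illustration of this idea, the following minimal sketch maps a short window of EMG and knee-kinematics samples to stiffness and torque commands with an LSTM. The channel counts, window length, and the `StiffnessTorqueLSTM` class are assumptions for illustration, not the trained model described by the team.

```python
# Minimal sketch (hypothetical feature layout and sizes) of an LSTM that maps a
# window of EMG and knee-kinematics samples to stiffness and torque commands.
import torch
import torch.nn as nn

class StiffnessTorqueLSTM(nn.Module):
    def __init__(self, n_emg=4, n_kin=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_emg + n_kin, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # outputs: [stiffness, torque]

    def forward(self, window):               # window: (batch, time, n_emg + n_kin)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])         # prediction from the last time step

model = StiffnessTorqueLSTM()
window = torch.rand(1, 100, 7)               # e.g. 100 samples of 4 EMG + 3 kinematic channels
stiffness_cmd, torque_cmd = model(window)[0]
```

In a real controller the predictions would be filtered and bounded before being sent to the actuator.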
The integration of an adaptive admittance control algorithm based on radial basis function (RBF) networks enables the robotic knee exoskeleton to automatically adjust joint angles and stiffness parameters without the need for force or torque sensors. This enhances the accuracy of position control and improves the exoskeleton's responsiveness to different walking postures. The data-driven approach refines the model's predictions and improves overall performance over time.
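The snippet below is a minimal, illustrative sketch of how an RBF network can be adapted online inside a position loop to compensate for unmodelled joint dynamics without a dedicated torque sensor. The gains, basis-function layout, and the `RBFCompensator` class are hypothetical and not the published controller.

```python
# Minimal sketch (illustrative gains and basis functions; not the published controller)
# of online RBF compensation added to a PD position loop for a knee joint.
import numpy as np

class RBFCompensator:
    def __init__(self, centres, width=0.5, lr=0.05):
        self.centres = centres             # basis-function centres over (angle, velocity)
        self.width = width
        self.weights = np.zeros(len(centres))
        self.lr = lr

    def phi(self, x):
        return np.exp(-np.sum((self.centres - x) ** 2, axis=1) / (2 * self.width ** 2))

    def output(self, x):
        return self.weights @ self.phi(x)

    def adapt(self, x, tracking_error):
        # Gradient-style weight update driven by the tracking error.
        self.weights += self.lr * tracking_error * self.phi(x)

centres = np.array([[a, v] for a in np.linspace(0.0, 1.2, 5) for v in np.linspace(-2.0, 2.0, 5)])
rbf = RBFCompensator(centres)
kp, kd = 40.0, 2.0                          # assumed PD gains
q, dq, q_ref = 0.3, 0.0, 0.5                # current angle, velocity, reference angle (rad)
e = q_ref - q
torque_cmd = kp * e - kd * dq + rbf.output(np.array([q, dq]))
rbf.adapt(np.array([q, dq]), e)             # refine the learned compensation online
```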
Experimental results demonstrate that the model outperforms traditional fixed control methods in terms of accuracy and real-time responsiveness, generating the desired reference joint trajectory for users at different walking speeds.
The research from Prof. Zhang and his team reveals that AI techniques, particularly deep learning, have improved the ability of robots to perceive and understand their environments. This advancement contributes to more effective and flexible solutions for handling tasks beyond fixed configurations in standard settings.
The melding of AI and robotics not only enhances precision and accuracy but also introduces new capabilities for robotic automation, enabling real-time decision-making and continuous learning. As a result, robots can improve their performance over time, broadening the role they can play in society.
More information:
Dan Zhang et al, Smart Adaptation: The Fusion of AI and Robotics for Dynamic Environments (2024)
Hong Kong Polytechnic University
Citation:
Smart adaptation: The fusion of AI and robotics for dynamic environments (2025, June 9)
retrieved 9 June 2025
from https://techxplore.com/news/2025-06-smart-fusion-ai-robotics-dynamic.html