Applying Reinforcement Learning to the Controllers of the Exoskeleton Robots to Improve their Flexibility

AHMADI, ALI
2023/2024

Abstract

This thesis focuses on enhancing the control mechanisms of upper-limb exoskeleton robots, specifically targeting the shoulder and elbow joints, using reinforcement learning algorithms. The primary objective is to improve the smoothness and flexibility of exoskeleton movements, making them more akin to natural human motion. The work comprises four stages:

Algorithm Development: Designing and implementing reinforcement learning algorithms tailored to controlling the upper-limb exoskeleton. This involves selecting appropriate reward functions, state representations, and learning methods so that the controller can learn effectively from its experience.

Simulation and Testing: Conducting extensive simulations to test the algorithms under various conditions and refining them based on performance metrics. This stage is crucial for identifying potential issues and improving the controller before real-world application.

Real-world Application: Applying the refined algorithms to a physical upper-limb exoskeleton prototype and evaluating its performance in real-world scenarios. This involves working with human subjects to assess the smoothness, flexibility, and overall effectiveness of the exoskeleton's movements.

Comparative Analysis: Comparing the performance of the reinforcement learning-based controller with traditional control methods to highlight the improvements in movement smoothness and flexibility.
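The abstract describes the controller only at a high level; the thesis itself should be consulted for the exact formulation. As a purely illustrative aid, the following minimal Python sketch shows one common way a state representation and a smoothness-oriented reward could be written for a two-joint (shoulder and elbow) arm. The function names, weights, and the torque-change penalty are assumptions for illustration, not the thesis's actual code.

```python
# Minimal sketch (illustrative only) of a state and reward for a 2-DOF
# shoulder-elbow model. Weights and names are assumed, not from the thesis.
import numpy as np

def make_state(q, dq, q_ref):
    """State: joint angles, joint velocities, and tracking error (shoulder, elbow)."""
    return np.concatenate([q, dq, q_ref - q])

def reward(q, dq, q_ref, tau, tau_prev,
           track_weight=1.0, smooth_weight=0.1, effort_weight=0.01):
    """Reward tracking accuracy while penalizing abrupt torque changes and effort."""
    tracking_error = np.sum((q_ref - q) ** 2)       # stay close to the reference motion
    torque_change  = np.sum((tau - tau_prev) ** 2)  # proxy for jerky, non-human-like actuation
    effort         = np.sum(tau ** 2)               # discourage excessive joint torque
    return -(track_weight * tracking_error
             + smooth_weight * torque_change
             + effort_weight * effort)

# Example: shoulder/elbow at 0.2/0.4 rad, reference at 0.3/0.5 rad
q, dq = np.array([0.2, 0.4]), np.array([0.0, 0.0])
q_ref = np.array([0.3, 0.5])
tau, tau_prev = np.array([1.0, 0.5]), np.array([0.9, 0.5])
print(make_state(q, dq, q_ref), reward(q, dq, q_ref, tau, tau_prev))
```

A policy trained against a reward of this shape trades tracking accuracy against abrupt torque changes, which is a common way to encode "smoothness" in reinforcement-learning-based motion control.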
Reinforcement Learning
Exoskeleton
Deep learning
Robotics
Controllers
Files in this item:
File: Master_thesis_Ahmadi.pdf
Access: open access
Size: 3.12 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/73641