Reinforcement Learning for Optimal Parameter Tuning in Industrial Servopositioning Systems

OLIVIERO, ALESSANDRA
2023/2024

Abstract

This thesis presents the development of an automatic tuning algorithm for controller parameters in an industrial servo-positioning system using a Reinforcement Learning (RL) approach. The project, carried out in collaboration with Salvagnini Italia S.p.A., aims to automate and accelerate controller parameter tuning, focusing in particular on the speed loop, typically governed by a PI controller. Traditional manual tuning methods, often based on trial and error, require continuous operator involvement and can yield suboptimal or inconsistent results. By employing RL, the goal was to enable the system to optimize its control parameters independently, overcoming the scarcity of datasets in this context. The RL algorithm was initially trained on a simplified First-Order Plus Time Delay (FOPTD) model, which is commonly used to approximate electric motor behavior due to its simplicity and predictable response. While effective for early training, this model could not fully capture the complexity of the real servo system. To address this, the model was extended to a second-order form that accounts for the resonance and anti-resonance frequencies identified through experimental testing on the real plant. This adjustment exposed the RL agent to a more accurate representation of the system's time- and frequency-domain response, aligning more closely with real-world dynamics. The RL approach used in this work is based on the Advantage Actor-Critic (A2C) algorithm, which allows the agent to balance exploration and exploitation, learning optimal control strategies from system feedback. After training in a simulated environment, the agent's performance was validated on a physical setup, where it consistently met predefined performance metrics across multiple test scenarios. The system also proved robust to minor adjustments of the test bench setup, indicating resilience to small environmental changes. This work addresses the unique challenges of autotuning in servo systems, whose dynamics can be complex and fast. The RL-based tuning approach offers a reliable and efficient alternative to manual methods, highlighting the potential of RL for broader applications in industrial automation and control systems.
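
For reference, the two plant models named in the abstract have standard transfer-function forms. The symbols below are generic textbook notation; the parameter values identified in the thesis are not reproduced here. The FOPTD approximation used for early training is

    G_1(s) = \frac{K e^{-\theta s}}{\tau s + 1}

with static gain K, time constant \tau, and dead time \theta, while a common second-order refinement capturing one anti-resonance/resonance pair (a two-inertia-style form) is

    G_2(s) = \frac{1}{J s} \cdot \frac{s^2/\omega_z^2 + 2\zeta_z s/\omega_z + 1}{s^2/\omega_p^2 + 2\zeta_p s/\omega_p + 1}

where \omega_z and \omega_p are the anti-resonance and resonance frequencies and \zeta_z, \zeta_p their damping ratios.

As an illustration of how such a model can serve as the environment the RL agent interacts with, the sketch below scores one candidate (Kp, Ki) pair by simulating the closed-loop step response of a plant of the second form above. All numeric plant parameters and the reward shaping are assumptions made for this demonstration, not the values or the exact reward function used in the thesis.

    import numpy as np
    from scipy import signal

    def plant_tf(J=0.01, wz=200.0, zz=0.1, wp=300.0, zp=0.1):
        """Illustrative speed-loop plant: rigid-body integrator times one
        anti-resonance/resonance pair. Parameters are made-up examples."""
        num = np.array([1.0 / wz**2, 2.0 * zz / wz, 1.0]) / J
        den = np.polymul([1.0 / wp**2, 2.0 * zp / wp, 1.0], [1.0, 0.0])
        return num, den

    def closed_loop_step(Kp, Ki, t_end=1.0):
        """Unit-step response of the PI speed loop, T = CG / (1 + CG)."""
        num_g, den_g = plant_tf()
        num_c, den_c = [Kp, Ki], [1.0, 0.0]   # PI: C(s) = (Kp*s + Ki)/s
        num_l = np.polymul(num_c, num_g)      # open loop L = C*G
        den_l = np.polymul(den_c, den_g)
        sys = signal.TransferFunction(num_l, np.polyadd(den_l, num_l))
        t = np.linspace(0.0, t_end, 4000)
        t, y = signal.step(sys, T=t)
        return t, y

    def reward(t, y, tol=0.02):
        """One plausible shaping: penalize overshoot and settling time."""
        overshoot = max(0.0, float(y.max()) - 1.0)
        outside = np.where(np.abs(y - 1.0) > tol)[0]
        t_settle = float(t[outside[-1]]) if outside.size else 0.0
        return -(10.0 * overshoot + t_settle)

    t, y = closed_loop_step(Kp=0.2, Ki=4.0)
    print(f"reward for (Kp=0.2, Ki=4.0): {reward(t, y):.3f}")

In a training loop, the agent would propose (Kp, Ki) actions, receive this kind of scalar reward from the simulated (and later the physical) step response, and update its actor and critic networks accordingly.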

Keywords

Autotuning, RL, Servo Positioner, Controller

File: Oliviero_Alessandra.pdf (Adobe PDF, 7.96 MB, restricted access)

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/77007