Autonomous Robot Action Sequence Learning with Teleoperation, Deep Reinforcement Learning, and LLM-Based Natural Language Input
CHEKKALA, JAYANTHVIKRAM
2025/2026
Abstract
Robotic manipulation in household and industrial settings requires not only the ability to perform individual tasks but also the flexibility to execute sequences of actions based on user intent. In this work, teleoperation data is first used to train policies for individual atomic tasks, ensuring stable learning through deep reinforcement learning. These policies are then extended to support multi-task and sequential execution, enabling the robot to generalize across different task orders. To bridge human–robot interaction, the system incorporates Large Language Models (LLMs) that process natural language prompts and translate them into executable task sequences. This research presents a complete pipeline—from human demonstrations to deep reinforcement learning to natural language task specification—providing a step toward interactive and autonomous robotic assistants capable of understanding and executing complex, language-guided instructions.
File: jayanthvikram_chekkala.pdf (open access) | Size: 9.57 MB | Format: Adobe PDF
The text of this website © Università degli studi di Padova. Full Text are published under a non-exclusive license. Metadata are under a CC0 License
https://hdl.handle.net/20.500.12608/106803