Code
01SC5224
Duration
01 October 2024 → 30 September 2028
Funding
Regional and community funding: Special Research Fund
Promotor
Fellow
Research disciplines
- Engineering and technology
- Image and language processing
- Interactive and intelligent systems
- Human-centred and life-like robotics
- Motion planning and control
- Robot manipulation and interfaces
Keywords
Robotics
Large language models
Robotic manipulation
Multimodal learning
Project description
This project aims to enable mobile robotic manipulation, which requires the robot to maintain robust and useful representations of the world. I focus on multi-modal representations that combine natural language with visuo-tactile information and that are, during learning, ultimately mapped onto robot actions.
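As a rough illustration of the idea of mapping fused multi-modal representations to actions, the toy sketch below projects language, vision, and tactile features into a shared embedding and passes it through a small policy head. All dimensions, weights, and function names are hypothetical placeholders, not part of the project; a real system would use learned encoders (e.g. a language model and visuo-tactile networks) rather than random projections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, not taken from the project description.
D_LANG, D_VIS, D_TAC = 8, 16, 4   # per-modality feature sizes
D_EMB, D_ACT = 12, 6              # shared embedding and action sizes

# One projection per modality into a shared embedding space
# (random here; learned in practice).
W_lang = rng.normal(size=(D_LANG, D_EMB))
W_vis = rng.normal(size=(D_VIS, D_EMB))
W_tac = rng.normal(size=(D_TAC, D_EMB))
# A policy head mapping the fused embedding to an action vector.
W_act = rng.normal(size=(D_EMB, D_ACT))

def fuse(lang, vis, tac):
    """Project each modality into the shared space and fuse by summation."""
    return lang @ W_lang + vis @ W_vis + tac @ W_tac

def act(lang, vis, tac):
    """Map the fused multi-modal embedding to a bounded action vector."""
    return np.tanh(fuse(lang, vis, tac) @ W_act)

action = act(rng.normal(size=D_LANG),
             rng.normal(size=D_VIS),
             rng.normal(size=D_TAC))
print(action.shape)  # one action vector of size D_ACT
```

The summation fusion and tanh-bounded output are only one of many possible design choices; attention-based fusion or diffusion policy heads are common alternatives in current robot-learning work.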