Humans excel at applying learned knowledge to new situations, and compositionality, the ability to break experience into reusable parts and recombine them, is key to that skill. Researchers at the Okinawa Institute of Science and Technology (OIST) in Japan developed a brain-inspired AI model that teaches a robot language and physical actions together, allowing it to generalize and carry out commands it has never encountered. By combining vision, movement, and language, the robot learned to follow instructions, with visual attention and working memory proving crucial for accurate learning. The study sheds light on how both humans and AI can learn through the combination of language and physical experience, a step toward more interactive, human-like robots.
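
The summary does not describe the model's internals, but the general idea it names, fusing vision, language, and motor control through visual attention and a recurrent working memory, can be illustrated with a toy sketch. The Python (PyTorch) snippet below is a minimal illustration under those assumptions, not the OIST architecture; every module name, size, and design choice here is hypothetical.

    # A minimal sketch, NOT the OIST model: it only illustrates fusing vision,
    # language, and action through attention and a recurrent "working memory".
    # All names and dimensions are hypothetical.
    import torch
    import torch.nn as nn

    class VisuomotorLanguageAgent(nn.Module):
        def __init__(self, vocab_size=32, embed_dim=64, hidden_dim=128, action_dim=7):
            super().__init__()
            # Vision: a small CNN producing a grid of patch features.
            self.vision = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                nn.Conv2d(32, embed_dim, 4, stride=2), nn.ReLU(),
            )
            # Language: token embeddings summarized by a GRU.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lang_rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
            # Visual attention: the instruction queries the visual patches.
            self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
            # Working memory: a recurrent state carried across time steps.
            self.memory = nn.GRUCell(2 * embed_dim, hidden_dim)
            self.policy = nn.Linear(hidden_dim, action_dim)

        def forward(self, image, tokens, h):
            patches = self.vision(image).flatten(2).transpose(1, 2)   # (B, N, D)
            _, lang = self.lang_rnn(self.embed(tokens))               # (1, B, D)
            query = lang.transpose(0, 1)                              # (B, 1, D)
            attended, _ = self.attn(query, patches, patches)          # (B, 1, D)
            fused = torch.cat([attended.squeeze(1), query.squeeze(1)], dim=-1)
            h = self.memory(fused, h)                                 # update working memory
            return self.policy(h), h                                  # motor command + state

    # Usage: one control step with a dummy image, instruction, and memory state.
    agent = VisuomotorLanguageAgent()
    h = torch.zeros(1, 128)
    action, h = agent(torch.randn(1, 3, 64, 64), torch.randint(0, 32, (1, 5)), h)

In this sketch the instruction serves as the attention query over visual patches, one common way to let language steer visual attention, and the recurrent cell carries the fused context across time steps as a simple stand-in for working memory.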