Blog
Check out lab-related updates in chronological order below, or search for updates directly by their tags.
2025
Imagine a robot deployed in a household to pack items into a box. Because deployment inevitably involves unseen situations, such as boxes of new shapes or items the robot has never encountered, the robot will sometimes fail. One way it can learn from these mistakes is through corrections provided by humans acting as supervisors. This lets the robot learn directly from its mistakes without requiring large amounts of feedback, while still maintaining its autonomy.
2024
With the continued integration of machines into our everyday lives, the ability of non-technical human teachers to effectively communicate with and efficiently train robots becomes increasingly relevant. Current and prior research has studied how people can train a robot to complete manipulation tasks using different modalities or interaction types, such as demonstrations [1] and ranked preferences [2]. Alternatively, a person can monitor a robot as it attempts to complete a task, interceding to provide a correction when they deem it necessary to modify the robot's behavior [3,4]. For example, if a robot that is supposed to pick up a mug from the table is moving away from the table instead, a human teacher may offer assistance by correcting the robot's motion and pushing it in the right direction. This correction should inform how the robot behaves in future variations of the task, while also implying how the robot should not behave (i.e., the behavior that prompted the teacher to intercede in the first place).