Inquisitive Robotics Lab

Projects

IQR Lab is dedicated to open-sourcing our software and infrastructure. We aim to maintain several packages, applications, datasets, and benchmarks to enable the broader robotics community to build upon our work.

Learning from Corrections


Corrections offer an easy way for end-users to teach and collaborate with a robot, while also offering rich information about task constraints. However, these corrections reflect more than just the optimality of the robot behavior, and are subject to additional influences such as task tolerance, physical effort, and the human’s subjective expectation of whether the robot will succeed. How can robots account for these biases so they can learn more effectively from corrections?

LLMs for End-User Programming


Large Language Models demonstrate exciting reasoning capabilities that can be used to translate user instructions into robot actions in a human-robot collaboration context. Yet this approach is still prone to failure due to ambiguities in user instructions, or in how those instructions are interpreted when generating actions for a robot. How can robots detect ambiguous instructions and actively ask for clarification?

Shared Mental Models


For robots to assist humans in diagnosing and solving problems in safety-critical situations, they need to actively provide the social signals necessary to present high-dimensional data clearly, challenge a teammate's beliefs, collaboratively assess ambiguous problems, and design solutions. We are developing Active Learning methods to enable robots to build shared mental models with their human teammates.

Pointcloud Stitching


A scalable, multi-camera distributed system for real-time pointcloud stitching in the IQR Lab. This program is currently designed for the D400 Series Intel RealSense depth cameras. Using the librealsense 2.0 SDK, depth frames are grabbed and pointclouds are computed on the edge, before the raw XYZRGB values are sent to a central computer over TCP sockets. The central program stitches the pointclouds together and displays them in a viewer using the PCL libraries.
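To make the pipeline concrete, here is a minimal sketch of the two steps a system like this needs beyond the camera SDK: packing XYZRGB points into a flat byte buffer for the TCP stream, and stitching clouds from multiple cameras by concatenating them in a shared frame. The 15-byte-per-point layout (three floats plus three color bytes) and the function names are assumptions for illustration, not the actual IQR Lab wire format; the real system would use librealsense to produce the points and PCL to view the merged cloud.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// One colored point. The exact field layout on the wire is an
// assumption; the real protocol may pad or order fields differently.
struct PointXYZRGB {
    float x, y, z;
    uint8_t r, g, b;
};

constexpr size_t kPointBytes = 3 * sizeof(float) + 3;  // 15 bytes/point

// Serialize points into a flat byte buffer -- the "raw XYZRGB values"
// an edge node would write to its TCP socket.
std::vector<uint8_t> pack(const std::vector<PointXYZRGB>& pts) {
    std::vector<uint8_t> buf(pts.size() * kPointBytes);
    uint8_t* p = buf.data();
    for (const auto& pt : pts) {
        std::memcpy(p, &pt.x, sizeof(float)); p += sizeof(float);
        std::memcpy(p, &pt.y, sizeof(float)); p += sizeof(float);
        std::memcpy(p, &pt.z, sizeof(float)); p += sizeof(float);
        *p++ = pt.r; *p++ = pt.g; *p++ = pt.b;
    }
    return buf;
}

// Inverse of pack(): what the central computer does with each
// received buffer before stitching.
std::vector<PointXYZRGB> unpack(const std::vector<uint8_t>& buf) {
    std::vector<PointXYZRGB> pts(buf.size() / kPointBytes);
    const uint8_t* p = buf.data();
    for (auto& pt : pts) {
        std::memcpy(&pt.x, p, sizeof(float)); p += sizeof(float);
        std::memcpy(&pt.y, p, sizeof(float)); p += sizeof(float);
        std::memcpy(&pt.z, p, sizeof(float)); p += sizeof(float);
        pt.r = *p++; pt.g = *p++; pt.b = *p++;
    }
    return pts;
}

// Stitching, in its simplest form: once each camera's cloud has been
// transformed into a common world frame (extrinsic calibration, not
// shown), merging is just concatenation.
std::vector<PointXYZRGB> stitch(const std::vector<PointXYZRGB>& a,
                                const std::vector<PointXYZRGB>& b) {
    std::vector<PointXYZRGB> merged = a;
    merged.insert(merged.end(), b.begin(), b.end());
    return merged;
}
```

Packing per-point rather than sending whole depth frames trades a little edge CPU for much less central-computer work, which matches the design described above of computing pointclouds on the edge.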