At Toyota Research Institute (TRI), we're on a mission to improve the quality of human life. We're developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we've built a world-class team advancing the state of the art in AI, robotics, driving, and material sciences.
Our mission is to conduct cutting-edge research that enables general-purpose robots to be reliably deployed at scale in human environments.
We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots need to operate reliably in messy, unstructured environments. Recent years have witnessed a surge in the use of foundation models in various application domains, particularly in robotics. These "large behavior models" (LBMs) are enhancing the capabilities of autonomous robots to perform complex tasks in open, interactive environments. TRI Robotics is at the forefront of this emerging field by applying insights from foundation models, including large-scale pre-training and generative deep learning. However, ensuring the reliability of LBMs for large-scale deployment in diverse operating conditions remains a challenge.
We aim to make progress on some of the hardest scientific challenges in the safe and effective development and use of machine learning algorithms in robotics. To this end, the research mission of the Trustworthy Learning under Uncertainty (TLU) team within the Robotics division is to enable the robust, reliable, and adaptive deployment of LBMs at scale in human environments. To guarantee dependable deployment at scale in the years to come, we are dedicated to enhancing the trustworthiness of LBMs through three key principles: (i) ensuring objective assessment of policy performance (Rigorous Evaluation), (ii) improving the ability to detect and handle unknown situations and return to nominal performance (Failure Detection and Mitigation), and (iii) developing the capability to identify and adapt to new information (Active / Continual Learning). Our team has deep cross-functional expertise across controls, uncertainty-aware ML, statistics, and robotics. We measure our success by advancing the state of the art through algorithmic innovations and publishing these results in high-impact journals and conferences. We also value contributions of reproducible and usable open-source software.
We are looking for a driven research scientist with a strong background in embodied machine learning and a "make it happen" mentality. Specifically, we are looking for expertise across a variety of areas, including Policy Evaluation, Failure Detection and Mitigation, and Active Learning in the context of Large Behavior Models (LBMs) for robotic manipulation. Our topics of interest include but are not limited to: Multi-Modal Foundation Models, Generative Modeling, Imitation Learning, Reinforcement Learning, Planning & Control, Statistics, Uncertainty Estimation, Out-of-Distribution Detection, Safety-Aware & Robust ML, (Inter)Active Learning, and Online / Continual Learning. The ideal candidate is able to conduct research independently and works well as part of a larger research team operating at the cutting edge of robotics and machine learning. Experience with robots is preferred, particularly in manipulation. If our mission of robust, reliable, and adaptive deployment of LBMs at scale in human environments resonates with you, reach out by submitting an application!