This new testing suite, part of the AI2-THOR update, has more than 100 physics-enabled rooms with complex layouts and obstructions that can get in the way of a robot arm interacting with objects. ManipulaTHOR will enable faster training in more complex environments without the need to build physical robots.
The updated framework allows these simulated robots to move through the rooms the way humans do and perform tasks such as navigating a kitchen, opening a fridge, or popping open a can of soda. Through these features, robots can also move objects around a room swiftly and accurately despite the many obstructions.
Many robots are trained to move in a very specific manner and struggle to overcome obstacles, making them unsuitable for a lot of real-life scenarios. This framework provides a way to solve such problems in a virtual world first, so that some of those concepts can later be applied to physical robots.
The team modeled this virtual arm on the specification of the Kinova Gen3 Modular Robotic Arm, a real-life robot. The platform allows the arm to move with six degrees of freedom, and virtual sensors such as egocentric RGB-D cameras and touch sensors help the agent better gauge the room and its objects.
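To make the six-degrees-of-freedom control and RGB-D/touch sensing concrete, here is a minimal, dependency-free sketch. It is purely illustrative: the names (`SimArm`, `move_to`, `observe`) and the contact threshold are hypothetical and do not reflect AI2-THOR's actual API, which requires the installed simulator to run.

```python
# Illustrative sketch only: a toy stand-in for how an embodied-AI simulator
# might expose a 6-DoF arm pose plus egocentric RGB-D and touch readings.
# All names and numbers here are hypothetical, not the real ManipulaTHOR API.
from dataclasses import dataclass


@dataclass
class ArmPose:
    # Six degrees of freedom: position (x, y, z) and orientation (roll, pitch, yaw).
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0


class SimArm:
    """Toy simulator: moves the arm and returns sensor readings."""

    def __init__(self, width=300, height=300):
        self.pose = ArmPose()
        self.width = width    # egocentric camera resolution (hypothetical)
        self.height = height

    def move_to(self, **deltas):
        # Apply small relative motions along any of the six axes.
        for axis, delta in deltas.items():
            setattr(self.pose, axis, getattr(self.pose, axis) + delta)
        return self.observe()

    def observe(self):
        # Egocentric RGB-D: an H x W x 4 frame (three color channels plus
        # depth). We report only its shape to stay dependency-free; a real
        # simulator would return the pixel array itself.
        frame_shape = (self.height, self.width, 4)
        # Pretend the arm touches a surface once the end effector drops
        # below an arbitrary contact height (hypothetical threshold).
        touching = self.pose.z < 0.05
        return {"rgbd_shape": frame_shape, "touch": touching}


arm = SimArm()
obs = arm.move_to(x=0.1, z=-0.98)
print(obs["rgbd_shape"])  # (300, 300, 4)
print(obs["touch"])       # True: z dropped below the contact threshold
```

In the real framework, the learned policy would consume the RGB-D frame and touch signal at each step and emit the next arm motion; this stub only mirrors that observe-act loop in shape.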
Roozbeh Mottaghi, research manager at AI2, said that this framework allows researchers to simulate a ton of scenarios safely and quickly:
In comparison to running an experiment on an actual robot, AI2-THOR is incredibly fast and safe. Over the years, AI2-THOR has enabled research on many different tasks such as navigation, instruction following, multi-agent collaboration, performing household tasks, reasoning if an object can be opened or not. This evolution of AI2-THOR allows researchers and scientists to scale the current limits of embodied AI.
You can learn more about the ManipulaTHOR framework here.