
Reimagining robot training with real-time 3D

AI and machine learning offer a way to move beyond the traditional, fixed workflows that industrial engineers have historically relied on. Verticals like manufacturing, logistics, and extraction have depended on defining and executing rigid tasks that cannot adapt to dynamic worksite conditions or ambiguous situations.

By incorporating AI into these workflows, users can create more capable processes, but training robust AI algorithms requires an extensive amount of training data. The key question, then, is how users can quickly generate this training data and test these new solutions.

An emerging answer to this question is simulation. Simulation enables ML researchers and industry professionals to create environments that generate orders of magnitude more data for testing and training compared to developing in the real world alone. It also ensures that a greater variety of data is available in the training process, which yields more robust algorithms for a particular task.


Some of the most influential and cutting-edge organisations worldwide are pioneering efforts to train in simulation. OpenAI famously trained a robotic hand to solve a scrambled Rubik’s Cube using only synthetic data. By training the robot in simulation, engineers could solve a complex problem that conventional robotics had struggled with, and simulation provided the thousands of hours of experience necessary to succeed.

To ensure that simulation-based training transfers to the real world, developers must build a synthetic environment that varies its physical properties, a technique often called domain randomisation. That means tens of thousands of variations of the scene’s visual elements as well as its physics: the robot’s mass, the friction of its gripper, and how the simulated sensors interact with an object’s surface properties.
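To make the idea concrete, here is a minimal Python sketch of an episode-level randomisation loop. It assumes a generic simulator that accepts per-episode parameters; the parameter names and ranges are hypothetical placeholders, not Unity’s actual API.

```python
# A minimal sketch of domain randomisation, assuming a generic simulator
# that can be configured per episode. Parameter names and ranges here are
# illustrative assumptions, not taken from any specific engine's API.
import random
from dataclasses import dataclass

@dataclass
class EpisodeParams:
    light_intensity: float   # scene lighting, arbitrary units
    object_mass_kg: float    # mass of the manipulated object
    gripper_friction: float  # friction coefficient of the gripper pads
    camera_noise_std: float  # Gaussian noise added to simulated camera pixels

def sample_episode_params() -> EpisodeParams:
    """Draw a fresh set of physical and visual parameters for one episode."""
    return EpisodeParams(
        light_intensity=random.uniform(0.5, 2.0),
        object_mass_kg=random.uniform(0.05, 0.5),
        gripper_friction=random.uniform(0.4, 1.2),
        camera_noise_std=random.uniform(0.0, 0.02),
    )

def run_training(num_episodes: int) -> None:
    for episode in range(num_episodes):
        params = sample_episode_params()
        # In a real pipeline these values would be pushed into the simulator
        # before resetting the scene, so every episode sees a different world.
        print(f"episode {episode}: {params}")

if __name__ == "__main__":
    run_training(3)
```

Each episode draws a fresh combination of physical and visual conditions, so a policy trained across many episodes cannot overfit to any single configuration.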

Robotics developers are pushing towards a generalisable simulation solution: a single simulator that supports training any type of robot for any task. Unity is a leading tool in this arena; as a real-time 3D engine, it supports hundreds of different scenarios and robotic form factors with realistic physics and high-quality rendering. DeepMind has already capitalised on the power of Unity, using it to train an array of autonomous agents at scale.


It’s not just research firms that see this power. Cross Compass, a Tokyo-based robotics company, evaluated many available simulators before selecting Unity, whose physics and high-quality 3D rendering let it train robots in simulation before deploying them to real-world production lines.

Unity continually releases new robotics features and demonstrations to help users unlock the power of simulation for themselves. It recently released its Object Pose Estimation demo, which shows how visually diverse, automatically labelled synthetic data can be generated in the engine to train a machine learning model. The trained model is then deployed in Unity on a simulated UR3 robotic arm, using the Robot Operating System (ROS), to pick and place a cube whose pose is unknown.
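As a rough illustration of the training side of such a pipeline, the Python sketch below fits a small pose regressor on (image, pose) pairs like those an automatic labelling pipeline produces. The architecture, tensor shapes, and stand-in data are illustrative assumptions, not code from the Object Pose Estimation demo itself.

```python
# A minimal sketch of training a pose regressor on synthetic data, assuming
# (image, pose) pairs like those produced by an automatic labelling pipeline.
# The architecture and data here are illustrative stand-ins.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Tiny CNN that maps an RGB image to a 7-D pose (xyz + quaternion)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 7)  # x, y, z, qx, qy, qz, qw

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

def train_step(model, optimizer, images, poses):
    """One gradient step on a batch of synthetic (image, pose) pairs."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), poses)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = PoseNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random stand-ins for a batch of labelled synthetic frames.
    images = torch.randn(8, 3, 64, 64)
    poses = torch.randn(8, 7)
    print("loss:", train_step(model, opt, images, poses))
```

In a full pipeline, the random tensors would be replaced by batches of rendered frames and their automatically generated pose labels, and the trained weights would then drive the ROS-based pick-and-place loop.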


These workflows only scratch the surface of how simulation can improve industrial outcomes. Testing more extensively in simulation before deploying to production leads to greater productivity and cost savings, and that can only be a good thing, whatever the industry.
