Reimagining robot training with real-time 3D

AI and machine learning provide a way to move beyond the traditional, fixed workflows that industrial engineers have historically relied on. Verticals like manufacturing, logistics, and extraction have depended on defining and executing rigid tasks that cannot adapt to dynamic conditions on the worksite or to ambiguous situations.

By incorporating AI into these workflows, users can create more capable processes, but training robust AI algorithms requires an extensive amount of training data. The key question, then, is how users can quickly generate this training data and test these new solutions.

An emerging answer to this question is simulation. Simulation enables ML researchers and industry professionals to create environments that generate orders of magnitude more data for testing and training compared to developing in the real world alone. It also ensures that a greater variety of data is available in the training process, which yields more robust algorithms for a particular task.

Credit: Unity

Some of the most influential and cutting-edge organisations worldwide are pioneering efforts to train in simulation. OpenAI famously trained a robotic hand to solve a scrambled Rubik’s Cube using only synthetic data. By training the robot in simulation, engineers could solve a complex problem that conventional robotics had struggled with, and simulation provided the thousands of hours of experience necessary to succeed.

To ensure that simulation-based training transfers to the real world, developers must build a synthetic environment that varies its physical properties, so that learned solutions remain robust once deployed. That includes tens of thousands of variations of the visual elements of the scene as well as its physics: the robot’s mass, the friction of its gripper, and how the simulated sensors in the scene interact with an object’s surface properties.
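
To make this concrete, below is a minimal sketch of that kind of randomization in Python. The parameter names, ranges, and sampling helper are hypothetical illustrations, not Unity’s actual API; the point is simply that every training episode draws fresh physical and visual properties.

```python
import random

# Hypothetical randomization ranges; a real project would tune these bounds
# and apply them through the simulator's own configuration API.
RANDOMIZATION_RANGES = {
    "light_intensity":  (0.3, 1.5),    # relative scene lighting
    "object_mass_kg":   (0.05, 0.50),  # mass of the manipulated object
    "gripper_friction": (0.4, 1.2),    # friction coefficient of the gripper
    "camera_noise_std": (0.00, 0.02),  # simulated sensor noise
}

def sample_randomized_scene():
    """Draw one random configuration of the scene's physical and visual
    properties, so no two training episodes share the same physics."""
    return {name: random.uniform(low, high)
            for name, (low, high) in RANDOMIZATION_RANGES.items()}

# Tens of thousands of varied configurations for training episodes.
configs = [sample_randomized_scene() for _ in range(50_000)]
print(configs[0])
```

Because the model never sees the same combination of lighting, mass, friction, and sensor noise twice, it cannot overfit to any single rendering of the scene, which is what allows the result to transfer from simulation to the real world.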

Robotics developers are pushing towards a generalisable simulation solution: a single simulator that supports training any type of robot for any task. Unity is a leading tool in this arena; as a real-time 3D engine, it supports hundreds of different scenarios and robotic form factors with realistic physics and high-quality rendering. DeepMind has already capitalised on the power of Unity, using it to train an array of autonomous agents at scale.

Credit: Unity

It’s not just research firms that see this power. Cross Compass, a Tokyo-based robotics company, evaluated many available simulators before selecting Unity, whose physics and high-quality 3D rendering allow it to train robots in simulation before deploying them to real-world production lines.

It’s hugely important to continually release new features and demonstrations that help users unlock the power of simulation for themselves. Unity recently released its Object Pose Estimation demo, which illustrates how visually diverse synthetic data can serve as an efficient and effective answer to machine learning’s data requirements. The example generates diverse, automatically labelled data in the engine to train a machine learning model, which is then deployed in Unity on a simulated UR3 robotic arm, using the Robot Operating System (ROS), to pick and place a cube whose pose is unknown.
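
The end-to-end flow (synthetic frames with free ground-truth labels, a trained pose estimator, then a motion command) can be sketched roughly as follows. Everything here is an illustrative placeholder rather than the demo’s actual code: `render_cube_sample`, the pose format, and the `estimate_pose` stub are all hypothetical.

```python
import random

def render_cube_sample():
    """Stand-in for the simulated camera: returns a synthetic frame together
    with the cube's exact pose, which the simulator knows for free, so every
    sample comes automatically labelled."""
    pose = {
        "x": random.uniform(-0.2, 0.2),       # metres on the table plane
        "y": random.uniform(-0.2, 0.2),
        "yaw": random.uniform(0.0, 3.14159),  # cube orientation in radians
    }
    frame = f"<rendered frame, yaw={pose['yaw']:.2f}>"  # placeholder pixels
    return frame, pose

# 1. Generate a large, automatically labelled synthetic dataset.
dataset = [render_cube_sample() for _ in range(30_000)]

# 2. A pose estimator trained on (frame -> pose) pairs would go here; this
#    fixed-guess stub stands in for the learned model.
def estimate_pose(frame):
    return {"x": 0.0, "y": 0.0, "yaw": 0.0}

# 3. Deployment loop: estimate the unknown pose, then command the arm.
def pick_and_place(frame):
    pose = estimate_pose(frame)
    print(f"move gripper to ({pose['x']}, {pose['y']}), "
          f"rotate {pose['yaw']:.2f} rad")  # the demo routes this through ROS

pick_and_place(dataset[0][0])
```

In the actual demo, the rendering and labelling happen inside Unity, the estimator is a neural network trained on those images, and the resulting motion command travels over ROS to the simulated UR3 arm.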

Credit: Unity

These workflows only scratch the surface of how simulation can improve industrial outcomes. Running more simulation before deploying to production leads to greater productivity and cost savings, and that can only be a good thing, whatever the industry.

Author

  • Dr. Danny Lange

    Dr. Danny Lange is Senior Vice President of Artificial Intelligence and Machine Learning at Unity Technologies. As head of machine learning at Unity, Lange leads the company’s innovation in AI and machine learning, focusing on bringing AI to simulation and gaming. Prior to joining Unity, Lange was head of machine learning at Uber, where he led efforts to build the world’s most versatile machine learning platform to support the company’s hyper-growth. Lange also served as General Manager of Amazon Machine Learning, an AWS product that offers machine learning as a cloud service. Before that, he was Principal Development Manager at Microsoft, where he led a product team focused on large-scale machine learning for big data. Lange spent eight years working on speech recognition systems, first as CTO of General Magic, Inc., and then through his work on General Motors’ OnStar Virtual Advisor, one of the largest deployments of an intelligent personal assistant before Siri. Danny started his career as a computer scientist at IBM Research. He holds MS and PhD degrees in Computer Science from the Technical University of Denmark. He is a member of the Association for Computing Machinery (ACM) and the IEEE Computer Society, and has several patents to his credit.
