Showing posts with label Elon Musk.

Monday, 28 October 2019

Robotic hand made by Elon Musk's OpenAI learns to solve Rubik's Cube

Image Source : OpenAI Blog

Last year we were amazed by the level of dexterity achieved by OpenAI's Dactyl system, which learned how to manipulate a cube block to display any commanded face. If you missed that article, read about it here.

OpenAI then set itself a harder task: teaching the robotic hand to solve a Rubik's Cube. It is a daunting challenge, made no easier by the fact that the robot uses a single hand, something most humans would find difficult. OpenAI harnessed the power of neural networks trained entirely in simulation. One of the main challenges, however, was making the simulations as realistic as possible, because physical factors such as friction and elasticity are very hard to model.

The solution they came up with is a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in which the simulated hand must solve the Rubik's Cube. This ensures that real-world physics falls somewhere within the spectrum of generated environments, bypassing the need to train on highly accurate environmental models.

One of the parameters randomized is the size of the Rubik's Cube. ADR begins with a fixed cube size and gradually increases the randomization range as training progresses. The same technique is applied to all the other parameters, such as the mass of the cube, the friction of the robot's fingers, and the visual surface materials of the hand. The neural network thus has to learn to solve the Rubik's Cube under all of these increasingly difficult conditions.
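To make the idea concrete, here is a minimal sketch of an ADR-style loop in Python. It is not OpenAI's actual code: the parameter names, nominal values, ranges and success threshold are all assumptions chosen purely for illustration. The key point is that each physics parameter starts at a single fixed value, and its randomization range only widens once the policy is succeeding often enough on the current distribution.

```python
import random

# Illustrative ADR-style loop (not OpenAI's actual code): each simulation
# parameter starts at a fixed nominal value, and its randomization range
# widens whenever the policy keeps succeeding on the current distribution.
class RandomizedParam:
    def __init__(self, nominal, step, low_limit, high_limit):
        self.low = self.high = nominal          # range starts as a single point
        self.step = step                        # how much to widen per expansion
        self.low_limit, self.high_limit = low_limit, high_limit

    def sample(self):
        return random.uniform(self.low, self.high)

    def expand(self):
        self.low = max(self.low - self.step, self.low_limit)
        self.high = min(self.high + self.step, self.high_limit)

# Hypothetical parameters and ranges, chosen only for illustration.
params = {
    "cube_size_m":     RandomizedParam(0.057, 0.001, 0.050, 0.070),
    "cube_mass_kg":    RandomizedParam(0.090, 0.005, 0.050, 0.200),
    "finger_friction": RandomizedParam(1.0,   0.05,  0.5,   2.0),
}

def run_episode(sampled_physics):
    # Placeholder: run the policy in a simulator configured with
    # `sampled_physics` and return True if the attempt succeeded.
    return random.random() < 0.9

successes, episodes, threshold = 0, 0, 0.8
for _ in range(10_000):
    sampled = {name: p.sample() for name, p in params.items()}
    successes += run_episode(sampled)
    episodes += 1
    if episodes == 100:                         # evaluate every 100 episodes
        if successes / episodes > threshold:    # policy copes: widen every range
            for p in params.values():
                p.expand()
        successes, episodes = 0, 0
```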

Here is an uncut version of the robot hand solving the Rubik's cube:


To test the limits of this method, they experimented with a variety of perturbations while the hand was solving the Rubik's Cube. This tests not only the robustness of the control network but also the vision network, which is used to estimate the cube's position and orientation. The system trained with ADR turned out to be surprisingly robust to perturbations: the robot can successfully perform most flips and face rotations under all tested perturbations, though not at peak performance.

The impressive robustness of the robot hand to perturbations can be seen in this video:



Tuesday, 31 July 2018

Elon Musk's startup builds AI to make a robotic hand move like a human's

Image Source : OpenAI Blog


OpenAI was co-founded by Elon Musk in 2015 as a non-profit research company that aims to discover and enact the path to safe artificial general intelligence (AGI).

One of the most remarkable features that evolution has bestowed upon us, other than our brain, is our hands. Many scientists believe that our opposable thumbs are, in fact, what allowed us to become a superior species, ahead of other highly intelligent creatures such as dolphins and elephants.

In a blog post published on Monday, OpenAI claims to have harnessed the power of AI and deep learning to bestow the dexterity of the human hand on robots. Their system, named Dactyl, is trained entirely in simulation and is able to apply that training in the real world.

Here are examples of the complex movements the robotic hand is capable of performing:

 
Video source : blog.openai.com

The degrees of freedom of a robot are, put simply, the number of independent ways it can move. In most industrial applications, a robotic arm with 7 degrees of freedom is considered quite advanced; the Dactyl-trained hand of OpenAI has 24 degrees of freedom. Furthermore, it can work with partial information from its sensors and manipulate objects of different geometries.
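As a rough, hypothetical illustration of what those numbers mean, a robot's configuration can be written as one value per degree of freedom; the sketch below simply contrasts a 7-value arm configuration with a 24-value hand configuration.

```python
import numpy as np

# Rough illustration: a robot's configuration is one number per degree of
# freedom, e.g. one joint angle (in radians) per joint.
industrial_arm_angles = np.zeros(7)    # a typical advanced industrial arm
dactyl_hand_angles = np.zeros(24)      # the hand used by Dactyl

print("industrial arm DOF:", industrial_arm_angles.size)   # 7
print("Dactyl hand DOF:", dactyl_hand_angles.size)         # 24
```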

Here is a schematic of how OpenAI trains the robot:
Image Source : OpenAI Blog





As the above illustration shows, the robot is trained entirely in simulation, which allows it to be taught much faster. The setup also uses ordinary RGB cameras to see the object, running orientation-estimation algorithms in neural networks. This means it does not need special objects designed for camera tracking in order to function.
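To illustrate that last point, here is a minimal sketch, assuming PyTorch, of the kind of network that could map a plain RGB camera frame to a cube position and orientation. It is not OpenAI's actual vision model; the architecture, layer sizes and the quaternion output are assumptions made only to show the idea of marker-free pose estimation.

```python
import torch
import torch.nn as nn

# Illustrative pose-estimation network (not OpenAI's actual model): it takes
# a plain RGB image and regresses the cube's 3-D position plus an orientation
# quaternion, so no special tracking markers are needed on the object.
class CubePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 7)  # 3 values for position, 4 for orientation

    def forward(self, rgb):
        x = self.features(rgb).flatten(1)
        out = self.head(x)
        position = out[:, :3]
        # normalise the last four outputs so they form a unit quaternion
        quaternion = nn.functional.normalize(out[:, 3:], dim=1)
        return position, quaternion

# Example: a batch containing one 200x200 RGB frame from an ordinary camera
frame = torch.rand(1, 3, 200, 200)
pos, quat = CubePoseNet()(frame)
print(pos.shape, quat.shape)  # torch.Size([1, 3]) torch.Size([1, 4])
```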



This could have amazing applications in handling objects that are harmful to humans. If the technology succeeds, it could be extrapolated to other human movements, one day leading to complete humanoid robots like the ones we saw in the movie "I, Robot".