Robotic Manipulation - the Next Frontier

November 19, 2017

According to the latest research from Markets and Markets, the industrial robot market is expected to be worth over $70 billion by 2023, growing at a CAGR of 9.60%. That's a lot of industrial robots, driven in part by increased demand from small and medium-sized manufacturers, as well as by the growing use of robotics in industries that have traditionally relied mostly on manual processing, such as food production. However, two elephants in the room have consistently held back the rapid adoption of industrial robots. First, robots have been designed for the automotive industry, where the exact same sequence of operations is performed repeatedly. That approach works for automotive production lines, but not for food processing, where every fruit or piece of meat is different. In other words, traditional approaches don't support variability in the production process. Second, robots deal with the physical world, which means that things change all the time. Variability is everywhere. Carefully fine-tuning and coding a robot for every single use case and scenario is extremely hard.

However, that is changing. A new wave of university research is triggering this change: the use of computer vision and machine learning in robotics. Robots can now increasingly deal with variability in their processes. Take, for example, picking items in warehouses. We all know that Amazon uses thousands of robots in its warehouses. We probably even have a Terminator-type image of what these robots might look like. The fact is that these are very limited robots. Most of their functionality is traversing the large distances in the warehouse to reach the right aisle and bin (which is a good navigation challenge in itself). Beyond that, though, the robot is unable to pick items from the shelves on its own, because just picking up objects has traditionally been a big challenge for robots. We humans are very good at this task, thanks to the extremely evolved coordination between our motor neurons and our sensory inputs (vision and touch). Robots have lacked those skills.

With this new research, it is becoming possible to perform picking tasks on arbitrary new objects without any new coding or fine-tuning. The approach boils down to building powerful simulations of the picking task. Millions of 3D CAD models of random objects are created, and the robot tries to pick them up, virtually. Since we know Newton's laws, we can predict when the robot is going to drop, or even tear, the book it tried to pick up by its front cover. Now, give the robot the virtual equivalent of a hard knock whenever it makes a mistake. Do this millions of times, all in simulation, and the robot is ready for the real world: with little to no readjustment, it learns to pick things up for real. Even we had to drop and break plenty of things as babies before we learned to break them only when we want to draw attention. The robots are getting better!
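
To make the idea concrete, here is a deliberately tiny sketch of that learn-from-virtual-attempts loop. Everything in it is a simplified stand-in rather than a description of any particular system: simulate_grasp is a toy substitute for a physics simulator, random feature vectors play the role of rendered 3D CAD models, and a linear model plays the role of the deep networks used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 6       # e.g., object width, mass, grasp angle, gripper offset, ...
N_TRIALS = 100_000   # "millions of times", scaled down for the sketch


def simulate_grasp(features: np.ndarray) -> bool:
    """Toy stand-in for a physics simulator: a hidden rule decides if the grasp holds."""
    hidden_rule = np.array([1.5, -2.0, 0.8, 0.0, 1.0, -0.5])
    score = float(features @ hidden_rule) + rng.normal(scale=0.3)
    return score > 0


def main() -> None:
    # The "policy": a linear guess at which grasp features predict success.
    weights = np.zeros(N_FEATURES)
    learning_rate = 0.01

    for _ in range(N_TRIALS):
        features = rng.normal(size=N_FEATURES)      # a fresh random "CAD object" and grasp
        predicted = 1.0 / (1.0 + np.exp(-(features @ weights)))  # predicted P(success)
        succeeded = simulate_grasp(features)        # the virtual pick attempt
        # The "hard knock": the prediction error nudges the weights, so grasps
        # that looked promising but failed are penalised the most.
        weights += learning_rate * (float(succeeded) - predicted) * features

    print("Learned grasp-quality weights:", np.round(weights, 2))


if __name__ == "__main__":
    main()
```

In a real pipeline the toy simulator would be a full physics engine fed with millions of rendered object models, and the linear model would be a deep network, but the core loop is the same: attempt, fail, get penalised, try again.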

At Reflective AI, we develop software that helps you solve the grasping problem. If you are a manufacturer looking for more advanced solutions to grasping, welding, or other problems, we can help. Leave us a note at info@reflective.ai, and we'll look at your case and see how we might be able to help.