Friday, January 25, 2019
The first iteration of industrial automation utilized “blind” robots that depended on the accurate positioning of the materials to be handled. Such robots were relatively inflexible, able to adapt to new tasks only through tedious programming. The advent of machine vision helped free such robots somewhat, giving them the ability to operate under less structured conditions by using flat images to guide their operation. Now, the addition of depth information to machine vision is giving vision guided robotics (VGR) vastly greater flexibility in operation and enabling applications once considered impractical.
Machine vision for guiding robotic movement has been implemented in factory applications for many years now and is in many ways a mature technology. Smart camera systems with built-in processing and calibration capabilities, robust recognition and measurement algorithms, and proven libraries that simplify application development are widely available and continually improving. But such vision systems deal only with a two-dimensional (flat) space, restricting the information available to the robot to an object’s position within an X-Y plane and its rotation (angular position) about the Z axis. The object to be imaged needs to lie in that plane and be oriented “face up” for the robot to recognize and work with it.
The addition of depth information, however, changes things dramatically. Now the vision system can determine both an object’s position and its orientation in a volume of space. The robot has access to six parameters: the X, Y, and Z linear positions as well as the roll, pitch, and yaw angles. The robot can recognize objects in whatever pose they present, over a range of distances, allowing it to work with materials that are randomly oriented and positioned. Further, the robot can identify the topmost objects in stacks or piles, something not practical with 2D vision, and determine the distance to objects when planning its movement trajectories.
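The jump from three parameters to six can be made concrete with a short sketch. The function below, a hedged illustration rather than any vendor's actual API, converts a 6-DOF pose of the kind a 3D vision system reports (X, Y, Z plus roll, pitch, yaw) into the homogeneous transform a robot controller typically consumes, assuming the common Z-Y-X (yaw-pitch-roll) rotation convention:

```python
import math

def pose_to_transform(x, y, z, roll, pitch, yaw):
    """Convert a 6-DOF pose (translations plus roll/pitch/yaw in radians)
    into a 4x4 homogeneous transform, using the Z-Y-X convention:
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    rotation = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    # Append the translation column and the homogeneous bottom row.
    return [rotation[0] + [x],
            rotation[1] + [y],
            rotation[2] + [z],
            [0.0, 0.0, 0.0, 1.0]]

# A 2D system would supply only (x, y, yaw); the 3D case adds z, roll, pitch.
T = pose_to_transform(0.25, -0.10, 0.40, 0.0, 0.0, math.pi / 2)
```

With roll and pitch fixed at zero, the transform collapses to the planar (X, Y, rotation-about-Z) case described above, which is one way to see exactly what depth information adds.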
The rise of 3D machine vision for robot guidance is the result of many different advances. Cameras have gotten smaller, vision processors have gotten faster, vision software has become more capable, and a variety of approaches for obtaining depth information have become available. The combination is making 3D vision guidance feasible for an expanding range of applications. An analysis by Markets and Markets predicts an 11% CAGR for the 3D machine vision market, with revenues rising to more than $2 billion by 2022.
Some of the new applications for 3D VGR lie in the postal and logistics spaces. With 3D vision, robots can tackle tasks such as parcel sorting and sizing, and the loading and unloading of mixed boxes. Robotic transport can more readily navigate unstructured warehouse spaces, and materials-handling robots can recognize and extract randomly oriented, mixed objects from a bin — something once only humans could manage.
With 3D vision, cooperative robots (cobots) can provide enhanced operational safety by noticing where their human operators are located and avoiding accidental contact. Combine that with the handling of mixed objects and you get a robotic assistant that can reach into a bin to extract and hand you the objects you requested. Even more exotic applications are being explored: robotic systems for fruit picking in fields and orchards are under development, for example.
The availability of 3D vision is even allowing NASA to develop robots for the unconstrained environments of space. The humanoid R2 Robonaut, already aboard the International Space Station, is being evaluated for a role in handling routine maintenance tasks as well as EVA operations, using the same tools and materials as the astronauts currently performing those tasks. (To see R2 in action, check out this video.) Here on Earth, the Rollin’ Justin robot is under development with an eye toward operation in a future Mars mission.
While 3D vision provides great flexibility in application potential, however, it is not something designers can simply drop into their system. According to David Dechow, staff engineer for machine vision at Fanuc America, developers will need to take a systems approach to 3D VGR designs. Speaking in the AIA webinar Latest Innovations in Vision Guided Robotics, Dechow pointed out that the application’s needs must be thoroughly understood before beginning the vision system design. What the system will need to “see” and what it will do with that information have a significant impact on the vision system’s design requirements.
This Aspencore Special Project can help get developers started. The articles in the project, to be published later, will look at yet more applications for VGR, provide a designer’s initial guide to camera selection, examine more space-related VGR, and introduce the technology choices for 3D vision as well as open-source software libraries for basic vision.
Copyright © 2018 CST, Inc. All Rights Reserved