The application of deep learning and machine learning methods is beginning to transform complex image classification challenges. Simplified 3D vision-robot interfaces are facilitating high-performance 3D robot vision-guidance for quality control inspection and automated assembly with smart pick and place.
The development of scalable embedded vision systems is offering great flexibility to the machine builder, systems integrator or OEM who may want to use vision as an integral part of a process or machine.
Progress is relentless: Industry 4.0, the Internet of Things (IoT), cloud computing and the wider use of artificial intelligence and machine learning all present users and developers of vision systems with major challenges in selecting the ideal system for their respective applications.
Nevertheless, the use of machine vision is not restricted to highly automated processes; it also has applications in areas where there is a high level of manual involvement. We can consider four stages of machine vision involvement.
Stage 1: Aiding manual assembly
In the manufacturing sector, there are huge numbers of products that are assembled manually, relying on the skill of the operator to ‘get it right’. These products are often visually inspected by another member of staff as part of the QC process.
There are two outcomes for any faulty product/components that are produced: they are either identified at the QC stage and rejected, or they find their way through to the end customer, where they are likely to be returned as sub-standard. Either way, unless the product can be re-worked there could be a lot of waste and a potential cloud over the reputation of the manufacturer.
Even if the rejected component can be reworked, this incurs additional costs for the manufacturer. Installing a vision system to take over the inspection can significantly reduce the chances of a defective product reaching a customer, which is good for reputation, but does little to solve rework costs.
The solution is to eliminate defects at the point of manufacture, and a new vision approach has been introduced to help with this. This involves the use of a ‘human assist’ camera, which has a set of assembly instructions loaded into it. The operator follows the instructions which are displayed on a monitor.
After every action the system compares the result to the correct stored image to ensure that it has been carried out correctly and completely before the operator can move on to the next step. If an action is incomplete or if a mistake is made, it is displayed to the operator so that it can be corrected. Each step completed can be verified and recorded to provide data that can be used for assembly work analysis and traceability.
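The pass/fail comparison against the stored reference image can be sketched as a simple pixel-difference check. This is a minimal NumPy illustration; the function name and thresholds are assumptions for the sketch, not any vendor's actual algorithm:

```python
import numpy as np

def step_complete(frame: np.ndarray, reference: np.ndarray,
                  pixel_tol: int = 25, max_diff_fraction: float = 0.02) -> bool:
    """Return True if the captured frame matches the stored reference image.

    Pixels differing by more than `pixel_tol` grey levels are counted as
    changed; the step passes when fewer than `max_diff_fraction` of all
    pixels differ. Real systems would align the images first.
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    differing = np.count_nonzero(diff > pixel_tol)
    return differing / diff.size <= max_diff_fraction

# An identical capture trivially passes the check.
ref = np.zeros((4, 4), dtype=np.uint8)
print(step_complete(ref, ref))  # True
```

In practice the check would run per region of interest, so the monitor can highlight exactly which part of the assembly is incomplete.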
Stage 2: Integrating a manual assembly process
The approach outlined above is highly effective in ensuring the correct manual assembly of a product, but is essentially a stand-alone system. It is possible to take this a step further by integrating this type of manual assembly process into a company’s overall control system.
This would allow a more sophisticated vision system to be used to assist with the manual assembly, offering a greater range of measurement and inspection tools, while using the same principle of highlighting any assembly errors on the display monitor. Assembly instructions and manufacturing data could then be downloaded to the system from a central database as required.
This approach would also allow various safeguards to be introduced, such as linking an operator ID to training competency so that the system could check whether an operator logging in to begin a particular assembly was trained for that product. Similarly, all inspection data, including images, could be transferred back to the database to provide a complete audit trail for every component assembled. The availability of more sophisticated vision tools also allows the system to accommodate new requirements as new products are brought on stream.
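The operator-competency safeguard amounts to a lookup against training records before a job is released. A minimal sketch, with hypothetical operator IDs, product IDs and data structures:

```python
# Hypothetical training records: the product IDs each operator is certified for.
# In the integrated system described above, this would come from the central database.
TRAINING_RECORDS = {
    "op-1041": {"PCB-A", "PCB-B"},
    "op-2207": {"PCB-A"},
}

def may_start_assembly(operator_id: str, product_id: str) -> bool:
    """Release an assembly job only if the operator is trained for that product."""
    return product_id in TRAINING_RECORDS.get(operator_id, set())

print(may_start_assembly("op-2207", "PCB-B"))  # False: not trained for PCB-B
```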
Stage 3: Automated machine vision inspection
Automated inspection systems are used in QC applications in an enormous range of industries and processes. Whilst configurations can vary enormously, the basic premise is that the vision system is integrated into the process, where it is linked to a reject mechanism.
Products or components are inspected, often at high speed, and accepted or rejected on the basis of the measurements made. Vision systems can vary from a single-point self-contained smart camera, where all of the processing and measurement is carried out in the camera itself and a pass/fail result sent back to the reject mechanism, to PC-based systems that may feature multiple cameras and/or multiple inspection stations.
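The pass/fail decision sent back to the reject mechanism reduces to checking each measurement against its tolerance band. A minimal sketch; the measurement names and tolerance values are illustrative, not from any particular application:

```python
def inspect(measurements: dict, tolerances: dict) -> bool:
    """Return True (pass) if every measured value lies within its (low, high) band."""
    return all(low <= measurements[name] <= high
               for name, (low, high) in tolerances.items())

# Illustrative tolerance bands and one measured part.
tolerances = {"width_mm": (24.9, 25.1), "hole_dia_mm": (4.95, 5.05)}
part = {"width_mm": 25.02, "hole_dia_mm": 5.00}
print("PASS" if inspect(part, tolerances) else "REJECT")  # PASS
```

In a smart camera this logic runs on the device itself and only the boolean result travels to the reject mechanism; in a PC-based system the same decision may aggregate results from multiple cameras.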
Key to the success of this approach is the ability to integrate the vision system into the process, taking into account space and other environmental considerations.
Vision systems can be retrofitted into existing processes, designed from the outset into new ones, and with the emergence of embedded vision systems, are increasingly being incorporated into OEM equipment.
Stage 4: Process control using machine vision
The use of automated vision as a QC tool significantly reduces the possibility of ‘out of spec’ product reaching an end user. Used in conjunction with statistical process control and feedback methods, however, it can not only check critical measurements but also analyse trends in those measurements and make changes to the process. In this way, interventions can be made to adjust the process before any out-of-tolerance product is produced.
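The trend-analysis idea can be sketched as a simple drift alarm on recent measurements. This is illustrative only; real SPC implementations use formal control-chart rules (for example the Western Electric rules) rather than this single mean-shift test:

```python
import statistics

def drift_alarm(samples, target, sigma, window=5, k=1.5):
    """Flag a trend: alarm when the mean of the last `window` samples drifts
    more than k*sigma from the target, even if no single part is out of spec.

    This lets the process be adjusted before out-of-tolerance product appears.
    """
    if len(samples) < window:
        return False
    recent = statistics.fmean(samples[-window:])
    return abs(recent - target) > k * sigma

# A slow upward drift: every part is still in spec, but the alarm fires.
widths = [25.00, 25.01, 25.03, 25.04, 25.05, 25.06, 25.07]
print(drift_alarm(widths, target=25.00, sigma=0.02))  # True
```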
There is therefore a logical extension from this into Industry 4.0 where the objectives are to optimise the process using big data analytics based on the feedback from many different types of sensors that are monitoring the process. These, of course, will include simple and smart vision sensors as well as more sophisticated vision subsystems or systems.
Assessing the possibilities
The four stages of vision described above give only an overview of the way that vision systems can be deployed, without doing justice to the extraordinary capabilities that machine vision has to offer.
Applications range from the measurement of products and components during manufacturing, to the inspection of packaging integrity, to the reading and verification of print, barcodes and labels. Measurements fall into three categories: 1D, 2D and 3D. 1D measurements are typically used to obtain the positions, distances or angles of edges. 2D measurements provide a host of results, including area, shape, perimeter, centre of gravity, quality of surface appearance, edge-based measurements and the presence and location of features.
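Two of the 2D measurements mentioned, area and centre of gravity, fall out directly from a binary object mask once the image has been segmented. A minimal NumPy sketch:

```python
import numpy as np

def area_and_centroid(mask: np.ndarray):
    """Area (pixel count) and centre of gravity (x, y) of a binary object mask."""
    ys, xs = np.nonzero(mask)           # row and column indices of object pixels
    area = xs.size
    return area, (xs.mean(), ys.mean())

# A 2x3 rectangle of foreground pixels.
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:4] = True
print(area_and_centroid(mask))  # (6, (2.0, 1.5))
```

Calibration against a known target converts these pixel-based results into real-world units.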
Pattern matching of an object against a template is also an important part of the 2D armoury. Reading and checking of characters and text, and decoding 1D or 2D codes is another key activity. 3D measurement methods add height information, allowing the measurement of volume, shape, and surface quality such as indentations, scratches and dents as well as 3D shape matching.
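Pattern matching against a template is often based on normalised cross-correlation. A deliberately brute-force NumPy sketch of the idea; production tools use far faster coarse-to-fine pyramid searches and handle rotation and scale:

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalised cross-correlation between a patch and a template."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image; return (row, col) of the best NCC score."""
    h, w = template.shape
    scores = np.array([[ncc(image[r:r + h, c:c + w], template)
                        for c in range(image.shape[1] - w + 1)]
                       for r in range(image.shape[0] - h + 1)])
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Because the correlation is normalised, the score is robust to uniform changes in illumination between the template and the live image.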
Materials produced in continuous rolls (web) or sheet, such as paper, textiles, film, foil, plastics, metals, glass, or coatings are generally inspected using continuous line scan vision systems to detect and identify defects.
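In its crudest form, line-scan defect detection flags columns in each acquired line whose intensity deviates strongly from that line's typical level. The robust-threshold rule below is an assumption for illustration, not a real system's algorithm:

```python
import numpy as np

def defect_columns(line: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Flag columns in one line-scan acquisition that deviate more than
    k robust-sigma from the line's median intensity."""
    med = np.median(line)
    mad = np.median(np.abs(line - med)) or 1.0   # avoid a zero scale on flat lines
    return np.nonzero(np.abs(line - med) > k * 1.4826 * mad)[0]

# A uniform web with one dark defect at column 40.
line = np.full(100, 128.0)
line[40] = 10.0
print(defect_columns(line))  # [40]
```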
Vision plays an important role in end of line inspection by reading unique identifiers in the form of 1D or 2D codes, alphanumerics or even braille for tracking and tracing applications in industries as diverse as aerospace, automotive, food, healthcare and pharmaceutical. Human readable on-pack data, such as batch, lot numbers, best before or expiry dates are also critical for products such as food, pharmaceutical, medical devices and cosmetics.
Machine vision is also becoming increasingly important in robot applications. Industrial robots are already used extensively and with the emergence of collaborative robots and rapid developments in 3D machine vision, they are being used much more in combination, for example in vision-guided robotics or random bin-picking.
The machine vision system identifies the precise location of the object and these coordinates are transferred to the robot. Massive strides in vision-robot interfaces have made this process much easier.
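Handing coordinates from camera to robot requires a calibration that maps pixel coordinates into the robot's frame. For planar pick-and-place this is commonly approximated by a 2D affine fit from a few calibration points, sketched below; full 3D hand-eye calibration is considerably more involved:

```python
import numpy as np

def fit_affine(pixel_pts, robot_pts) -> np.ndarray:
    """Least-squares 2D affine map from pixel to robot coordinates,
    fitted from at least three non-collinear calibration points."""
    px = np.asarray(pixel_pts, float)
    rb = np.asarray(robot_pts, float)
    A = np.hstack([px, np.ones((len(px), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, rb, rcond=None)   # 3x2 affine matrix
    return M

def to_robot(M: np.ndarray, pixel_xy) -> np.ndarray:
    """Map one pixel coordinate into the robot frame."""
    x, y = pixel_xy
    return np.array([x, y, 1.0]) @ M

# Illustrative calibration: robot frame is the pixel frame scaled by 0.5
# and offset by (10, 20). Three touched-off points are enough to fit it.
M = fit_affine([[0, 0], [100, 0], [0, 100]],
               [[10, 20], [60, 20], [10, 70]])
print(to_robot(M, (100, 100)))  # ~[60. 70.]
```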
Machine vision technology encompasses all of the component parts of a machine vision system such as cameras, optics, lenses, frame grabbers, computers, software, cables etc. Most important is the expertise to be able to select the most appropriate components and create a solution for the specific application.
Selecting a supplier with extensive knowledge and experience that can offer tailored solutions, from configured components to vertical application sub-systems for systems integrators or the development of customer-specific solutions for OEMs, is a major consideration.
This is increasingly important when considering the development of vision systems embedded into other equipment and manufacturing processes. Many of the leading machine vision libraries and toolkits can now be ported to small, embedded processing boards, usually based on ARM architecture, offering a lower cost for higher volume applications.
Combining these processing capabilities with low-cost cameras, including board-level cameras, means that vision systems can be incorporated into a wide variety of products and processes with comparatively small cost overheads, opening up applications that were previously not viable.
In addition, the exploitation of deep learning and machine learning techniques in vision applications is opening up more possibilities for organic and highly variable products; these techniques can also run on inexpensive embedded systems, making extremely cost-effective solutions possible.