15 Jun 2024

Deep learning in machine vision

Deep learning is growing in popularity in the domain of machine vision. Rather than relying on hand-crafted rules, it enables systems to learn from example data to mimic human decision making in specific, well-defined tasks. Smart Machines & Factories takes a look at how manufacturers can easily benefit from deep learning in their machine vision applications.

As manufacturers look towards more intelligent machine vision systems, deep learning is becoming a more common technique. A report by ABI Research predicted that deep learning-based machine vision techniques within smart manufacturing will experience a compound annual growth rate of 20 per cent between 2017 and 2023, with revenue reaching $34 billion by 2023.

However, there are significant barriers to implementing machine vision solutions: high cost and the extensive downtime required for installation. Many manufacturers are left without any machine vision solution at all, because it is cost prohibitive or too complex an engineering task.

The traditional vendor mechanism for deep learning machine vision means that software is sold as a package separately from the other components, all of which must be put together as a hard-engineered solution. The resulting solution will only be applicable to a single product at a single location on a single line. Even the most advanced solution equipped with deep learning will not be truly flexible.

Because the task is so complex, the manufacturer will arrange for a systems integrator to select the lighting, cameras, communication, housing and more. It will be the systems integrator that selects the deep learning software for use in the machine vision solution as manufacturers do not have the expertise in-house to set up, train and operate a traditional deep learning solution independently.

The lack of flexibility in traditional machine vision solutions means that if anything on the line changes, the manufacturer will once again require expert attention from the systems integrator. The systems integrator will then either adjust the solution, for example by developing new lighting conditions, or may be unable to adjust it at all. In the latter case, the manufacturer must replace the visual QA solution with an entirely new one, beginning the costly, time-consuming process again.

Complex training

Once built, a machine vision solution equipped with deep learning requires a cumbersome training process. The user must present hundreds to thousands, and sometimes even millions, of defective samples to the solution, so that it can learn what a defective product looks like. The integrator will also have to set machine learning parameters, such as data augmentation, network topologies and final classification thresholds.
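As a hypothetical illustration of two of those parameters, the sketch below shows what data augmentation and a final classification threshold mean in practice. The function names, data and numbers are invented for this example and are not specific to any vendor's software.

```python
# Illustrative sketch only: two of the parameters an integrator must tune in
# a traditional deep learning pipeline. Names and values are hypothetical.

def augment(image):
    """Data augmentation: expand one labelled sample into several variants
    (here, horizontal and vertical flips of a 2-D list standing in for an
    image), so a small defect dataset goes further."""
    h_flip = [row[::-1] for row in image]
    v_flip = image[::-1]
    return [image, h_flip, v_flip]

def classify(defect_scores, threshold=0.7):
    """Final classification threshold: the network outputs a per-item defect
    score; the chosen threshold trades false rejects (good parts scrapped)
    against false accepts (defects shipped)."""
    return ["defect" if s >= threshold else "good" for s in defect_scores]

sample = [[0, 1], [2, 3]]
print(len(augment(sample)))            # one labelled image becomes three
print(classify([0.95, 0.10, 0.72]))    # -> ['defect', 'good', 'defect']
```

Lowering the threshold catches more defects at the cost of scrapping more good parts, which is why tuning it traditionally requires an expert.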

The design, installation, set-up and training process often takes several months to complete, causing extended periods of downtime or forcing manufacturers to rely on manual visual inspection, an unreliable and expensive method. Because of these challenges, the worst part of a visual QA professional’s day is often dealing with the visual QA solution.

This means that the end user, the manufacturer, cannot expect to benefit directly from deep learning software – they have no direct interaction with the visual quality assurance (QA) solution developed for their line. The manufacturer can only hope that contemporary deep learning software enhances the quality of the visual QA solution promised to them by a systems integrator.

A deep look into the future

To change this, Inspekto has launched the first entry into the Autonomous Machine Vision market – the INSPEKTO S70 – which gives manufacturers full control over their visual QA. Yonatan Hyatt, CTO of Inspekto, the company that founded the Autonomous Machine Vision category, explained that end users can now set up a fully operational quality assurance system out of the box in under an hour. Set-up requires around 20 to 30 good samples, and no defective ones: “The end user can benefit from deep learning – as part of this complete system, rather than as a separate software tool – with no requirement to configure parameters or meddle with data gathering and data labelling.”
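Training on a small number of good samples alone is characteristic of one-class, or anomaly-detection, approaches. The sketch below illustrates that general idea – not Inspekto's proprietary algorithm – by learning a profile of “good” from a handful of samples and flagging anything that deviates from it. The features and thresholds are invented for illustration.

```python
# Minimal one-class sketch: learn "good" from good samples only, with no
# defective examples. Illustrative of the general technique, not any
# vendor's actual method.
import statistics

def fit_good(samples):
    """Learn a profile of 'good' from ~20-30 good parts: per-feature mean
    and standard deviation."""
    features = list(zip(*samples))
    return [(statistics.mean(f), statistics.stdev(f)) for f in features]

def is_defect(item, model, k=4.0):
    """Flag any item that deviates from the learned profile by more than
    k standard deviations in any feature."""
    return any(abs(x - mu) > k * sigma
               for x, (mu, sigma) in zip(item, model))

# 25 good parts, each described by two toy features (e.g. width, brightness)
good = [(10.0 + 0.1 * (i % 5), 200 + (i % 3)) for i in range(25)]
model = fit_good(good)
print(is_defect((10.2, 201), model))   # in-family part passes
print(is_defect((13.0, 201), model))   # out-of-family width is flagged
```

Because the model only describes what good parts look like, it needs no labelled defects at all – which is what makes a set-up from a few good samples possible in principle.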

Hyatt explained that Autonomous Machine Vision systems, powered by Inspekto’s Plug and Inspect software, include several artificial intelligence engines that work in tandem: “These AI engines cover all aspects of visual quality assurance on the line, including the self-setting of sensor parameters and the self-adaptation and self-tuning of detection technologies to objects in the sensor’s field of view. The system can therefore self-adjust and there is no requirement for proof of concept development.”

The algorithm developed by the company means that, without any prior knowledge of the object or any expertise from the operator, the system can distinguish nuisance changes in the field of view from material changes that constitute a defect. Changes in the object’s orientation or in the lighting conditions will therefore not be flagged as defects.
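One common way to make such a distinction, shown below as a minimal sketch, is to normalise away global changes (such as a uniform lighting shift) before comparing an image against a reference, so that only localised deviations remain. This illustrates the general technique only; Inspekto's actual method is not public, and the images and tolerance here are toy values.

```python
# Hedged sketch: separate a nuisance change (uniform lighting shift) from a
# material change (localised defect) by normalising brightness before
# comparing against a reference image. Values are illustrative.

def normalise(image):
    """Remove a global lighting change by scaling to unit mean brightness."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return [[p / mean for p in row] for row in image]

def has_defect(image, reference, tol=0.2):
    """Compare brightness-normalised images pixel by pixel; only localised
    (material) deviations beyond tol are flagged."""
    a, b = normalise(image), normalise(reference)
    return any(abs(pa - pb) > tol
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

reference = [[100, 100], [100, 100]]
darker    = [[60, 60], [60, 60]]       # uniform lighting change: nuisance
scratched = [[100, 100], [100, 10]]    # localised change: defect

print(has_defect(darker, reference))     # -> False
print(has_defect(scratched, reference))  # -> True
```

The uniformly darker image normalises to the same values as the reference and passes, while the scratch survives normalisation and is flagged.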

The capabilities mean that the system can inspect any product, at any location on the line and under any environmental conditions, with applications ranging from packaging to electronics. It can be applied in all tiers, from original equipment manufacturer (OEM) to tier three and four.

In addition, according to Hyatt, Autonomous Machine Vision offers flexibility – a system can easily be moved from one point on the line to another, offering the same simple set-up at the new location as before. This flexibility also means that visual QA can be performed on multiple products at the same location on the production line, with the system detecting and classifying each product as appropriate.

Autonomous Machine Vision means that deep learning machine vision is easy. It gives manufacturers everything they need to use a vision system efficiently and cost-effectively for quality assurance, gating and sorting. Autonomous Machine Vision puts deep learning machine vision directly in the hands of the QA manager, so that working with the visual QA system can be the best part of their day.