High-performance image sensors, rising data transmission rates, and multispectral and multisensor approaches are continually expanding the range of applications for imaging solutions. At the same time, artificial intelligence is helping to speed up the evaluation of rapidly growing imaging data volumes. In this interview, Prof. Michael Heizmann, Head of the Institute for Industrial IT at the Karlsruhe Institute of Technology (KIT), and Manfred Wütschner, Group Manager Field Application Engineering at STEMMER IMAGING AG, talk about the effects of these trends on the imaging market. They also look at the potential of modern imaging processes in Industry 4.0 and promising areas of use beyond industrial applications.
Manfred Wütschner: I’d start with GenICam, the Generic Interface for Cameras, which enables direct control of cameras from different manufacturers so that features such as exposure time or acquisition speed can be set in a uniform way. Since 2003, we’ve been involved in the standardization process, which from 2006 formed the basis for the first GigE Vision (Gigabit Ethernet for Machine Vision) cameras. Since then, there’s been no need to read operating instructions because all information regarding performance and features is documented in the camera software. GenICam and GigE Vision are now widespread in factories, as are USB3 Vision and CoaXPress cameras, which are likewise based on GenICam. Standardization on a hardware and software level was a breakthrough which has given us and our customers much greater room for maneuver. Nowadays, cameras can be replaced without having to adapt the software.
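The vendor independence described above can be sketched in a few lines. This is a deliberately simplified toy model, not the real GenICam API: the `NodeMap` class and its methods are hypothetical stand-ins for the node map a GenICam-compliant camera exposes via its XML device description. The point it illustrates is that application code addresses standardized feature names (such as `ExposureTime`) and therefore works unchanged across vendors.

```python
# Toy illustration of the GenICam idea: cameras from different vendors
# expose the same standardized feature names, so application code can set
# ExposureTime or AcquisitionFrameRate without vendor-specific calls.
# NodeMap is a hypothetical stand-in, not the real GenICam interface.

class NodeMap:
    """Minimal stand-in for a node map read from a camera's device description."""
    def __init__(self, vendor, features):
        self.vendor = vendor
        self._features = dict(features)

    def set(self, name, value):
        if name not in self._features:
            raise KeyError(f"{self.vendor}: unknown feature {name!r}")
        self._features[name] = value

    def get(self, name):
        return self._features[name]

def configure(camera):
    # The same code works for any vendor because feature names are standardized.
    camera.set("ExposureTime", 5000.0)        # microseconds
    camera.set("AcquisitionFrameRate", 60.0)  # frames per second

cam_a = NodeMap("VendorA", {"ExposureTime": 10000.0, "AcquisitionFrameRate": 30.0})
cam_b = NodeMap("VendorB", {"ExposureTime": 20000.0, "AcquisitionFrameRate": 25.0})
for cam in (cam_a, cam_b):
    configure(cam)
```

In practice, this role is played by GenICam-based libraries, which parse each camera’s XML description and expose the same named features to the application, which is what makes swapping cameras without software changes possible.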
Prof. Michael Heizmann: Another milestone in addition to standardization was the transition from CCD to CMOS image sensors. The improvements in camera performance which continue to this day are also opening up new possibilities. The development of 3D sensors, active sensor systems and multispectral cameras likewise offers great potential for innovation and the aim should be to leverage this potential in the years ahead. In addition to these hardware innovations, the increasing use of machine learning (ML) software should also be mentioned.
Wütschner: The transition from CCD to CMOS was a success story, even though things didn’t go smoothly at first. Initially, the image quality was worse. But when it comes to resolution, speed and the integration of image processing into the sensor itself, this switchover signified enormous progress. The same applies to costs and availability—because there are now several dozen sensor manufacturers rather than just a handful.
Heizmann: For a number of years now, we’ve been looking at how systems can learn without human intervention. After all, this would allow us to elicit new information from image data and thus use much more data than was previously possible without having to rely on experts. This expands the range of possible applications because machine learning can help to draw the line between tolerable deviations and actual production errors more precisely. However, the use of ML requires new methods. Even proving that a system of this type functions as planned is much harder than with classic image processing systems. If such a project is to be carried out, a very large quantity of image data from good and defective parts must be available in order for the ML software to learn. In the past, we defined a set of parameters using a number of defective parts and experience. Nowadays, however, we need sufficient data which show the defect in a wide range of ways and under different conditions. It’s up to users to provide these data. That’s a real problem if errors hardly ever occur during production.
Wütschner: We’ve been working with ML systems since 2002. For many of our customers, they’re now standard. And it really is difficult if there are no or only very few bad examples. The algorithms compensate for this to a certain extent but the systems don’t work without learning examples. In certain cases, projects don’t get off the ground for this very reason. The fact that ML is a black box also causes difficulties. The system learns without human intervention. If safety-related decisions are made quickly on this basis, customers are faced with the issue of product liability—especially as it’s not always clear how these decisions were made. But once an ML solution has been successfully integrated, it solves problems which would have discouraged us from using machine vision in the past.
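One common way around the scarcity of bad examples mentioned above is to train only on good parts and flag anything that deviates too far from what was learned (one-class or anomaly detection). The sketch below assumes a single scalar feature per image, mean brightness, purely as a hypothetical stand-in; real systems use far richer features, and the sample values are illustrative, not real data.

```python
# A minimal one-class sketch: learn the distribution of a feature from good
# parts only, then flag samples that deviate strongly from it. The feature
# (mean brightness) and the data values are hypothetical illustrations.
from statistics import mean, stdev

def fit_good_parts(features):
    """Learn mean and spread of a scalar feature from good parts only."""
    return mean(features), stdev(features)

def is_anomaly(value, mu, sigma, k=3.0):
    """Flag samples more than k standard deviations from the good-part mean."""
    return abs(value - mu) > k * sigma

# Mean brightness of eight good parts (illustrative values):
good = [128.0, 130.5, 127.2, 129.8, 131.1, 128.9, 130.0, 129.3]
mu, sigma = fit_good_parts(good)

print(is_anomaly(129.5, mu, sigma))  # typical good part -> False
print(is_anomaly(95.0, mu, sigma))   # strong deviation  -> True
```

The appeal of this family of methods in the scenario described above is exactly that no defective parts are needed for training; the trade-off is that the system can only say "unusual", not what kind of defect it sees, which feeds directly into the black-box and liability concerns raised here.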
Heizmann: Image processing is an established part of automation technology. The closer we get to Industry 4.0—with its integration, connectivity, modularity, flexibility and individualization—the more we need machine vision. After all, it provides an insight into automated processes, documents them and is one of the keys to consistently high quality. Further development is needed in particular where products are individualized and processes are changed frequently. In many cases, today’s solutions aren’t adaptive enough. But the foundations for Industry 4.0 have been laid.
Wütschner: Machine vision and Industry 4.0 are closely linked, especially as image processing per se is connected with production systems. Nowadays, machines and systems usually communicate via the OPC UA standard. Many imaging and component providers implemented this standard a long time ago. Machine vision thus plays an integral role in communication in connected process chains, which makes it easier to monitor decisions and transfer image data to control centers or remote service teams. However, some customers remain skeptical when it comes to connectivity, cyber security and data sovereignty.
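The communication pattern described here can be illustrated with a toy model. This is not real OPC UA: the `AddressSpace` class and the node names are hypothetical stand-ins for a server’s address space. The point is the decoupling: the vision system publishes its result under well-known node names, and a control center reads them without any knowledge of the camera or vendor behind them.

```python
# Toy illustration (not real OPC UA) of a vision system publishing results
# to a shared address space that a control center reads. Class and node
# names are hypothetical.

class AddressSpace:
    """Minimal stand-in for an OPC UA server's address space."""
    def __init__(self):
        self._nodes = {}

    def write(self, node_id, value):
        self._nodes[node_id] = value

    def read(self, node_id):
        return self._nodes[node_id]

space = AddressSpace()

# Vision system side: publish the latest inspection verdict.
space.write("VisionSystem.Result.Pass", True)
space.write("VisionSystem.Result.DefectCount", 0)

# Control-center side: poll the agreed node names.
verdict = space.read("VisionSystem.Result.Pass")
defects = space.read("VisionSystem.Result.DefectCount")
```

In a real deployment, an OPC UA stack would provide this address space over the network, along with the security and access-control mechanisms that speak to the connectivity and data-sovereignty concerns mentioned above.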
Wütschner: Yes. Although many of our customers have internal know-how, the ever greater range of systems on offer is increasing complexity to the point that they’re happy to accept help from us. The individual products are becoming more and more user-friendly and can usually be connected via plug and play. What’s now more difficult is comparing the technologies on offer with all their advantages and disadvantages. If customers are to cope with the variety of products on offer, they need specialist knowledge and, if possible, cross-sector project expertise. Accordingly, advice and application engineering are playing an increasingly important role in our projects. We’re also significantly expanding the training we offer through our Virtual Imaging Academy in order to give our customers an idea of the possibilities offered by the relevant technologies and to further train their specialists in hardware- or software-related areas.
Heizmann: It has a huge influence. We’re further developing our study courses all the time. Our key aim is to provide students with a sound basic knowledge of signal and image processing so that they can actually understand new approaches rather than simply using them. Advanced master’s courses usually focus on cutting-edge technologies such as 3D sensor systems, hyperspectral imaging, information fusion or machine learning. Final papers are also important because students gain a deep insight into the material during projects and use the very latest processes. During their studies, they’re given the methodical tools for lifelong learning. After all, image processing is developing so quickly that lifelong learning is essential.
Wütschner: I can confirm that. Since I’ve been working in image processing, I’ve been learning new things every day. The fact that our community is closely linked and spends a lot of time working with open source solutions helps us a lot. In the past, I had to do a lot of time-consuming research in specialist books. Now, useful information is often just a few mouse clicks away. And there are easily accessible solutions for virtually any problem. These include example code which can be built on with specialist knowledge.
Wütschner: So far, we’ve come through the pandemic and the shortage of components very well. This could be because we serve a wide range of “artificial vision” applications which go beyond classic industrial machine vision. These include areas such as agriculture and food, sport and entertainment, and much more. Agriculture is increasingly relying on precision farming and using data collected by sensors in a targeted manner in the fields. For example, fertilizer can be applied only to the plants that actually need it, and lasers can be used to combat weeds without herbicides. There is huge potential here—both in terms of sustainability and the market opportunities for image processing. In sports, we offer vision solutions for goal line technology and player tracking in football, but the true potential is nowhere near being reached. More and more applications are linking sport with the digital world. In many cases, this is thanks to cameras. Another dynamic growth field is mobility, transport and logistics.
Heizmann: Applications where people are more closely involved with image processing are exciting. Smartphones have been a big success here. Most of their camera-based features are just for fun. But I think we’ll see one or two developments here in the area of “assisted living” for older or physically disabled people. This is challenging because homes have individual layouts and can barely be standardized. And the systems used must be able to cope with pets, for example, running through the picture at any time. Technically and in terms of costs, machine learning is now ready for use in such markets. Autonomous vehicles are another exciting area—whether they are drones flying around or vehicles traveling along roads or in factories.
Wütschner: I see this transformation as an opportunity. Here in Germany, the automotive sector played and still plays a central role. It will rely on machine vision in electric motor and battery production as well as body construction in order to maintain the established quality standards. There are numerous other sectors with a great future too—with huge demand for machine and artificial vision solutions in certain cases. STEMMER IMAGING embraced this diversification years ago and doesn’t depend on any one sector, thanks to its wide regional presence and application-related product range. I’m convinced that the majority of suppliers from other areas will be able to cope with this transition. They work on a technical level which will enable them to succeed in any other sector.
Heizmann: I too believe that there are more opportunities than risks. Naturally, many machine vision applications used in the production of internal combustion engines and their components will no longer be required. But extremely high quality requirements apply for the high-voltage systems of the future too. At the moment, a lot of research projects are being carried out in order to investigate the possibilities for close inline quality checks. Machine vision solutions will be needed for safety and assistance functions and even autonomous driving too—in order to analyze and interpret the environment. I’m expecting the research carried out in the area of autonomous driving to result in numerous new developments which could be interesting for all other machine vision applications. These worlds will continue to converge in the future.
Wütschner: I agree. The boundaries between the industrial and consumer sectors are visibly disappearing. We’re keeping a close eye on what’s happening in the area of sensors, and on how innovative approaches in industry can help our artificial vision projects to progress.