In industrial production, laser processes ensure precision, flexibility and efficiency. Because it works contactlessly and without mechanical forces, laser machining can be controlled with great precision, making it ideal for the data-driven process world of Industry 4.0, especially if it can be optimized with the help of artificial intelligence (AI). Peter Abels, an expert in process monitoring and control at the Fraunhofer Institute for Laser Technology ILT in Aachen, is pursuing this goal, as is Joachim Schwarz, who is setting up and heading a machine learning team in the Research and Development department of Precitec Vision GmbH & Co. KG. In this interview, the two experts talk about the technological potential of AI for laser-based production and quality monitoring. They also discuss how the use of AI could influence hardware requirements.
Peter Abels: Compared to our first conference in 2019, AI is no longer new and exotic in laser technology. Within two years, the technology has become established in the sector despite the coronavirus pandemic. It is becoming an increasingly important tool in laser processes, and people are overcoming their initial reservations. System providers are working to integrate artificial intelligence and machine learning into their products and to make series applications usable. AI has arrived on the market.
Joachim Schwarz: A few years ago, we carried out the first proof-of-concept tests. We started in image processing after we saw the progress being made with deep convolutional neural network (CNN) architectures such as AlexNet. We instantly saw the benefits, for example when it comes to analyzing the quality of welded seams. We were able to access a large volume of image data from machines all over the world and quickly made significant progress. The results were better than with classic approaches. In the next step, we applied the methodology to process emissions in the form of time series. The aim was not only to beat the previous system; we also wanted to expand its functionality, for example to obtain strength predictions from process data. This too worked well on the basis of near-series data. We then implemented a prototype at one of our customers' sites, where it runs as part of the near-series process. We were very happy with the results, so we decided to start product development. I expect that we will be able to start marketing the product next year. But naturally, AI remains a big research topic. The focus here is on the systems' robustness in 24/7 operation, on questions of how the models learn, and possibly continue learning, during operation, and on the contrasting areas of semi-supervised and unsupervised learning, to name just a few examples. Research and development are becoming ever more closely linked, and we are involved in an increasing number of projects with partners from industry and research in order to stay at the forefront of this new technology.
Schwarz: In addition to the analysis of welded seams that I mentioned earlier, I see potential in classic areas such as process and quality control through the fusion of time series and images. In the future, the aim will definitely be to analyze welding processes inline and to control them on the fly if necessary. In order to do this, we really do need to recognize all relevant information in the signals and include this information in such a closed control loop. This would minimize quality fluctuations—and thus the quality assurance outlay. With AI, we have a promising tool which will help us to achieve this goal.
Abels: I would turn the question around: are there areas of application where artificial intelligence is of no interest? I cannot think of many. AI technology offers enormous opportunities in all areas of materials processing as well as in measuring systems. At our conference, an automotive manufacturer described how it uses data from a globally distributed monitoring system to optimize the laser welding process at each of its sites. When combined in this way, each individual laser system benefits from the AI-based, continuous learning processes of all the others: machine learning spanning the whole world, almost reminiscent of the way we humans learn throughout our lives. This, together with the closed control loops that Mr. Schwarz just mentioned, suggests possibilities on numerous other levels too, whether it be statements about materials, batches or the productivity of processes. A very wide range of possibilities opens up. And if they are well designed, AI systems are simple to use and can be adapted quickly. We at the ILT are working to leverage this potential. To date, I have not seen a laser process where AI would not be useful.
Abels: Yes, AI can certainly help us with optical issues. My group at the ILT specializes in process sensor systems and system technology. Our ten-strong team includes specialists from a range of areas, including mechanical engineering, IT, electrical engineering, physics and medical technology. These various perspectives are important for optimizing AI processes given our institute's range of process-related requirements. Interestingly, we looked at neural networks in depth back in the 1990s as a way of assessing welding processes. At the time, the technology was not sufficiently developed and the performance of the available computers was nowhere near sufficient; we were happy if we achieved networks with between 30 and 50 nodes. Compared to what we can do today, we have come light years since then.
Schwarz: I can confirm that. I too researched neural networks at a Fraunhofer Institute in Stuttgart. We worked with three-layer networks in the Stuttgart Neural Network Simulator (SNNS), one of the first simulators available. At the time, applications in industrial production were unheard of; we were much quicker with classic analytical methods. But the idea stayed in the back of my mind, and when AlexNet arrived I realized that the time had come for AI systems. What we need now is access to as broad a data basis as possible. In a globally learning system, data from machines all over the world need to come together. This is the only way to generalize models with the necessary quality. There is some resistance here, which is understandable to a certain extent. After all, no company likes to share its detailed process data.
Schwarz: Yes, this is certainly conceivable. For example, in coaxial applications we have to look through the welding optics, which is of course not ideal for image processing. It would be interesting to optimize the images here with the help of AI in order to extract more information than the optics deliver. However, this is a balancing act: we need to avoid adding non-existent information to the image. This is not a problem in the consumer domain, where AI can correct poorly exposed photos by drawing on sample image data available on the Internet. But in an industrial process, data must not be added. The topic of sensor fusion is also exciting, as a way of monitoring the same process simultaneously in different spectra, for example process emissions plus camera images or data from hyperspectral cameras. Here, AI will be very useful for data fusion. In the future, we may be able to achieve more precise process control with less hardware. And where very complex lighting was needed in the past, we may be able to reduce it by automatically extracting features via AI.
Abels: I agree. It is important neither to lose information nor to add non-existent image data. In any case, the systems can see more in image data than the human eye can. In the future, AI will also be able to evaluate image data extremely quickly because it does not need to understand a process in detail. What matters is to present the algorithm with carefully selected features of the welding process, whether it be the weld pool, the keyhole size or the seam width, from which it can assemble an overall picture containing all the information it needs. With the multisensory approaches just mentioned, AI will arrive at a usable result much more quickly than is possible with classic image processing methods. The learning processes do not have to understand everything in detail if we teach them to specifically scan image data for the features that our human empirical knowledge has identified as relevant to process quality and, ultimately, product quality.
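As a toy illustration of handing the algorithm such a predefined feature, the seam width could be measured per image row roughly as follows. The binary image and the width definition are invented for illustration, not Fraunhofer ILT's actual pipeline:

```python
# Synthetic top-view "weld image" after thresholding: 1 = seam pixel, 0 = background.
image = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
]

def seam_width(row):
    """Width of the seam in one image row: span between the first
    and last seam pixel (0 if the row contains no seam)."""
    cols = [i for i, px in enumerate(row) if px]
    return cols[-1] - cols[0] + 1 if cols else 0

# One compact feature per row instead of the full pixel field.
widths = [seam_width(row) for row in image]
mean_width = sum(widths) / len(widths)
```

A downstream model then reasons over a handful of such numbers (width, pool size, keyhole size) rather than raw pixels, which is what makes fast inline evaluation plausible.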
Abels: Some training is always necessary, but the methods are getting better all the time, especially as we can vary them and teach the AI certain basic patterns in advance. Essentially, the evaluation in a CNN is matched against previously defined classifiers that convert the results into "classes". A CNN of this kind can have many millions of adjustable parameters, whereas the classifier stage works with far fewer, carefully qualified parameters. The better I preconfigure the AI, the easier it is to select the classifiers for the task at hand. It is similar to face recognition: recognizing a person among a large variety of faces is of course more difficult than first limiting the selection to the faces relevant to the search and comparing the face I am looking for only to those.
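The effect of narrowing the candidate set can be made concrete with a toy softmax calculation; the similarity scores below are invented purely for illustration:

```python
import math

def softmax(scores):
    """Convert raw similarity scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical similarity scores between a query face and candidate faces.
# Index 0 is the true match (score 4.0); the rest are distractors.
all_scores = [4.0, 3.5, 3.0, 2.5, 2.0, 3.4, 3.2, 2.8]

# Matching against the full gallery: confidence in the match is diluted.
p_full = softmax(all_scores)[0]

# Preselecting a shortlist of two candidates: the same score now dominates.
p_short = softmax([4.0, 2.0])[0]
```

With identical raw scores, the classifier's confidence in the true match rises sharply once the comparison set is restricted, which is exactly the benefit of preconfiguring the classes.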
Schwarz: Exactly. We try to restrict the variations by not defining classes on all possible scales. At the same time, constant plausibility checking is crucial. We are also in the process of using unlabeled data from tests and series production ramp-ups for our AI in order to base our generalized models on as broad a data basis as possible. This way, our customers can rely on the feature extractor and only need to train the very last layers of the AI. The old problem of having to train the AI in-house with specialists for each series process is clearly shrinking.
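The division of labor described here, a pretrained and frozen feature extractor plus a small trainable head, can be sketched in a few lines. The extractor, the toy "process signals" and all numbers are stand-ins, not Precitec's actual model:

```python
import math
import random

random.seed(0)

def feature_extractor(raw):
    """Stand-in for a pretrained, frozen network: maps a raw signal
    to a fixed feature vector. Its parameters are NOT updated below."""
    return [sum(raw) / len(raw), max(raw) - min(raw)]

# Toy labeled signals: class 1 has a much larger spread than class 0.
data = [([random.gauss(0, 0.2) for _ in range(10)], 0) for _ in range(50)] + \
       [([random.gauss(0, 2.0) for _ in range(10)], 1) for _ in range(50)]

# Only the final layer (a logistic classifier on the features) is trained.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(feats):
    z = sum(wi * f for wi, f in zip(w, feats)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):                     # gradient descent on the head only
    for raw, label in data:
        feats = feature_extractor(raw)   # frozen features, never retrained
        err = predict(feats) - label
        for i in range(len(w)):
            w[i] -= lr * err * feats[i]
        b -= lr * err

accuracy = sum((predict(feature_extractor(raw)) > 0.5) == bool(label)
               for raw, label in data) / len(data)
```

The point of the sketch is the asymmetry: the customer touches only `w` and `b` (a handful of parameters), while the generalized extractor, trained once on the broad data basis, stays fixed.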
Schwarz: It has already been mentioned: first and foremost, we do not have the data. To change this, as many customers as possible will need to realize that their data are needed to optimize AI systems and to take advantage of the great opportunities for more efficient, error-free industrial production. We are fortunate to work with some very open-minded pilot customers, and the necessary trust already exists between us. If access to process data is only permitted within a particular plant, this leads to a huge amount of extra work. The cloud provides the infrastructure for secure data exchange; now the willingness to use this infrastructure must grow. I also see a challenge in the fact that there are currently two software teams in our development organization: one develops the typical control and process software, the other works on machine learning. But integrating the classifiers into the system and setting up the necessary data handling infrastructure requires close cooperation between the two teams. This affects development time, especially as it is becoming increasingly difficult to find suitably qualified specialists in sufficient numbers.
Abels: Data availability and robust processes for evaluating the results are essential if AI technology is to make progress. To make quick progress, we need bold companies that, above all, recognize opportunities and are not dissuaded from sharing their process data by what I consider to be manageable risks. Thanks to our strong industrial base in Germany and Europe, our strong mechanical engineering sector and our leading photonics industry, we have everything needed to make AI a major success story. However, this requires targeted research funding on a national and European level. After all, even the most promising new technologies initially harbor risks of setbacks that companies cannot bear alone. The fact that funding for photonics research in Germany has been cut precisely in this exciting transformation phase is not the signal we would have wished for in the global race towards AI-assisted laser manufacturing.