By Sebastian Bluemer

Check the AM Playbook for Artificial Intelligence - Process Monitoring

Updated: Jun 12, 2023

Standing for hours in front of the process chamber and observing the process. Discussing the process and the result. What does the spatter pattern look like, is the layer application sufficient, do you need to change something in the parameters? This is what process monitoring in front of an LPBF machine looked like in 2011, and I'll be honest, it helped me immensely in understanding the process.

In 2023, there are already many options for in-situ process monitoring in additive manufacturing. To name just a few in the area of LPBF: melt pool monitoring, OT (optical tomography) systems, powder bed cameras, laser line scanners, etc.

After all, we want to know as quickly as possible what the expected result of the build job will look like and which process events occurred during part manufacturing. Perhaps we can even intervene during the build process and prevent the build from failing?

What can we do with all the images, OT and melt pool data after the build?

In the past, enthusiastic students would sit down, check the data, and try to identify correlations and derive relationships to process phenomena. Heat maps and powder bed images were analyzed down to the smallest detail, and attempts were made to interpret and evaluate the signals an anomaly produced in the process monitoring system, based on a large number of test jobs.

Very time-consuming and not very promising, but essential for developing automated evaluation. After all, we must first understand what phenomena can occur in the process before we can begin to further optimize the evaluation methodology and make it more efficient.

Today, machine learning algorithms are often used to evaluate process data. So, how does machine learning actually work and how can it be applied to AM process data evaluation?

Installing the hardware to monitor process data:

The first step is to select an appropriate monitoring method. I chose the powder bed camera because it can be used universally for all powder bed processes in AM. When using a powder bed camera, it is important to ensure that the system is optimally calibrated, that the interface to the digital architecture works, and that the data input is consistent. The powder bed camera primarily provides images that show anomalies either in the powder bed itself or on the part being manufactured. Examples include raised edges on the part, horizontal lines or craters in the powder bed, weld spatter on the part, etc.

The frequency of the image input can be set to be either process-related (after exposure or recoating) or time-related (e.g. every 2 seconds).
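The two trigger modes can be sketched as a small decision helper. This is a minimal illustration; `CaptureConfig`, `should_capture`, and the event names are hypothetical, not part of any real camera API:

```python
from dataclasses import dataclass

@dataclass
class CaptureConfig:
    mode: str                # "process" or "time", the two modes from the text
    interval_s: float = 2.0  # only used in time-related mode

def should_capture(cfg, event=None, elapsed_s=0.0):
    """Decide whether a powder bed image should be taken now."""
    if cfg.mode == "process":
        # process-related: capture after each exposure or recoating step
        return event in ("exposure_done", "recoating_done")
    if cfg.mode == "time":
        # time-related: capture once the configured interval has elapsed
        return elapsed_s >= cfg.interval_s
    return False
```

In practice, the machine controller would call such a hook after each layer step or on a timer tick.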

Collect and structure data:

After ensuring consistent data input, the data must be collected. The first step is to create a data lake that continuously stores data from a large number of jobs; this stored process data serves as the input data. For a machine learning algorithm, it is also helpful to provide output data (labels or targets) to give the algorithm a data basis for the training step and to define the optimization goals in advance.

In the case of our AM process monitoring, this means that we still have to assist the algorithm and, above all, clearly define what counts as an anomaly in the process and what does not. We give it target values to optimize towards.
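A labeled sample in the data lake might look like the record below. The field names, path scheme, and anomaly classes are illustrative assumptions, not a real schema:

```python
# Hypothetical record layout: each powder bed image is stored with job
# metadata and, for training, a human-assigned label (the output data).
sample = {
    "job_id": "J0042",                           # assumed naming scheme
    "layer": 137,
    "trigger": "recoating_done",
    "image_path": "datalake/J0042/layer_0137.png",
    "label": "horizontal_line",                  # target: anomaly class or "ok"
}

# Assumed label set, drawn from the anomaly examples mentioned above.
ANOMALY_CLASSES = ["ok", "raised_edge", "horizontal_line", "crater", "spatter"]

def is_anomaly(record):
    """A record counts as an anomaly whenever its label is not 'ok'."""
    return record["label"] != "ok"
```

Defining this label set up front is exactly the "assistance" the algorithm needs: it fixes what an anomaly is before training begins.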

ML algorithm - Training:

In the third phase, training, the ML algorithm derives initial correlations from the input data and makes predictions. It also makes initial decisions: is the streak in the powder bed a horizontal line that influences the process, or is it negligible (within limits, an artifact) so that the algorithm does not need to report it? In this phase, it is particularly important to provide the algorithm with a large amount and variety of data so that it can base its decisions on many different process images. Images of one job are definitely not enough; we should rather think in dimensions of more than 50 jobs to generate a representative result.

Time to further optimize the algorithm:

The fourth phase is used to optimize the ML algorithm trained so far. For this purpose, the predictions generated by the ML model are compared with the known output data. If there are still large deviations for some categories, the algorithm must be further optimized and, if necessary, adjusted. The focus is on continuous optimization towards representative process results. New labels or targets should be avoided; the focus stays exclusively on the previously defined target values, and the ML algorithm is further optimized in this direction.
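The comparison of predictions against the known output data can be sketched as a per-class error report, which highlights exactly those categories that still deviate and need further optimization. This is an illustrative helper, not part of any specific toolchain:

```python
from collections import Counter

def per_class_error(y_true, y_pred):
    """Fraction of misclassified samples per true label.

    y_true: known output data (labels) from the data lake.
    y_pred: the ML model's predictions for the same samples.
    """
    total, wrong = Counter(), Counter()
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth != pred:
            wrong[truth] += 1
    # A high error rate for one class flags it for targeted retraining.
    return {label: wrong[label] / total[label] for label in total}
```

Classes with large error rates are then fed more training examples, without introducing any new labels or targets.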

Evaluate the generated algorithm:

In the last phase, evaluation, the trained algorithm is assessed on a test data set. This test data set is already known, and the process anomalies it contains have already been investigated. It is used to measure the performance of the ML algorithm and to decide whether it can be used in a first beta phase in daily operations. If the evaluation is successful, the algorithm is deployed in the existing data architecture.
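The final evaluation gate could be sketched like this: score the model on the held-out test set with known labels, and let it into the beta phase only if it clears an accuracy threshold. The 0.95 threshold is purely an assumption for illustration:

```python
def evaluate(model_predict, test_set, threshold=0.95):
    """Score a trained model on a known test set and gate deployment.

    model_predict: callable mapping features -> predicted label.
    test_set: list of (features, known_label) pairs with investigated anomalies.
    Returns (accuracy, deploy_to_beta).
    """
    correct = sum(1 for features, label in test_set
                  if model_predict(features) == label)
    accuracy = correct / len(test_set)
    # Deploy to the beta phase only if accuracy clears the threshold.
    return accuracy, accuracy >= threshold
```

Only models that pass this gate are wired into the existing data architecture for daily operation.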

This makes our life much easier, because we are now using a digital AI tool to review and evaluate the daily process data coming in from our AM machines. In my opinion, this is a game changer for additive manufacturing: as humans we are not able to handle this large amount of data, and we need process characterization tools to achieve a certain reliability in our processes and quality in the parts we manufacture.

The rapid development of digital tools such as ML algorithms is fascinating to me. These tools will help us a lot in the future to handle our AM process data and will be the basis for the next step of AM process control.
