This study presents a cost-effective, high-precision machine learning (ML) method for predicting melt-pool geometry and optimizing process parameters in the laser powder-bed fusion (LPBF) process with Ti-6Al-4V alloy. The ML models incorporate five key features: three process parameters (laser power, scanning speed, and spot size) and two material parameters (layer thickness and powder porosity). The target variables are the melt-pool width and depth, which collectively define the melt-pool geometry and provide insight into melt-pool dynamics in LPBF. The dataset integrates information from an extensive literature survey, computational fluid dynamics (CFD) modeling, and laser melting experiments. Multiple ML regression methods are assessed to determine the best model for predicting the melt-pool geometry. Ten-fold cross-validation is applied to evaluate model performance using five evaluation metrics. Several data preprocessing, augmentation, and feature engineering techniques are applied to improve accuracy. Results show that the Extra Trees regression and Gaussian process regression models yield the smallest errors for predicting melt-pool width and depth, respectively. The ML modeling results are validated against the experimental and CFD modeling results. The parameter most strongly affecting the melt-pool geometry is also determined by sensitivity analysis. The processing parameters are optimized using an iterative grid search over the trained ML models. The proposed ML framework offers computational speed and simplicity, and can be applied to other manufacturing techniques to identify their critical process characteristics.
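The modeling workflow described above can be sketched in a few lines of scikit-learn. This is a minimal illustrative example, not the authors' code: the synthetic data, feature ordering, and model hyperparameters are assumptions; only the model families (Extra Trees for width, Gaussian process for depth) and the 10-fold cross-validation follow the abstract.

```python
# Illustrative sketch of the abstract's ML pipeline (hypothetical data and settings).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Five assumed features: laser power, scanning speed, spot size,
# layer thickness, powder porosity (all scaled to [0, 1] here).
X = rng.uniform(size=(100, 5))

# Synthetic stand-ins for measured melt-pool width and depth.
width = X @ np.array([2.0, -1.5, 1.0, 0.5, -0.3]) + rng.normal(scale=0.05, size=100)
depth = X @ np.array([1.5, -2.0, 0.8, 0.7, -0.4]) + rng.normal(scale=0.05, size=100)

# Model per target, per the abstract: Extra Trees for width, GP for depth.
width_model = ExtraTreesRegressor(n_estimators=200, random_state=0)
depth_model = GaussianProcessRegressor(alpha=1e-2, random_state=0)

# 10-fold cross-validation (default scoring is R^2 for regressors).
w_scores = cross_val_score(width_model, X, width, cv=10)
d_scores = cross_val_score(depth_model, X, depth, cv=10)
print(f"width mean R2: {w_scores.mean():.3f}")
print(f"depth mean R2: {d_scores.mean():.3f}")
```

The same trained models can then drive the abstract's grid-search optimization step, by evaluating predictions over a dense grid of candidate process-parameter combinations.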
