By: Rachel Johnson, Principal Product Manager, MathWorks
The digitalisation of manufacturing means modern factories are more complex than ever, with thousands of connected sensors generating continuous streams of data. Operators have historically relied on manual data inspection, but it is now harder than ever for operational leaders to detect anomalies before issues arise.
Manufacturing leaders are therefore looking for smarter systems that can predict problems before they happen to ensure smooth operations with fewer disruptions. A 2025 Deloitte study found that 86% of manufacturing executives think smart factory solutions will be the primary drivers of competitiveness over the next five years.
Incorporating AI into manufacturing is a strategic necessity. A growing number of engineers, armed with a deep understanding of the systems they design and operate, are turning to AI-based anomaly detection solutions.
Most leaders are in sync with this shift, reflected in projections that the global AI-in-manufacturing market will reach $34.1 billion by 2030, growing at a compound annual growth rate (CAGR) of 42.1%. Meanwhile, another study shows AI adoption across industries in India stood at 48% in FY2024, with manufacturing alone rising from 8% to 22% in just one year.
How AI Works for Anomaly Detection
Integrating AI into manufacturing processes may be complex, but the potential rewards in efficiency, cost savings, and competitive advantage are immense. For organisations new to AI, the first step is to define what constitutes an anomaly. For example, BMW engineers realised they needed to distinguish between a true defect (a crack or missing part) and a pseudo-defect (harmless dust). Identifying the issue and determining the solution resulted in a 60% reduction in defects and cost savings of over $1 million per year at the company's Spartanburg plant.
The next step involves data gathering and preparation. AI effectiveness depends almost entirely on the quality of incoming data. Sensor readings, environmental conditions, maintenance logs and operational parameters must be collected, cleaned, and structured for analysis. This stage often requires as much effort as the modelling itself. Flawed or incomplete data will undermine any AI system’s effectiveness.
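The cleaning step described above can be sketched in a few lines. This is an illustrative Python example, not the tooling named in this article; the valid sensor range and the median gap-filling rule are assumptions chosen for the sketch:

```python
from statistics import median

def clean_readings(readings, low, high):
    """Drop missing values, then clamp the rest to the sensor's
    assumed valid range so spikes cannot dominate later analysis."""
    valid = [r for r in readings if r is not None]
    return [min(max(r, low), high) for r in valid]

def fill_gaps(readings):
    """Replace missing values with the median of the valid readings,
    preserving the original sample positions."""
    valid = [r for r in readings if r is not None]
    m = median(valid)
    return [m if r is None else r for r in readings]

# Hypothetical temperature stream with dropouts and one sensor glitch.
raw = [20.1, None, 19.8, 250.0, 20.3, None, 19.9]
cleaned = clean_readings(raw, low=0.0, high=100.0)
filled = fill_gaps(raw)
```

Whether to drop, clamp, or impute depends on the downstream model; the point is that these decisions are made explicitly before any training begins.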
Once data is adequately prepared, engineers must decide which AI techniques to apply. Broadly, they must choose between supervised learning, where labelled examples of normal and anomalous behaviour exist, and unsupervised learning, which discovers unusual patterns without explicit labels. Supervised methods are used when labelled faults are available, while unsupervised approaches are effective where anomalies are rare or were never collected.
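A minimal example of the unsupervised case is a z-score detector: flag any point that sits too many standard deviations from the mean. This is a hedged sketch of the simplest possible approach, not a production method; the threshold of 3 is a common rule of thumb, not a recommendation from this article:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose z-score exceeds the threshold.
    No labelled faults are needed -- the data defines 'normal' itself."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

A supervised method, by contrast, would learn the boundary between normal and faulty directly from labelled examples, as the next sections describe.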
Feature Engineering and Advanced Techniques
AI models are only as good as the data they learn from. Feature engineering is the process of extracting useful quantities from raw data, which can help AI models learn more efficiently from the underlying patterns. While experienced engineers often already know the types of features that are important to extract from the sensor data, Predictive Maintenance Toolbox™ in MATLAB provides interactive tools for extracting and ranking the most relevant features in a dataset to enhance the performance of supervised or unsupervised AI models.
Tools like the Classification Learner in MATLAB® help engineers experiment with multiple machine learning methods at once to see which model performs best, as Mondi Gronau did to predict potential failures in plastics manufacturing machines. The trained model can predict whether a new chunk of sensor data is normal or anomalous.
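The "try several models, keep the best" workflow can be expressed in miniature. This sketch compares two deliberately trivial candidate detectors on a labelled validation set; the data, thresholds, and model names are all invented for illustration and bear no relation to the tools or case study above:

```python
def accuracy(pred, truth):
    """Fraction of predictions that match the labels."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def threshold_model(cut):
    """A stand-in 'model': flag any feature value above the cut."""
    return lambda xs: [x > cut for x in xs]

# Hypothetical validation set: a feature value and its anomaly label.
x_val = [0.2, 0.4, 0.9, 1.3, 0.3, 1.1]
y_val = [False, False, True, True, False, True]

candidates = {"cut_0.5": threshold_model(0.5), "cut_1.0": threshold_model(1.0)}
scores = {name: m for name, m in candidates.items()}
scores = {name: accuracy(m(x_val), y_val) for name, m in candidates.items()}
best = max(scores, key=scores.get)
```

Real tools automate exactly this loop at scale, across many model families and hyperparameters, but the selection logic is the same.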
Some types of data, such as images or text, benefit from deep learning approaches that can extract patterns automatically without requiring explicit feature extraction. Combining time series and image-based anomaly detection has helped some companies identify faults in underground power cables using deep learning. While these deep learning approaches are powerful, they also require larger training datasets and computational resources.

Validation, Testing and Deployment
Before an AI model can be used in operation, it must be validated and tested. Engineers usually split the data into three parts: training, validation, and test sets. Training and validation data are first used to align the model parameters during the training phase, and additional test data is used after the model is trained to determine its performance on unseen data. Engineers also evaluate the model using performance metrics, such as precision and recall, and fine-tune to meet the needs of the specific anomaly detection problem.
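The precision and recall metrics mentioned above can be computed directly from a model's predictions and the true labels. The prediction and label vectors below are invented purely for illustration:

```python
def precision_recall(pred, truth):
    """Precision: of the points flagged as anomalous, how many were real?
    Recall: of the real anomalies, how many were flagged?"""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Tuning the model trades these off: a stricter detector raises precision but misses more faults, while a looser one catches more faults at the cost of false alarms.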
Deploying AI into live operations requires careful planning. Decisions around where models run depend on factors like latency, computational needs, and integration requirements. Well-designed pipelines ensure that incoming data is properly formatted, preprocessed, and communicated to the AI system, while APIs allow model predictions to feed directly into maintenance workflows and decision systems.
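The shape of such a pipeline, stripped to its essentials, is a chain of preprocessing, scoring, and a downstream action. Everything in this sketch is hypothetical: the text-based input format, the threshold "model", and the alert list all stand in for real integration points:

```python
def make_pipeline(preprocess, model, on_anomaly):
    """Chain preprocessing, scoring, and a downstream action into a
    single handler -- the minimal skeleton of a deployment pipeline."""
    def handle(raw_reading):
        x = preprocess(raw_reading)
        is_anomaly = model(x)
        if is_anomaly:
            on_anomaly(raw_reading)
        return is_anomaly
    return handle

alerts = []
pipeline = make_pipeline(
    preprocess=lambda r: float(r.strip()),  # incoming data arrives as text
    model=lambda x: x > 100.0,              # stand-in for a trained model
    on_anomaly=alerts.append,               # e.g. feed a maintenance workflow
)
```

In production, `on_anomaly` would call a maintenance or ticketing API rather than append to a list, but the separation of concerns is the same.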
The benefits of AI-enabled anomaly detection in smart factories extend well beyond early problem detection. According to a Deloitte survey, smart manufacturing is already delivering value, with respondents reporting up to 20% improvement in production output, 20% in employee productivity, and 15% in unlocked capacity.
Ford leveraged AI-driven digital twins to streamline vehicle manufacturing. Virtual replicas of its models allow the company to track and optimise production across design, assembly, and factory operations, while also informing improvements in efficiency and customer experience.
Manufacturers are reporting measurable improvements in uptime, reduced maintenance costs, fewer defects, and higher throughput. AI is no longer just an enabler of competitive edge for a select few companies; it’s becoming the backbone of smart factories across the manufacturing sector.