Predictive maintenance is on everyone's lips. Predictive quality less so, yet it is the concept with the greater immediate benefit for most manufacturing companies. The question of which concept is the right one is usually a false one: they do not compete. They build on the same data basis and complement each other, achieving more together than either concept alone.
And yet the confusion is common. Predictive maintenance is used as an umbrella term under which quality predictions are also subsumed. Or both concepts are packed into one AI project without clarifying what the goal is, and why the data basis that is sufficient for PM may not be sufficient for PQ.
This article clarifies: What is the difference between predictive maintenance and predictive quality? Where do they overlap? When do I need which concept - and how do I combine the two sensibly?
THE MOST IMPORTANT FACTS IN BRIEF

Both concepts belong to the family of predictive analytics in manufacturing: they use historical data and real-time process data to make predictions that replace reactive action with predictive action. But they focus on fundamentally different target variables: predictive maintenance on the machine, predictive quality on the component.

Common data basis: machine sensor time series, process parameters, production order metadata.
Predictive maintenance replaces two older maintenance strategies: reactive maintenance (repair when something is broken) and preventive maintenance (maintenance according to a fixed schedule, regardless of the actual condition). PM uses sensor data to continuously assess the condition of a machine or component and predict the optimum maintenance time.
The core of PM is condition monitoring: vibration, temperature, current and acoustic sensors provide continuous signals about the machine condition. Anomaly detection and degradation models use these signals to forecast how much remaining useful life a component has.
PM IN PRACTICE: TYPICAL FIELDS OF APPLICATION
Bearing diagnostics: vibration signature analysis detects bearing damage 2-6 weeks before failure
Tool wear: current consumption and vibration patterns during milling show the wear progression
Hydraulic systems: pressure curve analysis detects seal wear and pump fatigue
Electric motors: current spectrum analysis (Motor Current Signature Analysis) detects winding damage
Screwdriving systems: torque profile analysis detects calibration loss and gear fatigue
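The degradation-model idea behind these applications can be illustrated with a minimal sketch: fit a linear trend to a vibration indicator and extrapolate to a failure threshold. The sensor values, the threshold and the linear model are illustrative assumptions; real PM models use richer features and survival analysis.

```python
# Minimal RUL sketch: linear degradation trend on a vibration indicator.
# All values and the failure threshold are illustrative assumptions.
import numpy as np

def remaining_useful_life(hours, vibration_rms, failure_threshold):
    """Estimate hours until the vibration indicator crosses the threshold."""
    slope, intercept = np.polyfit(hours, vibration_rms, 1)
    if slope <= 0:
        return float("inf")  # no measurable degradation trend
    hours_at_failure = (failure_threshold - intercept) / slope
    return max(hours_at_failure - hours[-1], 0.0)

# Example: RMS creeping upward toward an assumed threshold of 4.0 mm/s
hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
rms = np.array([1.0, 1.3, 1.55, 1.9, 2.2])
print(f"estimated RUL: {remaining_useful_life(hours, rms, 4.0):.0f} h")
```

The same extrapolation logic underlies more elaborate approaches; only the trend model and the feature change.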
Predictive Quality goes one step further than Statistical Process Control (SPC): Instead of monitoring whether a process is getting out of control (SPC), PQ predicts whether a specific component will leave the end of the process with good quality values - while the process is still running.
This enables two types of intervention: proactive process adjustment (the parameters of the running process are corrected before a defect occurs) and early ejection (the component is removed after an early process step, before further value is added to an already bad part).
PQ IN PRACTICE: TYPICAL FIELDS OF APPLICATION
Screw assembly: waveform analysis of the torque curve predicts whether the screw connection meets the specification, even before the final test takes place
Injection molding: pressure curve and injection profile predict dimensional accuracy and surface quality
Welding processes: current-voltage curve predicts weld seam quality, without destructive testing
Machining: cutting force profile predicts surface quality and dimensional accuracy
Heat treatment: temperature profile predicts the hardening result and accelerates the release decision
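The PQ mechanism behind these applications can be sketched as a classifier on process features. Everything below is synthetic: the feature names (peak torque, torque gradient), value ranges and pass/fail boundary are assumptions chosen only to show how a model learns the link between process parameters and quality outcome.

```python
# Hedged PQ sketch: classifier predicting pass/fail from process features
# before the final inspection. All data is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Assumed features per run: [peak_torque_Nm, torque_gradient_Nm_per_deg]
good = rng.normal([50.0, 1.2], [2.0, 0.1], size=(n // 2, 2))
bad = rng.normal([58.0, 0.8], [2.0, 0.1], size=(n // 2, 2))
X = np.vstack([good, bad])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = pass, 0 = fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")

# In-process decision for a running part: eject early on predicted fail
sample = [[57.5, 0.85]]
print("eject" if model.predict(sample)[0] == 0 else "continue")
```

In production, the features would come from the component-linked process data described later in this article, and the labels from historical inspection results.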
The confusion between PM and PQ often arises because both use the same technical infrastructure - sensor data, time series analysis, machine learning models. But the differences in goal, model architecture and consequences of action are significant.
| Dimension | Predictive maintenance | Predictive quality |
| --- | --- | --- |
| Observation object | Machine / component (time-related) | Component / process cycle (component-related) |
| Target variable | Remaining useful life, probability of failure | Quality characteristic: dimension, strength, surface, function |
| Prediction horizon | Hours to weeks | Seconds to minutes (real time in the process) |
| Label data for training | Historical failures, maintenance events, degradation curves | Historical test results, linked component-specifically to the process parameters |
| Model type | Regression models, survival analysis, LSTM for time series | Classification models, random forest, gradient boosting, neural networks |
| Consequence of action | Plan maintenance order, order spare parts, adjust shift planning | Correct process parameters, eject component, increase inspection frequency |
| Reaction latency | Response within hours to days is acceptable | Response within seconds to minutes is necessary |
| Main beneficiary | Maintenance, production planning | Quality assurance, production control |
| Regulatory relevance | Conditional (OEE verification, maintenance documentation) | High (IATF 16949, ISO 9001, product liability) |
Key difference in one sentence: PM predicts when the machine will have problems. PQ predicts when the component will have problems. Often both happen at the same time - a tool that wears out produces worse parts before it fails. This is the basis for the strongest synergies.
The good news for manufacturing companies starting out with predictive analytics: the data basis needed for PM is largely the same as that needed for PQ. If you set up the infrastructure for one, you also create the prerequisites for the other.
| Data category | Relevance for PM | Relevance for PQ | Common requirement |
| --- | --- | --- | --- |
| Machine sensor time series | Very high: core of the PM model | High: machine condition influences component quality | High sampling rate, synchronized with the production order |
| Process parameters (torque, pressure, temp.) | High: degradation indicators | Very high: direct input variables for the PQ model | Accurately assigned to components, not just shift averages |
| Tool ID and runtime | High: tool wear is a PM issue | Very high: tool condition determines component quality | Available as a training data feature |
| Production order metadata | Medium | Very high: labeling of the quality data | Unique link process data ↔ component ID |
| Test results & measured values | Low: only indirect for PM | Very high: label data for the PQ model | Component-accurate, recorded promptly after the production process |
| Maintenance and repair history | Very high: label data for the PM model | High: context feature for quality fluctuations | Timestamps accurately documented |
The decisive common denominator is the precise linking of process data. For PM, it is often sufficient to view machine data over time. For PQ, the linking of each process data point with the component ID is mandatory - because the model must learn which combination of process parameters led to a good or bad result.
Practical consequence: if you start with PM and implement component-specific timestamps right from the start, you have already covered 70-80% of the effort for PQ. The remaining 20-30% is the linking of inspection results as label data, which is the critical PQ-specific requirement.
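The component-accurate link described above boils down to a join on a shared key. The sketch below uses hypothetical column names and IDs; the point is only that every process record and every inspection result carries the same component ID.

```python
# Sketch of the component-level link that makes PQ possible: process
# records and inspection results joined on a shared component ID.
# Column names and IDs are illustrative, not a fixed schema.
import pandas as pd

process = pd.DataFrame({
    "component_id": ["A1", "A2", "A3"],
    "peak_torque_Nm": [50.1, 58.2, 49.7],
    "cycle_time_s": [12.0, 12.4, 11.9],
})
inspection = pd.DataFrame({
    "component_id": ["A1", "A2", "A3"],
    "result": ["pass", "fail", "pass"],  # label data for the PQ model
})

# Inner join: only components with both process data and a label survive
training_set = process.merge(inspection, on="component_id", how="inner")
print(training_set)
```

Without this key, process data and inspection results remain two unrelated tables, and no PQ model can be trained.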
"In practice, we often see companies start a PM project and then realize: the data basis we have built up is also sufficient for quality predictions. That is no coincidence, it is architecture. If you think in terms of components right from the start, you build once and reap twice."
Amadeus, Chief Technology Evangelist, CSP Intelligence GmbH
The greatest efficiency gains are not achieved when PM and PQ are run as separate projects, but when the findings of one concept automatically flow into the other. Four synergy patterns are particularly effective in practice.
SCENARIO 1: Tool wear → Quality assurance

PM signal: Tool wear is approaching a critical threshold value (RUL < 20%).
PQ signal: Quality variance is already measurably increasing; parts still remain within the tolerance band, but the trend is visible.
Combined measure: Automatic tightening of the PQ tolerance limits plus increased inspection frequency until the tool change. At the same time, the tool change is brought forward.
Result: Rejects close to zero. The tool is changed shortly before the quality problem, not after it.
SCENARIO 2: Hydraulic pressure drop → Injection molding process quality

PM signal: The hydraulic system shows a pressure-loss trend; a seal is wearing out.
PQ signal: The injection pressure profile deviates from the reference curve; wall thickness variance increases.
Combined measure: The PQ model adjusts the reference profile to the current hydraulic condition. PM triggers a maintenance order. Production continues, with closer PQ monitoring.
Result: No production stop, no scrap peak. Maintenance takes place within the planned window.
SCENARIO 3: Bearing temperature increase → Surface quality in machining

PM signal: Bearing temperature rises continuously; cooling is deteriorating.
PQ signal: The shape of the cutting force curve changes; surface roughness is still within the tolerance range, but the correlation is known.
Combined measure: The PQ model switches to a high-frequency test mode. PM escalates to maintenance. The bearing is replaced in the next maintenance window.
Result: Surface quality assured. No unplanned downtime. Full transparency about the quality risk during the transition phase.
SCENARIO 4: Maintenance event → Model recalibration

PM signal: Maintenance completed; tool new, bearing replaced, calibration performed.
PQ signal: Historical quality data from before the maintenance is no longer representative of the current machine condition.
Combined measure: Automatic segmentation of the training data; post-maintenance data is weighted separately. The PQ model recalibrates to the new baseline.
Result: The PQ model remains accurate after the maintenance intervention. No manual intervention in the model configuration is necessary.
The question "Do we start with PM or PQ?" often assumes that a company already has the database required for both. In practice, the maturity level of the data infrastructure is the decisive factor - not the strategic preference for PM or PQ.
| | Level 1: Reactive | Level 2: Preventive | Level 3: Predictive | Level 4: Integrated |
| --- | --- | --- | --- | --- |
| PM | Maintenance after failure. No condition monitoring. | Schedule-based maintenance. Initial sensor data (temperature, vibration) available. | PM models active. Maintenance triggered by condition data. | PM and PQ share one data basis. PM findings flow automatically into PQ models. |
| PQ | Quality check at the end of the process. Rejects are detected, not prevented. | SPC active. Process parameters are recorded, but not linked to the exact component. | PQ models active. Early detection of quality deviations in the process. | PQ models recalibrate themselves after maintenance events. Complete closed loop. |
| Data basis | No structured machine data. Maintenance tickets in Excel. | Time series available, but not component-related. Test results in a separate system. | Component-specific linking of process data and test results. Real-time data flow. | Standardized data platform. Automatic model recalibration. Real-time feedback. |
Recommendation: the transition from Level 2 to Level 3 is the critical one. Anyone who does not yet have a component-specific data link at Level 2 should retrofit it first, before the first PM or PQ model is trained. A model on a poor data basis is worse than no model.
There is no universal answer - but there are clear indicators that determine the sensible starting point. The decision depends on three factors: Where is the biggest current pain point? What data is already available? And who is the main driver internally?
| Situation | Recommended entry | Reason |
| --- | --- | --- |
| High unplanned downtimes, maintenance under pressure, machine data already available | → PM first | Fastest ROI. The data basis for the expansion to PQ is built up in parallel. |
| High reject rates, rising quality costs, inspection effort too high | → PQ first | Direct business case. Process data is collected anyway; PM can follow. |
| Downtime and rejects equally problematic, data basis available | → Integrated project | Establish a uniform data basis. Train the PM and PQ models in parallel. |
| No structured data basis available | → Data basis project first | No model without data. A component-accurate sensor data infrastructure is the prerequisite for both. |
| IATF audit requires proof of quality, but no acute reject problem | → PQ as a compliance driver | PQ provides proof of quality and early-warning documentation, usable for regulatory purposes. |
| OEM demands proactive quality measures as a delivery condition | → PQ with PM substructure | PQ fulfils the delivery condition directly; the shared data basis keeps a later PM expansion open. |
We regularly encounter the following mistakes in practice, at companies that start with PM or PQ and underestimate the pitfalls of data-driven projects.
The machine is equipped with vibration sensors, the data flows into a time series database, and the PM model is trained. Then the question arises: "Can we also do PQ?" No. The sensor data is not component-specific; you do not know which measured value belongs to which part. The data basis has to be rebuilt.
Solution: Link each process data point with a component ID or production order ID right from the start. The additional effort is minimal. The subsequent added value for PQ is considerable.
A PQ model needs historical quality results as label data - linked to the process data of the production process with component accuracy. If you don't have this data set, you don't have a training set. The model cannot learn what 'good' and 'bad' mean.
The link is often missing: inspection results are in the QMS, process data in the MES - and nobody has the key that connects the two. The solution is not an AI issue, but a database architecture issue.
Solution: Set up component ID as a common key in QMS and MES before the first model is trained. Build up at least 6-12 months of historical data before training starts.
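Whether the QMS↔MES key actually links the data can be checked with a simple ID-overlap diagnostic before any model training starts. The IDs and the 90% threshold below are synthetic assumptions; the point is to measure label coverage, not to prescribe a tool.

```python
# Pre-training readiness check: how many MES process records can be
# matched to a QMS inspection result via the shared component ID?
# IDs and the coverage threshold are illustrative assumptions.

mes_ids = {"A1", "A2", "A3", "A4", "A5"}  # process records (MES)
qms_ids = {"A1", "A2", "A4"}              # inspection results (QMS)

matched = mes_ids & qms_ids
coverage = len(matched) / len(mes_ids)
print(f"labelled coverage: {coverage:.0%} ({len(matched)} of {len(mes_ids)} runs)")
if coverage < 0.9:  # assumed minimum before training is worthwhile
    print("insufficient label linkage - fix the shared key before training")
```

A low coverage number at this stage is the cheap early warning; discovering it after the model is trained is the expensive version of the same finding.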
PM goes to maintenance, PQ goes to quality assurance. Two projects, two budgets, two data silos. The result: two separate platforms with overlapping data sets that never enrich each other. The strongest synergies (see the synergy patterns above: a PM signal triggers a PQ adjustment) are therefore impossible.
Solution: PM and PQ share a common data platform - even if the models and user groups are different. The data infrastructure is a joint investment, not a departmental matter.
"We equip all 47 machines with sensors and train a PM and a PQ model for every one of them." Six months later: the amount of data is overwhelming, no model is productive, data quality is inconsistent and the project team is exhausted.
Solution: Identify the pilot machine: the machine with the highest risk of rejects or downtime. Build up the database there, train the first model, prove the ROI. Then scale up.
Predictive maintenance (PM) predicts when a machine or component needs to be serviced in order to prevent a failure. The object of observation is the machine. Predictive quality (PQ) predicts whether a specific component will meet the quality requirements - while it is still being manufactured. The object of observation is the product. Both use machine data and process parameters as input variables, but with different target variables: PM is aimed at machine availability, PQ is aimed at reducing rejects.
The decision depends on where the biggest pain lies. If unplanned machine downtime is the biggest problem, start with PM. If high scrap rates or quality costs dominate, start with PQ. In both cases, the data basis is the prerequisite. If no structured, component-specific process data is available, the database project is the first step - regardless of whether PM or PQ is the long-term goal.
Predictive quality in manufacturing is the ability to use process data to predict whether a component will meet the quality requirements - even before the final inspection takes place. The model learns from historical data which combination of process parameters has led to good or poor quality results. In practice, this enables two types of intervention: early process correction (the current process is adjusted) and early rejection (the component leaves the process before further value is added to an already poor part).
You need three categories of data for a functional PQ model: Process parameters per production run (component-specific, not as a shift average), inspection results per component as label data (linked to the process data by a common component ID), and sufficient historical data points for model training (typically at least 500-1,000 production runs with good/bad labels). If the component-specific link between process data and inspection results is missing, no PQ model can be trained.
SPC and PQ pursue a similar goal - detecting quality deviations at an early stage - but using different methods. SPC monitors statistical process parameters (mean value, variation) and sounds the alarm if the process gets out of control. PQ goes further: it predicts whether the specific component will be good for each individual production run based on the current parameter characteristics. SPC is rule-based and interpretable. PQ is model-based and requires training data. Many companies use SPC as the first stage and PQ as an extension.
The technical training time for an initial PQ model is typically hours to days. The actual time required is for data preparation: linking process data and test results with component accuracy, checking data quality, creating a training data set. In well-structured environments, this takes 2-6 weeks. In environments without component-specific data links, infrastructure projects must first be completed - this can take 3-9 months.
Yes - predictive quality is one of the central applications in the context of Industry 4.0 and smart manufacturing. It combines the three core elements of I4.0: data acquisition from production (sensors, MES), data integration (component-specific linking across system boundaries) and data utilization (AI model as real-time decision support). Predictive quality is therefore not a theoretical concept, but a measurable, ROI-positive application - provided the database is available.