
Priming the Data Pump for Predictive Maintenance

Serdar Uckun, Chief Technology Officer

AI and machine learning models are demonstrating incredible success in solving all types of problems. The end results get all the publicity, but there is often less attention on the arduous data engineering involved in getting the data ready for a new application. This is a particular challenge for predictive maintenance applications, which often stall or outright fail owing to data issues.

Although predictive maintenance (PdM) has been around for years, companies have been slow to adopt it. Only about 24% of companies have adopted AI or IoT tools to improve predictive maintenance, according to a recent survey of process industry operators conducted by Novity and BAO. One big challenge is the length of time required to gather enough data to create accurate predictions. The long lead time and mediocre accuracy often fail to deliver the cost savings that would justify the expense of running a predictive maintenance program. The top two adoption concerns cited by the survey were the time required to break even financially on PdM investments and the total cost of ownership.

Companies can sidestep many of these problems by adopting predictive maintenance solutions that have already addressed the data engineering required to understand how equipment such as pumps and motors is likely to fail. Data management and infrastructure are important processes that need to be addressed in order to build the most accurate algorithms.

Priming the data pump for predictive maintenance requires not just engineering the data but also capturing the physical characteristics that yield greater accuracy and earlier warning of impending problems. This includes data types such as temperature, pressure, vibration, and actuating speed, collected from the appropriate locations in a machine.

Predictive maintenance data challenges

The various challenges in setting up the appropriate data pipeline are often collectively referred to as the cold start problem. These include inventorying existing data sources, adjusting the collection rate and resolution, adding new sensors, integrating data into the data science platform, and validating the results.

Another big challenge is that equipment in the most efficient factories does not fail that often. In less efficient operations, however, when failures do occur, the costs can be massive. A recent survey found that more than 40% of plant managers experience unplanned downtime more than five times per month, and more than half experience unplanned downtime more than three times per month. Of those experiencing unplanned downtime, 20% reported that each event can cost between $10,000 and $50,000, and 7% reported costs ranging from $50,000 to more than $200,000 per event.
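As a rough illustration of how those figures compound, here is a back-of-the-envelope calculation. The event frequency and cost range are taken from the survey ranges above; the annualized totals are simply extrapolations, not reported data.

```python
# Back-of-the-envelope annual downtime cost using the survey's ranges.
# The inputs are illustrative assumptions, not measured plant data.
events_per_month = 5                  # "more than five times per month"
cost_per_event = (10_000, 50_000)     # low/high cost per event, USD

annual_low = events_per_month * 12 * cost_per_event[0]
annual_high = events_per_month * 12 * cost_per_event[1]
print(f"Estimated annual downtime cost: ${annual_low:,} to ${annual_high:,}")
# -> Estimated annual downtime cost: $600,000 to $3,000,000
```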

When equipment fails regularly, it is easier to develop models based on its performance, but expensive, high-throughput equipment tends to fail less often, which is a good thing for day-to-day operations.

The lack of failure data can make it challenging to create and improve diagnostic models. The problem is that it takes a lot of data to illustrate how equipment fails so that it can be analyzed by a predictive maintenance model. It can take another stretch of time to evaluate how well the model works after deployment. For example, if a machine fails every quarter, it may take years to validate the model’s performance.
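To make the quarterly-failure example concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical number of observed failures needed before the model's predictions can be trusted.

```python
# Rough estimate of how long model validation takes when failures are rare.
# Both inputs are illustrative assumptions, not industry benchmarks.
failures_per_year = 4     # a machine that fails roughly once per quarter
failures_needed = 10      # hypothetical count of observed failures to validate the model

years_to_validate = failures_needed / failures_per_year
print(f"~{years_to_validate:.1f} years to observe {failures_needed} failures")
# -> ~2.5 years to observe 10 failures
```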

The right data trumps more data

Most plant operators have accumulated large stores of data. But it is often not at the granularity or sampling frequency that would be useful for predicting failure modes.

For example, many plants have invested in a data historian, a combination of hardware and software tools that stores data from plant control systems. Historians connect to the plant network and control systems and keep a log of ongoing operations, such as when valves are opened and closed.

One challenge is that the historian logs data from sensors built for controlling overall plant operations, but it likely will not include sensors optimized for making the most accurate assessment of equipment health. For example, accurate predictive maintenance models generally require dedicated sensors for new data sources such as vibration and acoustic analysis.

A second challenge is that the data historian typically only summarizes data for traditional operational reporting and cannot predict failure horizons. Furthermore, the historian may summarize the data, compress it, or clean it up in other ways that make it even less helpful for predictive maintenance use cases. For example, many types of valves get sluggish long before an outright failure occurs, so it is possible to develop a highly accurate model of when a valve will stop working based on how fast it opens or closes. A typical data historian may only collect a reading every other second, which is too coarse to pick up the subtle, hard-to-detect changes indicative of a potential failure. A more accurate model requires readings that capture changes at hundredths of a second.
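The resolution mismatch is easy to see with a small sketch. The stroke times below are hypothetical; the point is that a reading every other second cannot resolve a drift measured in hundredths of a second, while a dedicated high-rate sensor can.

```python
# Sketch: why coarse historian sampling hides valve sluggishness.
# Stroke times and drift are hypothetical values for illustration.
healthy_stroke_s = 0.85       # assumed time for a healthy valve to fully open
sluggish_stroke_s = 0.97      # assumed degraded valve, ~120 ms slower

historian_interval_s = 2.0    # one reading every other second
dedicated_interval_s = 0.01   # readings at hundredths of a second

for name, interval in [("historian", historian_interval_s),
                       ("dedicated sensor", dedicated_interval_s)]:
    samples_per_stroke = round(sluggish_stroke_s / interval)
    print(f"{name}: {samples_per_stroke} samples per stroke "
          f"at {interval * 1000:.0f} ms resolution vs. a ~120 ms drift")
# historian: 0 samples per stroke -- the whole stroke falls between readings
# dedicated sensor: 97 samples per stroke -- the drift is clearly measurable
```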

Data science meets physics

Different equipment types such as pumps and motors have dominant fault modes; they are more likely to fail in very specific ways. For example, pumps are more likely to fail due to a bearing fault or the blowout of a seal. Sophisticated physics simulations can improve the ability to predict ahead of time when this equipment is likely to fail and how it will fail. This gives operators more time to take preventive action.
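One simple way to encode this kind of domain knowledge is a lookup of dominant fault modes per equipment class. The modes and rough shares below are illustrative assumptions for the sake of the example, not a validated reliability database.

```python
# Illustrative mapping from equipment class to dominant fault modes.
# The modes and approximate shares are assumptions, not field statistics.
DOMINANT_FAULT_MODES = {
    "centrifugal_pump": [("bearing_fault", 0.5), ("seal_blowout", 0.3), ("impeller_wear", 0.2)],
    "electric_motor":   [("bearing_fault", 0.6), ("winding_insulation", 0.25), ("rotor_bar", 0.15)],
}

def likely_failure_modes(equipment_class: str):
    """Return the fault modes for a class, most likely first."""
    modes = DOMINANT_FAULT_MODES.get(equipment_class, [])
    return sorted(modes, key=lambda mode: mode[1], reverse=True)

print(likely_failure_modes("centrifugal_pump"))
# [('bearing_fault', 0.5), ('seal_blowout', 0.3), ('impeller_wear', 0.2)]
```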

Adopting a physics-based approach to complement the machine learning models means weaving the physical principles of how equipment should work and, more importantly, how it fails into the predictive maintenance models. A physics-based approach also enables forward predictions, for example that a pump will fail within two months, with a high degree of certainty.
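Here is a minimal sketch of the idea, assuming a simple exponential wear law as the physics component and a crude data-driven correction term; the wear law, parameters, and failure threshold are all hypothetical placeholders, not Novity's actual models.

```python
import numpy as np

# Hybrid forecast sketch: a physics-based wear law provides the expected
# health trajectory, and a data-driven residual term corrects it.

def physics_health(t_days, wear_rate=0.01):
    """Expected health index from an assumed exponential degradation law."""
    return np.exp(-wear_rate * t_days)

def residual_correction(recent_residuals):
    """Crude stand-in for an ML model: carry forward the mean bias between
    observed sensor-derived health and the physics prediction."""
    return np.mean(recent_residuals)

def days_to_failure(recent_residuals, threshold=0.4, horizon_days=365):
    """First day the corrected health index drops below the failure threshold."""
    for t in range(horizon_days):
        health = physics_health(t) + residual_correction(recent_residuals)
        if health < threshold:
            return t
    return None

# Observed health has been running slightly below the physics prediction.
print(days_to_failure(recent_residuals=[-0.05, -0.06, -0.04]))  # ~80 days
```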

A pre-trained physics model makes it much easier to tune a predictive maintenance application with less data. In some ways, this is like Google’s BERT AI model for natural language processing. Google has invested considerable data science resources and computing power to develop these highly robust models that work for many natural language comprehension scenarios. Developers can then take these models and adapt them to new use cases for specific domains, such as a restaurant ordering bot, with a minimal amount of retraining on new data.

Similarly, in the predictive maintenance realm, it is possible to pre-train sophisticated physics models that capture the signs that different types of equipment are likely to fail. These models can be fine-tuned for specific equipment characteristics such as typical and extreme operating speed, throughput, and schedule, leading to highly accurate predictions with only about a month of new data to calibrate the general physics model to the specific equipment in the field.
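As a sketch of what that calibration step might look like, the snippet below fits the wear-rate parameter of the same assumed exponential health model to roughly a month of daily readings. The model form, the starting value, and the simulated data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Calibration sketch: refine a generic physics model's wear-rate parameter
# using about one month of field data from a specific asset.

def health_model(t_days, wear_rate):
    """Assumed exponential degradation law (same form as the pre-trained model)."""
    return np.exp(-wear_rate * t_days)

# ~30 days of daily health-index readings from one pump (simulated here).
rng = np.random.default_rng(0)
t = np.arange(30)
observed = health_model(t, wear_rate=0.012) + rng.normal(0, 0.01, size=t.size)

# Start from the generic pre-trained wear rate and refine it for this asset.
(fitted_rate,), _ = curve_fit(health_model, t, observed, p0=[0.01])
print(f"Calibrated wear rate: {fitted_rate:.4f} per day")
```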

Aligning the data pipeline

In many predictive maintenance projects, data scientists may work with the operations team for a year or more to develop the data pipeline and capture the characteristics of enough faults to train the models. Teams can see results much faster by starting with a model based on physics first principles rather than neural network weights. The secret lies in striking the appropriate balance between physics models and data science models to generate more accurate predictions.

This kind of process typically starts by evaluating the data in place and how the collection process works. It is also essential to assess all the different ways data is preprocessed before staging it for the predictive maintenance system.

Another good practice is to work with the operations team to assess which equipment causes the most problems in terms of failure frequency and impact, and to prioritize by ROI. It is often the case that a small percentage of the total equipment creates most of the problems. Teams can then focus their attention on this equipment to identify what data is available and what kind of models will best predict the common failure modes.
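A simple way to run that prioritization is to rank assets by their annual downtime impact. The asset names, failure counts, and cost figures below are illustrative assumptions.

```python
# Sketch: rank assets by annual downtime impact to prioritize PdM effort.
# Failure counts and per-failure costs are illustrative assumptions.
assets = {
    "pump_101":   {"failures_per_year": 6, "cost_per_failure": 40_000},
    "motor_205":  {"failures_per_year": 2, "cost_per_failure": 15_000},
    "valve_318":  {"failures_per_year": 9, "cost_per_failure": 5_000},
    "compressor": {"failures_per_year": 1, "cost_per_failure": 120_000},
}

impact = {name: a["failures_per_year"] * a["cost_per_failure"]
          for name, a in assets.items()}
for name, annual_cost in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>12}: ${annual_cost:,} per year")
# A handful of assets usually dominates the total -- start the data work there.
```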

Physics-based models can prime the pump by allowing enterprises to see more accurate and timely predictions with far less data. Enterprises that prime the data pump using pre-tuned physics models, such as the Novity TruPrognostics™ engine, can see accuracies as high as 90% with a much shorter lead time for getting the data pipeline in place. These models address the cold start problem by reducing the data required to begin generating accurate predictions. This not only saves time but also shortens the time required to break even on the investment, since the system begins generating results more quickly.