As industries move towards Industry 4.0, Anais Dotis-Georgiou, Developer Advocate, InfluxData, makes the case for predictive maintenance – enabled by better control and manipulation of data.
While generative AI has captured much of the attention over the past few years, machine learning (ML) remains the most mature and widely deployed branch of AI.
In simple terms, ML uses algorithms to identify trends in data and predict when similar patterns will occur. Perhaps not surprisingly, financial services institutions were among the early adopters of this technology, but it is now used across a wide variety of verticals. Some of the most noteworthy applications are in the manufacturing sector, where it helps massively reduce machine downtime by predicting when equipment failure might occur.
Broadly speaking, a manufacturer has two options when using ML for predictive maintenance. One is to take a moderated approach that relies on human intervention (essentially double-checking with an in-person inspection); the other is to take a more automated approach using “deep learning.” At its most basic, deep learning enables software to define what normal operation looks like, isolate anomalies without prompting, and teach itself without human moderation.
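To make that idea concrete, here is a minimal sketch in Python: a small autoencoder learns to reconstruct healthy sensor windows, and readings it reconstructs poorly are flagged as anomalies. The synthetic data, network size, and numbers here are illustrative assumptions, not a production design.

```python
# Minimal sketch: a model "defines" normal operation by learning to
# reconstruct healthy sensor windows; poor reconstruction flags an anomaly.
# All data, layer sizes, and magnitudes here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "healthy" vibration windows: 500 windows of 32 readings each.
healthy = torch.sin(torch.linspace(0, 6.28, 32)) + 0.1 * torch.randn(500, 32)
train, held_out = healthy[:450], healthy[450]

model = nn.Sequential(              # tiny autoencoder: compress, then rebuild
    nn.Linear(32, 8), nn.ReLU(),
    nn.Linear(8, 32),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(500):                # learns normal behavior from unlabeled data
    opt.zero_grad()
    loss = loss_fn(model(train), train)
    loss.backward()
    opt.step()

# Reconstruction error on a held-out window vs. a simulated faulty one.
with torch.no_grad():
    normal_err = loss_fn(model(held_out), held_out).item()
    faulty = held_out + 0.8 * torch.randn(32)   # e.g. a developing bearing fault
    fault_err = loss_fn(model(faulty), faulty).item()
print(f"normal error={normal_err:.4f}, faulty error={fault_err:.4f}")
```

The key point is that no one ever labels a window as “faulty”: the model only ever sees normal behavior, and anything it cannot reconstruct well stands out on its own.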
For global manufacturers – estimated to lose almost $1.5 trillion every year due to unplanned downtime – the benefits of a deep learning approach can be game-changing.
Deep learning, deep automation
Deep learning accurately identifies historical patterns by ingesting and analyzing incredibly large and complex time series datasets (essentially metrics collected over time). These patterns are often ones that humans miss or are incapable of spotting. Models fed from a time series-optimized database can also ingest and learn from unlabeled or unstructured data – meaning they can quickly adapt to changing environments and new scenarios.
This ability to quickly adapt and learn has led to deep learning for predictive maintenance increasingly being used for automated anomaly detection – the process of identifying unusual behavior or patterns in data that indicate a potential problem or failure. Understandably, running anomaly detection accurately and at scale requires a lot of data, and a lot of different types of data, to paint the most vivid picture of what’s happening within a piece of equipment.
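As a simple illustration of what automated anomaly detection can look like in practice, the sketch below flags temperature readings that drift several standard deviations away from the past hour’s behavior. The rolling window, threshold, and synthetic data are illustrative assumptions; real deployments would tune these per machine.

```python
# Minimal sketch of automated anomaly detection on one metric stream using a
# rolling z-score; the window, threshold, and data are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# One day of per-minute temperature readings with a fault injected at noon.
idx = pd.date_range("2024-01-01", periods=1440, freq="min")
temps = pd.Series(60 + rng.normal(0, 0.5, 1440), index=idx)
temps.iloc[720:740] += 8                  # simulated overheating event

rolling = temps.rolling("60min")          # compare each point to the past hour
z_scores = (temps - rolling.mean()) / rolling.std()
anomalies = temps[z_scores.abs() > 4]     # flag readings far outside recent norm

print(anomalies.head())                   # timestamps a maintenance team can act on
```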
Some of the data types that feed deep learning include:
Acoustic: Experienced mechanics will often listen to an engine in operation to diagnose potential issues. With digital ears, however, it is possible to listen to sound outside human hearing ranges. Ultrasonic analysis means that faults can be detected much earlier, based on minute changes that human ears cannot identify (a minimal sketch of this kind of check follows the list).
Infrared: Infrared analysis is essentially a measure of temperature across a system. For data centers and facilities that operate within relatively tight thermal controls, the ability to automate temperature monitoring can not only identify issues with a server but also help pinpoint a failing component right down to the circuit board level. Rather than replacing entire units, individual failing components can be swapped out, massively reducing the cost of repairs. For data centers, this process can also act as a failsafe for the temperature sensors on a circuit board, which can themselves fail under certain conditions.
Fluid analysis: By examining data such as temperature, viscosity, contamination levels, and particle content in lubricants and coolant fluids, organizations can detect signs of wear, overheating, and potential mechanical failures. This makes fluid analysis one of the best indicators of which equipment must be attended to first, allowing firms to prioritize maintenance for the most at-risk machines (a scoring sketch that combines several of these signals also follows the list).
Visual inspection: Cameras and AI image scanners enable near real-time inspections, though concerns remain about the accuracy of fully automated visual assessments. Despite these challenges, the sheer volume of inspections can improve overall reliability. Automated alerts can also prompt timely human intervention from maintenance teams.
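As a concrete example of the “digital ears” idea above, the sketch below checks for energy in an ultrasonic band that human hearing (roughly up to 20 kHz) misses. The sample rate, frequency band, and threshold are illustrative assumptions, not calibrated values.

```python
# Minimal sketch of "digital ears": look for energy in an ultrasonic band that
# human hearing misses. Sample rate, band, and threshold are illustrative
# assumptions, not calibrated values.
import numpy as np

rng = np.random.default_rng(3)
fs = 192_000                        # sample rate high enough to capture ultrasound
t = np.arange(fs) / fs              # one second of microphone signal
signal = rng.normal(0, 0.05, fs)    # background machine noise
signal += 0.2 * np.sin(2 * np.pi * 38_000 * t)  # simulated 38 kHz bearing whine

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

band_peak = spectrum[(freqs > 30_000) & (freqs < 50_000)].max()
audible_baseline = spectrum[(freqs > 1_000) & (freqs < 20_000)].mean()

if band_peak > 10 * audible_baseline:   # a change no human ear could catch
    print("ultrasonic anomaly detected: inspect bearing")
```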
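And as a sketch of how several of these signal types might be fused to decide which equipment to attend to first, the example below normalizes three hypothetical readings – ultrasonic noise, infrared temperature deviation, and wear particles in lubricant – into a single risk score. The field names, weights, and readings are all illustrative assumptions, not a standard formula.

```python
# Minimal sketch: fuse several signal types into one risk score so the most
# at-risk equipment is serviced first. Field names, weights, and readings are
# illustrative assumptions, not a standard formula.
import pandas as pd

readings = pd.DataFrame(
    {
        "asset": ["pump_a", "pump_b", "press_c"],
        "ultrasonic_db": [18.0, 42.0, 25.0],    # acoustic: high-frequency noise
        "temp_delta_c": [2.0, 11.0, 4.5],       # infrared: deviation from baseline
        "particles_ppm": [40.0, 180.0, 60.0],   # fluid analysis: wear particles
    }
).set_index("asset")

# Normalize each signal to a 0-1 range so no single unit dominates, then weight.
normalized = (readings - readings.min()) / (readings.max() - readings.min())
weights = {"ultrasonic_db": 0.3, "temp_delta_c": 0.3, "particles_ppm": 0.4}
readings["risk_score"] = sum(normalized[col] * w for col, w in weights.items())

# Maintenance queue, most at-risk equipment first.
print(readings.sort_values("risk_score", ascending=False)["risk_score"])
```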
As manufacturing accelerates towards what has been dubbed Industry 4.0, the push to leverage data across these different areas is resulting in the wide-scale deployment of sensors. With sensors in place, ML in manufacturing is not tied to a single, fixed location. Instead, it is possible to apply predictive maintenance to remote or distributed ecosystems such as vehicle fleets, industrial machinery, construction, power generation and grid management. Implementation might disrupt established workflows for the teams that manage these resources, but the gains are worth it.
Upfront investment, an ounce of prevention
The upfront costs of investing in ML might seem high, particularly if you have systems and processes in place that work well, but once finalized, an ML system will run 24/7 and become increasingly effective over time. Using a time series database designed specifically to store and analyze high-resolution datasets over long periods delivers more precision and helps identify longer-term trends and patterns in equipment behavior.
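To illustrate the long-term angle, the sketch below downsamples three months of per-minute readings to daily averages and fits a simple trend line, surfacing a slow temperature drift that raw high-frequency data would bury. Any time series store could feed this; the pandas frame and drift rate are illustrative assumptions.

```python
# Minimal sketch: downsample three months of per-minute readings to daily means
# and fit a trend line, surfacing a slow drift that raw high-resolution data
# would bury. The data and drift rate are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
idx = pd.date_range("2024-01-01", periods=90 * 24 * 60, freq="min")

# Noisy motor temperature with a slow 3-degree upward drift over 90 days.
drift = np.linspace(0, 3, len(idx))
temps = pd.Series(65 + rng.normal(0, 1.5, len(idx)) + drift, index=idx)

daily = temps.resample("1D").mean()            # 129,600 points -> 90 points
slope = np.polyfit(range(len(daily)), daily.to_numpy(), 1)[0]
print(f"long-term warming trend: {slope:.3f} degrees per day")
```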
Taken a step further, given that some of the more advanced deployments of ML in manufacturing are effectively self-calibrating, a scalable ML approach will be more cost-efficient and more effective than relying entirely on manual intervention. The upfront cost also represents the vast majority – roughly 90% – of the total investment, with running the system and storing the data making up the remainder. This figure doesn’t account for the savings from reduced downtime or the reduction in TCO from optimizing staffing and repairs.
While it has many virtues, predictive maintenance works in tandem with preventative and reactive maintenance, which remain absolute necessities.
For manufacturing, it is the ounce of prevention that saves the pound of cure most maintenance teams are used to applying. As industries move towards Industry 4.0, this is just one of the technologies, enabled by better control and manipulation of data, that will improve productivity for workers and efficiency for organizations.