Factory manufacturing is highly automated today, yet wear and tear on the mechanical parts of the equipment can still bring production to a halt. Monitoring the condition of machines and predicting their failures can save manufacturing companies considerable sums, and AI can bring automation to this area as well.
To monitor machine health and predict machine failures, we analyze the sounds the machines emit. This is a non-invasive and promising way to detect potential machine failures in time. Structure-borne audio sensors can capture vibrations from a few Hz up to ultrasonic frequencies. Some companies still rely on experts with trained ears who listen to machines for diagnosis, but such experts are costly and have limited capabilities (human hearing range, availability, etc.).
To compensate for this lack of manpower and on-site capabilities, at Neuron Soundware we use broad-frequency-range sensors to record machine sounds and machine learning (ML) techniques to automate and improve early detection of potential machine failures, with no diagnostician needed on the premises.
Letting machine learning algorithms detect or predict failures brings several challenges. Most ML research focuses on tasks with many well-labeled samples, where the goal is to classify a new sample into one of several predefined classes.
Unfortunately, these methods are of very limited use in predictive maintenance: when we deploy an ML model for a new machine, the customer typically has very few recorded failures, or none at all.
In these cases, we can use novelty detection methods to identify anomalies in machine operation. After an initial calibration, the algorithm tells us whether a newly observed sound is similar to the nominal sounds, the sounds considered typical of a healthy machine, or is a new sound possibly caused by an early-stage machine failure.
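A novelty detector of this kind can be sketched in a few lines. Everything below is an illustrative assumption, not our production pipeline: real spectral features are stood in for by per-chunk RMS energies, and the hypothetical `NoveltyDetector` flags sounds whose features fall far from the nominal distribution seen during calibration.

```python
import math
import statistics

def band_energies(signal, n_bands=4):
    """Crude stand-in for spectral features: RMS energy of equal-length
    chunks. A real system would use FFT band energies or mel spectrograms."""
    chunk = len(signal) // n_bands
    return [math.sqrt(sum(x * x for x in signal[i * chunk:(i + 1) * chunk]) / chunk)
            for i in range(n_bands)]

class NoveltyDetector:
    """Calibrate on nominal (healthy) sounds only; score new sounds by how
    far their features fall from the nominal distribution (max per-band z-score)."""

    def fit(self, nominal_signals):
        feats = [band_energies(s) for s in nominal_signals]
        self.mean = [statistics.mean(col) for col in zip(*feats)]
        # Guard against zero spread so scoring never divides by zero.
        self.std = [statistics.stdev(col) or 1e-9 for col in zip(*feats)]
        return self

    def score(self, signal):
        f = band_energies(signal)
        return max(abs(x - m) / s for x, m, s in zip(f, self.mean, self.std))
```

Calibration needs only healthy recordings; any score far above those observed during calibration is flagged as a possible early-stage failure.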
One of the key questions is how to measure the algorithm's performance without any sounds of broken machines. In this article, we describe how we address this problem of missing failure sounds, so we can be sure that the models we create are actually useful.
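Once substitute anomalous sounds are available (the following sections describe how to obtain them), a threshold-free metric such as ROC AUC can quantify how well a detector separates them from nominal sounds. A minimal rank-based sketch, assuming we already have scores from some detector:

```python
def roc_auc(nominal_scores, anomaly_scores):
    """Rank-based AUC estimate: the probability that a synthetic anomaly
    receives a higher score than a nominal sample (ties count as half).
    No real failure recordings are needed."""
    wins = sum((a > n) + 0.5 * (a == n)
               for a in anomaly_scores for n in nominal_scores)
    return wins / (len(anomaly_scores) * len(nominal_scores))
```

An AUC near 1.0 means the detector ranks virtually every synthetic anomaly above every healthy sample; an AUC near 0.5 means it cannot tell them apart.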
In principle, several methods can generate these missing sounds, which in turn lets us use classification methods as anomaly detectors.
The first possible solution is to simulate the machine's sound emission. A good simulation can then also generate sounds of different failures with high fidelity.
However, this approach is computationally demanding, typically relying on finite-element methods, and it does not scale: it requires building detailed digital models of the machines, including all relevant environmental parameters. It is therefore used mainly where the effort pays off, for example to simulate turbines and their faulty states in nuclear power plants.
An easier and more scalable approach is to take existing sounds that resemble those of broken machines (e.g., drilling, cracking, squeaking) and mix them with the sounds of healthy machines. The advantage of this approach is that we can control the ratio of normal to anomalous sound and choose which types of sounds represent the issues. That allows us to control the model's sensitivity.
The key is to make the artificial combination of the two sounds as realistic as possible. It also helps greatly to have a large database of anomalous sounds, not just Google AudioSet.
The model predicts not only the anomaly score but also the intensity of the augmented sound mixed into the nominal sound. This gives us confidence that the model will reliably detect a real anomaly when it happens.
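The mixing step can be sketched as follows. The function `mix_at_snr` is a hypothetical name and a simplified version of what such an augmentation pipeline might do: it scales the anomalous sound to a chosen signal-to-noise ratio before adding it, and the gain it computes can double as the intensity label the model learns to predict.

```python
import math

def mix_at_snr(nominal, anomaly, snr_db):
    """Mix an anomalous sound into a nominal recording at a chosen
    signal-to-noise ratio in dB (nominal treated as the 'signal').
    Lower snr_db injects a stronger anomaly. Returns the applied gain
    (usable as an intensity target) and the mixed signal."""
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    gain = rms(nominal) / (rms(anomaly) * 10 ** (snr_db / 20))
    return gain, [n + gain * a for n, a in zip(nominal, anomaly)]
```

Sweeping `snr_db` over a range of values produces training examples from barely audible to dominant anomalies, which is how the sensitivity of the resulting model can be tuned.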
The previous method relies on possessing sounds to augment with (drilling, cracking, squeaking). Model accuracy can be improved further by using actual failure sounds. Suppose we train a model for a brand-new 1 MW gas generator using only its healthy sound. Because we have already recorded similar generators during normal operation and before their various failures, we can extract knowledge of how the sound changes before a failure.
Although each generator type, and even each individual asset, sounds different, the effect of a failure on the sound is often very similar. We can combine the sound of a failure with the sound of a healthy generator to obtain a very realistic sound of that generator failing.
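One way to realize this, as a rough sketch rather than our actual method: estimate how the failure changed the magnitude spectrum of the similar machine A, then impose the same per-frequency gains on machine B's healthy recording. The function names are hypothetical, and a naive DFT keeps the example dependency-free; a real pipeline would use an FFT on spectrogram frames.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(n^2); fine for a short illustration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT; returns the real part for real-valued input signals."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

def transfer_failure(healthy_a, failing_a, healthy_b):
    """Estimate per-frequency magnitude gains between machine A's healthy
    and failing recordings, then impose them on machine B's healthy sound
    to synthesize a plausible failure example for machine B."""
    A, F, B = dft(healthy_a), dft(failing_a), dft(healthy_b)
    gains = [abs(f) / max(abs(a), 1e-9) for a, f in zip(A, F)]
    return idft([b * g for b, g in zip(B, gains)])
```

Because the gains are magnitude ratios, machine B keeps its own phase structure and base character, while inheriting the spectral signature of machine A's failure.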
Neuron Soundware is gradually building a database with tens of terabytes of audio, including many failure sounds for different types of machines, and uses it when training new models.
The anomaly detection approach allows immediate monitoring of a wide variety of assets. We have automated the algorithm-setup process with data augmentation and transfer learning for all common machines (generators, bearings, engines, motors, etc.), and a validation framework checks each anomaly detector to make sure it works correctly.
In the case of one compressor, the system monitored the asset for more than a year without a single false alert. A customer can check the condition of the machine from anywhere by accessing the machine data remotely via our nShield platform. When a suspicious sound change occurs, the service team receives a notification and can inspect the device in time.
Just recently, we detected several anomalies above the threshold. A couple of days later, abrasion and scratching of the piston and cylinder were identified, which led to the asset being repaired.
As our audio database of failing machines grows by millions of samples every day, we can apply transfer learning across failure datasets from different machines and detect more and more specific anomalies.
Join us on this exciting journey; we will be happy to discuss how we can deploy our technology in your case and guard your machines.