This project was developed through a collaboration between AirFrance and us, engineers of the Polytech-Sorbonne school. As our industrial project, AirFrance came to us with a problem to solve.
Airport track machines are very expensive and very hard to maintain. To reduce the time the machines spend off-track and to avoid any possible material damage, AirFrance decided to opt for predictive maintenance: on the one hand, predicting failures so that they can be avoided, and on the other, keeping the machines on-track as long as possible. With this technology, AirFrance would reduce the machines' maintenance costs and extend their battery life.
How Does It Work?
The thermal camera and the ultrasound microphone record their data every 12 minutes. Both are connected to the Raspberry Pi. Thanks to its training (machine learning), the artificial-intelligence program can tell whether the data "look" normal or not. The controller then sends that information, via the Sigfox module, to a backend we built using Ubidots.
The system also stores the IR images and the spectrograms captured by the camera and the microphone on the Raspberry Pi's SD card, so the user can retrieve the data for further analysis.
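In outline, the loop running on the Raspberry Pi looks roughly like the Python sketch below; the capture, classification, and transmission helpers are hypothetical placeholders for the actual camera, microphone, CNN, and Sigfox drivers.

import time

CAPTURE_PERIOD_S = 12 * 60  # one capture cycle every 12 minutes

def capture_ir_image(): ...        # read a frame from the thermal camera
def capture_spectrogram(): ...     # record and transform the ultrasound signal
def save_to_sd(ir, spec): ...      # archive the raw captures on the SD card
def classify(ir, spec): ...        # run both CNNs, return a fault probability
def sigfox_send(probability): ...  # push the percentage through the Sigfox module

while True:
    ir = capture_ir_image()
    spec = capture_spectrogram()
    save_to_sd(ir, spec)             # kept for later, deeper analysis
    sigfox_send(classify(ir, spec))  # one short uplink per cycle
    time.sleep(CAPTURE_PERIOD_S)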
The system is powered by an external 20,000 mAh Li-ion battery.
The system is placed or attached inside the machine's body, like the one shown below.
The whole point of this project is to predict hardware malfunctions before they actually happen. To achieve this, we decided to employ machine learning, which is capable of spotting much smaller differences than a human would notice. We used the basic Python machine-learning toolset (Python, NumPy, Keras) to train two CNNs (Convolutional Neural Networks) on 8,000 samples of data (images captured by the thermal camera and spectrograms from the ultrasound microphone). Once trained, each neural network predicts whether the pictures fed to it correspond to malfunctioning hardware: it produces two probabilities that express the model's degree of certainty. Its success rate is typically over 98%, so if fed an image of malfunctioning hardware it will respond with, for example, 98.7% certainty that the hardware is malfunctioning and 1.3% certainty that it is not.
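To give a concrete picture, this is roughly how one of the trained models is queried; the file names and the class order are hypothetical placeholders, not the project's actual files.

import numpy as np
from tensorflow.keras.models import load_model

model = load_model("thermal_cnn.h5")              # one of the two trained CNNs
image = np.load("latest_ir_capture.npy")          # a preprocessed IR frame
probs = model.predict(image[np.newaxis, ...])[0]  # softmax -> two probabilities

# Assuming index 1 is the "malfunctioning" class:
print(f"malfunctioning: {probs[1]:.1%}  normal: {probs[0]:.1%}")

Because the output layer is a softmax over two classes, the two percentages always sum to 100%.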
The architecture of the CNNs is as follows:
1. The input layer (of dimensions equal to the outputs of the IR camera and the US microphone)
2. A 2D convolutional layer with 32 filters and a 3x3 kernel
3. A 2D convolutional layer with 64 filters and a 3x3 kernel
4. A max-pooling layer of dimensions 2x2
5. A 25% dropout layer
6. A flattening layer that converts the 2D data to 1D
7. A hidden 128-neuron layer
8. A 50% dropout layer
9. A 2-neuron output layer (because we are using one-hot vectors)
The activation function is ReLU for all the hidden layers and softmax for the output layer. The optimizer is Adam, and the loss function is categorical cross-entropy. The models were each trained for 2 to 5 epochs on their 4,000 data points.
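For reference, here is a minimal Keras sketch of that architecture; the input shape is a placeholder, since the real dimensions depend on the IR camera and spectrogram outputs.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=(64, 64, 1)),
    Conv2D(64, kernel_size=(3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(2, activation="softmax"),  # one-hot output: two class probabilities
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Trained for a few epochs, e.g.:
# model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.1)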
The Backend
We transmit the data to the Sigfox backend, which allows up to 140 messages a day. This is sufficient because we send one data point every 12 minutes, i.e. 120 messages a day. The message is a percentage expressing how likely the machine is to be damaged.
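Such a percentage fits in a single byte of the 12-byte Sigfox payload. Assuming a module that accepts the widespread AT$SF command over a serial link (the port name here is a placeholder), sending it could look like this:

import serial

def send_percentage(percent: float) -> None:
    payload = bytes([max(0, min(100, round(percent)))])  # clamp to 0-100
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=10) as port:
        port.write(b"AT$SF=" + payload.hex().encode() + b"\r\n")

send_percentage(98.7)  # transmits the single hex byte 63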
To process this simple piece of information, we chose a simple free backend, Ubidots. We configured the Sigfox callbacks so that the data is forwarded automatically to the Ubidots server.
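A callback of this kind is configured roughly as follows on the Sigfox backend; the token and the variable label are placeholders, and the exact endpoint should be checked against the Ubidots documentation.

Custom payload config:  percentage::uint:8
URL pattern:            https://industrial.api.ubidots.com/api/v1.6/devices/{device}
HTTP method:            POST
Headers:                X-Auth-Token: <your-ubidots-token>
Content type:           application/json
Body:                   {"fault_percentage": {customData#percentage}}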