We could reduce losses caused by fire if we knew what caused it. Identifying the source in the field can be quite challenging, so being able to do it automatically could prove highly beneficial.
Once we know the source and properties of the fire, the most efficient way to tackle the incident can be determined. Because both techniques and extinguishing agents differ by fire class, identifying the class lets us suggest the most efficient suppression, which shortens remediation time and yields significant savings.
Technique of operation
Once on the incident site, the drone operator flies the drone over the fire, either using an area scan or flying manually. Once the drone is airborne, its sensor unit analyses the air drawn into the "flying chemistry lab" and wirelessly transmits the results together with other drone telemetry such as GPS location. These data are analysed, and the results are communicated back via the telemetry link and visualised.
The key implementation steps
- Mapping materials to fire class and extinguisher agent
- Sensor unit development (electronics and housing/mounting)
- AI camera training and integration
- Telemetry protocol and drone FMU integration
- Burning various materials to build training and validation datasets
- Machine learning model training
- Integrating the ML model to perform inference on received data
- Results display
- The system should work without internet connectivity (i.e. the model runs on the flight control computer, or it could be ported to an edge TPU device such as the Google Coral mounted on the drone itself, turning it into a self-contained device)
- Limit mechanical moving parts - e.g. a particulate matter sensor with a small mechanical fan would be prone to faults
- Prefer absolute-value (calibrated) sensors over relative measurements
The system collects data from the chemical sensors and from the spectrometer. These can be used all at once or only in part to infer the fire class and the agent. Having multiple measurements that can be fused in various ways is helpful for experimentation, as sometimes less is more.
The Pixy2 AI camera is used to detect fire brigade crew in the area.
The ML engine listens to the mavlink data stream and makes predictions which it broadcasts back, making the results available to any other component that might be interested. In this scenario that is the drone companion computer, which sends them over a bluetooth link to the Rapid IoT, used as the in-field display.
The overall system has 5 key components with multiple submodules
1. Drone with FMU and all additional components required for full operation. In addition to these, the FMU:
- interfaces with the IR high-temperature sensor over I2C using modified FMU firmware
- connects via mavlink over serial to the companion computer which hosts the sensors
- forwards the companion computer messages to the telemetry radio
2. Companion computer with a sensor board hosting sensors connected via I2C. The controller runs measurements and in each cycle takes the values and sends them on serial using the mavlink protocol, and on the debug serial in CSV format, which can be used for stand-alone debugging and data collection in a lab environment. The board contains its own voltage regulator for the microcontroller as well as for the sensors.
3. Flight control computer receives telemetry from the SIK radio using mavlink-router (which also forwards locally generated messages between clients). This allows multiple consumers to connect and work independently with the messages (a minimal client sketch is shown after this component list):
- QGroundControl for flight control and visualisation; it can also act as a data viewer
- Data logger - storing data into a CSV file or forwarding it to a database for later processing (i.e. training)
- Fireclasser - the trained model subscribing to mavlink messages, which are used as input to the classification algorithm
- Data viewer - display for the custom data and custom messages
4. Fireclasser(s) are the key components which load the trained model and execute it on every complete sensor dataset to infer the fire class/agent. The system uses the GPU, however it can also run without one or on an edge TPU. Since the input and output are mavlink, it can be moved onto the drone itself.
5. Model development environment consisting of the collected datasets, a jupyter notebook, a runtime kernel that allows execution on the GPU to speed up training, and tensorboard for evaluation of various configurations. The collected datasets are manually labelled before use in training. Once the model is trained, it is saved and used in fireclasser.
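To illustrate how such a consumer attaches to the stream, below is a minimal pymavlink sketch, assuming mavlink-router exposes its default UDP endpoint on localhost:14550; the port and message types are assumptions rather than the exact configuration of this build.

# Minimal mavlink-router consumer sketch (endpoint and message types are assumptions).
from pymavlink import mavutil

# mavlink-router fans the radio traffic out to local UDP endpoints,
# so any number of clients can listen independently of each other.
conn = mavutil.mavlink_connection('udpin:127.0.0.1:14550')
conn.wait_heartbeat()
print(f"heartbeat from system {conn.target_system}")

while True:
    # Sensor readings from the companion computer arrive as named value messages.
    msg = conn.recv_match(type=['NAMED_VALUE_FLOAT', 'NAMED_VALUE_INT'], blocking=True)
    if msg is not None:
        print(f"{msg.name} = {msg.value}")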
Below is the final set of components used in the build. From top, left to right:
- ESP32 to be used as companion computer
- Multichemistry sensor SGX MICS-6814 with 3 internal sensors. The reducing-gases sensor detects CO, and additional gases can be derived, such as H2S, ethanol, hydrogen, ammonia, methane, propane and iso-butane. The oxidising-gases sensor is based on NO2 and also allows detection of NO and contributes to hydrogen. Finally, the 3rd sensor detects NH3 and helps detect ammonia, ethanol, hydrogen, propane and iso-butane. Not all calculated values are used, as other sensors measure them. This breakout is special in that it already has a tiny processor which does the A/D conversion and some calculations, which makes interfacing with it a breeze, in contrast to other modules which have just an ADC giving you only raw values.
- USB to RS232 converter (any supporting both 3.3/5V levels is suitable)
- Power regulator providing 3.3V and 5V
- Spectrometer AS7265x measuring 18 channels from UV to IR, covering the spectrum from 410nm to 940nm. The hypothesis is that the module, in a quite naive setup in a simple chamber, can still provide detection capability from the fumes.
- Sensirion SCD30 measuring CO2 using NDIR (non-dispersive infrared sensing)
- Sensirion SGP30 - marketed as a VOC and eCO2 sensor, however the underlying element is a MOX gas sensor measuring H2, ethanol and TVOC
Additional components have also been tried, but discarded as unsuitable.
As the companion computer, while I was waiting for components to arrive, I decided to use the Espressif ESP32 (partially because I wasn't sure whether I would use the Rapid IoT as companion computer or display, so bluetooth availability was of interest). Due to stock availability of the dev board, I simply ordered the SMT version, which then requires a few resistors and caps as per the reference design. I then soldered it onto a cut SOP 28-pin breakout board. The issue was that it was difficult to access the pins on a breadboard. Later I managed to get hold of an ESP32 Dev module which also works in this setup, however it has the same issue of being too wide. Later still I discovered the ESP32 Pico Kit, which should work and fits nicely on a breadboard; I'd recommend trying that instead.
With the SMT breakout solved, let's build the 1st prototype.
Breadboard Prototype
First add the power module. It's configured to provide two voltage rails, 3.3V for some sensors and 5V for others. Make sure to configure the right voltage before powering it on. It is also used to power the ESP32. The buttons and resistors are required for reset and for switching the unit into programming mode (not required with the dev kit or pico kit, as they are already present).
Once we have the 2 power rails, connecting the sensors is quite straightforward, as the only additional requirement is to add the I2C rails SDA and SCL.
One thing to watch for - some of the sensor breakout boards require 5 volts, others only 3.3 volts (as shown in the diagram).
In order to load the program, the serial adapter needs to be connected (already present on ESP32 kit boards). The communication with the FMU is via Serial 2. For testing the connection, I used the supplied Rapid IoT mounting board as a breakout for the serial lines and GND. Later I cut the wires and soldered them as required.
With the sensors connected, it's time for a quick check by running an I2C detect to verify it's all sound. It should look like this:
With the prototype completed, this is how the overall development setup looks.
However, this kind of assembly is not ready to fly, so it's time to build a v2 prototype and figure out how to neatly mount it onto the drone.
Mechanical assembly
The design idea for the mechanical mounting was to create a module that could be a snap-in attachment to the existing assembly, building on the design of the battery holder. This would create an easily replaceable and serviceable self-contained module.
The first prototype is made from a postal tube. It's split to create two compartments, one that is sealed and one that allows air to flow through. The image shows the first test of the concept with the battery only and the top awaiting sensors.
The sensors require a custom mounting plate which works as the chamber separator and can simply slide in. I built a base on which the development modules are mounted. All the connections are guided down to the bottom part, where the companion computer is hosted together with the battery.
When I was doing the first measurement trials, the readings remained the same for quite a significant amount of time after the measurement was taken. This was caused by the smoke staying in the chamber. Therefore I augmented the design and added a fan to force air through the chamber.
AI Camera and IR sensor
I decided to mount the Pixy AI camera and the IR sensor directly on the drone plate, which is positioned well enough not to obscure the view.
The IR sensor is a module in the PX4 build system and is thus part of the FMU build.
Arduino on ESP32
The code is broken into multiple files, with each breakout board as an individual file/class that uses the manufacturer's provided libraries. The main sketch is therefore fairly trivial and demonstrates the read loop nicely:
void loop() {
  mavlink.send_hearbeat();
  mavlink.time_tick();

  pixyHatDetector.sendData(mavlink, 70);

  printInfo("Spectra");
  spectroscope.takeMeasurements();
  spectroscope.sendData(mavlink, 10);
  spectroscope.printReadings();

  printInfo("H2/Ethanol/TVOC");
  chemH2Ethanol.sendData(mavlink, 30);
  chemH2Ethanol.printReadings();

  printInfo("NH3/CO/NO2/C3H8/C4H10/CH4");
  chemNO2NH3CO.sendData(mavlink, 40);
  chemNO2NH3CO.printReadings();

  printInfo("CO2/SensTemp/RH");
  chemNDIRCO2.sendData(mavlink, 50);
  chemNDIRCO2.printReadings();

  mavlink.send_int(1, "Measured", 60);

  rapidIotDisplay.ensureConnected();
  rapidIotDisplay.showCrewCount(pixyHatDetector.getCrewCount());
  mavlink.proxyDataTo(rapidIotDisplay);

  Serial.println();
  delay(500);
}
Mavlink
In order to use mavlink in the Arduino project, or in any other project, the code needs to be generated first. Once the header files are generated, they can easily be added as a library. In the current implementation I have used named values (int and float) for communication, however this could easily be extended with new message definitions that describe the sensors in a more expressive way.
The data flow from the device is that each message is sent individually, and at the end a marker is sent to indicate that all values have been read and transmitted. In case some reading is missing or not yet completed, the previous values are cached in order to always provide a full set of values.
After that, the link is checked for incoming messages and the results are forwarded to the Rapid IoT display over the bluetooth link.
Once the code is running on the device, we can check that the data is reported correctly using QGroundControl:
The Pixy2 AI camera wasn't that suitable for conversion into a spectrometer, however when trying to understand where its strengths lie I realised that there might be a sweet spot for it. Some of the extinguishing agents are unsuitable when people, and thus crew, are present. Also, the crew wear hard hats which are usually high-visibility, which fits the Pixy2's intended use ideally.
I used PixyMon to define the various colours of hats. The detections are then acquired over I2C by the companion computer and sent to the base station as well as to the field display.
https://docs.pixycam.com/wiki/doku.php?id=wiki:v2:installing_pixymon_on_linux
cd pixy2/scripts
./build_pixymon_src.sh
# add udev rules
cd ../src/host/linux/
sudo cp pixy.rules /etc/udev/rules.d/
# run pixymon
#cd ../../../build/pixymon/
cd build/pixymon/
./PixyMon
The Arduino library for interacting with the Pixy2 doesn't work out of the box on Arduino; both the cpp and header files of ZumoBuzzer and ZumoMotors need to be deleted.
Field Display via bluetooth
I have defined 4 bluetooth characteristics:
- Inferred Fire Class (from ML model)
- Inferred Suppression Agent (from ML model)
- Crew count (from pixy2)
- Colour indication (of the agent as per common colour scheme)
When the device is powered on, the status screen shows. Once a connection is established, it automatically switches to the results screen, which displays all received values. At the same time, the LED is updated accordingly.
The actual screens then look like this:
caveat: The drone acts as a client to the Rapid IoT, which is the server. It would be preferable for the drone to act as a beacon/broadcasting master so that anyone could observe the data. This is easy on the ESP32, however Atmosphere doesn't have a built-in module for it, so one would have to be developed first.
Data Collection
In order to understand what data is necessary to collect, I have devised a simple materials-to-classes allocation with a couple of examples. This provides the basis for labelling the training data.
note: this is not professional advice - there is no guarantee of correctness and it is not advice on which extinguisher to use for which material
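Purely as an illustration (with the same caveat as the note above), such a mapping can be captured as a simple lookup used during labelling; the class letters follow the common European scheme, the example materials mirror the dataset collected below, and the agents listed are only the typical ones.

# Illustrative material -> (fire class, typical agent) lookup used for labelling.
# Not professional advice; classes follow the European scheme.
FIRE_CLASSES = {
    'wood':        ('A', 'water'),                  # ordinary solid combustibles
    'paper':       ('A', 'water'),
    'alcohol':     ('B', 'foam'),                   # flammable liquids
    'butane':      ('C', 'dry powder'),             # flammable gases
    'methane':     ('C', 'dry powder'),
    'steel_wool':  ('D', 'special dry powder'),     # combustible metals
    'cooking_oil': ('F', 'wet chemical'),           # cooking oils and fats
    'electrical':  ('electrical', 'CO2'),
}

def label_for(material: str) -> tuple:
    """Return (class, agent) for a material name, e.g. label_for('paper')."""
    return FIRE_CLASSES[material]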
Time to collect some burnable materials...
and burn them
The collected data is captured via the same data path as is used to make predictions. For that I have written a simple script which takes the mavlink data, filters the relevant sensor captures, and stores and prints them on the screen in CSV format. Using a standard unix redirect, I save them to a csv file.
fireclasser$ ./make_csv.py > data_file.csv
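A minimal sketch of what such a script might look like, reusing the named-value convention and the "Measured" end-of-cycle marker from the Arduino loop above; the UDP endpoint and column handling are assumptions, not necessarily what make_csv.py does exactly:

#!/usr/bin/env python3
# Sketch of a mavlink-to-CSV logger: cache named values, print one row per measurement cycle.
from pymavlink import mavutil

conn = mavutil.mavlink_connection('udpin:127.0.0.1:14550')
conn.wait_heartbeat()

row = {}                 # latest reading per sensor name, kept between cycles
header_printed = False

while True:
    msg = conn.recv_match(type=['NAMED_VALUE_FLOAT', 'NAMED_VALUE_INT'], blocking=True)
    if msg is None:
        continue
    if msg.get_type() == 'NAMED_VALUE_FLOAT':
        row[msg.name] = msg.value                  # cache / overwrite the reading
    elif msg.name == 'Measured' and row:
        cols = sorted(row)                         # stable column order
        if not header_printed:
            print(','.join(cols))                  # header row (the project keeps a separate header.csv)
            header_printed = True
        print(','.join(f'{row[c]:.4f}' for c in cols), flush=True)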
Using this method I have collected multiple captures:
samples$ ls -1
data_alcohol.csv
data_candle_red_day_01.csv
data_candle_red_smoke_01.csv
data_candles_lightup_01.csv
data_candle_white_day_01.csv
data_candle_white_smoke_01.csv
data_gas_buthane_metane_mix.csv
data_match_wood_burn_01.csv
data_match_wood_burn_smoke_01.csv
data_oil_butter_02.csv
data_oil_butter.csv
data_oil_rapeseed_02.csv
data_oil_rapeseed.csv
data_oil_sunflower.csv
data_paper_cardboard.csv
data_paper_printing_02.csv
data_paper_printing.csv
data_paper_tissue.csv
data_room_night_01.csv
data_room_on_rainy_day.csv
data_room_on_rainy_day_with_fan.csv
data_room_post_measurement_01.csv
data_room_post_measurements_02.csv
data_room_post_measurements.csv
data_steel_wool_02.csv
data_steel_wool_03.csv
data_steel_wool.csv
data_wax_candle_05.csv
data_wood_fir_needles_01.csv
data_wood_fir_needles_02.csv
data_wood_fir_needles_03.csv
header.csv
Every file is then manually labelled (one column is added at the end with the ID of the class the material belongs to, as per the table above). Once labelling is complete, they are merged into a single CSV which contains all the data.
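While the labelling itself was done manually, the merging step could be scripted; a minimal pandas sketch, assuming each capture carries a header row and using hypothetical class IDs:

# Sketch: append the class label to each capture and merge everything into one file.
import glob
import os
import pandas as pd

LABELS = {
    'data_paper_printing.csv': 0,    # hypothetical class IDs for illustration
    'data_oil_rapeseed.csv': 5,
}

frames = []
for path in glob.glob('samples/data_*.csv'):
    name = os.path.basename(path)
    if name not in LABELS:
        continue                     # unlabelled captures are skipped
    df = pd.read_csv(path)
    df['fire_class'] = LABELS[name]  # class ID appended as the last column
    frames.append(df)

pd.concat(frames, ignore_index=True).to_csv('data_all.csv', index=False)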
For practicality, I used 2 sets of files:
1. Office development - which only contains room, wood from matches and candles
2. Complete set - which contains all samples of multiple materials for multiple classes that I have managed to gather so far.
Model Training
For the training, I have used jupyter notebooks running on a PC with tensorflow, using the GPU for computations, with the following docker image.
docker run --gpus all \
-v $(realpath ~/nxp_dronekit/keras):/tf/tensorflow \
-it -p 8888:8888 \
tensorflow/tensorflow:latest-gpu-jupyter /bin/bash
# once it's running start jupyter (or omit bash and it will start self)
export LANG=C.UTF-8
export LC_ALL=C.UTF-8
source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root
Once the jupyter server is running, the notebook from the source code can be opened in the web browser. Now let's look at the key bits in there.
First let's load the data and check what we have.
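In the notebook this is a simple pandas read of the merged file; a minimal sketch, assuming the merged file is called data_all.csv and the class label sits in the last column:

import pandas as pd

data = pd.read_csv('data_all.csv')     # merged, labelled captures
features = data.iloc[:, :-1]           # sensor columns
labels = data.iloc[:, -1]              # fire-class ID (last column)

print(f"csv data shape: {len(data)} rows of samples, "
      f"with {features.shape[1]} sensors in {labels.nunique()} classes")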
csv data shape: 1173 rows of samples, with 31 sensors in 7 classes
Actually, we don't have samples for every class, as I wasn't able to capture them all. It also looks like we have quite an imbalanced sample set, which will require more data to be captured. Let's see what the data looks like in the spectrum measurements.
In detail:
And from the chemistry sensors, which look a bit more promising
and the details:
Now we need to split the set into a training set and a testing set.
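A minimal sketch of that split, continuing from the loading cell above and assuming scikit-learn is available (or pip-installed) in the container; the 80/20 ratio is an assumption:

from sklearn.model_selection import train_test_split

# Keep class proportions similar in both sets.
x_train, x_test, y_train, y_test = train_test_split(
    features.values, labels.values,
    test_size=0.2, stratify=labels.values, random_state=42)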
Model Design
The model will use either all the sensory data or a part of it, such as spectroscopy only or chemistry only. This is fed to a matching input layer, followed by a hidden layer which produces the output weights, to which we apply softmax to obtain a single value. It would be possible to obtain a multi-label result, which could then be displayed. However, the intent was to keep operation as simple as possible, so the single-label output is preferable. A multi-label solution could be interesting for identifying the materials that contribute to the fire.
This can be defined as
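Below is a minimal sketch of such a definition in Keras, assuming 31 input features and 7 classes as in the dataset above; the hidden-layer size is an assumption rather than the exact notebook configuration.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(31,)),               # all sensor channels (or a subset)
    tf.keras.layers.Dense(64, activation='relu'),     # hidden layer
    tf.keras.layers.Dense(7, activation='softmax'),   # one probability per fire class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()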
at which point it is ready for training.
The first take on the model with all data achieved an accuracy of 0.71.
From this view, we can see that it's failing quite heavily on the gases. Also, since we don't have any electrical samples, that class shifted towards the cooking oils, so some dummy data might have to be added. In comparison to the room evaluation model, this is quite a low match.
I modified the dataset to include some fake data for electricals, so the matrix is now complete; another run helped quite a bit with the gases.
We can try another round with longer training, which keeps slowly getting better.
Additionally, we could review tensorboard to identify where the model stops improving and to aid with refining the model. Inside the docker container we could run the following from the directory with the jupyter notebook:
tensorboard --logdir logs/fit --bind_all --port 8888
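The curves in tensorboard come from logs written during training; continuing from the sketches above, a minimal way to wire that up is a TensorBoard callback passed to the fit call (the epoch count and log path format are assumptions):

import datetime
import tensorflow as tf

# Each run writes into its own logs/fit/<timestamp> folder, which the tensorboard command above reads.
log_dir = 'logs/fit/' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
tb_cb = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=200,                # epoch count is an assumption
          callbacks=[tb_cb])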
Especially when we zoom in, it can be seen how different numbers of steps and different models differ.
With the model trained, we can now run the inference and send the results back.
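A minimal sketch of that inference loop, reusing the accumulate-until-"Measured" pattern from the data collection script and broadcasting the result back as a named value; the endpoint, saved-model path and message name are assumptions:

# Sketch: subscribe to the sensor stream, infer the fire class, broadcast the result back.
import time
import numpy as np
import tensorflow as tf
from pymavlink import mavutil

model = tf.keras.models.load_model('fireclasser_model')    # saved-model path is an assumption
conn = mavutil.mavlink_connection('udpin:127.0.0.1:14550') # mavlink-router endpoint (assumed)
conn.wait_heartbeat()

row = {}
while True:
    msg = conn.recv_match(type=['NAMED_VALUE_FLOAT', 'NAMED_VALUE_INT'], blocking=True)
    if msg is None:
        continue
    if msg.get_type() == 'NAMED_VALUE_FLOAT':
        row[msg.name] = msg.value
    elif msg.name == 'Measured' and row:
        # Column order must match the order used during training.
        x = np.array([[row[k] for k in sorted(row)]], dtype=np.float32)
        fire_class = int(np.argmax(model.predict(x, verbose=0)))
        # Broadcast the result so any mavlink client (QGC, data viewer, field display) can show it.
        conn.mav.named_value_int_send(
            int(time.monotonic() * 1000) & 0xFFFFFFFF,      # time_boot_ms
            b'FireClass', fire_class)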
The drone can be switched on before or after, to preserve the battery for a longer flight.
Running the System on PC
On your PC start the router first, otherwise QGroundControl takes over the USB ports (or has to be configured explicitly). All other programs on the computer will connect to the router instead of using the USB of the radio.
mavlink-router$ ./mavlink-routerd /dev/ttyUSB1:57600 -e localhost
Then start QGroundControl, which will automatically connect on the well-known port.
nxp_dronekit$ ./QGroundControl.AppImage
And start the fire-classing engine
~/nxp_dronekit/keras$ sudo docker run --gpus all \
-v $(realpath ~/nxp_dronekit/keras):/tf/tensorflow \
-it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter \
/bin/bash
/tf/tensorflow/fireclasser# ./make_prediction.py
(Optionally) you can start the logger, writing into a CSV file or onto the screen, or debug and print all messages.
./make_csv.py
./make_csv.py > file.csv
./log_messages.py
Then observe the results, either on the console, on the Rapid IoT, or in QGroundControl: