COVID-19 is currently raging across the world, disrupting economies and countries alike. Among the first and hardest-hit industries are travel and aviation, mainly because of the diversity, high density, and varied travel histories of the people passing through an airport. One of the most common screening solutions deployed at airports has been the thermal imaging camera: it can easily detect elevated body temperature (EBT) and flag people with a possible fever for further testing. However, these thermal cameras have several problems -
- High cost: These cameras cost thousands of dollars, especially those specifically calibrated for EBT applications.
- High dependency: Readings depend heavily on ambient conditions and on the distance between the subject and the camera.
- EBT detection only: A subject's body temperature can be elevated for many reasons, fever being just one of them, so further screening is quite commonly needed. These cameras cannot detect any definite symptoms, let alone the virus itself.
- Limited applicability: Cameras made specifically for EBT detection have little use beyond it, making them a very costly affair for what they offer.
I have been working with thermal imaging for almost a year now, and it is a field with absolutely spell-binding potential. Thermal imaging can do and find far more than what we currently use it for, if correctly utilized. Looking at the sophisticated thermal cameras being deployed around the world in light of the pandemic, I felt their potential was being grossly underused. So I started researching ways to use thermal imaging to -
- Detect some COVID-19 symptoms beyond simple fever by correlating the minute thermal variations they induce in the human body with the thermal map captured by the camera.
- Reduce the overall cost of the device by using embedded thermal cameras (rather than expensive turnkey solutions) that can be integrated with our own custom hardware.
- Increase the accuracy of the thermal camera by accounting for the various environmental and distance-related factors that affect its readings, and derive relations and functions to correct the resulting error.
- Regarding COVID-19 symptoms: coughing and the fluids that accumulate in the lungs during a respiratory infection can change the emissivity of that region, which the camera can detect by varying its own emissivity setting.
- Secondly, due to lack of oxygen, the metabolic rates of many organs are affected, including the heart, which is forced to pump blood more vigorously. This strain on various organs can also be captured through thermal imaging, since an organ's metabolic rate affects the thermal values in its region.
- If we can capture images of the face, chest, and stomach regions, we can monitor the four organs most affected by the virus, namely the heart, lungs, brain, and liver.
- For this, we need high-resolution, closely taken images of these regions of the subject. Environmental factors and distance will be recorded while the images are taken and used later to compensate for the error they introduce into the thermal measurements.
- All data will be stored as high-precision thermal values in a CSV file, so that even minute changes can be captured.
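The correction and storage steps above can be sketched in Python. This is only a minimal illustration, not the device firmware: the Stefan-Boltzmann emissivity correction, the 0.98 skin emissivity, and the tiny sample frame are all assumptions for the example.

```python
import csv

def emissivity_corrected(t_apparent_k, emissivity, t_ambient_k):
    """Recover the true object temperature (Kelvin) from an apparent
    reading, assuming a simple Stefan-Boltzmann radiance model in which
    the camera reports the scene as a blackbody (emissivity = 1)."""
    radiance = t_apparent_k ** 4 - (1.0 - emissivity) * t_ambient_k ** 4
    return (radiance / emissivity) ** 0.25

def save_frame_csv(frame, path):
    """Store a 2-D grid of temperatures as high-precision CSV values."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for row in frame:
            writer.writerow(f"{t:.4f}" for t in row)

# Hypothetical 2x3 frame of apparent skin temperatures (K), corrected
# for human skin emissivity (~0.98) at a 295 K ambient temperature.
frame = [[310.1, 310.4, 309.8], [310.0, 310.6, 310.2]]
corrected = [[emissivity_corrected(t, 0.98, 295.0) for t in row] for row in frame]
save_frame_csv(corrected, "thermal_frame.csv")
```

Storing the corrected values with fixed decimal precision keeps the CSV compact while still preserving the minute variations the analysis depends on.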
This project aims to develop a device that can capture high-quality data for any kind of COVID-19-related data analysis that uses thermal imaging technology. Currently, no open-source COVID-19 dataset exists that contains high-quality, closely taken images of the chest and forehead regions usable for medical purposes.
To reduce the overall cost of the device, I researched quite a few possible options:
1. Thermal Camera sensor: Most thermal camera sensors that can be embedded with a single-board computer and have a small form factor have a very low resolution, like 8x8 (AMG8833) or 32x24 (MLX90640). These are not useful for any kind of medical application. On researching further, I found only two suitable devices:
- FLIR LEPTON 3/3.5 series: 80x60 and 160x120 resolutions available, +/- 5 to 10 degC accuracy. Costs around $300 with its breakout board and around $450 with shipping and taxes in India.
- SEEK Mosaic series: 200x150 and 320x240 resolutions available, +/- 5 degC accuracy. The starter kit for the 320x240 cost me around $730 with shipping and everything. Due to the higher accuracy and resolution, I decided to go forward with this.
2. Single Board Computer: My main choices were
- Jetson Nano: Unfortunately, my camera is not compatible with Jetson Nano
- Balena Fin: It had a lot of features that were useful for my application, and I could easily integrate my camera with it. Hence I went with the Balena Fin.
3. Sensors: My main choices for the sensor boards were:
- Silicon Labs Thunderboard Sense V2 kit: Contains many sensors and is easy to use, though its Firebase integration has been removed. I chose it because it has more sensors and costs only $20. I also integrated an SR-04 ultrasonic sensor so that I could measure the distance between the subject and the camera.
- ON Semiconductor RSL10-SENSE-DB-GEVK: Easy to use, but fewer sensors and pins. Costs $50 without taxes and shipping.
Hence, my entire electronics cost me around $950 including shipping. Thanks to Balena and Silicon Labs, who sponsored a Balena Fin and a Thunderboard for my project, I ended up saving around $220 of that.
Manufacturing and Case Designing:
I had a 3D casing designed for the model and 3D printed it locally, which cost me roughly $30 in India. The design was done in SolidWorks, keeping the following in mind:
- Easily editable for customization.
- 3D printable.
- As much as possible, the internal electronics are protected from the outside environment.
- Any SBC (a Balena Fin, a Jetson Nano, or a Raspberry Pi) can be easily placed and used inside the device; studs and holes let you mount any SBC firmly.
- The Thunderboard is given enough exposure to sense the outside environment. However, the light sensor might require some more exposure to correctly sense the ambient/UV light intensity.
- Looks good and sleek, with access to all ports.
You can find all the design files in the attachment Section. Here are some rendered images:
Once the design was ready, we manufactured it locally on a 3D printer. The parts came out nicely and everything fit well, except for a few changes I had to make so that powering the device and using the Balena Fin and Thunderboard was easier. I also made a couple of holes to screw the studs and the cover to the base.
I also had a wooden piece made, which I screwed to the base to mount the device on a tripod that I had.
Finally, putting everything together -
Hence, we now have a fully assembled and functional high-resolution thermal imaging device, developed for under $1000. If developed and produced as a product, it could easily be manufactured for well under $600 and in an even smaller form factor.
Now that we are ready with all the electronics and hardware, let's move on to integrating everything.
Integration of Individual Hardware Components
This device contains 4 main components -
- Balena Fin: Main SBC
- Seek thermal camera starter kit: Main thermal imaging Device
- ThunderBoard Sense V2: Main sensor board
- Ultrasonic SR-04 sensor: Distance Sensor
The software packages necessary to integrate these components are -
- Raspbian: OS for Balena Fin V1.0
- CMake: Software necessary to integrate the starter kit with the Balena Fin.
- MATLAB: Necessary for remotely controlling the Balena Fin and importing the collected thermal values for further processing. Also used to deploy code and machine learning models, developed in MATLAB on the collected thermal images, that run when the Balena Fin wakes up.
- Simplicity Studio: Used to code the Thunderboard
- Visual Studio (Optional, in case you don't want to use an SBC or Linux OS and wish to integrate with a Windows system)
- Medium One (a much better option, but paid), ThingSpeak, or Firebase (Silicon Labs has removed its Firebase integration, so I do not recommend it) to save data from the Thunderboard to an online server.
So let's begin!
CMake Integration with Raspbian
First, we need to install Raspbian Stretch, since that is the only version supported by MATLAB's Raspberry Pi hardware support package. You can use the Balena Etcher tool to flash the OS onto the Balena Fin over micro USB. The OS can be downloaded from the official site or found with a quick Google search.
After flashing the OS and setting up the Balena Fin, we need to install a few packages:
First, download the latest CMake source for Linux from this site - CMake
For me, it was the cmake-3.18.0.tar.gz file. Extract it on the Desktop.
Build and install the latest version of CMake as follows, after opening a terminal in the extracted folder:
$ sudo apt-get install libssl-dev
$ sudo apt-get install qt5-default   #installs Qt version 5.7.1
$ sudo apt-get install qtcreator
$ ./bootstrap --qt-gui
$ sudo make
$ sudo make install
$ cmake --version   #check the CMake installation
$ cmake-gui         #open the CMake GUI
You need the latest version of Qt to run the CMake GUI. After that, download the Linux SDK for the thermal imaging camera attached in the code section, and extract it on the Desktop too.
Install the SDK and rules for the camera as follows after opening a terminal instance from the location of the extracted folder:
#install camera rules
$ sudo cp driver/udev/10-seekthermal.rules /etc/udev/rules.d
$ sudo udevadm control --reload
#install dependencies
$ sudo apt-get install libusb-1.0-0-dev
$ sudo apt-get install libsdl2-dev
#copy necessary header files and libraries
$ sudo cp lib/arm-none-linux-gnueabihf/libseekware.so.3.4 /usr/lib
$ sudo cp include/seekware/seekware.h /usr/include
#navigate to the folder containing example code
$ cd bin/arm-none-linux-gnueabihf
#check and provide execution permissions
$ ls -l seekware-sdl
$ chmod +x seekware-sdl
#run an executable
$ ./seekware-sdl
You should see the camera running and displaying captured images in an SDL window. To run your own custom executables, copy the libseekware.so.3.4 library from the lib/arm-none-linux-gnueabihf folder and rename the copy libseekware.so, as the compiler will not find it otherwise. Do not delete the originally copied libseekware.so.3.4 library, as all executables look for that file at runtime.
Once these two packages are on your SBC's Desktop, we can modify the source files, use the CMake GUI to generate UNIX makefiles, and then run make on them to create standalone executables.
Once this is done, navigate to the build folder and run make on the makefile it contains. This creates individual executables for all the examples. To edit the source code of an example, navigate to its src folder, where you will find a .c file (like seekware-sdl.c). Open it in Geany or any other text editor or IDE and edit it. After that, open CMake and follow the above process to generate executables from the edited source code.
This completes our interfacing with the camera. A PDF document in the SDK folder elaborates on the SDK's various useful functions, which can be used to create custom code and adapt the examples to our applications.
MATLAB Integration with Raspbian
It is a very simple process: install the MATLAB Support Package for Raspberry Pi Hardware from the MATLAB Add-On manager. Exit its setup, as it only applies to SD-card-based SBCs, and the Balena Fin has eMMC storage. To install the MATLAB package on the Balena Fin, type the following on the CLI:
$sudo apt-get update
$sudo apt-get install matlab-rpi
Once this is done, open MATLAB and connect to the board by creating a raspi object on the MATLAB command line:
mypi = raspi('IP_address','pi','password');
Replace the inputs with your IP address, Username and Password. You should get something as follows:
mypi =
raspi with properties:
DeviceAddress: '192.168.29.93'
Port: 18734
BoardName: 'Raspberry Pi 2 Model B'
AvailableLEDs: {'led0'}
AvailableDigitalPins: [4,5,6,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]
AvailableSPIChannels: {'CE0','CE1'}
AvailableI2CBuses: {'i2c-0','i2c-1'}
AvailableWebcams: {'/dev/video10','/dev/video11','/dev/video12'}
I2CBusSpeed: 100000
Supported peripherals
You can check this site for various support guides and functions: Reference List
ThingSpeak and Thunderboard Integration with Balena:
The Thunderboard sends data to an online server like ThingSpeak, and the Balena Fin can either retrieve the data from there or connect to the board over Bluetooth. The Medium One integration can be found here: Medium One. The ThingSpeak Python file is attached in the code section. Refer to this site to see how to connect an RPi to the ThingSpeak network. Once connected, you will see plots of your data on ThingSpeak, which you can download to study how these factors affect your thermal camera's readings. To interface the ultrasonic sensor with the Balena Fin, refer to this guide here. You can use the MATLAB code generator to generate the interfacing code.
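For reference, pushing one set of sensor readings to ThingSpeak can be sketched in a few lines of Python using its REST update endpoint. This is a minimal, hedged example rather than the attached ThingSpeak file: the field-number-to-sensor mapping and the "YOUR_WRITE_API_KEY" placeholder are assumptions you must adapt to your own channel.

```python
import urllib.parse
import urllib.request

THINGSPEAK_UPDATE_URL = "https://api.thingspeak.com/update"

def build_update_url(api_key, readings):
    """Build a ThingSpeak channel-update URL from a dict mapping
    field numbers to sensor values, e.g. {1: ambient_temp, 2: humidity}."""
    params = {"api_key": api_key}
    for field, value in readings.items():
        params[f"field{field}"] = value
    return THINGSPEAK_UPDATE_URL + "?" + urllib.parse.urlencode(params)

def send_update(api_key, readings):
    """Send one update; ThingSpeak replies with the new entry id (0 on failure)."""
    with urllib.request.urlopen(build_update_url(api_key, readings)) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # "YOUR_WRITE_API_KEY" is a placeholder for your channel's write key.
    print(send_update("YOUR_WRITE_API_KEY", {1: 24.6, 2: 61.0}))
```

ThingSpeak free channels are rate-limited to roughly one update every 15 seconds, so readings should be batched or throttled accordingly.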
This completes the integration of all the devices with each other. Anyone can now use this setup to build their own thermal imaging applications.
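As a small example of using the SR-04 distance sensor from Python on the SBC, here is a hedged sketch. The BCM pin numbers (23 for trigger, 24 for echo) are assumptions, and the pulse-to-distance conversion is kept as a pure function so it can be reused when compensating the thermal readings for subject distance:

```python
import time

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at ~20 degC

def pulse_to_cm(echo_seconds):
    """Convert an SR-04 echo pulse width to distance in centimetres.
    The pulse covers the round trip, so the time is halved."""
    return echo_seconds * SPEED_OF_SOUND_CM_S / 2.0

def measure_distance_cm(trig=23, echo=24):
    """Take one SR-04 reading on assumed BCM pins 23 (TRIG), 24 (ECHO)."""
    import RPi.GPIO as GPIO  # only available on the Pi/Balena Fin itself
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(trig, GPIO.OUT)
    GPIO.setup(echo, GPIO.IN)
    GPIO.output(trig, True)
    time.sleep(0.00001)      # 10 us trigger pulse
    GPIO.output(trig, False)
    start = time.time()
    while GPIO.input(echo) == 0:     # wait for the echo pulse to start
        start = time.time()
    end = time.time()
    while GPIO.input(echo) == 1:     # measure the echo pulse width
        end = time.time()
    GPIO.cleanup()
    return pulse_to_cm(end - start)
```

On a real deployment, several readings would typically be averaged to smooth out jitter before the distance is logged alongside each thermal frame.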
Future Scope:
This project was a finalist in the Safe India Hackathon, conducted by the Orissa State Govt. in India along with other institutes. The project has since been developed considerably, and we are now on the brink of creating the world's first open-source thermal imaging COVID-19 dataset.
We have applied to a well-reputed local hospital in Mumbai, Maharashtra, India, and are waiting for permission to scan actual COVID-19 patients and compare them with uninfected patients, to check for areas where they show thermal changes (like the chest and face regions mentioned earlier). Many doctors and surgeons are in touch with us to help identify features that can detect unique symptoms of the COVID-19 disease. Once the initial data is collected, further analysis and model training will be performed on it. The data will be open source and collected in accordance with FDA-approved guidelines, such as including calibration methods as a reference while taking the images.
This is a novel approach that will benefit open-source developers and deep learning enthusiasts who want to develop models on thermal imaging data but cannot due to lack of hardware. I hope these instructions are lucid and that anyone will be able to build their own thermal imaging device from them. This will make thermal imaging technology much more convenient to adopt, process, and deploy.
In case you want to know more or support this initiative, please feel free to comment or contact me.