Traffic accidents remain a leading cause of fatalities worldwide. These incidents stem from various factors, including driver behavior, road conditions, vehicle condition, and inadequate supporting systems. According to Law Number 22 of 2009 concerning Traffic and Road Transportation, a traffic accident is an unforeseen and unintentional event on the road involving vehicles, with or without other road users, resulting in human casualties and/or property damage (Government of the Republic of Indonesia, 2009).
Vehicle-related factors contribute significantly to accidents on highways. In 2019, vehicle-related accidents accounted for 25.15% of accidents on the Trans Java highway and 8.45% on the Batang-Semarang highway alone. Tire-related issues, particularly blowouts, were the predominant vehicle factor in traffic accidents on the Semarang-Batang highway in 2019, comprising 85.71% of the 21 vehicle-related accidents. Ahmad Wildan, citing traffic police data, stated that of 265 cases on the Cikampek Highway from January to March 2017, 18% to 23% were attributed to tire blowouts.
Currently, most tire pressure checks are performed manually, which is inefficient and depends on sustained human involvement. At highway toll booths, where many vehicles stop for payment, an automatic, offboard (non-contact) method is needed for accurate tire pressure measurement. Such a solution also aligns with the Sustainable Development Goals (SDGs), particularly the goal of good health and well-being, which encompasses road safety.
Previous research has explored tire pressure monitoring using sensors such as the MPX5700AP. Another approach monitored tire conditions using TPMS sensors connected to the Thingview-Thingspeak application via an ESP32. However, these solutions are less suitable for highway toll booths because they are onboard systems, each limited to a single vehicle. Given that Convolutional Neural Networks (CNNs) have achieved 92% accuracy in tire surface classification tasks, a CNN-based tire pressure classification system is highly feasible.
Solution We Propose
We are students from Diponegoro University, and we have developed a prototype for automatic flat tire detection using the convolutional neural network method. This innovation allows the tire condition and temperature of vehicles entering highways to be assessed. The application of machine learning enables real-time detection of vehicle tire conditions at highway entrances. Because the system is offboard (non-contact) and placed at toll booths, a single device can serve multiple vehicles. The system provides warnings and information to both drivers and toll booth operators if a vehicle's tire condition is deemed unsatisfactory.
Several aspects have been developed compared to previous findings, as follows:
- Implementation of an offboard (non-contact) device at highway entrances, allowing it to be utilized for multiple vehicles.
- Integration of machine learning using the convolutional neural network method, enabling continuous training with diverse datasets to enhance accuracy for various tire types.
- Incorporation of a thermal cam sensor to precisely measure the temperature of vehicle tires.
- Inclusion of a Graphical User Interface (GUI) that provides real-time displays of camera output, thermal cam sensor data, class labels (tire conditions), accuracy levels, temperature values, and audible alarms.
The system block diagram begins when the system is turned ON. The camera captures images of the vehicle's tires while the thermal sensor measures their temperatures, continuously sending this data to the Single Board Computer (SBC), a Jetson Nano. The camera data is processed on the SBC using artificial neural network algorithms, while the temperature sensor data is compared against a predetermined set point. The AMG8833 sensor transmits data over I2C and can measure temperatures from 0°C to 80°C with an accuracy of ±2.5°C (4.5°F). The device then determines the tire pressure and temperature conditions. The LCD monitor displays these conditions and flags cases where the air pressure or tire temperature falls outside the specified range. Finally, the system issues a warning to the driver upon entering the toll booth.
Hardware System Overview
The hardware components of the product are divided into three main parts. The input components consist of the camera module and the thermal camera. The control component comprises the Nvidia Jetson Nano, and the output component consists of the LCD monitor and the audio output.
Software System Overview
In the development of this product's program, Edge Impulse is used to train a neural network model using the Convolutional Neural Network (CNN) algorithm. Grayscale images of tires with a resolution of 96×96 pixels are captured using a camera. Meanwhile, the tire temperature is measured by the thermal camera and transmitted to the Single Board Computer (SBC). The tire images pass through the neural network model, which classifies them as "normal," "abnormal," or "undetected." Simultaneously, the temperature sensor data is compared to a predetermined set point. The LCD monitor then provides information about air pressure and tire temperature based on the determined target class for the image and temperature. Whenever abnormal tire pressure or temperature conditions are detected, the audio output becomes active to alert the user.
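To make the decision flow concrete, here is a minimal sketch of the classify-then-threshold logic described above; the class names, the set-point value, and the trigger_alert helper are illustrative assumptions rather than the exact code from our main program.

# Minimal sketch of the decision logic (assumed names and threshold).
TEMP_SET_POINT = 60.0  # assumed temperature set point in degrees Celsius

def evaluate_tire(class_label, temperature):
    # Combine the CNN class label with the thermal camera reading.
    pressure_ok = (class_label == "normal")
    temperature_ok = (temperature <= TEMP_SET_POINT)
    if not (pressure_ok and temperature_ok):
        trigger_alert(class_label, temperature)  # hypothetical audio/GUI alert
    return pressure_ok, temperature_ok

def trigger_alert(label, temp):
    print("WARNING: tire %s, temperature %.1f C" % (label, temp))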
Setting Up The Edge Impulse Model
I collected images at several tire pressure variations: 36-38 PSI for the full category, and 25 PSI, 15 PSI, and 10 PSI for the deflated category, with a distance of 40-50 cm between the camera and the tire during data collection. The final dataset consists of 1,775 images: 818 flat category images, 405 full category images, and 197 no_tire category images. A copy of the dataset images collected for this project is hosted here.
Step 1: Create a New Project
Go to the Edge Impulse website and create a new project.
Step 2: Upload the Dataset
In the dashboard view, open the Data Acquisition menu, then click Add data.
In the data upload menu, set the upload mode to select a folder, then under Label choose to enter a label and type a label name matching the input folder, starting with flat, then full, and finally no_tire.
After uploading the data, the dashboard will appear as shown below, with the dataset automatically split into 80% training data and 20% testing data.
Step 3: Edge Impulse Design
In the "create impulse" view, change the image data to 96x96 size, this is used to optimize the accuracy of transfer learning blocks.
next add "processing block" and select the most recommended one which is Image this is used to Preprocess and normalize image data, and optionally reduce the color depth.
Then, in the learning block view, select Transfer Learning, which fine-tunes a pre-trained image classification model on your data and performs well even with a relatively small image dataset. Afterwards, save the impulse.
Step 4: Image Feature Generation
Because the dataset consists of grayscale images, change the color depth to Grayscale in this menu, then save the parameters and generate the features.
Step 5: Transfer Learning
On this menu, click 'Choose a different model' and then 'Add' for the MobileNet V2 96x96 0.35 option. The MobileNet V2 model is specifically designed to work on embedded devices and achieve high accuracy in image classification tasks.
Click 'Start Training' to begin the training process. The output console shows the training progress, including the validation accuracy after each epoch.
After training completed, the model reached an accuracy of 96%, indicating a very good accuracy rate.
Step 6: Model Testing
Navigate to the "Model Testing" section in the menu. On the Model Testing page, you will find a comprehensive list of images from the test split of the dataset. Initiate the classification process by selecting 'Classify All.'
After the completion of the process, the test results will be presented, showcasing the overall accuracy score along with a detailed confusion matrix.
Deploying the Model on the Jetson Nano
Step 1: Install the SDK on the Jetson Nano
For this task, I used Edge Impulse's Linux Python SDK. Configuration details are available in the Edge Impulse SDK documentation, but I'll provide a summary here. First, install the SDK on the Jetson Nano by running the following commands.
1. Install a recent version of Python 3 (>=3.7).
2. Install the SDK on the Jetson Nano:
$ sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-pip
$ pip3 install Cython
$ pip3 install pyaudio edge_impulse_linux
3. Download the model file via:
edge-impulse-linux-runner --download modelfile.eim
This downloads the model into modelfile.eim. (Want to switch projects? Add --clean to the command.)
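Once the .eim file is on the Jetson Nano, it can be loaded with the SDK's ImageImpulseRunner, which is also what our main program uses. Below is a minimal sketch of a standalone classification loop; the model path and camera index are illustrative assumptions.

# Minimal sketch: classify camera frames with the downloaded .eim model.
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "/home/jetson/modelfile.eim"  # assumed location of the model
CAMERA_ID = 0  # assumed camera index

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()  # load the model and read its metadata
    labels = model_info['model_parameters']['labels']
    print("Loaded model with labels:", labels)
    for res, img in runner.classifier(CAMERA_ID):
        # res['result']['classification'] maps each label to a confidence score
        scores = res['result']['classification']
        best = max(scores, key=scores.get)
        print("%s (%.2f)" % (best, scores[best]))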
Step 2: Connect the AMG8833 Sensors
The AMG8833 sensor has four important pins (VCC, GND, SDA, and SCL) used to transmit data. This sensor can measure temperatures from 0°C to 80°C (32°F to 176°F) with an accuracy of ±2.5°C (4.5°F).
Because our prototype needs to discern temperature differences between the right and left sides of the tire, two AMG8833 sensors are required, which in turn requires two I2C buses. In the provided illustration, the Jetson Nano pin configuration for the two AMG8833 sensors is as follows:
AMG8833 Right Sensor:
VCC (pin 4), GND (pin 6), SDA (pin 3), SCL (pin 5)
AMG8833 Left Sensor:
VCC (pin 2), GND (pin 9), SDA (pin 27), SCL (pin 28)
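With both sensors wired, it is worth verifying that each one responds on its bus before installing the libraries. On the Jetson Nano header, pins 3/5 correspond to I2C bus 1 and pins 27/28 to I2C bus 0, and the AMG8833 typically appears at address 0x69 (or 0x68 if its address pin is pulled low):
sudo apt-get install -y i2c-tools
i2cdetect -y -r 1
i2cdetect -y -r 0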
After the sensors are installed, the next step is to download and install the packages outlined in the Adafruit guide, along with the libraries used.
sudo apt-get install -y build-essential python-pip python-dev python-smbus git
git clone https://github.com/adafruit/Adafruit_Python_GPIO.git
cd Adafruit_Python_GPIO
sudo python setup.py install
sudo apt-get install -y python-scipy python-pygame
sudo pip install colour Adafruit_AMG88xx
The next step is to create a Python file called amg.py (click to download the file), which contains a script that reads the temperature output from both sensors and presents it as a pixel-scale heatmap.
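For reference, here is a minimal sketch of what reading both sensors could look like with the legacy Adafruit_AMG88xx library installed above. The busnum keyword for bus selection and the grayscale rendering are assumptions; the actual amg.py linked above remains the authoritative version.

# Minimal sketch (not the actual amg.py): read both AMG8833 sensors and
# convert each 8x8 reading into an upscaled grayscale heatmap image.
# Assumption: the legacy Adafruit_AMG88xx driver accepts a busnum keyword
# for bus selection, as Adafruit_Python_GPIO-based drivers generally do.
import numpy as np
from PIL import Image
from Adafruit_AMG88xx import Adafruit_AMG88xx

sensor_right = Adafruit_AMG88xx(busnum=1)  # wired to pins 3/5 (bus 1)
sensor_left = Adafruit_AMG88xx(busnum=0)   # wired to pins 27/28 (bus 0)

def read_heatmap(sensor, t_min=20.0, t_max=80.0, size=(340, 340)):
    # Read the 64 temperature pixels and map them to a grayscale image.
    pixels = np.array(sensor.readPixels()).reshape(8, 8)
    scaled = np.clip((pixels - t_min) / (t_max - t_min), 0.0, 1.0)
    gray = (scaled * 255).astype(np.uint8)
    return Image.fromarray(gray).resize(size, Image.BICUBIC)

if __name__ == "__main__":
    read_heatmap(sensor_right).save("right_heatmap.png")
    read_heatmap(sensor_left).save("left_heatmap.png")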
Step 3: Connect Sound
The audio output uses a pair of 3 W, 4 Ω speakers driven by a DFPlayer Pro mini MP3 player. The MP3 player is connected to the Jetson Nano via a USB to Type-C cable.
Next, create a sound.py file containing the program that activates the audio. We use three audio files: a flat tire warning, an abnormal tire temperature warning, and an alarm. Note that pydub's play() helper relies on simpleaudio or ffmpeg/ffplay being available on the system. Here is the code of the sound.py file.
from pydub import AudioSegment
from pydub.playback import play

# Paths to the three warning sounds used by the main program.
kempis_file = "/home/jetson/PIMNAS/Kempes.mp3"  # flat tire warning
suhu_file = "/home/jetson/PIMNAS/suhu.mp3"      # abnormal temperature warning
alarm_file = "/home/jetson/PIMNAS/alarm.mp3"    # alarm sound

def play_audio(file_path):
    # Load the audio file and play it through the default output device.
    audio = AudioSegment.from_file(file_path)
    play(audio)
Step 4: Main Program
The main program integrates all components: the audio output, the AMG8833 sensors for monitoring temperature, and the cameras used to classify tire pressure. In addition, a GUI created with Tkinter is included so that all information is conveyed clearly. To view the entire main program, please follow the link below: main.py.
Here is a concise overview of the main.py program content. The script encompasses three main parts:
1. Initialization and Graphical User Interface (GUI) Configuration:
This section focuses on initializing the program and setting up the graphical user interface. Key elements include the creation of the main window using Tkinter, the establishment of frames and canvases, and the configuration of labels and buttons to organize and present information.
import tkinter as tk
import threading

if __name__ == "__main__":
    root = tk.Tk()
    root.geometry("1440x900")
    root.title("Gray Background Window")
    root.attributes('-fullscreen', True)
    root.bind('<Escape>', exit_fullscreen)

    frame = tk.Frame(root, bg="#424141")
    frame.place(relwidth=1, relheight=1)
    canvas1 = tk.Canvas(frame, width=660, height=750, bg="#252222")
    canvas1.place(x=50, y=50)
    canvas2 = tk.Canvas(frame, width=660, height=511, bg="#252222")
    canvas2.place(x=730, y=50)
    canvas3 = tk.Canvas(frame, width=660, height=221, bg="#252222")
    canvas3.place(x=730, y=580)

    AFTD = tk.Label(frame, text="AUTOMATIC FLAT TIRE", font=("Verdana 30 bold"), bg="#252222", fg="white")
    AFTD.place(x=810, y=80)
    # ... (Other GUI elements)

    Alarm_button = tk.Button(root, text="ALARM", width=20, height=7, command=alarm_button_click)
    Alarm_button.place(x=1170, y=620)
    alarm_on = False

    # Run each camera pipeline in its own daemon thread so the GUI stays responsive.
    def LEFT():
        left_thread = threading.Thread(target=main_left, args=(heatmap_label1, camera_label1, 0))
        left_thread.daemon = True
        left_thread.start()

    def RIGHT():
        right_thread = threading.Thread(target=main_right, args=(heatmap_label2, camera_label2, 1))
        right_thread.daemon = True
        right_thread.start()

    LEFT()
    RIGHT()
    root.mainloop()
2. main_right and main_left Functions:
The main_right and main_left functions are central to the program's functionality. main_right orchestrates object detection and classification for the right camera, continuously capturing frames, utilizing a pre-trained model, and updating the GUI with real-time results, including a heatmap. main_left mirrors this process for the left camera, adapting the logic accordingly.
def main_right(heatmap_label, cam_label, videoCaptureDeviceId):
    # ... (Initialization and camera configuration for the right camera)
    for res, img in runner.classifier(videoCaptureDeviceId):
        # ... (Object detection process and handling classification results)
        if show_camera:
            # Display the heatmap image and camera frame on the GUI
            amg = Image.fromarray(image)
            amg = ImageTk.PhotoImage(image=amg)
            heatmap_label.configure(image=amg)
            heatmap_label.image = amg
            frame = cv2.resize(img, (340, 340))
            cam = ImageTk.PhotoImage(image=Image.fromarray(frame))
            cam_label.config(image=cam)
            cam_label.cam = cam
        time.sleep(0.1)
        if cv2.waitKey(1) == ord('q'):
            break
        next_frame = now() + 100
    # ... (Cleanup and finalization after completion)
def main_left(heatmap_label, cam_label, videoCaptureDeviceId):
    # ... (Initialization and camera configuration for the left camera)
    for res, img in runner.classifier(videoCaptureDeviceId):
        # ... (Object detection process and handling classification results)
        if show_camera:
            # Display the heatmap image and camera frame on the GUI
            amg = Image.fromarray(image)
            amg = ImageTk.PhotoImage(image=amg)
            heatmap_label.configure(image=amg)
            heatmap_label.image = amg
            frame = cv2.resize(img, (340, 340))
            cam = ImageTk.PhotoImage(image=Image.fromarray(frame))
            cam_label.config(image=cam)
            cam_label.cam = cam
        time.sleep(0.1)
        if cv2.waitKey(1) == ord('q'):
            break
        next_frame = now() + 100
    # ... (Cleanup and finalization after completion)
3. Supporting Functions:
The supporting functions contribute to the program's overall coherence. Functions such as update_heatmap_label ensure continuous GUI updates, enhancing the real-time visualization of detection results. toggle_alarm_color manages the dynamic color change of the alarm button, providing a visual indicator of the alarm status. Additionally, alarm_button_click responds to user interaction by invoking toggle_alarm_color, initiating color toggling and triggering an audio alarm. These supporting functions collectively enrich the user experience and contribute to the program's comprehensive functionality.
def update_heatmap_label(label):
    while True:
        if heatmap_image is not None:
            label.configure(image=heatmap_image)
            label.image = heatmap_image
        time.sleep(0.1)

def toggle_alarm_color():
    global alarm_on
    if alarm_on:
        Alarm_button.configure(bg="white")
    else:
        Alarm_button.configure(bg="red")
        print("Alarm ON")
        play_audio(alarm_file)
    alarm_on = not alarm_on

def alarm_button_click():
    toggle_alarm_color()
Enclosure and Holder Design
For further prototype development, it is advisable to expand the dataset to include tire variations from different vehicles, such as trucks, buses, and various other car models. Additionally, in the advanced stages of development, integrating the prototype with highway toll gate systems should be considered to optimize its performance.