Violations of protective passenger (fellow rider) boarding rules and other accidents, including the Sleeping Child problem, occur on school buses.
Sleeping Child problem: an accident caused by a careless driver who fails to check for a child who has fallen asleep and not gotten off the bus.
1-2. Why this should be solved
According to the "Serim law," a law that strengthened safety standards for school buses and took effect in 2015, a fellow passenger (a teacher who helps children get on and off the bus) must ride with the children to help them wear seat belts and board and alight safely. However, various safety accidents keep occurring because this rule is not properly followed.
1-3. A safety incident
On January 25, 2022, a 9-year-old child got off an academy van on Jeju Island; the vehicle left without checking that the child's clothes were caught in the door, and the child was struck by the vehicle and died. A fellow passenger was registered for the van on paper, but was not actually on board at the time of the accident.
<Reference : https://news.kbs.co.kr/news/view.do?ncd=5383007>
1-4. Why this causes an issue to campus or daily life
Despite the existence of the Serim Act, many school buses operate without a hired fellow passenger because of labor costs, and the penalty under the Road Traffic Act is only 300,000 won, which is cheaper than the labor cost. In addition, according to the "Children's School Bus Statistics" submitted by the National Police Agency, the number of children's school buses per traffic officer varies widely by region, from 18 to 194, and the number of traffic patrols falls far short of the number of school buses, so management and supervision of the fellow-passenger boarding obligation is poor.
<Reference : https://www.yeongnam.com/web/view.php?key=2021122101000255>
<Reference : http://www.sisajournal-e.com/news/articleView.html?idxno=260788>
1-5. Who is the target customer for this service
The target is facilities that register and operate children's commuting vehicles. The system can prevent accidents that may occur in children's commuting vehicles, and if an accident does occur, it can prove whether the operator fulfilled his or her obligations.
2. Model Diagram
1. To verify that the fellow passenger is present on the bus and is a designated person, the camera and LED are used after the bus departs to determine whether the person is registered in the database.
2. The bus's route is transmitted to the application through a GPS sensor. In addition, the bus's departure and arrival information is identified and a push alarm is sent.
3. To prevent accidents when disembarking, the LED in front of the driver lights up when weight is detected through the load cell and turns off when no weight is detected.
4. When the bus arrives at its final destination, the buzzer rings; the driver must go to the back of the bus, check whether any children are left, and press the button to turn off the buzzer.
5. A push alarm is sent to the application if the load cell still detects weight on a seat after the bus has arrived.
3. System Architecture & Resource Browser
The overall resource diagram is shown below.
The Monitoring Entity (ME) is developed using the SimpleHTTPServer module. It sits between the Raspberry Pi devices and the Mobius server, and between the Mobius server and the applications. In other words, the ME monitors the sensor data, operates devices by sending commands to the actuators, and responds with appropriate data when an application requests it.
The ME continuously monitors the sensor values sent to the Mobius server, determines whether certain conditions are met, and creates a content instance (CIN) in a container that has Thyme registered as a <nu> (notification URI), so that the actuator can operate.
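As a point of reference, a subscription of this kind can be created on Mobius with a single oneM2M request. The sketch below is illustrative only; the Mobius base URL, originator, and Thyme notification URI are assumptions, and the container path follows the names used later in this document.

import json
import requests

# Assumed values; replace with the real Mobius host, originator (AE), and Thyme notification URI.
MOBIUS = "http://127.0.0.1:7579/Mobius"
HEADERS = {
    "X-M2M-Origin": "SBusME",                                   # assumed originator
    "X-M2M-RI": "req-sub-1",                                    # request identifier
    "Content-Type": "application/vnd.onem2m-res+json; ty=23",   # ty=23: subscription resource
    "Accept": "application/json",
}

# Subscribe Thyme to new content instances under /bus/fellow/camera_comm so that a
# CIN('1') created by the ME is forwarded to Thyme (and from there to the TAS).
body = {
    "m2m:sub": {
        "rn": "sub_camera_comm",
        "nu": ["mqtt://127.0.0.1/nCube-thyme?ct=json"],  # assumed Thyme notification URI
        "nct": 2,                                        # notification content type (assumption)
    }
}

res = requests.post(MOBIUS + "/bus/fellow/camera_comm", headers=HEADERS, data=json.dumps(body))
print(res.status_code, res.text)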
The roles to be played in the backend server are as follows:
- Respond with the real-time bus location and child arrival information
If you send a GET request to the ME with a sid (Student Id) value, it returns, in JSON format, whether the student with that sid has gotten off the bus, how many minutes later he or she will arrive if still on board, the departure time of the bus, and the bus's current GPS coordinates. The frontend processes this data to display useful information in the application (a minimal sketch of this handler appears after this list).
- Respond with fellow passenger and driver information
When a GET request with a bid (Bus Id) value is sent to the ME, it returns the fellow passenger and driver information for the bus with that bid in JSON format. The frontend processes this data to display useful information in the application.
- Register a token for the parent's device (smartphone)
On its initial launch, the app sends a device token to the server so that the ME can send push alarms to students' parents. Then the registration process above proceeds.
The ME stores the device_id of the student with that id and uses it when sending a push alarm.
- Notify the app about the departure and arrival of the bus
When the bus departs, or when the weight sensor still detects weight after the bus has finished driving, the ME sends a push alarm to all registered devices.
Also, when the bus arrives at a particular station, the ME sends a push alarm to the device_id registered for the child getting off at the next stop.
- Request confirmation that the fellow passenger is on board
If the current coordinates of the bus are more than 70 meters away from the starting point, the ME determines that the bus has departed and adds a CIN ('1') to the /bus/fellow/camera_comm container (see the sketch after this list). This is a request to the camera for face recognition: the CIN is sent to Thyme as a notification, Thyme retransmits the value to the TAS connected to it by a web socket, and finally, through the TAS, face recognition begins.
- Request for camera re-shooting
If face recognition fails, a CIN ('0') is added to the /bus/fellow/camera_data container. When the ME receives this value through a notification, it requests a re-shoot by adding a CIN ('1') to the /bus/fellow/camera_comm container.
- Request for Sleeping Child Check
If it is determined that the bus's ignition has been turned off, the ME adds a CIN ('1') to the /bus/real/buzzer container. Then, via Thyme and the TAS, the buzzer is turned on. The buzzer is coded to turn off only when the fellow passenger or driver presses the button next to the buzzer.
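As referenced above, here is a minimal sketch of two of these roles, using Python 3's http.server in place of SimpleHTTPServer: detecting departure by distance from the starting point and posting a CIN ('1') to /bus/fellow/camera_comm, and answering a GET request that carries a sid. The Mobius base URL, the originator, the /student endpoint, and the field names in the JSON response are illustrative assumptions, not the actual ME source.

import json
import math
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

MOBIUS = "http://127.0.0.1:7579/Mobius"   # assumed Mobius base URL
ORIGIN = "SBusME"                         # assumed originator
START = (33.4996, 126.5312)               # assumed starting point (lat, lon)

def distance_m(p1, p2):
    # Haversine distance in meters between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * math.asin(math.sqrt(a))

def create_cin(container, value):
    # Add a content instance (CIN) to a Mobius container,
    # e.g. create_cin('/bus/fellow/camera_comm', '1').
    headers = {
        "X-M2M-Origin": ORIGIN,
        "X-M2M-RI": "req-cin",
        "Content-Type": "application/vnd.onem2m-res+json; ty=4",  # ty=4: contentInstance
    }
    body = {"m2m:cin": {"con": value}}
    return requests.post(MOBIUS + container, headers=headers, data=json.dumps(body))

def on_gps_update(lat, lon):
    # Called whenever a new GPS value arrives from the Mobius server.
    if distance_m(START, (lat, lon)) > 70:
        create_cin("/bus/fellow/camera_comm", "1")  # request face recognition of the fellow passenger

class MEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Example: GET /student?sid=1 returns this student's status as JSON (field names are assumptions).
        qs = parse_qs(urlparse(self.path).query)
        sid = qs.get("sid", [None])[0]
        answer = {
            "sid": sid,
            "off_bus": False,
            "eta_min": 7,
            "departure_time": "08:10",
            "gps": {"lat": 33.4996, "lon": 126.5312},
        }
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(answer).encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MEHandler).serve_forever()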
The Expo framework is used together with React Native, and React Native's MapView component is used to show the map.
When the GPS icon below is pressed, the map moves to the bus-shaped marker using the animateToRegion method.
Expo also provides a push notification function. First, the app obtains a device token and delivers the token and the notification contents to Expo; Expo then sends the notification to the phone.
When the app is first rendered, a token is issued inside a useEffect hook.
If you send the data together with the token to the Expo backend using curl,
you can get a notification like the one above.
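The same push request can also be sent without curl; the sketch below posts to Expo's push endpoint (https://exp.host/--/api/v2/push/send) from Python, with a placeholder token and an assumed message.

import json
import requests

# Placeholder Expo push token obtained by the app on first launch.
token = "ExponentPushToken[xxxxxxxxxxxxxxxxxxxxxx]"

message = {
    "to": token,
    "title": "School bus",        # assumed notification title
    "body": "The bus has departed.",
}

res = requests.post(
    "https://exp.host/--/api/v2/push/send",
    headers={"Content-Type": "application/json"},
    data=json.dumps(message),
)
print(res.status_code, res.json())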
The mobile application obtains data by requesting the GPS and passenger information through the Mobius Python application, as follows.
Whenever the GPS value changed, or whenever the mobile application requested data, the app sent a request to the Python application, obtained the data, and rendered it on the screen through useState.
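Expressed in Python rather than the app's JavaScript, an equivalent request might look like the following; the endpoint and parameter names follow the assumptions used in the ME sketch above, so this is an illustration rather than the actual app code.

import requests

# Ask the ME/Mobius Python application for the status of the student with sid=1 (endpoint is an assumption).
res = requests.get("http://127.0.0.1:8080/student", params={"sid": "1"})
data = res.json()
print(data["off_bus"], data["eta_min"], data["gps"])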
Basically, Hackster's facial recognition open source is used.
Our team uses OpenCV 4.6.0 with Python 3.
After installing it, we set up an OpenCV virtual environment.
workon cv
Use the above command to work in the cv Python virtual environment.
After entering the environment, the command prompt is set as follows.
(cv) rasp@raspberrypi:
- STEP 1: Face Detection
For face detection, we are going to use OpenCV's Haar cascade classifier.
import numpy as np
import cv2

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
cap.set(3,640) # set Width
cap.set(4,480) # set Height

while True:
    ret, img = cap.read()
    img = cv2.flip(img, -1)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
    )
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        #roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
    cv2.imshow('video',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27: # press 'ESC' to quit
        break

cap.release()
cv2.destroyAllWindows()
Download the xml file from the haarcascades directory for classifier application.
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
This is the line that loads the classifier.
Then, we set up our camera and, inside the loop, load our input video in grayscale mode.
Now we call the classifier function, passing it some very important parameters: the scale factor, the number of neighbors, and the minimum size of the detected face.
faces = faceCascade.detectMultiScale(
gray,
scaleFactor=1.2,
minNeighbors=5,
minSize=(20, 20)
)
The function will detect faces on the image. Next, we must mark the faces in the image, using a blue rectangle.
for (x,y,w,h) in faces:
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    #roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
The rectangle is set with the corner (x, y), width (w), and height (h), and the result is presented with the imshow() function.
Run the Python script in the cv Python environment, using the Raspberry Pi terminal.
python face_detection.py
The result:
- STEP 2: Face Data Set
First, create a subdirectory to store the facial JPG data and name it "dataset".
mkdir dataset
This code is very similar to the face detection code.
import cv2
import os

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id end press <return> ==> ')
print("\n [INFO] Initializing face capture. Look the camera and wait ...")

# Initialize individual sampling face count
count = 0
while(True):
    ret, img = cam.read()
    img = cv2.flip(img, -1) # flip video image vertically
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
        count += 1
        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])
        cv2.imshow('image', img)
    k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face sample and stop video
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
There is an "input" command to set a user id (an integer number).
face_id = input('\n enter user id end press <return> ==> ')
And for each of the captured frames, the face region is saved as a file in the "dataset" directory.
cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])
For example, for a user with face_id=1, the second sample file in the dataset directory will be named like this.
User.1.2.jpg
This is shown in the screenshot from the Raspberry Pi.
In this code, it captures 30 samples from each id.
- STEP 3: Training
We must take all the user data from our dataset and train the OpenCV recognizer. This is done directly by a specific OpenCV function. The result will be a .yml file saved in the "trainer" directory.
So, create a subdirectory to store trained data.
mkdir trainer
import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]
    faceSamples=[]
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)
    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))
It uses the recognizer, the LBPH (Local Binary Patterns Histograms) face recognizer, included in the OpenCV-contrib package.
recognizer = cv2.face.LBPHFaceRecognizer_create()
The function "getImagesAndLabels(path)" will take all photos in the dataset directory and return two arrays, "ids" and "faces". With those arrays as input, we train the recognizer.
faces,ids = getImagesAndLabels(path)
As a result, a file named "trainer.yml" will be saved in the trainer directory.
- STEP 4: Face Recognition
Until now, we captured faces with the Pi camera and trained the recognizer. Now the recognizer will make a "prediction" and return the face's id together with a confidence index.
import cv2
import numpy as np
import os

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX

# initiate id counter
id = 0

# names related to ids: example ==> Gyuseob: id=1, etc
names = ['None', 'Gyuseob', 'Jihye', 'Seunghyeon', 'Woohyeop', 'Seungchan']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:
    ret, img = cam.read()
    img = cv2.flip(img, -1) # Flip vertically
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
    )
    for (x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
        # Check if confidence is less than 100 ==> "0" is a perfect match
        if (confidence < 100):
            id = names[id]
            confidence = " {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = " {0}%".format(round(100 - confidence))
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)
    cv2.imshow('camera',img)
    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
We include a new array so that the program displays names instead of numbered ids.
names = ['None', 'Gyuseob', 'Jihye', 'Seunghyeon', 'Woohyeop', 'Seungchan']
So, for example, Gyuseob will be the user with id=1, Jihye with id=2, etc.
Next, the recognizer predicts the identity of each detected face.
id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
recognizer.predict() takes the captured portion of the face to be analyzed and returns its probable owner, indicating its id and how much confidence the recognizer has in this match. Zero is considered a perfect match.
- STEP 5: Connect to Server, Set RGB led
Since facial recognition needs to receive a signal from the server, turn on the camera and LED, and send the result back to the server, the code needs to be modified.
import cv2
import numpy as np
import os
from time import sleep
import sys
from socket import *

DEBUG = True
TAS_PORT = 3333

def send_to_TAS(cntName, data):
    send_to_TAS_(cntName, data)
    send_to_TAS_(cntName, data)

def send_to_TAS_(cntName, data):
    client = socket(AF_INET, SOCK_STREAM)
    client.connect(('127.0.0.1', TAS_PORT))
    if DEBUG:
        print("[*] {}".format((cntName + '/' + data).encode('utf-8')))
    client.send((cntName + '/' + data).encode('utf-8'))
    client.close()

cin = int(sys.argv[1])

def face_recognition():
    os.system("./led 1")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read('trainer/trainer.yml')
    cascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascadePath)
    font = cv2.FONT_HERSHEY_SIMPLEX
    # initiate id counter
    id = 0
    # Initialize and start realtime video capture
    cam = cv2.VideoCapture(0)
    cam.set(3, 640) # set video width
    cam.set(4, 480) # set video height
    # Define min window size to be recognized as a face
    minW = 0.1*cam.get(3)
    minH = 0.1*cam.get(4)
    while True:
        #os.system("./led 1")
        ret, img = cam.read()
        img = cv2.flip(img, -1) # Flip vertically
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor = 1.2,
            minNeighbors = 5,
            minSize = (int(minW), int(minH)),
        )
        for (x,y,w,h) in faces:
            cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
            id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
            # Check if confidence is less than 100 ==> "0" is a perfect match
            if (confidence < 100):
                os.system("./led 2")
                sleep(5)
                os.system("./led 4")
                return 1
            else:
                os.system("./led 3")
                sleep(5)
                os.system("./led 4")
                return 0

if __name__ == "__main__":
    if cin == 1:
        out = face_recognition()
        if out == 1:
            print("Welcome!")
        else:
            print("Unwelcomed!")
        send_to_TAS('camera_data', str(out))
The facial recognition code communicates with the server using TAS. This code enables it.
DEBUG = True
TAS_PORT = 3333

def send_to_TAS(cntName, data):
    send_to_TAS_(cntName, data)
    send_to_TAS_(cntName, data)

def send_to_TAS_(cntName, data):
    client = socket(AF_INET, SOCK_STREAM)
    client.connect(('127.0.0.1', TAS_PORT))
    if DEBUG:
        print("[*] {}".format((cntName + '/' + data).encode('utf-8')))
    client.send((cntName + '/' + data).encode('utf-8'))
    client.close()

cin = int(sys.argv[1])
And the main block is used to receive the signal from the server and send the return value back.
if __name__ == "__main__":
    if cin == 1:
        out = face_recognition()
        if out == 1:
            print("Welcome!")
        else:
            print("Unwelcomed!")
        send_to_TAS('camera_data', str(out))
And for a quick response, the LED code is written in C rather than Python.
#include <wiringPi.h>
#include <stdio.h>
#include <string.h>

#define RED_PIN 27
#define BLUE_PIN 29
#define GREEN_PIN 28

int turn_off(){
    digitalWrite(RED_PIN, LOW);
    digitalWrite(GREEN_PIN, LOW);
    digitalWrite(BLUE_PIN, LOW);
    return 0;
}

int turn_on_red(){
    turn_off();
    digitalWrite(RED_PIN, HIGH);
    return 0;
}

int turn_on_green(){
    turn_off();
    digitalWrite(GREEN_PIN, HIGH);
    return 0;
}

int turn_on_purple(){
    turn_off();
    digitalWrite(RED_PIN, HIGH);
    digitalWrite(BLUE_PIN, HIGH);
    return 0;
}

int init() {
    digitalWrite(RED_PIN, HIGH);
    delay(500);
    digitalWrite(RED_PIN, LOW);
    delay(500);
    return 0;
}

int main (int argc, char *argv[])
{
    wiringPiSetup();
    pinMode(RED_PIN, OUTPUT);
    pinMode(GREEN_PIN, OUTPUT);
    pinMode(BLUE_PIN, OUTPUT);
    if(argc == 2){
        char* comm = argv[1];
        if(strcmp(comm, "1") == 0){
            turn_on_red();
        } else if(strcmp(comm, "2") == 0){
            turn_on_green();
        } else if(strcmp(comm, "3") == 0){
            turn_on_purple();
        } else if(strcmp(comm, "4") == 0){
            turn_off();
        } else if(strcmp(comm, "0") == 0){
            init();
        }
    }
    return 0;
}
4-4. Led sensor
- Description
Case #1
When the board weight sensor detects weight, the sensor value (cin) to be transmitted to the Mobius server is created as '1'. When the value '1' reaches the Mobius server, it is delivered to led.cpp and the code is executed: the red light of the LED turns on, indicating that a child is standing on the boarding step. The moment the child gets on or off the bus is treated as the case where the child is standing on the step.
Case #2
When the board weight sensor detects no weight, the sensor value (cin) is created as '0'. When '0' reaches the Mobius server, led.cpp is executed in the same way: the red light of the LED turns off, indicating that the child has left the step. Once the child has finished getting on or off, it is considered that the child has left the step. A minimal sketch of this dispatch follows.
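The sketch below only illustrates the receiving side described above; it assumes the notification value from Mobius is handed to a Python handler on the Pi, and the ./led_board binary name is hypothetical (standing in for the compiled led.cpp that drives the red LED on GPIO 22).

import os

def on_board_weight_cin(con):
    # con is the content ('1' or '0') of the CIN created for the board weight sensor.
    # './led_board' is a hypothetical compiled binary controlling the red LED on GPIO 22.
    if con == '1':
        os.system("./led_board 1")   # child is standing on the step: red LED on
    else:
        os.system("./led_board 0")   # no weight on the step: red LED off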
- Raspberrypi Module GPIO table
led_red = GPIO 22
- Result
In this case, the board-LED code requests the board weight sensor value from the Mobius server.
This is the case where the board weight sensor value is initialized to '0'.
If '1' is received as the sensor value, the board weight sensor is considered to have detected weight. In this case, the red LED is turned on so the driver can check on the child.
If '0' is received again as the board weight sensor value, the sensor is considered not to detect weight. In this case, the LED is turned off to inform the driver that no child is getting off.
- Description
Case #1: the seat weight sensor
If the seat weight sensor value is 100,000 or more, the sensor value (cin) is created as '1' and transmitted to the Mobius server; a child sitting in the seat corresponds to a value of 100,000 or more. If the seat weight sensor value is less than 100,000, the sensor value (cin) is created as '0' and transmitted to the Mobius server; a child leaving the seat corresponds to a value of less than 100,000.
Case #2: the board weight sensor
If the board weight sensor value is 100,000 or more, the sensor value (cin) is created as '1' and transmitted to the Mobius server; a child standing on the step corresponds to a value of 100,000 or more. If the board weight sensor value is less than 100,000, the sensor value (cin) is created as '0' and transmitted to the Mobius server; a child leaving the step corresponds to a value of less than 100,000. A minimal sketch of this threshold logic follows.
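The following reuses a simplified send_to_TAS pattern from the GPS code; the read_raw_weight() helper and the 'seat1_weight' container name are assumptions (the real weight1.py reads the HX711 load-cell amplifier on the DT/SCK pins listed below), so this is an illustrative sketch rather than the team's actual code.

from time import sleep
from socket import *

TAS_PORT = 3333
THRESHOLD = 100000  # raw load-cell reading treated as "weight detected"

def send_to_TAS(cntName, data):
    # Simplified version of the pattern used in the GPS code:
    # forward "container/value" to the local TAS socket.
    client = socket(AF_INET, SOCK_STREAM)
    client.connect(('127.0.0.1', TAS_PORT))
    client.send((cntName + '/' + data).encode('utf-8'))
    client.close()

def read_raw_weight():
    # Hypothetical placeholder: the real weight1.py reads the HX711 on (DT, SCK) = GPIO(27, 17).
    return 0

while True:
    value = read_raw_weight()
    cin = '1' if value >= THRESHOLD else '0'   # seat occupied vs. empty
    send_to_TAS('seat1_weight', cin)
    sleep(1)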
- Raspberrypi Module GPIO table
seat1_weight : (DT, SCK) = GPIO(27, 17)
seat2_weight : (DT, SCK) = GPIO(23, 24)
seat3_weight : (DT, SCK) = GPIO(5, 6)
seat4_weight : (DT, SCK) = GPIO(13, 19)
board_weight : (DT, SCK) = GPIO(20, 21)
- Result
In Case #1, a total of 4 seat sensors are used; the corresponding codes are weight1.py, weight2.py, weight3.py, and weight4.py. In Case #2, a single board sensor is used; the corresponding code is board.py. These scripts upload values from the sensors to Mobius, and the five codes are identical except for the destination container, so weight1.py is used as the basis. All five are executed on Raspberry Pi 3 to send values to the Mobius server.
To transmit the seat weight sensor value to the Mobius server in conjunction with the TAS, run index.js and then execute weight1.py; the output is as follows.
- Description
Run buzzer.py with python3; the Mobius server transmits '1' as a string to the sensor node, and this case is regarded as the moment the vehicle's ignition is turned off. When '1' is received as the sensor operation code, the buzzer sounds, and it is stopped by pressing the button. When '0' is received as the sensor operation code, the buzzer does not sound. A minimal sketch of this behavior follows.
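It assumes an active buzzer on GPIO 12 and a pull-up button on GPIO 26 driven through RPi.GPIO; this is an illustration, not the team's exact buzzer.py.

import RPi.GPIO as GPIO

BUZZER_PIN = 12
BUTTON_PIN = 26

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # button pulls the pin low when pressed (assumption)

def handle_ignition_code(cin):
    # cin is the string received from the Mobius server via the TAS.
    if cin == '1':                      # ignition turned off: start the Sleeping Child Check
        GPIO.output(BUZZER_PIN, GPIO.HIGH)
        # Keep buzzing until the driver or fellow passenger walks to the back and presses the button.
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)
        GPIO.output(BUZZER_PIN, GPIO.LOW)
    else:                               # '0': vehicle still running, buzzer stays off
        GPIO.output(BUZZER_PIN, GPIO.LOW)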
- Raspberrypi Module GPIO table
buzzer = GPIO 12
button = GPIO 26
- Result
This code is executed on Raspberry Pi 3 and requests the sensor value from Mobius.
When the app.js code that gets the buzzer sensor value is executed, you can see that the initial value '0' from the Mobius server is processed. In the case of '0', the buzzer does not sound because the vehicle has not yet been turned off.
Here, when '1' is transmitted because the engine has been turned off, the buzzer sounds, and it can be stopped by pressing the button.
- Description
The latitude and longitude of the current location are transmitted to the Mobius server in real time using the GPS sensor, for bus location inquiry. Using the latitude and longitude output in real time, a GPS sensor value (cin) is generated and transmitted to the Mobius server.
- Raspberrypi Module GPIO table
VCC = + (Power)
RX = GPIO 14
TX = GPIO 15
GND = - (Ground)
- Result
This is the process of sending the sensor values (latitude, longitude) to Mobius by executing the code below on Raspberry Pi 4.
import serial
import pynmea2
from socket import *

DEBUG = True
TAS_PORT = 3333

def send_to_TAS_(cntName, data):
    client = socket(AF_INET, SOCK_STREAM)
    client.connect(('127.0.0.1', TAS_PORT))
    if DEBUG:
        print("[*] {}".format((cntName + '/' + data).encode('utf-8')))
    client.send((cntName + '/' + data).encode('utf-8'))
    client.close()

def send_to_TAS(cntName, data):
    send_to_TAS_(cntName, data)
    send_to_TAS_(cntName, data)

def parseGPS(s):
    if s.find('GGA') > 0:
        msg = pynmea2.parse(s)
        lat = float(msg.lat)/100
        lon = float(msg.lon)/100
        #print ("Timestamp: %s -- Lat: %s %s -- Lon: %s %s -- Altitude: %s %s" % (msg.timestamp,msg.lat,msg.lat_dir,msg.lon,msg.lon_dir,msg.altitude,msg.altitude_units))
        send_to_TAS('gps_lat', str(lat))
        send_to_TAS('gps_lon', str(lon))

serialPort = serial.Serial("/dev/serial0", 9600, timeout=0.5)
while True:
    s = serialPort.readline()
    parseGPS(s.decode())
5. Demo Video