TEAM NAME: Blossom
(Blossom sounds similar to the Korean pronunciation of "불났어!", meaning "A fire broke out!")
*Service Introduction*
Current problems in the event of a fire: when a fire breaks out in a building, fire authorities cannot actively respond until fire trucks and firefighters arrive.
The causes of this situation are:
- Information from IoT devices cannot be aggregated to share the overall fire situation.
- There is no platform that directs IoT devices, robots, etc.
Our service addresses this by:
- Detecting fire and sharing the fire situation in real time using IoT devices.
- Using Mobius-based AIaaS (AI as a Service) to count the people to be rescued and to locate the robot.
- Selecting evacuation sites based on the fire conditions and visualizing optimal evacuation routes on the digital twin.
- Implementing the digital twin with Mobius and visualizing it with NVIDIA's Omniverse platform.
The scenario is shown in the image above.
When a fire breaks out, various sensors detect it and the control tower is notified that a fire has started.
The fire situation in the building is then synchronized to the digital twin using sensor data and AI inferences.
A robot approaches the fire scene to help people, handing out survival kits (oxygen masks) and showing them a safe place to wait for the firefighters.
The system diagram is shown in the image above.
Sensor values for detecting fire, images from CCTVs, and data from the robot are sent to the Mobius platform.
Some of these are forwarded through Kafka to GPU servers for AI inference.
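As a sketch of this path, the snippet below forwards a reference to newly uploaded data to the GPU servers through Kafka using the kafka-python library. The broker address, topic name, and payload shape are illustrative assumptions, not the team's actual configuration.

```python
# Hedged sketch: forwarding data references to the GPU inference servers
# via Kafka (kafka-python). Broker, topic, and payload are assumptions.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",          # hypothetical broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Forward a pointer to a CCTV frame stored in Mobius for AI inference.
producer.send("ai-inference-requests", {         # hypothetical topic name
    "source": "cctv1",
    "zone": 1,
    "resource": "Mobius/Blossom/CCTV/cctv1/la",  # latest contentInstance
})
producer.flush()
```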
*Protocols we used*
HTTP and MQTT were used as the oneM2M binding protocols.
HTTP was used for POST and GET; MQTT was used for subscriptions.
POST was used to upload data, GET was used to retrieve data periodically, and subscriptions were used to receive data asynchronously.
Group resources were created with HTTP POST, and data generated by a group was retrieved with HTTP GET through the group's fanout point (fopt).
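The sketch below shows what these oneM2M HTTP calls can look like against a Mobius server, using Python's requests library. The endpoint, originator, and resource names are placeholders, not the team's actual deployment values.

```python
# Hedged sketch of the oneM2M HTTP binding described above.
import requests

MOBIUS = "http://localhost:7579/Mobius"  # hypothetical Mobius endpoint
POST_HEADERS = {
    "X-M2M-Origin": "SBlossom",          # hypothetical originator
    "X-M2M-RI": "req-001",
    "Content-Type": "application/json;ty=4",  # ty=4: contentInstance
}
GET_HEADERS = {"X-M2M-Origin": "SBlossom", "X-M2M-RI": "req-002",
               "Accept": "application/json"}

# POST: upload a sensor reading as a contentInstance.
body = {"m2m:cin": {"con": {"flame": 512, "gas": 388}}}
requests.post(f"{MOBIUS}/Blossom/edge/zone1_flame",
              headers=POST_HEADERS, json=body)

# GET: periodically retrieve the latest contentInstance ("la").
latest = requests.get(f"{MOBIUS}/Blossom/edge/zone1_flame/la",
                      headers=GET_HEADERS).json()

# GET through the group fanout point (fopt) to read a whole zone at once.
zone = requests.get(f"{MOBIUS}/zone1_group/fopt/la",
                    headers=GET_HEADERS).json()
```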
*BLOSSOM oneM2M resource structure*
The following describes the oneM2M resource structure design for BLOSSOM.
The oneM2M resource design is divided into two parts.
The first is the structure for the control tower's control commands and the sensors needed to monitor fire occurrence; the second is the AIHub structure for AIaaS.
The first structure has four containers under an AE called Blossom.
The controlTower container holds a target container that records whether there is a fire and an escapePath container that provides the escape route.
The 4WD container belongs to the disaster robot: it holds a 4WDcam container for uploading image data and a motionCapture container for uploading people's joint values in real time.
Image data is uploaded to the CCTV container to identify the number of people in the building in case of fire.
The edge-device containers come in two types per zone, a flame sensor and a gas sensor, and each edge device continuously uploads its values.
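As an illustration of this tree, the sketch below creates the containers with oneM2M container POSTs (ty=3). The parent paths and zone names are assumptions reconstructed from the description above.

```python
# Hedged sketch: building the Blossom resource tree on Mobius.
import requests

MOBIUS = "http://localhost:7579/Mobius"      # hypothetical endpoint
HEADERS = {
    "X-M2M-Origin": "SBlossom",              # hypothetical originator
    "X-M2M-RI": "req-cnt",
    "Content-Type": "application/json;ty=3", # ty=3: container
}

def create_container(parent: str, name: str) -> None:
    requests.post(f"{MOBIUS}/{parent}", headers=HEADERS,
                  json={"m2m:cnt": {"rn": name}})

# controlTower: fire status and escape routes
create_container("Blossom/controlTower", "target")
create_container("Blossom/controlTower", "escapePath")
# 4WD disaster robot: camera images and real-time joint values
create_container("Blossom/4WD", "4WDcam")
create_container("Blossom/4WD", "motionCapture")
# per-zone edge devices: flame and gas sensors (zone count assumed)
for zone in (1, 2, 3):
    create_container("Blossom/edge", f"zone{zone}_flame")
    create_container("Blossom/edge", f"zone{zone}_gas")
```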
*Fire detection algorithm*
The fire sensors periodically send their values to Mobius. To detect fire, the group function was used per zone so that the flame-sensor and gas-sensor values are received at the same time.
If an initial fire is detected by the counting algorithm based on these sensors, the fire agency is notified through the target container.
The fire information includes a request ID and the location where the fire broke out. A request ID of 0 means a fire has broken out; a request ID of 1 means the firefighters have extinguished the flames.
Based on the data uploaded to Mobius, the counting algorithm checks whether the sensor values exceed the safety threshold. The algorithm runs separately for each zone.
If either the gas-sensor or the flame-sensor value exceeds the safety threshold, and readings below the threshold do not arrive two or more times in a row, the fire-detection count for that zone is increased by one.
When the count reaches 3, the request ID, the escape-route information, and a request to connect the sensors to the classifier (cf) are sent to Mobius.
If sensor values below the safety threshold keep arriving from Mobius after the fire, the count is decreased by one.
When the count returns to 0, the request ID and a request to disconnect the sensors from the cf (classifier) are sent to Mobius.
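A minimal sketch of this per-zone counting logic is below. The numeric thresholds and the exact safe-streak rule are assumptions reconstructed from the description, not the team's actual code.

```python
# Hedged sketch of the per-zone fire counting algorithm.
FLAME_THRESHOLD = 500   # hypothetical safety thresholds
GAS_THRESHOLD = 400
FIRE_COUNT = 3          # count at which a fire is declared

class ZoneFireCounter:
    def __init__(self, zone: int):
        self.zone = zone
        self.count = 0
        self.safe_streak = 0   # consecutive readings below both thresholds
        self.on_fire = False

    def update(self, flame: float, gas: float):
        if flame > FLAME_THRESHOLD or gas > GAS_THRESHOLD:
            self.safe_streak = 0
            self.count += 1
        else:
            self.safe_streak += 1
            # two or more consecutive safe readings decrease the count
            if self.safe_streak >= 2 and self.count > 0:
                self.count -= 1

        if not self.on_fire and self.count >= FIRE_COUNT:
            self.on_fire = True
            return "fire"    # post request ID 0, escape route, cf connect
        if self.on_fire and self.count == 0:
            self.on_fire = False
            return "clear"   # post request ID 1, cf disconnect
        return None
```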
*AIHub oneM2M resource structure*
The following describes the oneM2M resource structure design for the AIHub.
AIaaS is a service users call when they want predictions from AI models based on their own sensor values or data.
Using the oneM2M standard IoT platform, AIaaS was built by linking the IoT hub, which connects to the physical world, with the AI hub, which provides AI as a service.
Kafka was used as the AIaaS broker to support high-speed interworking between the IoT hub and the AI hub.
Under an AE called AIHub, there is a target container for user requests and one container per available model.
Under each model container there is a 'report' container that holds the return value of that AI model.
A request posted to the target container carries a request ID: 0 requests a connection, 1 requests a disconnection.
The info field then specifies the model name and sensor name for which the user wants the AI service.
The value predicted by the AI model is written, together with the sensor name, to the report container under that model's container.
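Putting the request flow together, the sketch below posts a connection request to the target container and later reads the model's prediction from its report container. The resource names and payload fields are assumptions for illustration.

```python
# Hedged sketch of one AIaaS round trip through the AIHub resources.
import requests

MOBIUS = "http://localhost:7579/Mobius"       # hypothetical endpoint
POST_HEADERS = {"X-M2M-Origin": "SBlossom", "X-M2M-RI": "req-ai",
                "Content-Type": "application/json;ty=4"}
GET_HEADERS = {"X-M2M-Origin": "SBlossom", "X-M2M-RI": "req-ai2",
               "Accept": "application/json"}

# request ID 0 = connect sensor to model, 1 = disconnect
request_body = {"m2m:cin": {"con": {
    "requestID": 0,
    "info": {"model": "humanDetection", "sensor": "cctv1"},  # assumed names
}}}
requests.post(f"{MOBIUS}/AIHub/target", headers=POST_HEADERS,
              json=request_body)

# later: read the model's latest prediction from its report container
report = requests.get(f"{MOBIUS}/AIHub/humanDetection/report/la",
                      headers=GET_HEADERS).json()
```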
*AIaaS sequence diagram*
How we counted the number of people to be rescued in the building:
CCTV is normally used for security, but in a fire situation it is used to identify the number of people to be rescued.
Image data is uploaded to Mobius through the CCTVs.
There are three CCTVs in total, covering zones 1, 2, and 3.
When CCTV image data is uploaded to Mobius, the number of people to be rescued is identified by a human-detection AI model.
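As a stand-in for that human-detection step, the sketch below counts people in one frame with torchvision's pretrained Faster R-CNN detector; the model the team actually used may differ. In COCO labels, class 1 is "person".

```python
# Hedged sketch: counting people in a CCTV frame with an off-the-shelf
# detector (torchvision Faster R-CNN), as a stand-in for the team's model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_people(image_path: str, score_threshold: float = 0.7) -> int:
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    is_person = out["labels"] == 1            # COCO label 1 = person
    confident = out["scores"] > score_threshold
    return int((is_person & confident).sum())

# e.g. per zone: {"zone": 1, "people": count_people("cctv1_frame.jpg")}
```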
How the robot recognizes its position using camera images
Unlike an outdoor unmanned vehicle, which can measure its position moment to moment with GPS, an indoor robot has no GPS available, so visual localization is used to estimate its indoor position. Visual localization measures the degree of image overlap by aggregating an image's local feature vectors into one global vector and comparing the similarity between global vectors. A NetVLAD model was used.
First, the inference server holds database photos captured at nodes placed a certain distance apart throughout the building, and each photo is tagged with the information of its node (the nearest room); this tagged room information serves as the reference coordinate for localization in the digital twin.
In the event of a fire, the robot moves, finds the database photo with the highest similarity to the query photo it captures, and extracts the node information tagged in that photo.
The robot moves to the fire site designated by the situation room and continuously shares its movement information (changes in the extracted node information) with its counterpart in the virtual environment, keeping the digital twin synchronized.
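The lookup step can be sketched as a nearest-neighbor search over global descriptors, as below. The NetVLAD network itself is treated as a black-box `embed` function here, which is an assumption.

```python
# Hedged sketch: visual-localization lookup over node-tagged descriptors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def localize(query_descriptor, database):
    """database: list of (node_info, descriptor) pairs built offline,
    where node_info holds the nearest-room tag for that node."""
    best_node, best_sim = None, -1.0
    for node_info, descriptor in database:
        sim = cosine_similarity(query_descriptor, descriptor)
        if sim > best_sim:
            best_node, best_sim = node_info, sim
    return best_node, best_sim

# usage: node, sim = localize(embed(query_image), database)
# `node` (e.g. "room 529") then serves as the reference coordinate
# for synchronizing the robot's position in the digital twin.
```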
*How people are displayed on the Digital Twin*
Before discussing the digital twin: we 3D-modeled the 5th floor of the school using LiDAR sensors.
How we synchronized human poses to the digital twin
When the unmanned vehicle performs visual localization and arrives at the fire site, the people near the fire site are also rendered on the digital twin.
To extract human motion and coordinate information, pose estimation is used: a standard computer-vision method for detecting the position and orientation of an object.
The process runs localization first, then pose estimation.
For the joint values, machine learning extracts elbow yaw and roll, hip pitch, shoulder roll and pitch, and knee pitch from the camera data.
Once the joint information is extracted and published to Mobius, the digital twin subscribes to it and renders the human object.
The building environment is built in the digital twin in advance, and the digital twin pulls data from the standard IoT platform and synchronizes it in real time. As soon as a result stating that there are three people in room 529 is posted, three people can be displayed in the digital twin.
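On the digital-twin side, this subscribe-and-render loop could look like the paho-mqtt sketch below. The broker address, topic format, and the `update_avatar_pose` hook are assumptions; Mobius delivers subscription notifications over its MQTT binding.

```python
# Hedged sketch: the digital twin receiving joint values via MQTT
# (paho-mqtt 1.x style API; 2.x also needs a CallbackAPIVersion).
import json
import paho.mqtt.client as mqtt

def update_avatar_pose(joints: dict) -> None:
    # hypothetical hook that drives the human model in Omniverse
    print("pose update:", joints)

def on_message(client, userdata, msg):
    joints = json.loads(msg.payload)   # e.g. {"elbow_yaw": 0.4, ...}
    update_avatar_pose(joints)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)      # hypothetical Mobius MQTT broker
client.subscribe("/oneM2M/req/Mobius/SDigitalTwin/json")  # assumed topic
client.loop_forever()
```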
You can see pose estimation in the two following videos.
*About Digital Twin*
Our service aims to make it easier for firefighters to grasp the fire situation. It should be easy for firefighters to use and was designed to prevent confusion. So we turned visual effects off by default, added key buttons to toggle them, and visualized where the fire broke out and where it is safe to evacuate before the firefighters arrive.
On the digital twin, firefighters can see where the fire has broken out, where a safe shelter to wait is, and the optimal path to that shelter, all on one screen.
This conveys the fire situation to the firefighters more vividly.
You can check this in the video below, which shows how firefighters can toggle a visual effect on and off just by clicking buttons.
The next video shows how firefighters and people in need see the optimal evacuation path.
Firefighters could also use further statistics, such as estimated remaining survival time, if such algorithms are developed.
*Demo Video*