Modern emergency response systems must rapidly detect and assess the scale of an incident, evacuate people effectively, help eliminate the incident as soon as possible, and provide rescue services with real-time data, thereby increasing their effectiveness. The system should be easily scalable, highly survivable, and able to keep operating when parts of it are damaged. All these safety measures ultimately contribute to saving people's lives.
System architecture

The system design is based on the Fog Computing concept. The reference Fog architecture includes three layers:
- Edge layer (layer for devices)
- Fog layer (layer for distributed computing)
- Cloud layer (layer for global storage and analytics).
Such an architecture is a natural fit for large emergency systems, for the Android Things platform, and for Google Cloud services.
The Edge layer integrates end-point devices: smoke detectors, fire detectors, cameras, actuators, fire extinguishers, water pumps, lights, etc. There are two types of such devices:
- Input devices detect something inside the building (fire, smoke, etc.) and send this information to the Fog layer.
- Output devices perform actions inside the building (turning on lights, alarms, sprinklers, etc.) after receiving a corresponding command from the Fog layer.
The Edge layer can be covered cost-effectively with many low-performance, low-cost nodes such as microcontrollers (Arduino in this project) working with the end-point devices (except those working with cameras).
The Fog layer consolidates the Edge layer microcontrollers. Fog nodes receive sensor information from Edge layer devices and send to-do commands to them. Fog nodes process data themselves and send information to the Cloud layer. A Fog node makes decisions based on the data processing results; it then commands the Edge layer to perform actions and informs the upper layers about the decisions made. The I2C bus interconnection with the microcontrollers makes it possible to consolidate multiple Edge nodes under a single Fog node. One Fog node can hold about ten Edge nodes, depending on their physical distribution, the Fog node's performance, and the performance required for edge data processing. Modern high-performance embedded boards (Raspberry Pi 3 in this project) suit the Fog layer requirements well. Android Things OS also suits developing and running Fog node workloads well, especially thanks to remote software updates, which are crucial for a physically distributed Fog layer.
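For illustration only, this is roughly how a Fog node on Android Things can open several Arduino hubs sharing one I2C bus; the bus name "I2C1" and the addresses are placeholders, not values from the project (older Android Things previews used PeripheralManagerService instead of PeripheralManager):

import com.google.android.things.pio.I2cDevice;
import com.google.android.things.pio.PeripheralManager;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one Fog node consolidating several Arduino Device hubs
// on a single I2C bus. The bus name and the 7-bit addresses are placeholders.
public class DeviceHubPool {
    private final List<I2cDevice> mConnectedDevices = new ArrayList<>();

    public void openHubs() throws IOException {
        PeripheralManager manager = PeripheralManager.getInstance();
        int[] hubAddresses = {0x40, 0x41, 0x42}; // one address per Arduino hub
        for (int address : hubAddresses) {
            mConnectedDevices.add(manager.openI2cDevice("I2C1", address));
        }
    }

    public void closeHubs() {
        for (I2cDevice device : mConnectedDevices) {
            try {
                device.close();
            } catch (IOException e) {
                // ignore close errors on shutdown
            }
        }
        mConnectedDevices.clear();
    }
}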
The Cloud layer provides advanced connectivity for disconnected Fog regions, high-load processing that exceeds the Fog nodes' performance, distributed storage, various analytics, etc. The broad range of services in Google Cloud Platform sufficiently covers the requirements that the Fog computing technology places on the Cloud layer. The Cloud layer is the headquarters of the hierarchical Fog system. It receives requests for information processing, for example, image analysis: the Cloud uses its services to detect flame in the picture and sends the result back to the Fog layer. The Cloud also connects the Fog and the Rescue Service Center (RSC) terminal, so the RSC gets up-to-date information from the site and controls the elimination of the incident remotely.
This architecture is easy to scale: new Fog nodes with Device hubs can be easily integrated into the Fog. It is also reliable: if for any reason there is no connection with the Cloud, the system makes decisions in the Fog layer.
How the system operates

1. The system constantly monitors the situation at the site.
2. A smoke detector detects smoke in a particular sector of the stadium.
3. The Fog node turns on a sprinkler above the fire point through the Device hub (see the sketch after this list).
4. The system calculates the optimal evacuation route and turns on the appropriate evacuation light boards and audio warning signals.
5. The Rescue Service Center terminal gets information from the Fog layer.
6. The Rescue Service Center coordinates the actions of various rescue services, controls the evacuation of people and monitors the situation remotely.
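As a rough illustration of steps 2-4, the Fog node's reaction might look like the sketch below; all device names, the sector numbering, and the trivial route calculation are hypothetical placeholders, not the project's actual code:

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the Fog node's reaction to a smoke alarm (steps 2-4).
public class FogDecisionLogic {

    public void onSensorEvent(String deviceName, int value) {
        if (deviceName.startsWith("smoke") && value == 1) {
            int sector = sectorOf(deviceName);          // which sector raised the alarm
            sendCommand("sprinkler_" + sector, 1);      // step 3: sprinkler above the fire point
            for (int board : evacuationRoute(sector)) { // step 4: light the evacuation path
                sendCommand("light_board_" + board, 1);
            }
            sendCommand("buzzer_" + sector, 1);         // step 4: audio warning signal
            publishAlert(sector);                       // step 5: inform the Cloud and the RSC
        }
    }

    private int sectorOf(String deviceName) {
        // Placeholder: device names are assumed to end with a sector number
        return Integer.parseInt(deviceName.substring(deviceName.lastIndexOf('_') + 1));
    }

    private List<Integer> evacuationRoute(int sector) {
        // Placeholder route: a real system would search the building graph
        return Arrays.asList(sector, sector + 1);
    }

    private void sendCommand(String device, int value) { /* write over I2C, as shown below */ }

    private void publishAlert(int sector) { /* publish via Pub/Sub, as shown below */ }
}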
In accordance with the system architecture, the Edge layer of the Salutis project consists of two cameras, a servo, a buzzer, four LEDs, and a smoke detector sensor. Three Arduino boards are used as Device hubs; they provide communication between the devices and the Fog layer.
In general, Raspberry Pi boards (Fog nodes) could be used as Device hubs themselves. Using Arduino as a Device hub makes it possible to implement local scalability: several Arduinos can be connected to each Raspberry Pi (Fog node), so a single Raspberry Pi can control a large number of devices. In addition, an Arduino is convenient to use as service equipment for servicing the devices. The Arduinos are connected to the Raspberry Pi via the I2C bus. The following fragment of the Fog node code forwards a command or a value request to a Device hub over I2C:
// Select the I2C device (Arduino hub) that this command is addressed to
I2cDevice tmpDevice = mConnectedDevices.get(i);
if (flag_IsSetOperation) {
    // "set" operation: write the command, the target device address and the new value
    if (clientsPage != null)
        clientsPage.UpdateTextView(true, false, ">" + deviceName + " | " + command + " | " + value);
    try {
        i2c_writeBuffer(tmpDevice, buffer_command);
        i2c_writeBuffer(tmpDevice, buffer_device_addr);
        i2c_writeBuffer(tmpDevice, buffer_newVal);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Read back the hub's acknowledgement
    byte[] response = i2c_readResponse(tmpDevice, MAX_RESPONSE_SIZE);
} else {
    // "get" operation: request the current value of the end-point device
    try {
        i2c_writeBuffer(tmpDevice, buffer_command);
        i2c_writeBuffer(tmpDevice, buffer_device_addr);
    } catch (IOException e) {
        e.printStackTrace();
    }
    byte[] response = i2c_readResponse(tmpDevice, MAX_RESPONSE_SIZE);
    // Publish the received sensor value to the Cloud via Pub/Sub
    CustomMessage tmpMessage = new CustomMessage("rspi", "get", neededDevice.deviceName,
            String.valueOf(response[0]));
    mPubsubPublisher.sendData(tmpMessage);
    if (clientsPage != null)
        clientsPage.UpdateTextView(true, true, "<" + "get" + " | " + String.valueOf(buffer_command[0])
                + " | " + String.valueOf(buffer_device_addr[0]));
}
The Fog layer of the Salutis prototype includes two Raspberry Pi 3 boards (Fog nodes) running Android Things.
The RPi3 boards are configured with Google Cloud IoT Core and interact through the Google Cloud Pub/Sub service. Both of them publish messages to a common topic and pull messages from their personal subscriptions.
This provides a common pool of messages with personal access to each node's copy of them.
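Judging by the code below, each message carries a small JSON payload with four fields: the source node, the operation type, the target device, and a value. A typical payload looks like this (the device name is a placeholder):

{"src": "rspi", "type": "get", "device": "smoke_1", "value": "1"}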
private Runnable mPublishRunnable = new Runnable() {
    @Override
    public void run() {
        // Do not even try to publish while there is no network: queued
        // messages stay in mMessagesQueue until connectivity is restored
        ConnectivityManager connectivityManager =
                (ConnectivityManager) mContext.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo activeNetwork = connectivityManager.getActiveNetworkInfo();
        if (activeNetwork == null || !activeNetwork.isConnectedOrConnecting()) {
            Log.e(TAG, "no active network");
            return;
        }
        if (mMessagesQueue.size() == 0) {
            return;
        }
        try {
            // Take the oldest queued message and wrap it into a Pub/Sub message
            CustomMessage tmpMessage = mMessagesQueue.get(0);
            tmpMessage.mSource = mDevice;
            JSONObject messagePayload = tmpMessage.GetJSONObject();
            Log.d(TAG, "publishing message: " + messagePayload);
            PubsubMessage m = new PubsubMessage();
            m.setData(Base64.encodeToString(messagePayload.toString().getBytes(),
                    Base64.NO_WRAP));
            PublishRequest request = new PublishRequest();
            request.setMessages(Collections.singletonList(m));
            mPubsub.projects().topics().publish(mTopic, request).execute();
            // Remove the message only after a successful publish,
            // so a failed attempt does not lose it
            mMessagesQueue.remove(0);
        } catch (IOException e) {
            Log.e(TAG, "Error publishing message", e);
        } finally {
            // Keep draining the queue while there is something to send
            if (mMessagesQueue.size() > 0)
                mHandler.postDelayed(mPublishRunnable, PUSHING_SMALL_INTERVAL_MS);
        }
    }

    // Helper that wraps a status message (not used in the publish loop above)
    private JSONObject createMessagePayload(float temperature, float pressure)
            throws JSONException {
        CustomMessage newMsg = new CustomMessage(mDevice, "status", "mchs", "okey");
        return newMsg.GetJSONObject();
    }
};
The key feature of this approach is that message interaction is connection-independent. Outgoing messages are buffered in a message pool (mMessagesQueue). The publishing thread, mPublishRunnable, checks connection availability and sends all queued messages once the connection is established. Thus the system does not stall on message sending, and no messages are lost if the connection is temporarily down.
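The sendData() call used in the I2C fragment above is not shown in the listing; presumably it only appends the message to the pool and wakes the publisher, roughly like this:

// Assumed sketch of sendData(): enqueue the message and schedule the publisher.
// The real implementation is not shown in this article.
public void sendData(CustomMessage message) {
    mMessagesQueue.add(message);      // buffer the message in the pool
    mHandler.post(mPublishRunnable);  // attempt to publish as soon as possible
}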
The same interface (Pub/Sub) is used for interaction between Fog nodes and with the Rescue Service Center terminal. The PHP helpers below, used on the terminal side, pull incoming messages and publish commands:
use Google\Cloud\PubSub\PubSubClient; // requires the google/cloud-pubsub package

function pull_data($projectId, $subscriptionName)
{
    $retVals = array();
    $pubsub = new PubSubClient([
        'projectId' => $projectId,
        'keyFilePath' => './keypub.json'
    ]);
    $subscription = $pubsub->subscription($subscriptionName);
    foreach ($subscription->pull() as $message) {
        // Acknowledge first so the message is not redelivered
        $subscription->acknowledge($message);
        $json_decoded = json_decode($message->data());
        if (!($json_decoded == NULL)) {
            // Skip messages the terminal itself published ('mchs' is the RSC source id)
            if ($json_decoded->{'src'} != 'mchs')
                $retVals[] = $json_decoded;
        }
    }
    echo json_encode($retVals);
    return $retVals;
}

function publish_message($projectId, $topicName, $message)
{
    $pubsub = new PubSubClient([
        'projectId' => $projectId,
        'keyFilePath' => './keypub.json'
    ]);
    $topic = $pubsub->topic($topicName);
    // Commands arrive as "src|type|device|value" strings
    $pieces = explode("|", $message);
    $json_out = array('src' => $pieces[0], 'type' => $pieces[1],
                      'device' => $pieces[2], 'value' => $pieces[3]);
    $topic->publish(['data' => json_encode($json_out)]);
    print('Message published' . PHP_EOL);
}
Fog nodes (RPi3) use the Google Vision API to detect fire in images captured from the camera(s).
We decided to use single shots and the Vision API instead of a video stream and the Google Cloud Video Intelligence API, because still-image analysis should be enough for fire presence analytics; this saves internet bandwidth and reduces operational costs. We estimate that a 10-second image analysis period is a good balance between detection quality and overhead.
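A minimal sketch of such a periodic capture loop, assuming a takePicture() helper that asynchronously delivers the frame to onPictureTaken() below (mCameraHandler is a placeholder Handler, not from the project code):

// Hypothetical scheduling of the 10-second analysis period.
private static final long CAPTURE_INTERVAL_MS = 10_000;

private final Runnable mCaptureRunnable = new Runnable() {
    @Override
    public void run() {
        takePicture();                                         // asynchronous camera capture
        mCameraHandler.postDelayed(this, CAPTURE_INTERVAL_MS); // re-arm the timer
    }
};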
private void onPictureTaken(final byte[] imageBytes) {
    if (imageBytes != null) {
        // Re-encode the raw camera bytes into a JPEG: the original byte array
        // was incorrectly interpreted by the sending API
        final Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream);
        final byte[] byteArray_new = stream.toByteArray();
        mCloudHandler.post(new Runnable() {
            @Override
            public void run() {
                // Annotate the image by uploading it to the Cloud Vision API
                try {
                    Map<String, Float> annotations = CloudVisionUtils.annotateImage(byteArray_new);
                    if (annotations != null) {
                        // Log the annotations to Firebase with a server-side timestamp
                        final DatabaseReference log = mDatabase.getReference("mchs").push();
                        log.child("timestamp").setValue(ServerValue.TIMESTAMP);
                        // Raise or clear the fire alert on the UI thread
                        if (textView_fire_1 != null) {
                            if (annotations.containsKey("flame")) {
                                runOnUiThread(new Runnable() {
                                    public void run() {
                                        Fragment_tab_camera.this.textView_fire_1.setText("FIRE!!!");
                                    }
                                });
                            } else {
                                runOnUiThread(new Runnable() {
                                    public void run() {
                                        Fragment_tab_camera.this.textView_fire_1.setText("");
                                    }
                                });
                            }
                        }
                        for (Map.Entry<String, Float> entry : annotations.entrySet()) {
                            log.child(entry.getKey()).setValue(entry.getValue().toString());
                        }
                    }
                } catch (IOException e) {
                    Log.e(TAG, "Cloud Vision API error: ", e);
                }
            }
        });
    }
}
The RPi3 takes an image from the camera, re-encodes it (we faced a problem where the original byte array was incorrectly interpreted by the sending API), and sends it to Cloud Vision. The analysis result is then interpreted for fire presence, raising an alert if needed.
The analysis results are also sent to Google Firebase for logging and additional processing.
Unfortunately, we could not connect an external USB camera to the Raspberry Pi 3: Android Things does not currently support this functionality. We hope the developers will solve this issue soon.
Rescue Service Center (RSC)

As mentioned above, the Salutis project implements the connection between the Fog layer and the RSC terminal. Let's take a closer look at the terminal functionality.
In Salutis, a stadium was used as the monitored object. The terminal shows the exact sector of the building where the incident is taking place (sector 4 in the illustration) and indicates that the fire detection service has been activated.
Clicking on a sector of the building opens a more detailed map: it displays the position of each sensor and which of them are triggered. Sprinklers can be turned on and off manually by the operator.
For better control of the incident, the operator has access to the cameras inside the building.
Our roadmap for this project included TensorFlow analysis at the Fog node level, but we failed in time planning and missed this task. We plan to continue this work and complete the roadmap with all these features.
The designed emergency control and reaction system is highly portable and can be adapted to other environments such as trade centers, shopping malls, warehouses, and transport infrastructure (underground, railroad).
The project was done at the IHPCNT department of SUAI University, Saint Petersburg, Russia.