This project was developed for the "Internet of Things" course at "La Sapienza University of Rome". This week, the goal is to build a web application that provides a crowd-sensing extension to our project, building on top of the cloud-based and edge-based components developed in the first and second assignments, which you can find here:
- Environment station using google cloud IoT and MQTT
- Environment station using RIOT / Google IoT / MQTT
This time, we need to develop an HTML5 application, using the Generic Sensor API, that collects data from the accelerometer of the mobile phone. We need to provide two possible deployments for a human activity recognition (HAR) model:
- Cloud-based deployment: the HAR model is deployed on the cloud. Given the data arriving at the cloud, we should execute the model and provide a status for the state of the user, either periodically or whenever new values arrive.
- Edge-based deployment: the HAR model is deployed on the mobile phone. Given the data collected by the mobile phone, the model should be executed locally to provide a status for the state of the user.
Here is the demonstration video:
Structure overview
Here are the components:
- A simple web application which detects people's movements using the Linear Accelerometer provided by the Generic Sensor API;
- The Google Cloud Platform, providing: Cloud IoT Core, Cloud Pub/Sub, Firebase Cloud Functions, and the Firestore database;
- A simple REST web dashboard.
As shown in the picture, whether the HAR model sits on the phone or on the cloud (Firebase function), the main structure remains the same in both cases.
There are certainly simpler architectures that could implement this service, but this way I can build on top of the components previously developed and try different technologies.
Note: the code provided on my GitHub is made to work with my credentials, on my personal Google account. I did not upload the keys; if you want to run it, you need to do your own setup.
Mobile crowd-sensing
Crowdsensing is a technique where a large group of individuals having mobile devices capable of sensing and computing (such as smartphones, tablet computers, wearables) collectively share data and extract information to measure, map, analyze, estimate or infer (predict) any processes of common interest. In short, this means crowdsourcing of sensor data from mobile devices.
Mobile crowdsensing potential is limited by constraints involving energy, bandwidth, and computation power. Moreover, in such a scenario, privacy becomes an important issue. Here are some possible (partial) solutions:
- Anonymization, which removes identifying information from the data before it is sent to a third party. However, this method does not prevent inferences from being made based on details that remain in the data.
- Secure multiparty computation, which transforms data using cryptographic techniques. This method is not scalable and requires the generation and maintenance of multiple keys, which in turn requires more energy.
- Data perturbation, which adds noise to sensor data before sharing it with a community. Carefully calibrated noise masks individual readings while barely affecting the accuracy of aggregate statistics.
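To make the last idea concrete, here is a minimal sketch of data perturbation in JavaScript, assuming Gaussian noise generated with the Box-Muller transform; the helper names (`gaussianNoise`, `perturb`) are illustrative and not part of this project:

```javascript
function gaussianNoise(sigma) {
  // Box-Muller transform: two uniform samples -> one normal sample
  const u1 = Math.random() || Number.EPSILON; // avoid log(0)
  const u2 = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function perturb(sample, sigma) {
  // Add independent noise to each axis of an [x, y, z] reading
  return sample.map((v) => v + gaussianNoise(sigma));
}

// Example: mask a single reading with noise of standard deviation 0.1
const noisy = perturb([0.2, 0.6, 0.4], 0.1);
```

With `sigma` tuned to the sensor's natural variance, individual samples become unreliable for an observer, while averages over many users stay close to the true value.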
The Generic Sensor API
It is a set of interfaces which expose sensor devices to the web platform. The API consists of the base Sensor interface and a set of concrete sensor classes built on top of it, such as Accelerometer, LinearAccelerationSensor, Gyroscope, AbsoluteOrientationSensor and RelativeOrientationSensor.
The goal of the Generic Sensor API is to promote consistency across sensor APIs, enable advanced use cases thanks to performant low-level APIs, and increase the pace at which new sensors can be exposed to the web by simplifying the specification and implementation processes.
This project uses the LinearAccelerationSensor interface, which provides, on each reading, the acceleration applied to the device along all three axes, but without the contribution of gravity (unlike the plain Accelerometer, which includes it).
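For reference, a minimal sketch of how such a sensor is instantiated (assuming a 60 Hz sampling rate and a browser that supports the API over HTTPS; `toReading` is just an illustrative helper):

```javascript
function toReading(sensor) {
  // Pure helper: snapshot the current axes into a plain object
  return { x: sensor.x, y: sensor.y, z: sensor.z };
}

// Browser-only part: construct and start the sensor
if (typeof LinearAccelerationSensor !== 'undefined') {
  const sensor = new LinearAccelerationSensor({ frequency: 60 });
  sensor.addEventListener('reading', () => {
    console.log(toReading(sensor)); // gravity-free acceleration in m/s^2
  });
  sensor.addEventListener('error', (e) => console.error(e.error.name));
  sensor.start();
}
```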
Human Activity Recognition (HAR) aims to identify the actions carried out by a person, given a set of observations of the person and the surrounding environment. Many publications address this problem, often with machine learning models. However, the implementation required by this assignment does not need high accuracy; response time and simplicity have higher priority. Therefore, the model used in this project takes the acceleration vector [x, y, z], computes its magnitude as sqrt(x*x + y*y + z*z), and decides whether the person is still or walking based on a fixed threshold.
model = Math.sqrt(x * x + y * y + z * z);
if (model > 0.6) {
  walking = 1;
  $('#status').html('walking');
} else {
  walking = 0;
  $('#status').html('still');
}
The web app
It is implemented as a simple Node.js application, with all the logic on the front-end and the back-end acting as a gateway that forwards the telemetry to Google Cloud IoT Core.
Most of the back-end code is very similar to the virtual devices implemented in the first assignment, plus some essential components provided by the Express framework (which makes deployment easier). However, in this case, the publish primitive is triggered by socket.io messages instead of a timer.
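The forwarding idea can be sketched like this (the `makeGateway` helper and the topic name are assumptions for illustration, not the exact project code; the `publish` callback stands in for the MQTT client from the first assignment):

```javascript
function makeGateway(publish) {
  // Returns a socket.io connection handler that forwards telemetry
  return (socket) => {
    socket.on('accelerometer', (data) => {
      publish('/devices/my-device/events', JSON.stringify(data));
    });
  };
}

// Wiring (io is a socket.io server, mqttClient the Cloud IoT client):
// io.on('connection', makeGateway((topic, msg) => mqttClient.publish(topic, msg)));
```

Injecting `publish` keeps the handler independent of the MQTT library, so the same code works with any broker client.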
The front-end has two versions, depending on the type of deployment adopted:
- Cloud-based: the HAR model is deployed on the cloud; therefore the application only needs to collect the LinearAccelerationSensor parameters and send them to the cloud for the analysis;
- Edge-based: the HAR model is deployed on the mobile phone; therefore, the model is now executed locally, providing a status for the state of the user. In this case, the application sends both the result and the parameters to the cloud;
accelerometer.addEventListener('reading', () => {
  const now = Date.now();
  const x = accelerometer.x;
  const y = accelerometer.y;
  const z = accelerometer.z;
  $('#accelerometerX').html(x);
  $('#accelerometerY').html(y);
  $('#accelerometerZ').html(z);
  const data = {
    date: now,
    status: 2,
    accx: x,
    accy: y,
    accz: z
  };
  socket.emit('accelerometer', data);
});
The snippet shows the code used in the cloud-based version; combined with the HAR model from the previous section, it gives the edge-based version. status: 2 is used as a magic number to tell the Firebase cloud function that the model has not been applied yet.
To host it and make it accessible from a mobile device, I used the well-known Heroku. Unfortunately, since there is a limit on the number of messages I can exchange for free on the Google Cloud Platform, I will not provide the link, but here is a picture.
The procedure to connect a device to the Google Cloud Platform is explained in the first and second assignments.
Firebase cloud functions
As far as I understand, Firebase services are designed mainly for mobile applications. However, given the nature of the assignment, and the fact that I had never used them before, I wanted to give them a try.
Cloud Functions for Firebase is a serverless framework that lets you automatically run backend code in response to events triggered by Firebase features and HTTPS requests. Moreover, given the excellent interoperability between GCP and Firebase, it makes exchanging messages between Pub/Sub and Firestore very easy.
The snippet shows the function responsible for the interaction. It is the same for both the edge-based and the cloud-based versions: it detects which version of the web app sent the message from the status number.
If the message comes from the edge-based version of the app, it merely forwards everything to the Firestore database; otherwise, it runs the model first and only then completes the exchange.
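The decision logic can be sketched as follows (the real TypeScript source is on GitHub; the `resolveStatus` name, the topic name, and the Firebase wiring in the trailing comment are assumptions, while the status values and the 0.6 threshold mirror those used above):

```javascript
const STATUS_PENDING = 2; // the magic number set by the cloud-based app

function resolveStatus(telemetry) {
  if (telemetry.status !== STATUS_PENDING) {
    // Edge-based app: the model already ran on the phone
    return telemetry;
  }
  // Cloud-based app: run the HAR model here
  const { accx: x, accy: y, accz: z } = telemetry;
  const magnitude = Math.sqrt(x * x + y * y + z * z);
  return { ...telemetry, status: magnitude > 0.6 ? 1 : 0 };
}

// Wiring sketch (needs the firebase-functions and firebase-admin SDKs):
// exports.storeTelemetry = functions.pubsub.topic('telemetry').onPublish(
//   (msg) => db.collection('telemetries').add(resolveStatus(msg.json)));
```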
Once deployed, it is also visible on the project page.
On GitHub there is only the TypeScript function I wrote for the exchange. However, you also need to create a Firebase project connected to the Google one. Here is how to do it: https://firebase.google.com/docs/functions/get-started
Cloud Firestore and Dashboard
Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud Platform.
Cloud Firestore's NoSQL data model stores data in documents that map fields to values. These documents are stored in collections, containers that organize the data and support building queries. Documents support many different data types, from simple strings and numbers to complex nested objects. You can also create subcollections within documents and build hierarchical data structures that scale as the database grows.
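As an illustration, a telemetry document in this project's 'telemetries' collection could look like this (the field names follow the web-app payload above; the subcollection path is only an example of hierarchy, not something the project uses):

```javascript
const telemetryDoc = {
  date: 1589200000000, // epoch milliseconds, set by the web app
  status: 1,           // 0 = still, 1 = walking
  accx: 0.12,
  accy: 0.78,
  accz: 0.05,
};
// Hierarchy example: telemetries/{docId} could hold a subcollection
// of raw readings at telemetries/{docId}/readings/{readingId}
```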
For the dashboard, I opted for a serverless approach, as Firebase teaches. Indeed, this time I have a straightforward HTML page with some JavaScript.
db.collection('telemetries').onSnapshot((snapshot) => {
  snapshot.docChanges().forEach((element) => {
    renderValue(element.doc);
  });
});
Thanks to this simple script, every time a new value arrives, the page refreshes and the table is updated.
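The snippet calls `renderValue`, which is not shown; here is a hypothetical version, assuming a table with id `telemetries` in the page (`formatRow` is a made-up pure helper that keeps the formatting easy to test):

```javascript
function formatRow(data) {
  // Pure helper: turn one telemetry document's fields into a table row
  const label = data.status === 1 ? 'walking' : 'still';
  return `<tr><td>${new Date(data.date).toISOString()}</td><td>${label}</td></tr>`;
}

function renderValue(doc) {
  // Append the formatted row to the dashboard table (browser-only)
  const row = formatRow(doc.data());
  document.querySelector('#telemetries tbody').insertAdjacentHTML('beforeend', row);
}
```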
In the end, we can easily decide whether to use the cloud-based or the edge-based version of the app, each with its pros and cons. Indeed, using the Google and Firebase ecosystem, we can easily switch between the two versions of the service, giving us a nicely modular system.