This is a project made for Assignment 4 of the Internet of Things class at the Sapienza University of Rome.
Introduction
In this assignment, I created an HTML5 crowd-sensing application that uses the Generic Sensor API to collect data from the accelerometer of a mobile phone. The collected values are transmitted to the Google Cloud infrastructure, as in the previous tutorials.
Using the data collected in the cloud, I developed a simple activity recognition model that detects whether the user is standing still or moving.
The application is developed in two modes: Cloud-based Deployment and Edge-based Deployment. Notice that, for simplicity of testing, my application runs both approaches together, but it is very easy to separate them.
The following sections are a hands-on tutorial on how to set up and run the system.
TECHNOLOGY USED: NodeJS, HTML5, Generic Sensor API, MQTT, Google Cloud IoT Core.
The mobile application is a simple system created with NodeJS, HTML5, and the Generic Sensor API. The goal is to develop a simple user activity recognition (UAR) model.
User activity recognition
The application reads values from the accelerometer of the mobile phone, which provides the acceleration applied to the device's X, Y, and Z axes. As a first attempt I sampled the values at a frequency of 1 Hz, but the resulting model was not reliable; for this reason, after some attempts, I set the frequency to 4 Hz.
Setting up the sensor is very simple: thanks to the Generic Sensor API, it can be done with a short JS script on the front-end of the application. First of all, we must make sure that the phone is equipped with the sensor and that we are able to access it. In this regard, it is important to note that the Generic Sensor API requires the Google Chrome browser and the HTTPS protocol as prerequisites.
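The availability check described above can be sketched as follows. The function name is an assumption for illustration; in the browser it would be called with the global `window` object, and Chrome additionally refuses sensor access outside a secure (HTTPS) context.

```javascript
// Minimal sketch of the feature check: the Generic Sensor API exposes the
// Accelerometer constructor only in supporting browsers.
function hasAccelerometer(win) {
  return typeof win === 'object' && win !== null && 'Accelerometer' in win;
}

// In the app this would be: hasAccelerometer(window)
```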
If all the checks are successful, we are ready to make our sensor work with the startApp() function. But before looking at this part, we have to discuss the distinction between the Cloud-based deployment and the Edge-based deployment.
- Cloud-based deployment:
In this mode, the application sends the values to Google Cloud IoT Core over an MQTT connection. Given the data arriving at the cloud, we run the model whenever new values arrive, producing a status for the state of the user, and display the results in a web dashboard that provides the following functionality:
- Displays the latest values received from all the sensors and the resulting activity.
- Displays the values received during the last hour from all the sensors and the resulting activity.
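The message sent in cloud-based mode can be sketched as below. The telemetry topic format follows Google Cloud IoT Core's MQTT bridge; the payload field names are assumptions, not necessarily the exact ones used in the project.

```javascript
// Sketch: build the MQTT topic and JSON payload for one accelerometer sample.
// Cloud IoT Core's MQTT bridge expects telemetry on /devices/{deviceId}/events.
function buildTelemetry(deviceId, userId, sample) {
  return {
    topic: `/devices/${deviceId}/events`,
    payload: JSON.stringify({
      user: userId,      // cookie-based user id (see below)
      x: sample.x,
      y: sample.y,
      z: sample.z,
      ts: sample.ts      // sample timestamp
    })
  };
}
```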
- Edge-based deployment:
Given the data collected by the mobile phone, the model is executed locally to produce a status for the state of the user; only the outcome of the activity recognition model is transmitted to the cloud. In this case too, we create a web dashboard that provides the following functionality:
- Display the latest activity of the user.
- Display the activities received during the last hour.
Now we are ready to better understand how the startApp() function seen before works.
First of all, we create a new instance of the Accelerometer class with a frequency of 4 Hz (line 66). We define an event listener (line 68) to handle the values arriving from the sensor. We send the values every second and, as said before, we distinguish the two deployment modes. After that, we also show the user's activity on the front-end of the app (lines 100-101), and finally we start the sensor (line 103).
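The flow just described can be sketched as follows. The structure and names are assumptions based on the description, not the exact project code; the sensor is injected as a parameter so the logic can run outside a browser, whereas in the app it would be `new Accelerometer({ frequency: 4 })`.

```javascript
// Sketch of the startApp() flow: listen for readings, send once per second.
function startApp(sensor, sendFn, intervalMs = 1000) {
  let latest = null;

  // Keep the most recent reading delivered by the sensor.
  sensor.addEventListener('reading', () => {
    latest = { x: sensor.x, y: sensor.y, z: sensor.z };
  });

  // Send the latest reading once per interval (one second in the app).
  const timer = setInterval(() => {
    if (latest) sendFn(latest);
  }, intervalMs);

  sensor.start();

  // Return a cleanup function that stops both the timer and the sensor.
  return () => {
    clearInterval(timer);
    sensor.stop();
  };
}
```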
NOTE: usually only one of the two modes is used, but for simplicity they have been put together here. If you want to run the app in only one mode, just comment out the part of the code related to the other mode.
The payload published on the Google broker also contains a field to identify the user. The identification is done with a cookie that is generated the first time the user opens the application. The only problem with this approach is duplicate users: for better identification an authentication process would be necessary, but that is not the goal of this project.
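The cookie-based identification can be sketched as below. A plain object stands in for the browser's cookie storage so the logic is testable; the field name and id format are assumptions.

```javascript
// Sketch: generate a random user id on first visit and reuse it afterwards.
function getUserId(store) {
  if (!store.userId) {
    // Random base-36 string; in the app this would be saved as a cookie.
    store.userId = Math.random().toString(36).slice(2, 12);
  }
  return store.userId;
}
```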
In the last part of this section, we discuss the prediction model used to recognize human activity.
Prediction model
We estimate movement by checking whether the acceleration stays within a given range, in our case 9.05-9.95 m/s². This choice is motivated by the fact that when the phone is stationary, regardless of its position, only gravitational acceleration acts on it. The range is slightly larger than the gravitational acceleration itself to guarantee a reasonable level of reliability. Since the acceleration has three components (x, y, z), we compute the magnitude of the acceleration vector to obtain a single value.
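The threshold check described above boils down to a few lines (the function name and labels are illustrative):

```javascript
// Sketch of the prediction model: the user is considered still when the
// magnitude of the acceleration vector stays in the 9.05-9.95 m/s^2 range.
function detectActivity(x, y, z) {
  const magnitude = Math.sqrt(x * x + y * y + z * z);
  return magnitude >= 9.05 && magnitude <= 9.95 ? 'STANDING' : 'MOVING';
}
```

For example, a phone lying flat reports roughly (0, 0, 9.81), whose magnitude falls inside the range, while shaking the device pushes the magnitude outside it.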
The main problems with this approach are false positives and false negatives, since it is a simple calculation and not a trained machine learning model.
Experiments with UCI HAR dataset
Some experiments have also been carried out with the UCI HAR dataset, which has 6 classes: WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING. Since we only need to recognize movement, I merged the six classes into 2 classes: STANDING and MOVING. For the experiment, I used a Random Forest classifier, with a resulting accuracy of 97%. Unfortunately, I was unable to integrate this model well into the system, encountering several problems in extracting the features from the new input values. This experiment is in the git repository of this project; I used the Python library Scikit-Learn (see folder uar_model).
Google Cloud BackEnd
The first thing to do is to set up our Google platform. Once the "IoT Core" section is open, follow these simple steps:
1) Create a registry
2) Create devices and add them to the registry (for our purpose 4 devices)
3) Create a subscription and connect it to the devices
Notice that to do the second step you have to create a certificate. This guide, provided by Google, contains all the detailed steps: Quickstart - Guide.
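The certificate required by step 2 can be generated with OpenSSL, as shown in Google's quickstart. This is a sketch: the private key stays on the device, while the public certificate is uploaded when the device is created in the registry.

```shell
# Generate an RSA key pair for RS256 device authentication.
# rsa_private.pem stays with the device; rsa_cert.pem is uploaded to IoT Core.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout rsa_private.pem -out rsa_cert.pem \
    -days 365 -subj "/CN=unused"
```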
Once this part is completed, we can develop our back-end.
As before, we note the distinction between the Cloud deployment and the Edge deployment. In the cloud-based mode, the incoming data are processed with our model (line 95), saved in the DB (lines 97-106), and sent to the dashboard (line 108); in the edge-based mode, the data are directly saved in the DB (lines 112-118) and sent to the dashboard (line 120).
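The two back-end paths can be sketched as one message handler. The message fields, the `db` and `dashboard` objects, and the mode flag are assumptions standing in for the project's actual storage and Socket-style push; the classification step reuses the same threshold model described earlier.

```javascript
// Sketch of the back-end handler: cloud-mode messages carry raw samples and
// are classified server-side; edge-mode messages already carry the activity.
function handleMessage(msg, db, dashboard) {
  let activity;
  if (msg.mode === 'cloud') {
    // Run the threshold model on the raw accelerometer sample.
    const magnitude = Math.sqrt(msg.x ** 2 + msg.y ** 2 + msg.z ** 2);
    activity = magnitude >= 9.05 && magnitude <= 9.95 ? 'STANDING' : 'MOVING';
  } else {
    // Edge mode: the phone already computed the activity.
    activity = msg.activity;
  }
  const record = { user: msg.user, activity, ts: msg.ts };
  db.push(record);         // save in the DB
  dashboard.push(record);  // forward to the dashboard
  return record;
}
```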
Dashboard
The dashboard is the same as in the previous tutorial. I only added a new home page and another page to display the values of this new assignment.
It simply displays the received values on a web page, depending on the type of approach (Cloud or Edge).
Mobile application
Open the application here: https://uar-mobile-app.herokuapp.com
Dashboard
Open the dashboard here: https://iot-assignment1.herokuapp.com/useractivityrecognition
- Cloud-based deployment:
- Edge-based deployment: