This is a project made for the fourth assignment of the Internet of Things course at Sapienza University of Rome.
The aim of the project is to build a crowdsensing application based on Amazon Web Services (AWS) that predicts user activity: it takes the accelerometer data from the mobile phone and recognizes whether the user is moving or not. To read the sensor we will use the Generic Sensor API, whose goal is to promote consistency across sensor APIs, enable advanced use cases through performant low-level APIs, and increase the pace at which new sensors can be exposed to the Web by simplifying the specification and implementation processes.
The accelerometer data will be processed in two ways:
- Cloud-based deployment: the HAR model is deployed on the cloud. Given the data arriving at the cloud, the model is executed there to provide the user's status, either periodically or whenever new values arrive.
- Edge-based deployment: the HAR model is deployed on the mobile phone. Given the data collected by the mobile phone, the model is executed locally to provide the user's status.
To run the application I have created a Node.js server to host the service on a local IP address; its configuration is explained in the next sections.
What is crowdsensing?

According to Wikipedia, crowdsensing (sometimes referred to as mobile crowdsensing) is a technique where a large group of individuals having mobile devices capable of sensing and computing (such as smartphones, tablet computers, wearables) collectively share data and extract information to measure, map, analyze, estimate or infer (predict) any processes of common interest. In short, this means crowdsourcing of sensor data from mobile devices.
It essentially associates sensors of various types with users (real people) through some kind of hardware, in order to collect useful information about their status. This is done with the aim of improving a certain experience or providing a new service that would otherwise not be possible.
HTTPS server

The first step is to run a server on your machine, to deploy the application on a specified IP address. Before starting, you can follow this link, which provides all the information needed to successfully create a server with a certificate and a private key.
Firstly, create a JavaScript file named server.js in your project folder, defined as follows:
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end(fs.readFileSync('index.html'));
}).listen(8000);
Next, install the http-server package globally via npm:
> npm install --global http-server
Then, make sure that openssl is installed correctly and that you have the key.pem and cert.pem files. You can generate them with this command:
> openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
And finally run the server:
> http-server -S -C cert.pem
The flag -S (or, in its long form, --ssl) enables HTTPS on your server, without which you cannot use the accelerometer with the Generic Sensor API, and the flag -C indicates the path to the cert.pem file.
Now the server is waiting for connections from the client on the specified IP address; remember that the client must be connected to the same network as the server.
The project folder must contain the files generated above and a subfolder named public, which contains the index.html file and all the files necessary to run the application correctly.
If you prefer, you can host your application on GitHub Pages: it works over HTTPS, and since the web app is hosted on the cloud you don't have to launch the server every time.
Generic Sensor API

For this project we are going to use the Generic Sensor API, which gives us easy access to a variety of sensors increasingly common on our devices. Using this API is really simple, and there are many demos in the API playground where you can experiment with the different functions and understand which one best suits your needs.
Furthermore, we can retrieve data about the user's activity through any sensor that measures variations in the acceleration of the device. In this work we will use the linear acceleration sensor of the mobile phone, which provides a triple (x, y, z) with the acceleration along the three axes, excluding the contribution of gravity.
In this line:
let sensor = new LinearAccelerationSensor({ frequency: 1 });
I have defined the linear acceleration sensor assuming a movement of at most 0.5 Hz (i.e. 30 steps per minute) and a sampling frequency of 1 Hz (i.e. one reading per second), which by the Nyquist criterion is theoretically sufficient to recognize whether the user is standing still or not.
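Readings are consumed through the sensor's "reading" event. The sketch below shows the general pattern; the magnitude helper and the logging are only an illustration of mine (the sensor code runs in the browser, so it is guarded here):

```javascript
// Helper: magnitude of a linear-acceleration sample
// (gravity is already excluded by this sensor type).
function magnitude(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z);
}

// Browser-only wiring, guarded so the helper above can also be
// loaded outside the browser.
if (typeof LinearAccelerationSensor !== "undefined") {
  const sensor = new LinearAccelerationSensor({ frequency: 1 });
  sensor.addEventListener("reading", () => {
    // One sample per second at frequency: 1.
    console.log("|a| =", magnitude(sensor.x, sensor.y, sensor.z).toFixed(2));
  });
  sensor.addEventListener("error", (e) => console.error(e.error.name));
  sensor.start();
}
```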
Cloud-based deployment

In the cloud-based deployment, as mentioned before, we execute the Human Activity Recognition model on the cloud, providing it only with the accelerometer data. The web app sends the triple (x, y, z) to AWS IoT Core, which forwards the data via a rule to a Lambda function; the function processes the values to recognize the user's activity and finally stores them in the DynamoDB table.
I assume that you already have an account on AWS and that you have followed all the steps shown in my first article, including the creation of a table in the DynamoDB service. In that hands-on tutorial, however, I did not create a rule to store the data automatically in the table, because I did it with a Python script; now I will show you how to do it using only the services offered by AWS.
Firstly, you have to create an AWS Cognito Identity Pool that grants users access to the DynamoDB service and to MQTT operations with AWS IoT. With an Identity Pool, furthermore, you can obtain temporary AWS credentials, with permissions you define, to directly access other AWS services or to access resources through Amazon API Gateway. You can find the Cognito service on the AWS console:
You can easily follow this guide to create an Identity Pool. Remember that you also need to create an IAM role, which defines the permissions for your users to access AWS resources. Following this other guide you will be able to attach policies to the unauthenticated Cognito role that you have just created. To make the whole communication structure work, select AmazonDynamoDBFullAccess and AWSIoTFullAccess from the permissions list.
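The two full-access managed policies are the quickest way to get the demo working; in a real deployment you would scope the permissions down. A narrower inline policy could look something like this (the account ID is a placeholder, and the table name matches the one used later in the Lambda function):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:ACCOUNT_ID:table/CrowdSensingDB"
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Connect", "iot:Publish", "iot:Subscribe", "iot:Receive"],
      "Resource": "*"
    }
  ]
}
```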
Now you have all the ingredients to pass the values from the web app to the DynamoDB table, but you also need to process these values in order to recognize the user's activity. So we have to create a Lambda function that takes the data from the AWS IoT broker, predicts the activity through the model, and writes the computed values to DynamoDB. In the AWS documentation you can see here how to create a Lambda function and how to connect it to the required services.
With Python 3 I have defined the function in this way:
from __future__ import print_function
import json
import boto3
from math import sqrt

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('CrowdSensingDB')

def lambda_handler(event, context):
    # Cloud side: the message carries only the raw data
    # (id, timestamp, x, y, z), so the activity is computed here.
    if len(event) < 6:
        x = event['x']
        y = event['y']
        z = event['z']
        a = sqrt(x*x + y*y + z*z)
        if a > 3:
            activity = 'running'
        elif a > 0.6:
            activity = 'walking'
        else:
            activity = 'standing still'
        side = 'cloud'
    # Edge side: the activity was already computed on the phone.
    else:
        activity = event['activity']
        side = 'edge'
    response = table.put_item(
        Item={
            'id': event['id'],
            'timestamp': event['timestamp'],
            'x': str(event['x']),
            'y': str(event['y']),
            'z': str(event['z']),
            'activity': activity,
            'side': side
        }
    )
    return response
Essentially, I have divided the function into two cases:
- Cloud side: if the message does not contain information about the activity, we compute it with the HAR model and send the message to DynamoDB with this new value (and the corresponding origin side)
- Edge side: the message is ready to be sent to DynamoDB, adding only the information about the origin of the data
A few lines ago I anticipated the possibility of passing messages automatically from AWS IoT to the Lambda function with rules. From the side menu of the AWS IoT console we can navigate to Act > Rules and click on Create. Then we define the name of the rule and the action to perform.
In this case I have taken all the messages sent to that topic, but you can obviously change the query to suit your needs.
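For reference, the rule's query statement can be as simple as selecting everything published on the topic. The topic name below is just an example; use the one your web app publishes to:

```sql
SELECT * FROM 'sensors/accelerometer'
```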
Now the structure is complete and we can use our web app with the cloud-based deployment.
Edge-based deployment

In this case the Human Activity Recognition model is deployed on the mobile phone, so we do not need to put the model on the cloud; we still need the Lambda function, however, to put the JSON documents in the DynamoDB table.
If you have followed the steps listed in the previous section you are off to a good start, because this part does not require any additional service. I therefore distinguished the two deployments directly in the JavaScript file:
function sendData(x, y, z) {
  try {
    // Variable that contains the current time.
    var datetime = getDateTime();
    // Subscribe to the topic.
    mqttClient.sub(topic);
    var json = "{ \"id\": \"" + id + "\", \"timestamp\": \"" + datetime[0] + "\"";
    // The user's activity is added to the JSON only if the edge
    // button is activated, since the model runs locally in that case.
    if (edgeActivated) {
      var activity = activityRecognition(x, y, z);
      json += ", \"x\": " + x + ", \"y\": " + y + ", \"z\": " + z + ", \"activity\": \"" + activity + "\"}";
    } else if (cloudActivated) {
      json += ", \"x\": " + x + ", \"y\": " + y + ", \"z\": " + z + "}";
    }
    mqttClient.pub(json, topic);
  } catch (error) {
    console.log(error);
  }
}
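For clarity, the messages built by sendData look like the following two examples, the first for the cloud deployment and the second for the edge deployment (the id, the values and the timestamp format are illustrative; the actual format depends on getDateTime()):

```json
{ "id": "user1", "timestamp": "2023-05-04 16:20:00", "x": 0.12, "y": 0.05, "z": 0.31 }
{ "id": "user1", "timestamp": "2023-05-04 16:20:01", "x": 0.12, "y": 0.05, "z": 0.31, "activity": "standing still" }
```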
Here I have inserted the additional information about the activity only when we are considering the edge computation, because in that case the model is not deployed on the cloud through the Lambda function; instead, I have created the recognition function directly in this file:
function activityRecognition(x, y, z) {
  var a = Math.sqrt(x * x + y * y + z * z);
  if (a > 3)
    return "running";
  else if (a > 0.6)
    return "walking";
  else
    return "standing still";
}
With this function we can recognize whether the user is standing still, walking or running, based on the value of the variable a, which is the magnitude of the 3-dimensional acceleration vector.
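As a quick sanity check, here are the thresholds applied to a few illustrative samples (the function is repeated so that the snippet is self-contained):

```javascript
// Threshold-based classifier on the acceleration magnitude,
// as defined above in the web app.
function activityRecognition(x, y, z) {
  var a = Math.sqrt(x * x + y * y + z * z);
  if (a > 3)
    return "running";
  else if (a > 0.6)
    return "walking";
  else
    return "standing still";
}

// Magnitudes are 0.5, 1.0 and 5.0 m/s^2 respectively.
console.log(activityRecognition(0.3, 0.4, 0)); // standing still
console.log(activityRecognition(0.6, 0.8, 0)); // walking
console.log(activityRecognition(3, 4, 0));     // running
```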
How the system works

First of all we have to start the server, so in the Linux terminal type:
> http-server -S -C cert.pem
Now the server is waiting for connections on a specified IP address and port, and we can connect with our mobile phone:
Then, clicking on a button, you can start the process on the corresponding side (edge or cloud) and the values will be sent automatically to AWS.
Once the messages arrive in DynamoDB, the web app retrieves them and shows in a table all the values received from the current user during the last hour:
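The "last hour" filtering can be sketched client-side as follows. This is a hypothetical helper of mine, not the project's actual query: the field names ('id', 'timestamp') match the Lambda function above, but the ISO timestamp format is an assumption:

```javascript
// Given the items retrieved from DynamoDB, keep only those produced
// by the given user during the hour before nowMs (epoch milliseconds).
function lastHourByUser(items, userId, nowMs) {
  const oneHourMs = 60 * 60 * 1000;
  return items.filter(function (item) {
    return item.id === userId &&
      nowMs - Date.parse(item.timestamp) <= oneHourMs;
  });
}

// Illustrative usage: two recent items and one stale item.
const now = Date.parse("2023-05-04T12:00:00Z");
const items = [
  { id: "user1", timestamp: "2023-05-04T11:59:00Z", activity: "walking" },
  { id: "user1", timestamp: "2023-05-04T09:00:00Z", activity: "running" },
  { id: "user2", timestamp: "2023-05-04T11:58:00Z", activity: "walking" }
];
console.log(lastHourByUser(items, "user1", now).length); // 1
```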
That's all! We have created a model for User Activity Recognition with the accelerometer sensor of the smartphone, using a crowdsensing IoT app.
Thank you for your attention, see you in the next article!