Every year in Italy, public administrations require a certified check for every type of flue (only one check for gas boilers, and two or more for LPG and solid fuels). In fact, a poorly working boiler can pollute the environment with its emissions. Moreover, despite our modern times, gas accidents are still a serious problem and cause many deaths and injuries. So it makes sense to build a better monitoring network to remotely check our flues.
RESOURCES
You can find the videos and files used for this project on the following sites:
- Medium: LoRa Flue Gas Monitoring System
- YouTube: a series of configuration videos for The Things Network, Elastic Search and Kibana
- GitHub: a collection of files to configure the LoRa devices and the Dragino
- SlideShare: the slides of the final presentation of the project
- AMI2019: we have submitted a paper to this conference (Rome — 2019)
REQUIREMENTS
On the technical side, this network has several types of requirements:
- communication infrastructure
- power management
- data use and privacy
- and, obviously, cost
On the other hand, the amount of data to transmit is small. We are in the IoT field: everyone wants a device that is as simple as possible, with a long battery life and no transmission costs.
Our choice was to use a LoRa device, because it has unique qualities in this sense:
- there is a public network of gateways, and it is very simple to expand it
- it can transmit data for years thanks to its low power requirements
- low cost
- security features for transmission
- and others, for example the long range that allows connecting many devices together
We have realized a proof of concept of our idea, and in the following chapters we describe every step we have made.
Part 1: Introduction to this guide
Part 2: The Things Network configuration with example
Part 3: Dragino gateway configuration
Part 4: LoRa board and sensors used (LoRaWAN on Mbed)
Part 5: Elastic Search and Kibana for a Public Use of Flue Gas Monitoring Data
Part 6: AWS IOT with Dynamo DB for storage certificate of Public Administration
Part 7: AWS IOT with Android App for Private communications with citizens
Part 1 - Introduction to this guide
This guide doesn't follow the usual working scheme from the hardware up to the software layer on the web. We have chosen to start with The Things Network, because the configuration settings required on the Dragino gateway and on the LoRa board depend on parameters given by The Things Network. We hope this helps you through the whole process.
In addition, you will find a brief video at the beginning of every section: you can use it to configure each part immediately, without reading this whole long guide.
This initial part of the how-to is divided in two: the first half covers the configuration of the gateway, the second the LoRa board. We have added two videos (one per part) to configure both immediately. Otherwise, you can follow the detailed written guides.
One of the most important features of LoRa is the public (and free) nature of the network. In fact, in a LoRa network, everyone can add a personal LoRa gateway and let other LoRa devices transmit their signals through it.
To connect all LoRa devices and gateways, there is The Things Network - thethingsnetwork.org - the free LoRa portal that puts everything together and makes it possible to simply transmit data from a sensor to any web service. The idea behind it is very simple: The Things Network collects data from the gateways, decodes it online and forwards it to several different services (in our case we will use two: Elastic Search and AWS IOT). In our setup we use a Dragino as the gateway for our Flue Gas Monitoring.
[Part 2.A] - Steps to connect the Dragino to The Things Network
Fig. 2.2: After you have created a free account on The Things Network, open your dashboard and go to the Gateway section. Here you will find a list of all your gateways.
Fig. 2.3: To add a new one, click on the "Register Gateway" option. It will open a new panel where we will enter the data needed to create a unique connection between our Dragino gateway and The Things Network.
Fig. 2.4: Set the parameters
- Gateway EUI: obviously, every gateway has to have its own unique identifier. There are two ways to select it. First, you can use the "automatic tool" of The Things Network: if you click in the EUI field, a list of possible unique identifiers appears, and you can pick one of them (as in figure 2.5).
Or you can derive it from the MAC address (Media Access Control address) of the device, since that is a unique identifier assigned to the network interface controller (NIC). In this second case, you can read it from the back of your Dragino (Fig. 2.6); a small sketch of this derivation follows this list.
Select "I'm using the legacy packet forwarder" to guarantee a better compatibility with Dragino (Fig. 2.7).
- Description: here you can put a simple description of your Dragino gateway
- Frequency Plan: click on the field and a menu appears with the list of all the possibilities. For Italy, we have chosen "Europe: 868 MHz" (Fig. 2.7).
- Router: this setting follows from the previous "Frequency Plan" field. When you set the frequency plan, the system automatically selects the right router configuration. So it's all very simple.
- Location: this is obvious: you have to report your gateway's location. Clicking on the map, you can add your gateway's geographic position (Fig. 2.8).
- Antenna placement: this is the last configuration parameter. Again very simple: do you have an internal or an external antenna? (Fig. 2.9)
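For reference, the MAC-based EUI derivation is easy to reproduce. Below is a minimal Python sketch of the scheme used in this guide (appending four zeros to the MAC, as described later in the Dragino section); the MAC value itself is just a made-up example.

def eui_from_mac(mac):
    # Build a 16-hex-digit gateway EUI from a 6-byte MAC address
    # by appending four zeros (the scheme used in this guide).
    hexdigits = mac.replace(":", "").replace("-", "").upper()
    assert len(hexdigits) == 12, "expected a 6-byte MAC address"
    return hexdigits + "0000"

print(eui_from_mac("A8:40:41:AA:BB:CC"))  # -> A84041AABBCC0000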
Success!
If you have done everything right, you will see your gateway in the list of gateways.
Fig. 2.11: Click on it and check whether your gateway is online. From the gateway page you can see (and set) every parameter. In particular, it is very useful to check whether your gateway is connected, and when and whether it receives data.
Additional Settings for The Things Network Gateway
If we click on the Settings button, we obviously have the possibility to change some data of our gateway, but also to add extra information that can be helpful during the transmission process.
In fact, if we analyze the data received from the gateway, we can see that all the data entered in this configuration is sent along. This can be very useful, for example the position of our gateway or, even better, the timestamp.
Gateway ID to save and use in the Dragino configuration
Fig. 2.14: As we said at the beginning, we started with The Things Network configuration to obtain the values to put into the Dragino gateway and the LoRa board to communicate with TTN. So, all you have to do is save the Gateway ID; in the next chapter you will put it in the Dragino settings, so the gateway can send data to TTN.
[Part 2.B] - Steps to connect the LoRa board to The Things Network
1) Application -> Create Application
To connect a LoRa board to The Things Network you have to create an "application", through which TTN gives you the connection parameters to put into the LoRa transmitter. This is the reason why we are writing about TTN before the LoRa board. An application is a software layer that receives data from the gateway, decodes it and sends it to a server (Fig. 2.15).
Fig. 2.16: first of all, go to the dashboard and click on the "Applications" section. Here you will find a list of all your applications. Press the "add application" button.
Fig. 2.17: a new panel opens where you have to insert only a unique ID for your application (you can use the automatic generator by clicking on the blank field) and a description. The remaining fields will be filled in automatically.
Now go to the Application dashboard: here you can see some information about your application (which you can change or extend through the "Settings" menu).
In the Application we "create" the connection from the physical LoRa board, which sends data to the gateway, up to the Application and our output server. From here there are three essential steps:
- creation of the data settings that we will put into the real device to transmit to the gateway
- configuration of the encoder/decoder software layer to "understand" and pack the data coming from the device
- connection to our output server (for example AWS IOT or Elastic Search)
2) Application -> Device Registration and Settings
Click on "Register Device" (Fig. 2.18), or go to the "Devices" panel (Fig. 2.19) and click "Register device" there.
Fig. 2.20: in the registration panel you only have to configure a name (your choice, in lower case): The Things Network automatic tool can fill in all the other information (if you click on the icon next to the respective field).
Settings
This is the most important part of our configuration. In fact, here we configure the synchronization between the LoRa board and TTN (Fig. 2.21).
Under General we find several configuration parameters that we can change. Many fields are already configured. You have to change the following parameters.
Fig. 2.22: Description: put a description of your device (in case you have more than one). Activation Method: set it to ABP. In the hardware section we explain why.
Fig. 2.23: Frame Counter Checks: this check avoids receiving duplicate messages. If two gateways receive the same message, this check avoids showing both. Anyway, remember to "reset frame counters" in the device panel if you get errors. Otherwise, if you want to be sure to receive every message, disable this parameter. Sometimes the frame counter gets out of sync...
Fig. 2.24: Location: this is very useful to understand where your sensor is, and to avoid setting extra variables in the hardware.
Altitude: you can also set the altitude.
3) Application -> Payload Format
Data arriving from the gateway has to be decoded. So we have to go to the "Payload Formats" panel and add a function to transform the received bytes into a usable form.
function Decoder(bytes, port) {
  // Decode an uplink message from a buffer
  // (array) of bytes to an object of fields.
  var decoded = {};
  var gasvalue = '';
  decoded.gas = '';
  decoded.id = '';
  // Fixed demo position of the sensor
  decoded.lat = 41.891253;
  decoded.lng = 12.503410;
  for (var i = 0; i < bytes.length; i++) {
    if (i < 4)
      decoded.id += String.fromCharCode(bytes[i]); // first 4 bytes: device id
    else
      gasvalue += String.fromCharCode(bytes[i]);   // remaining bytes: gas reading
  }
  decoded.gas = parseFloat(gasvalue);
  console.log(bytes);
  return decoded;
}
In our case, we have chosen to send three types of data from the LoRa device:
- gas/smoke sensor value
- position: latitude value
- position: longitude value
Actually, we added the two position values even though, on TTN, it is already possible to set the position of the device in the device settings. But we wanted to create a more complete example.
IMPORTANT: the parseFloat function
decoded.gas = parseFloat(gasvalue)
We want to point out the use of the parseFloat function. It is very useful because it transforms the bytes into a float value, so that the server we pass the data to, if it understands data types, can use it directly. This matters in particular for Elastic Search: when Elastic Search receives a float number, it can use it as a number and not as a string, so this type of data can immediately be used in graphs.
Fig. 2.25: if you want to check that everything works fine, you can put this payload in the Payload field and click the TEST button:
41 41 46 46 31 32 33 30 37
You will see this string converted into:
{
"gas": 12307,
"id": "AAFF",
"lat": 41.891253,
"lng": 12.50341
}
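If you prefer to verify the decoding offline, the same logic can be reproduced outside TTN. Here is a minimal Python mirror of the Decoder above (the hard-coded position is copied from the JavaScript example):

def decode_payload(hex_bytes):
    # First 4 bytes are the ASCII device id, the rest is the ASCII gas reading
    raw = bytes.fromhex(hex_bytes.replace(" ", ""))
    text = raw.decode("ascii")
    return {
        "id": text[:4],
        "gas": float(text[4:]),
        "lat": 41.891253,  # fixed demo position, as in the JS decoder
        "lng": 12.503410,
    }

print(decode_payload("41 41 46 46 31 32 33 30 37"))
# {'id': 'AAFF', 'gas': 12307.0, 'lat': 41.891253, 'lng': 12.50341}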
4) Application -> Integration (with requestBin.com example)
Now we can run a working test, using an external service in place of AWS IOT or Elastic. We will use RequestBin.com as an external test server to receive the messages coming from The Things Network. To do this, we have to set up an integration plugin in TTN. Click on Applications -> Integrations to go to the Integrations page. Here click on the "Add integration" button.
Fig. 2.27: choose the HTTP integration from the long list of plugins.
Process ID: in the HTTP integration setup page, we have to insert an identifier for this plugin.
Access Key: click on the field and the default key appears; click on it to select it. Very simple and direct.
URL: here you have to enter the link of your RequestBin.com bin (we will get it shortly)
Method: POST
Leave all the other fields unchanged and go to the RequestBin.com service to get the URL to use.
Fig. 2.30: in the RequestBin.com homepage, deselect the "private" check button, because it requires a login. Simply click on "Create a Request Bin" and, on the new page, finally get the URL. Return to the Integrations page and put this URL in the field we left blank before. Now, finally, click on "ADD integration".
Your integration is done, and you can immediately try to send data from the LoRa device.
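If RequestBin.com is not available, any endpoint reachable from the Internet can play the same role. As a sketch, here is a minimal Flask receiver (an assumption of ours, not part of the original setup; you would still need to expose it on a public URL, for example through a tunnel of your choice):

from flask import Flask, request

app = Flask(__name__)

@app.route("/ttn", methods=["POST"])
def ttn_uplink():
    # TTN's HTTP integration POSTs one JSON document per uplink
    print(request.get_json())
    return "", 200

if __name__ == "__main__":
    app.run(port=5001)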
*** VERY IMPORTANT THINGS TO REMEMBER! ***
Fig. 2.31: In the next sections, we configure the hardware: the Dragino and the LoRa device. This is the reason why this part comes before the hardware configuration: you cannot do any hardware configuration without the following parameters.
So keep these parameters at hand; you can find them on the Application -> device -> name_of_your_device page.
*** VIDEO WORKING EXAMPLE OF LoRa BOARD -> TTN -> RequestBin.com ***
In the video you can see the complete process of receiving data and visualizing it on RequestBin.com.
Part 3 - Dragino Configuration
- Attach your Dragino to a PC ethernet port to connect it via LAN.
- Open the website at address 10.130.1.1 and you will see the Dragino login page. Enter the following access credentials to get into the dashboard. Username: root Password: dragino
Connect the Dragino to WiFi
Your Dragino has to communicate with the Internet to send data to The Things Network, so we have to give it the settings for Internet access. There are two ways:
- Ethernet
- WiFi
Go to "Access Internet Via", where you can choose:
- WAN port: to access the Internet via WAN with an ethernet cable
- WiFi Client: fill in the page with the SSID (the name of your WiFi network) and the password, as in the photo above.
Setting up the Dragino Gateway
Now we have to configure our Dragino as an IoT server. Under the "Sensor" menu we have to perform two steps:
1) Open the IOT SERVER menu and in the "IoT Server" field select: LoRaWAN. In this way our Dragino works as a LoRaWAN server, getting data from the sensors.
2) Then we have to set up the connection between the Dragino and TTN. To send data to The Things Network, every gateway must have a unique identifier.
We obtained this identifier in section 2, when we configured the gateway in The Things Network. Read there for more information on how to create this unique identifier (in short: we took the MAC address printed on the back of the Dragino and appended four zeros to it).
As a reminder: a LoRa gateway receives packets from LoRa devices and forwards them to the TTN network, and vice versa. Note that the Dragino LG01 supports only a single-channel gateway connection to the TTN server.
Here we have to set several parameters to connect with The Things Network.
- Server address: router.eu.thethings.network
- Server port: 1700
- Gateway ID: the unique ID we created in The Things Network
Still on this page, in the second part, you have to check the radio settings. Depending on where you live, there are different frequency bands you can use. For Europe, the Dragino works in the 868 MHz frequency band, so you have to set the TX and RX frequencies accordingly. So please pay attention!
Save your configuration settings
Finally, you have to save your configuration.
Click on "UNSAVED CHANGES" at the top right to go to the SAVE page.
Click "SAVE&APPLY" and wait.
If necessary, restart your Dragino.
Check Sending Data
Before ending this part, we want to mention a very useful tool to check whether data is arriving at the Dragino. Go to the Sensor main menu and select "Sensor data" (as in the photo).
Turn on your LoRa device and begin transmitting data. From this page you can check whether data is arriving at the Dragino or not. If it is, go to The Things Network and check whether your Dragino is connected. If something is wrong, recheck section 2 of this guide.
Part 4 - LoRaWAN on Mbed
The whole communication system is based on the LoRaWAN protocol stack, managed by the STM32 microcontroller board DISCO-L072CZ and programmed with Mbed (www.mbed.com).
The firmware uses Mbed OS 5 to provide drivers and pre-built code for:
- LoRaWAN radio module: SX1276
- Analog Input/Output
- Digital GPIOs
- Debug and tracing
- RS232 over USB stack
The code is divided into two main blocks:
1. System initialization: main.cpp
2. mbed-os configuration (LoRaWAN login): mbed_app.json
System initialization
In the main.cpp file there are all the functions and declarations needed by the send-data function. The user has to change the `ID` constant in the file, a 16-bit hexadecimal value (uint16_t):
...
// Hex ID for each module from 0x0000 to 0xFFFF
#define ID 0xAAFF
...
The sensor, on the board used in this project, is wired to analog pin 1 (A1). Change it in case of need. Pin P0 can't be used.
...
AnalogIn sensor(<pin name>);
...
In the send_message() function the user can set any kind of encoding of the transmitted data, writing the message into the tx_buffer and setting its correct size in the packet_len variable.
static void send_message()
{
    uint16_t packet_len;
    int16_t retcode;
    /*
     * Reading sensor data (unsigned int 16 bit)
     * Writing ID and sensor data into transmission buffer
     * Sending data to Gateway by LoRaWAN stack
     */
    uint16_t data = sensor.read_u16();
    packet_len = sprintf((char *) tx_buffer, "%4X%u", ID, data);
    printf("\r\ndata sent: %s\r\n", tx_buffer);
    ...
    ...
}
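To see how this encoding matches the TTN decoder of Part 2, the same sprintf format can be reproduced in Python (the ID and reading below are illustrative values, not real measurements):

ID = 0xAAFF   # module ID, as in main.cpp
data = 12307  # a hypothetical 16-bit sensor reading

# Equivalent of: packet_len = sprintf(tx_buffer, "%4X%u", ID, data)
# (C's %u prints an unsigned int; %d is the Python equivalent here)
packet = "%4X%d" % (ID, data)
print(packet)       # AAFF12307 - exactly what the Decoder splits into id and gas
print(len(packet))  # the value that ends up in packet_len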
LoRaWAN login configuration
In the mbed_app.json file, each user NEEDS TO CONFIGURE their own The Things Network credentials for their application.
Note:
This code works only for the ABP protocol. The OTAA way isn't verified yet. For all information about The Things Network procedures, please refer to www.thethingsnetwork.org.
In order to change the radio module, set the proper value in:
...
"config": {
    "lora-radio": {
        "help": "Which radio to use (options: SX1272,SX1276)",
        "value": "SX1276"
    },
...
For The Things Network credentials, change:
...
"target_overrides": {
    "*": {
        ...
        "lora.appskey": "{C-STYLE HEX App Session Key}",
        "lora.nwkskey": "{C-STYLE HEX Network Session Key}",
        "lora.device-address": "0x<HEX DATA OF device address>"
        ...
...
Part 5 - Elastic Search and Kibana Configuration
Elastic Search is a search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java.
It is very scalable and offers near real-time search. Each node hosts one or more shards. Related data is often stored in the same index, which consists of one or more primary shards.
Kibana is a visualization tool for Elastic Search: it provides a web interface that shows the data through several plugins and configurations. It is very powerful and works in real time. In addition, it is possible to integrate machine learning analytics tools.
* Before proceeding - please note *
The Elastic.co website gave us problems with the internal university (La Sapienza - Rome) access point. If you are not able to access their website, try a different Internet connection (for example a smartphone access point).
[Part 5.A] - Steps to connect The Things Network to Elastic Search
1) Elastic Search -> Account creation
First of all, we need to create an account on Elastic Search. It offers a free and complete account for only 14 days (at the time we write this post!): after those, everything you have done on Elastic is removed and becomes inaccessible.
When you are in the dashboard, click on "Create deployment". It will ask you for some setup configuration parameters.
Give a name to your deployment, select a commercial service between Amazon and Google Cloud Platform, and select a region. Finally, choose one of the preconfigured setups, in case you want to try a more I/O-oriented service, or real-time, or machine learning features. For our scope, any of these settings is enough, because we only have to try one sensor.
After choosing, press the final "Create deployment" button and wait.
*** VERY IMPORTANT - SAVE USERNAME AND PASSWORD ***
While the environment is being deployed, a panel appears with your username and your password: YOU HAVE TO SAVE these parameters CAREFULLY, because they are essential to connect Elastic Search with an external service!
2) Elastic Search -> Dashboard
After the environment has been deployed, you can access the Elastic Search dashboard. Here you will see your deployment: click on the "card" to visualize the deployment setup page.
On this page you can get all the links to:
- the Elastic Search service
- the Kibana interface
It is useful to note that from this page you can click the "API CONSOLE" button (in the left menu). The API console is a very useful tool to access all the data stored in Elastic Search. You can query your database from here with a set of instructions.
Anyway, from this page we only need to copy the URL of our environment.
Click on "Copy endpoint URL" and go to The Things Network with this URL.
3) The Things Network -> HTTP integration to connect TTN to Elastic Search
In the second section of this guide (Chapter 2.B - 4), you can find all the details on creating an integration for a TTN application (briefly: go to the TTN dashboard, enter your test application, click on the "Integrations" button and then on "ADD INTEGRATION". From the list select "HTTP INTEGRATION").
Give a name to your HTTP integration, select "default key" by clicking on the Access Key field, and finally fill in the URL. This is the most important field. What is the URL that connects TTN to Elastic Search?
- VERY IMPORTANT: Elastic Search URL composition -
To obtain the URL address of Elastic Search you have to combine the following pieces of information:
- Fig 5.23: Elastic username and password
- Fig 5.24: Elastic endpoint URL
- an index and a document path where your data will be stored on Elastic Search
Username:
elastic
Password:
wzcV4bFK7ipzp3AG5RkgfgYk
Elastic Search Endpoint URL:
https://0f3e064a23df4a5d9153d60f865e4e43.eu-west-2.aws.cloud.es.io:9243
Index and Document path where to store your sensor data:
/lorafluegas/user1
Combine these values and you will obtain the final URL to put in The Things Network. Pay attention to the composition, because it is very easy to make a mistake, and everything depends on this URL. Use the following image to build the correct URL.
In our case the URL is the following (don't worry about security, our deployment has already expired... 14 days really are a very short period):
The Things Network Integration URL:
https://elastic:wzcV4bFK7ipzp3AG5RkgfgYk@0f3e064a23df4a5d9153d60f865e4e43.eu-west-2.aws.cloud.es.io:9243/lorafluegas/user1
Now keep all the other fields unchanged and save the integration as in Fig....
At this point you have finally connected The Things Network and Elastic Search. Every sensor reading received on The Things Network will now be sent automatically to Elastic Search, and you will be able to visualize it on Kibana (or, more simply, in the API CONSOLE).
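If you want to verify the composed URL yourself before (or after) saving the integration, a short Python script can post a test document with the same credentials. This is just a sketch reusing the expired example values above:

import requests

# The same pieces used to build the TTN integration URL
ES_URL = "https://0f3e064a23df4a5d9153d60f865e4e43.eu-west-2.aws.cloud.es.io:9243"
AUTH = ("elastic", "wzcV4bFK7ipzp3AG5RkgfgYk")

# POST a test document into the same index/document path used by TTN
doc = {"payload_fields": {"gas": 123.0, "lat": 41.891253, "lng": 12.503410}}
r = requests.post(ES_URL + "/lorafluegas/user1", json=doc, auth=AUTH)
print(r.status_code, r.json())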
[Part 5.B] - Kibana Settings and Visualization
Before starting, a few words about the Kibana visualization service. Kibana is not only a visualization tool: it is a complete platform to structure your searches and analyze your data in the most complete form you can desire. As we can read on their site:
"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch. The Elastic Stack is the next evolution of the ELK Stack.
A glance at the left menu already gives an idea of the power of this toolbox.
In Fig.5.16 we can distinguish:
- Dashboard: the starting point
- Discover: a tool for immediate access to Elastic Search data. You can submit search queries, filter the search results, and view document data. You can also see the number of documents that match the search query and get field value statistics. [Link to manual]
- Visualize: the tool to create visualizations and dashboards with graphs, aggregations and visualizations based on specific queries. [Link to manual]
- Canvas: an immediate way to see live data and create presentations [Link to manual]
- Machine learning: previously called "X-Pack", this tool offers a way to look inside the data, giving us the capability to perform data analysis, for example on anomalies [Link to manual]
- Management: the options and setup page of Kibana, where we also connect Elastic Search to Kibana. [Link to manual]
1) Kibana -> Settings
From Elastic Search dashboard, click on "Kibana Launch" button.
First of all, to use Kibana we have to set the index containing the data we want to visualize. So go to "Kibana management" (the last icon at the bottom left). Here are all the options we have to set up.
Initially, check whether your index is there: remember that in the previous section we used "lorafluegas" as the name of our index. We have to check that this index exists in the linked Elastic Search deployment.
Clicking on "Index Management" we can see all the indices of our Elastic Search deployment. "lorafluegas" is among them, so we are receiving data from The Things Network. If we click on "lorafluegas" we can take a look at the composition of our index.
In the Summary it is possible to read how many docs we have received and their status. Mapping is the structure of our dataset: this table tells us what kind of data we are receiving and its hierarchical structure. For example, it is possible to see whether the "payload.gas" field is a string or a number. (Do you remember that in the second section we converted the gas value to a FLOAT? Here we can see the result of that conversion.) And at the end, some stats on our deployment.
To visualize data on Kibana, it's necessary to link an index under Elastic Search to Kibana.
Click on "Index Patterns" under Kibana lef menu. You will open a new page with the list of all pattern that you can visualize. In our case we have to add our pattern, co click on the Right Upper button "Create Index pattern".
In the search field put the name of the index you want to visualize. In our case is "lorafluegas": Kibana will find it.
Now click on the name found to add to patterns and click NEXT.
In the second (and last) step, you have to select which time do you want to use for create a temporal sequence. Our prototype get two values, one from gateway and one directly from the sensor device. We choose to use time from sensor. Choose it and click on "Create index pattern". Now your index is on the list of pattern and it's possible to use it.
If you click on the name of the new pattern, you can visualize the list of data you are receiving from The Things Network and more important you can see also the mapping. So in our case, scrollind down the list of fileds, we find the 3 we have send:
- payload_fields.gas : the value of smoke gas sensor
- payload_fields.lat : latitude from sensor
- payload_fields.lng : longitude from sensor.
As we said, they are "numbers", so we can use them in data visualizations.
2) Kibana -> Discover
As we said at the beginning, the Discover tool lets us take an immediate look at our data and perform searches. This is not the purpose of this guide, but if you want to know more about this topic you can visit the Elastic guide here
[link to queries and filters].
- Timeline data: this graph shows the number of data points received on a timeline. You can hover over a green bar and get the count of "documents" (documents are the elementary data in Elastic Search) you have received. If the bar is composed of more than one document, it is possible to click on it and "open" it, enlarging the timeline.
On top of the timeline there is a time filter to pinpoint the exact time period we are searching for.
- Last documents received: under the timeline it is possible to read the list of the last documents received. It is useful to understand what we are receiving.
- Index document filters: see Fig. 5.30 and Fig. 5.31; in the left menu there is the list of all the fields in a document. It is possible (and useful) to click on one item of the menu: a panel appears with the list of all documents containing that parameter.
It is a way to filter the data based on what we are searching for.
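The same kind of filtering can also be done programmatically against Elastic Search. As a sketch (the endpoint and password are placeholders for your own deployment), here is a range query on the gas field using Python's requests library:

import requests

ES_URL = "https://<your-deployment>.es.io:9243"  # placeholder endpoint
AUTH = ("elastic", "<your-password>")

# Documents whose gas value exceeds 1000, like a Discover filter
query = {"query": {"range": {"payload_fields.gas": {"gt": 1000}}}}
r = requests.post(ES_URL + "/lorafluegas/_search", json=query, auth=AUTH)
for hit in r.json()["hits"]["hits"]:
    print(hit["_source"]["payload_fields"])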
3) Kibana -> Visualize
The Visualize tool is the basis for building any type of graph combining all our data.
Click on "Create new visualization"
Fig. 5.33: select the "Line" graph option, because we want to visualize the trend of the data received during a timeframe.
Fig. 5.34: from the list of indices, select "lora*".
Fig. 5.35: finally, we have arrived at the graph page.
Now we have to configure the metrics on the axes.
Fig. 5.36: select a time interval from the time filter box.
Fig. 5.37: some (very small) points appear on the graph. If you hover over one, you can read the count of documents present in the selected time interval.
Now we have to distribute the data on both axes.
Fig. 5.38: Go to Buckets and from the menu select X-axis. We have to configure the X-axis visualization.
Fig. 5.39: select a "Date histogram" aggregation from the menu.
Fig. 5.40: for the "Date histogram" you have to choose which "time" to get from your bucket (we have "gateway time" and "LoRa board time"). Also set the minimum interval: we chose seconds, because our device transmits every minute.
We can give a name to the X-axis through the "label" field.
Fig. 5.41: after the X-axis, we have to configure the Y-axis. As before, click on "Add metrics" and select Y-axis.
We have to select the data we want to visualize on the Y-axis: in our case, payload_fields.gas, that is, the values our sensors are recording in real time.
Give a name to this axis to display.
Fig. 5.42: a final check on the time interval, then click on "Apply changes".
Fig. 5.43: if everything goes fine, a series of points joined by a line will appear.
The X-axis will be divided into "seconds" intervals, while the Y-axis will show values around our average measurement.
Fig. 5.44: just to try, we can change the type of graph to improve the readability of the data. Go to Metrics and axes and change "mode" to "stacked": in this way we pass from a continuous line to a series of stacked lines, which gives a better idea of the peaks at precise intervals.
Fig. 5.45: change the visualization type again, using a "bar" configuration instead of a line.
*** VIDEO EXAMPLE FROM LoRa DEVICE -> TTN -> ELASTIC -> KIBANA ***
Part 6 - AWS IOT with DynamoDB to store certificates for the Public Administration
We can make the collected data available to third-party users. To do this, we used a storage system for the data received from the sensors: DynamoDB.
Now let's see how:
In AWS IoT we can set a rule so that, if a message received in AWS IoT matches it, an action is performed automatically.
We want every message received in AWS IoT to be sent and saved as a row in DynamoDB.
We can do this in two ways:
- The first is from the main AWS IoT interface, by clicking on Act.
From here, selecting Create, we can insert our rule.
In the first section, we must insert a mandatory Name with an optional description. In the second section, we have to insert the rule that must be matched for the action to run. The rule must be written in SQL. We want to select the fields app_id, dev_id, hardware_serial (which we rename Key), port, counter (the progressive number of the message), payload_raw, and the fields payload_fields.gas as gas (the value registered by the sensor), metadata.time as time (detection time), payload_fields.lat as lat and payload_fields.lng as lng for georeferencing, and we will take them from all messages on the topic 'matteo_test00/devices/+/up'.
This is the SQL query in our case:
SELECT app_id, dev_id, hardware_serial as Key, port, counter, payload_raw, payload_fields.gas as gas, metadata.time as time, payload_fields.lat as lat, payload_fields.lng as lng FROM 'matteo_test00/devices/+/up'
Now let's add an action corresponding to this rule by selecting Add Action. We select: "Split message into multiple columns of a DynamoDB table (DynamoDBv2)". By clicking on Configure action we can now configure our action:
We need a new DynamoDB resource, so select Create a new resource, then a new table on the next page.
We insert the name of our table in Table name, for example Gas_DB, and as the primary key we put Key (the hardware_serial field we renamed), adding a sort key with the value of counter. We can click on Create; at this point we will have to wait a few seconds for the creation of the table, and then we can go back to configuring our action.
In the Table name field, we select the table just created and assign a Role to this action that has the permissions to make entries in the table. We can create a new role or assign an existing one; in the second case, we click on Update Role to modify the permissions of that role and add the required permissions. We can now click on Add action and continue.
The next two sections are optional but very useful: the first allows you to create a rule similar to the previous one, but executed in the event of an error. The last section lets you assign tags to the rule; this is very useful for very large projects, where you need to find a resource by filtering through these tags. Once everything is done, we can click on Create rule.
- A second method to perform the same operations described above is via the AWS Command Line Interface (AWS CLI); to install it, see https://aws.amazon.com/cli/?nc1=h_ls.
If this is the first time you use the AWS CLI, you will have to configure it with your data by entering the command
aws configure
You will be asked to enter the AWS Access Key ID, AWS Secret Access Key, Default region name and Default output format. For more details: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
We can now create our DynamoDB table using the following command:
aws dynamodb create-table --table-name GasDB --attribute-definitions AttributeName=Key,AttributeType=S AttributeName=counter,AttributeType=N --key-schema AttributeName=Key,KeyType=HASH AttributeName=counter,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
The command will return values in JSON format.
We copy the value of TableArn, in our case: "arn:aws:dynamodb:eu-west-1:##########:table/GasDB"
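If you prefer to stay in Python (we use boto3 later for the REST service anyway), the same table can be created programmatically. A sketch equivalent to the CLI call above, assuming the eu-west-1 region:

import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Same schema and throughput as the `aws dynamodb create-table` call above
resp = dynamodb.create_table(
    TableName="GasDB",
    AttributeDefinitions=[
        {"AttributeName": "Key", "AttributeType": "S"},
        {"AttributeName": "counter", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "Key", "KeyType": "HASH"},       # partition key
        {"AttributeName": "counter", "KeyType": "RANGE"},  # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
print(resp["TableDescription"]["TableArn"])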
Now let's create a policy to write to this table. In a policyDB file we write:
{
"Version": "2012-10-17",
"Statement":
[{
"Effect": "Allow",
"Action": "dynamodb:PutItem",
"Resource": "arn:aws:dynamodb:eu-west-1:##########:table/GasDB"
}]
}
Now we run the command to create the policy on AWS
aws iam create-policy --policy-name policyForDB --policy-document file://policyDB
In this way, we have created a policy to insert items into the DynamoDB GasDB resource.
Now create an IAM role: in a file called trust.json we write:
{
"Version": "2012-10-17",
"Statement":
[{
"Effect": "Allow",
"Principal": {
"Service": "iot.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
Execute the following command
aws iam create-role --role-name DBRole --assume-role-policy-document file://trust.json
Take the ARN of this role:
"arn:aws:iam::#########:role/DBRole"
And finally, make the last JSON file, action.json:
{
"sql": "SELECT app_id, dev_id, hardware_serial as Key, port, counter, payload_raw, payload_fields.gas as gas, metadata.time as time, payload_fields.lat as lat, payload_fields.lng as lng FROM 'matteo_test00/devices/+/up'",
"description": "Optional Description",
"ruleDisabled": false,
"awsIotSqlVersion": "2016-03-23",
"actions" : [
{
"dynamoDBv2": {
"roleArn": “arn:aws:iam::###########:role/DBRole",
"putItem": {
"tableName": "GasDB"
}
}
}]
}
Replace roleArn with the value obtained previously and execute:
aws iot create-topic-rule --rule-name "DynamoDB" --topic-rule-payload file://action.json
Let's see now how to create a RESTful service in Python to access our data via HTTP and receive it in JSON format.
We use the AWS SDK for Python (boto3) and Flask with flask_restful for the REST layer.
#!/usr/bin/env python
# coding: utf-8
import boto3
import json
from boto3.dynamodb.conditions import Key, Attr

# Get the service resource.
dynamodb = boto3.resource('dynamodb')
# Instantiate a table resource object without actually
# creating a DynamoDB table. Note that the attributes of this table
# are lazy-loaded: a request is not made nor are the attribute
# values populated until the attributes
# on the table resource are accessed or its load() method is called.
table = dynamodb.Table('GasDB')

from flask import Flask, request
from flask_restful import Resource, Api
from flask_cors import CORS
from flask import jsonify, make_response
from flask_json import FlaskJSON, JsonError, json_response, as_json

app = Flask(__name__)
api = Api(app)
CORS(app)

# Gas sample class
class GS(object):
    def __init__(self, id, counter, value, time):
        self.id = id
        self.counter = counter
        self.value = value
        self.time = time

    def serialize(self):
        return {
            'id': self.id,
            'counter': self.counter,
            'value': self.value,
            'time': self.time
        }

class GasSamples(Resource):
    def get(self, key_id):
        # Query every sample stored for the given device key
        response = table.query(
            KeyConditionExpression=Key('Key').eq(key_id)
        )
        items = response['Items']
        sample = []
        for savedSample in items:
            # Reformat the ISO timestamp as "YYYY-MM-DD HH:MM:SS"
            sample.append(GS(savedSample['Key'],
                             str(savedSample['counter']),
                             str(savedSample['gas']),
                             savedSample['time'][0:10] + " " + savedSample['time'][11:19]))
        return jsonify(data=[e.serialize() for e in sample])

# Device class
class Dev(object):
    def __init__(self, id, lat, lng):
        self.id = id
        self.lat = lat
        self.lng = lng

    def serialize(self):
        return {
            'id': self.id,
            'lat': self.lat,
            'lng': self.lng,
        }

class Devices(Resource):
    def get(self):
        # Scan only the key and position attributes of every stored item
        response = table.scan(
            AttributesToGet=[
                'Key', 'lat', 'lng',
            ],
            Select='SPECIFIC_ATTRIBUTES'
        )
        items = response['Items']
        devices = []
        seen = set()
        for d in items:
            # Skip duplicates: a device appears once per received sample
            t = tuple(d.items())
            if t not in seen:
                seen.add(t)
                devices.append(Dev(d['Key'], str(d['lat']), str(d['lng'])))
        return jsonify(data=[e.serialize() for e in devices])

api.add_resource(Devices, '/devices')  # Route_1
api.add_resource(GasSamples, '/gassamples/<key_id>')  # Route_2

if __name__ == '__main__':
    app.run(port='5002')
In this way, we return all the devices that have forwarded messages to our database, and the specific data of each of them.
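With the service running, you can try the two routes directly. A quick check in Python (the device key below is a hypothetical hardware serial, not a real one):

import requests

# Route 1: list all devices that have stored samples
print(requests.get("http://127.0.0.1:5002/devices").json())

# Route 2: samples recorded by one device, addressed by its key
print(requests.get("http://127.0.0.1:5002/gassamples/0004A30B001B1234").json())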
In addition to the Python version, we have added a .NET version of the RESTful service; you can find the code on our GitHub page.
We can try our REST service with this simple web page that shows the data in a simple way:
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title></title>
<style>
/* Set the size of the div element that contains the map */
#map {
height: 400px; /* The height is 400 pixels */
width: 100%; /* The width is the width of the web page */
}
</style>
<!-- jquery -->
<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-3.4.1.min.js"></script>
<!-- bootstrap -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" />
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/v/dt/dt-1.10.18/datatables.min.css" />
<script type="text/javascript" src="https://cdn.datatables.net/v/dt/dt-1.10.18/datatables.min.js"></script>
<script>
var map;
var table;
var marker;
// Initialize and add the map
function initMap() {
if (navigator.geolocation) {
navigator.geolocation.getCurrentPosition(function (position) {
var pos = {
lat: position.coords.latitude,
lng: position.coords.longitude
};
map = new google.maps.Map(
document.getElementById('map'), { zoom: 4, center: pos });
});
}
$.ajax({
type: "GET",
url: "http://127.0.0.1:5002/devices",
dataType: "json",
data: "data",
success: function (result) {
for (var i = 0; i < result.data.length; i++) {
var latLng = new google.maps.LatLng(result.data[i].lat, result.data[i].lng);
marker = new google.maps.Marker({ position: latLng, map: map, title: result.data[i].id });
marker.addListener('click', toggleBounce);
}
},
error: function (error) {
alert(error);
},
});
table = $('#example').DataTable({
dataSrc: "data",
columns: [
{ data: "id", title: "ID" },
{ data: "counter", title: "Counter" },
{ data: "value", title: "Gas Value" },
{ data: "time", title: "Time" }
]
});
//data(results);
}
function toggleBounce() {
$("#dvMap").removeClass("col-12");
$("#dvMap").addClass("col-6");
$("#dvTable").addClass("col-6");
$("#dvTable").show();
table.ajax.type = "GET";
table.ajax.url('http://127.0.0.1:5002/gassamples/' + this.title).load();
table.draw();
}
</script>
</head>
<body>
<form id="form1" runat="server">
<h3>My Google Maps Demo</h3>
<!--The div element for the map -->
<div class="row" style="height:400px">
<div id="dvMap" class="col-12">
<div id="map"></div>
</div>
<div id="dvTable" style="display:none" >
<table id="example" class="display"></table>
</div>
</div>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyBWIIZVkdGedNYWMLxRiHQHIavvIxEYKyo&libraries=visualization&callback=initMap"></script>
</form>
</body>
</html>
The HTML page shows a map with selectable markers.
Selecting a marker, which is associated with a device, we can show its data.
Part 7 - AWS IOT with Android App for private communications with citizens
We can create a mobile application to alert users if the values sampled by our sensor exceed a certain threshold.
We want every message received in AWS IoT to be republished on a different topic.
We can do this in a way similar to the previous section:
- The first way is from the main AWS IoT interface, by clicking on Act.
From here, selecting Create, we can insert our rule.
We insert a name for the rule, for example Alert, and the SQL query, in this case:
SELECT payload_fields.gas, payload_fields.id as id FROM 'matteo_test00/devices/+/up' WHERE payload_fields.gas > 1000
In this case we filter on the value of the gas sample we received; here we'll send an alert when the value is greater than 1000.
Now let's add an action corresponding to this rule by selecting Add Action. We select: "Republish a message to an AWS IoT topic". By clicking on Configure action we can now configure our action:
In the Topic field, we insert the topic where we want to publish our messages, and assign a Role to this action that has the permissions to publish on it. We can create a new role or assign an existing one; in the second case, we click on Update Role to modify the permissions of that role and add the required permissions. We can now click on Add action and continue.
Once everything is done, we can click on Create rule.
We now describe how to do the same with the AWS CLI, skipping the configuration already illustrated in the previous paragraphs.
We create a policy to publish a message to a specific topic. In a policyTopic file we write:
{
"Version": "2012-10-17",
"Statement":
[{
"Effect": "Allow",
"Action": "iot:Publish",
"Resource": "arn:aws:iot:eu-west-1:##########:topic/Alert"
}]
}
Now we run the command to create the policy on AWS
aws iam create-policy --policy-name policyForTopic --policy-document file://policyTopic
In this way, we have created a policy to publish messages to the Alert topic.
Now create an IAM role: in a file called trust.json we write:
{
"Version": "2012-10-17",
"Statement":
[{
"Effect": "Allow",
"Principal": {
"Service": "iot.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
Execute the following command
aws iam create-role --role-name AlertRole --assume-role-policy-document file://trust.json
Take the ARN of this role:
"arn:aws:iam::#########:role/AlertRole"
And finally, make the last JSON file, action.json:
{
"sql": "SELECT payload_fields.gas, payload_fields.id as id FROM 'matteo_test00/devices/+/up' WHERE payload_fields.gas > 1000",
"description": "This is a description for test",
"actions":
[{
"republish": {
"roleArn": "arn:aws:iam::############:role/AlertRole",
"topic": "Alert",
"qos": 0
}
}],
"ruleDisabled": false,
"awsIotSqlVersion": "2016-03-23"
}
Replace roleArn with the value obtained previously and execute:
aws iot create-topic-rule --rule-name "Alert" --topic-rule-payload file://action.json
Finally, we created a simple Android application that allows you to read the data published by the previous rule. The application code can be found on the GitHub page; here we illustrate only some basic parts.
In the AWSService class, we create an AWSIoT object, passing the context and the topic to be monitored, in our case a generic "Alert".
awsIoT = new AWSIoT(this, "Alert");
In the AWSIoT class, what we do is connect to our AWS service and, once the connection is established, subscribe to the topic.
if(statusConnection == AWSIotMqttClientStatusCallback.AWSIotMqttClientStatus.Connected)
subscribe(TOPIC);
Now every message received by our application is managed by a listener: onMessageArrived. In this way, we can notify the user of the out-of-range value.
JSONObject jObject = new JSONObject(message);
int gas = jObject.getInt("gas");
String sendMessage = "Alert! Your device has detected abnormal values!";
NotificationCompat.Builder builder = new NotificationCompat.Builder(context, LOG_TAG)
.setSmallIcon(R.drawable.ic_launcher_foreground)
.setContentTitle(topic + " qValue: " + gas)
.setContentText(sendMessage)
.setPriority(NotificationCompat.PRIORITY_DEFAULT);
NotificationManagerCompat notificationManager = NotificationManagerCompat.from(context);
notificationManager.notify(100590, builder.build());
This is an example of the notification shown in case of values above our threshold.
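To test the whole alert path without waiting for a real out-of-threshold reading, you can publish a fake message on the Alert topic yourself. A sketch with boto3 (the region is an assumption; the payload mimics what the republish rule forwards):

import boto3
import json

client = boto3.client("iot-data", region_name="eu-west-1")

# Fake sample above the 1000 threshold: the app should raise a notification
client.publish(
    topic="Alert",
    qos=0,
    payload=json.dumps({"gas": 1500, "id": "AAFF"})
)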