The goal of this project is to build a serviceable, reliable and cheap system, fully in keeping with the IoT philosophy. We have devised an intelligent product management system for both the industrial and commercial fields, capable of merging the simplicity of the age-old “shelf” concept with the modern technologies available.
Our audience is made up of customers from the world of industry and logistics as well as department stores, shopping centres, small retailers and travelling exhibitors.
We have divided the design into two scenarios, “large scale deploy” and “low power exhibitors”, to cover any type of customer request in a more efficient and widespread way.
In this document we present the technical characteristics of the product.
Hardware
The technical choice fell on the material made available by the course professor, together with the associated development systems.
These are part of the STMicroelectronics product ecosystem and the “Mbed” online development system.
Part of the development, especially the study of the platform, was carried out using free and open source compilers and debuggers.
Microcontroller
The board at our disposal (B-L072Z-LRWAN1) is equipped with an ARM microcontroller (MCU) of the STM32L0 series, with the following characteristics:
- Ultra-low-power platform:
1.65 V to 3.6 V power supply
0.29 μA Standby mode
Down to 93 μA/MHz in Run mode
41 μA 12-bit ADC conversion at 10 ksps
- Core:
Arm® 32-bit Cortex®-M0+ with MPU
From 32 kHz up to 32 MHz
- Memories:
Up to 192 KB Flash memory with ECC (2 banks with read-while-write capability)
20 KB RAM
6 KB of data EEPROM with ECC
Sensors
The presence sensors are simple on/off switches, and they can easily be replaced with more complex and efficient pressure/weight sensors. For the first stage of development (and debugging) only, we emulate this kind of data with random number generation.
LoRaWAN
We use LoRaWAN for its low power profile, its long range and the long battery life it allows for the end device.
It has built-in AES encryption for secure communication, and it is easy to deploy on any kind of sensor circuit with a simple microcontroller.
It can easily be connected to The Things Network in order to send data over the Internet to a cloud service.
Setting up Mbed online compiler
Open the Mbed website at www.mbed.com and log in (or register) in order to access the online compiler.
Click on the Import button, make sure that you have selected the Programs tab, and click on Click here to import from URL.
The device configuration starts with the creation of the data acquisition code. For debugging we used the generation of a random number (between 0 and 255), but the code can be changed to capture data from the various sensors.
The code to be edited is located in the file main.cpp, in the function static void send_message(), at line 10 (uint8_t data = rand()%0xFF;).
main.cpp
...
/**
 * Sends a message to the Network Server
 */
static void send_message()
{
    uint16_t packet_len;
    int16_t retcode;

    // Prototype random data for debugging
    uint8_t data = rand() % 0xFF;

    // TX status led
    ledBlue = 1;

    packet_len = sprintf((char *) tx_buffer, "%d", data);
    printf("\r\n Data sent: %d", data);

    retcode = lorawan.send(MBED_CONF_LORA_APP_PORT, tx_buffer, packet_len,
                           MSG_UNCONFIRMED_FLAG);
...
Change TTN login
From the TheThingsNetwork site (www.thethingsnetwork.org), create a new Application, create a new Device and get the following information to copy into the file:
mbed_app.json
...
"target_overrides": {
    "*": {
        "platform.stdio-convert-newlines": true,
        "platform.stdio-baud-rate": 115200,
        "platform.default-serial-baud-rate": 115200,
        "lora.over-the-air-activation": false,
        "lora.duty-cycle-on": false,
        "lora.phy": "EU868",
        "lora.appskey": "<TTN APP SESSION KEY>",
        "lora.nwkskey": "<TTN NET SESSION KEY>",
        "lora.device-address": "0x<TTN DEVICE ADDRESS>"
    },
...
- Device Address
- Network Session Key
- App Session Key
One of the first and most stable projects for IoT networks based on LoRaWAN is TheThingsNetwork (TTN). As LoRaWAN is not an IP protocol, we need this network to carry the messages coming out of the device to the desired application. That is why TheThingsNetwork sits between the gateways and the applications.
How does TTN work?
Routing operations are all executed in a distributed and decentralized way. This makes it possible to perform either local or global implementations. Messages are sent through the LoRa protocol, and devices can operate in three different classes:
A (mostly sending data, receiving only right after an uplink)
B (receiving data at regular, scheduled intervals)
C (constantly listening for incoming data)
The gateway is a device that receives LoRa messages and sends them to the router. The router is a microservice responsible for the gateway state and for transmission scheduling. Each router is connected to one or more brokers, which represent the heart of TTN. The broker is a microservice that identifies the device, performs some processing on the data and sends the packet to the handler of the relevant application. It is very important because it associates a specific device with a specific application and sends uplink messages to the correct receiver. The handler is a microservice responsible for data management within the applications, and it also performs the AES encrypt/decrypt operations. The number of gateways is also important, because more gateways mean more scalability.
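To fix ideas, here is a minimal Python sketch of the uplink path described above. All names and structures are our own simplification for illustration, not TTN's actual API:

# Toy model of TTN's uplink chain: gateway -> router -> broker -> handler.
# Purely illustrative; TTN's real microservices are far more complex.
def handler(app_id, payload):
    # data management inside the application (AES decryption omitted)
    print("[" + app_id + "] received payload:", payload)

def broker(dev_addr, payload, device_registry):
    # the broker associates a device with its application and forwards the uplink
    handler(device_registry[dev_addr], payload)

def router(uplink, device_registry):
    # the router takes packets received by a gateway and passes them to a broker
    broker(uplink["dev_addr"], uplink["payload"], device_registry)

# one gateway forwarding a single LoRa uplink (hypothetical device address)
registry = {"0x26011A2B": "ishelf-app"}
router({"dev_addr": "0x26011A2B", "payload": "42"}, registry)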
In order to connect a device to TheThingsNetwork we need to add it to the console on https://www.thethingsnetwork.org. The registration is very simple and most of the settings are randomly generated; we only have to set our device's ID, which must be unique within the application. One of the most important things to remember is that TTN allows us to use two different activation methods for our devices: OTAA and ABP. The main difference between the two is that OTAA negotiates its keys dynamically, while with ABP the keys are inserted manually by the user in the code of the board. ABP is obviously more practical, but there are some security issues to face. In this project we use the ABP method.
Once the board is connected to TTN, communication starts and the sent packets are visualized on TTN through the Data panel. This panel gives us some information about the sender and the message. The payload of the message is displayed in hexadecimal, but an ASCII translation is also proposed. The message is decoded following the instructions of the payload format, which is nothing more than a small decoder function that returns a JSON object. It is possible to modify the payload format in the relative panel.
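As a minimal sketch of that decoding step (TTN's actual payload formats are edited in the console; this Python version only shows the equivalent logic): since our firmware sends the quantity as ASCII digits via sprintf("%d"), the decoder just has to turn the hex payload back into an integer.

# Equivalent of our payload format, in Python for illustration only.
# The hex payload "3432" is the ASCII string "42", i.e. quantity = 42.
def decode_payload(payload_hex):
    text = bytes.fromhex(payload_hex).decode("ascii")
    return {"quantity": int(text)}

print(decode_payload("3432"))  # {'quantity': 42}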
As we have previously said, TTN is just used to route the packets in the network, so we are also interested in knowing how to get data out of it. To do so, there is a TTN integration called HTTP integration. It simply forwards packets to the desired server through the HTTP protocol. As our server needs a login, it was necessary to add the connection parameters (username and password) to the server URL. This is done by writing them in clear text at the beginning of the URL and separating them from the rest of the URL with an "@" symbol, like this:
http://user:password@cloud_server_url.com:port
Obviously this choice can cause security issues, but for our demo it is not relevant at the moment.
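To give an idea of what the integration does, here is a hedged Python sketch that posts an uplink to such a URL with the requests library (host, port, index and JSON fields are placeholders, not the exact schema TTN sends):

import requests

# Credentials embedded in the URL, exactly as in the HTTP integration setting;
# "user", "password", host, port and path are placeholders for our setup.
url = "http://user:password@cloud_server_url.com:9200/ishelf/_doc"

# A simplified uplink body; the real TTN HTTP integration sends a richer JSON.
uplink = {"dev_id": "ishelf-01", "payload_fields": {"quantity": 42}}

resp = requests.post(url, json=uplink)
print(resp.status_code, resp.text)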
In this way, all the packets that we receive on TTN are automatically sent to our server. In this specific case our server is Elasticsearch, a distributed database that works with the Kibana visualizer to show us some graphs about our application.
Elastic and Kibana
Elastic
Elastic is a real-time distributed search and analytics engine. It is open source, developed in Java. It uses a structure based on documents instead of tables and schemas. Elasticsearch is very useful for big data, making it easy to analyse millions of records in near real time. And as for analysis, Elasticsearch lets you understand billions of log lines easily: it provides aggregations which help you zoom out to explore trends and patterns in your data.
For example, if you have a cloud with 1000 nodes, you can analyse the entire infrastructure in a short period of time by importing the logs into Elasticsearch and, based on its responses, get to the root cause of an issue in your infrastructure. Elasticsearch is used by some very famous clients: for example Mozilla, GitHub, Stack Exchange, Netflix, and many more.
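As a small taste of those aggregations, here is a sketch using the Python client against the ishelf index we build later (it assumes the quantity field is mapped as a number; in the demo script it is sent as a string, so the mapping may need adjusting):

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

# Average remaining quantity per supermarket, computed by Elasticsearch itself
query = {
    "size": 0,  # we only want the aggregation, not the matching documents
    "aggs": {
        "per_store": {
            "terms": {"field": "supermarket.keyword"},
            "aggs": {"avg_quantity": {"avg": {"field": "quantity"}}}
        }
    }
}
result = es.search(index="ishelf", body=query)
for bucket in result["aggregations"]["per_store"]["buckets"]:
    print(bucket["key"], bucket["avg_quantity"]["value"])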
Kibana
Another important feature of Elastic is Kibana, a great web interface to visualize and manipulate the data. It can be downloaded from elastic.co and installed by following a few simple steps. You need to download the same version of Elasticsearch and Kibana.
When, in the future, you find yourself needing to develop software that interacts with Elasticsearch, you can use a programming language to do so. Some of the supported programming languages are: Java, C#, Python, JavaScript, PHP, Perl and Ruby.
Create an Elastic DEMO DATASET for iShelf (with Python)
To test Elastic better, we focused on creating a demo dataset to use with Elastic and Kibana.
As we said previously, it is possible to load data with the REST method (URL method), for example using a simple interface such as Postman. REST is the method we use to connect our iShelf device, through TTN, to Elastic. Anyway, there are other methods: for example the Dev Tools under Kibana or the API tools under the Elastic interface.
But it is also possible to connect Python with Elastic to save documents.
This is a very powerful possibility, because we can create a lot of data and configure our index under Elastic as we want.
Install
The first thing to do is to install the Elastic library for Python (pip install elasticsearch).
Now we can connect to Elastic with a few simple lines:
# Import Elasticsearch package
from elasticsearch import Elasticsearch
# Connect to the elastic cluster
es=Elasticsearch([{'host':'localhost','port':9200}])
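As a small sanity check (our own addition, not part of the original script), we can verify that the cluster answers before indexing anything:

# es.ping() returns True if the cluster is reachable
if not es.ping():
    raise RuntimeError("Elasticsearch is not reachable on localhost:9200")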
Variables
We create several variables to configure quite diversified documents in depth. Here we only record quantities from a shelf, but in the real world it would be possible to change and add many other kinds of data through this program.
import random as rand  # the script refers to the random module as "rand"

index_name = 'ishelf'
global_index = 90
supermarkets = ['COOP - ROME', 'COOP - MILAN', 'COOP - NAPOLI', 'COOP - PALERMO', 'COOP - TURIN']
products = ['Coca Cola', 'Pepsi']
max_product_for_store = 100
dataset = []
quantities = rand.randint(0, max_product_for_store)
Time series format for automatic recognition under Elastic
What is the most important and most difficult kind of data to put inside Elastic? Time series.
Through sell_timeline it is possible to create a linspace timeline. The most important thing is the formatting of the date: in fact, Elastic is programmed to recognize a date timestamp only if it has a particular structure.
Otherwise it will insert it as normal text data.
import numpy as np   # evenly spaced timeline
import pandas as pd  # timestamp handling and formatting

def sell_timeline(quantity):
    # one sale instant per unit, evenly spaced over the opening hours
    start = pd.Timestamp(2019, 8, 4, 8)   # opening: 08:00
    end = pd.Timestamp(2019, 8, 4, 21)    # closing: 21:00
    timeline = np.linspace(start.value, end.value, quantity)
    # format as ISO 8601 so that Elastic maps the field as a date, not text
    timeline = pd.to_datetime(timeline).strftime('%Y-%m-%dT%H:%M:%S')
    return timeline
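If the automatic recognition is not reliable enough, an alternative (our own suggestion, not part of the original script) is to create the index with an explicit mapping, so that the timestamp is stored as a date and the quantity as a number. Note that on Elasticsearch versions that still use document types (as the doc_type='shelf' below suggests), the properties may need to be nested under the type name:

# Create the index with an explicit mapping (run once, before indexing);
# reuses the es client and index_name defined above.
mapping = {
    "mappings": {
        "properties": {
            "timestamp": {"type": "date", "format": "yyyy-MM-dd'T'HH:mm:ss"},
            "quantity": {"type": "integer"}
        }
    }
}
es.indices.create(index=index_name, body=mapping, ignore=400)  # 400: index already exists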
Send everything to Elastic
The last part of the program creates the document to send to Elastic.
The instruction es.index inserts the document in the index and type we have chosen. For the id it is possible to indicate one explicitly, or to leave the field blank and let Elastic assign one automatically.
# product, supermarket, remains_product, t and t_istant are set in the
# enclosing loops over supermarkets and products (not shown here)
for z in range(quantities):
    # one document per sold unit: product, store, sale instant, remaining stock
    document = {"product": product, "supermarket": supermarket,
                "timestamp": str(t[z]), "time_istant": str(t_istant[z]),
                "quantity": str(remains_product)}
    global_index = global_index + 1
    remains_product = remains_product - 1
    # explicit id (global_index); omit the id to let Elastic assign one
    res = es.index(index=index_name, doc_type='shelf', id=global_index, body=document)
    print(document)
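After running the script, a quick way to confirm that the documents arrived (again using the es client defined above) is to count them:

# total number of documents stored in the index
print(es.count(index=index_name)['count'])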
Manage data on Kibana - iShelf Index
To verify that the data arrived on Elastic, go to the Management tab. Here it is possible to see how many indices (databases!) are present on our Elasticsearch instance, their state, their composition and many more management details.
And, more importantly, we have to register one of them as an index pattern in order to see it in the Kibana visualization tools.
Practically speaking, in the Management tab we can set options for our whole installation.
Discover data on Kibana - iShelf dataset composition
We have sent our dataset to the Elastic server.
Now we can visualize our database through the Kibana interface.
Open Kibana and go to the Discover tab. We can observe that there is a list of objects (our products on the shelf) with all their characteristics attached.
Now it is possible to reorganize the list as in a simple file manager, creating several columns, each one grouping a characteristic.
Advanced visualization features of Kibana
After uploading the dataset, we can now work with the most advanced features of Kibana.
Kibana Visualize - measure of products on the iShelf
Kibana acts as a visual interface for Elastic, so it is full of ways to show, monitor and manage the data that we have uploaded to Elastic.
There are several methods, depending on our needs.
First of all, there is a "Visualize" tab where it is possible to create every type of graph.
For example, in our project, we need to measure the number of products on the shelf.
In the Visualize tab we have to set the X and Y axes.
Under "Aggregation" and "Field", select "Terms" and "date.keyword" to use the time data as the X-axis. For the Y-axis, instead, select the metric we want to measure, that is to say the "quantity" of products on the shelf.
Kibana will then create the plot.
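For reference, the chart configured above corresponds roughly to a date histogram aggregation, which can also be run directly with the Python client from the demo script (a sketch under the same mapping assumptions as before):

# average remaining quantity per hour of the day, like the Visualize chart
query = {
    "size": 0,
    "aggs": {
        "per_hour": {
            "date_histogram": {"field": "timestamp", "interval": "1h"},
            "aggs": {"avg_quantity": {"avg": {"field": "quantity"}}}
        }
    }
}
print(es.search(index="ishelf", body=query))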
Kibana Timelion - a more powerful way to discover timeseries
Another very powerful tool of Kibana is Timelion.
Timelion is a visualization tool for time series in Kibana. Time series visualizations are visualizations that analyze data in time order. Timelion can be used to draw two-dimensional graphs, with time drawn on the x-axis.
What’s the advantage over just using plain bar or line visualizations? Timelion takes a different approach. Instead of using a visual editor to create charts, you define a graph by chaining functions together, using a Timelion-specific syntax. This syntax enables some features that classical point series charts don’t offer, like drawing data from different indices or data sources into one graph.
(from https://www.elastic.co/blog/timelion-tutorial-from-zero-to-hero)
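As a flavour of that syntax, an expression along these lines (field names taken from our dataset; it assumes quantity is mapped as a number, so exact parameters may need tuning) would plot the average remaining quantity over time:

.es(index=ishelf, timefield=timestamp, metric=avg:quantity)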
In our case, Timelion is a great opportunity to study further correlations between the data.
There are many possibilities for combining data from the shelf: for example, it would also be interesting to see the ratio between products taken from the shelf and products put back.
But for now we have limited our study to simply visualizing the data on the graph.