This tutorial is the third in our series of LoRaWAN IoT projects. It builds on our experience working with the RAK Wireless RAK811 and RAK831 LoRaWAN products and their integration with Node-RED.
Part 1 :
https://www.hackster.io/naresh-krish/using-the-rak811-lora-module-with-arduino-a38de8
Part 2:
https://www.hackster.io/naresh-krish/integrate-node-red-with-rak811-lora-node-using-the-ttn-c4564a
This tutorial takes the data to the next level: VISUALIZATION. It is going to turn our data from plain numbers into something as beautiful as this:
So let's get rolling...
Hardware: As mentioned earlier, our project revolves around two open source boards, the RAK811 LoRa node and the RAK831 LoRa concentrator. They are incredibly affordable LoRa modules and will help you quick-start your LoRaWAN-based projects.
RAK 811
RAK 831
As always, our backhaul will be Wi-Fi, handled by the venerable Raspberry Pi 3. I can't praise this board enough for its flexibility and for bringing Linux to the masses.
Also make sure that the Node-RED setup from the Part 2 tutorial is running correctly, that messages from your node are arriving, and that you are able to view the data in the debug node of the flow we deployed in Node-RED.
ON TO THE VISUALIZATION!
Installing InfluxDB: InfluxDB is one of the best-known time series databases; it is fast, reliable, and supports high availability via clustering in its enterprise edition. We won't go into too much detail for now. To install InfluxDB on your system, please follow the steps below:
OS X (via Homebrew):
brew update
brew install influxdb
Docker Image
docker pull influxdb
Ubuntu & Debian:
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.4.2_amd64.deb
sudo dpkg -i influxdb_1.4.2_amd64.deb
RedHat & CentOS:
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.4.2.x86_64.rpm
sudo yum localinstall influxdb-1.4.2.x86_64.rpm
Windows Binaries (64-bit):
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.4.2_windows_amd64.zip
unzip influxdb-1.4.2_windows_amd64.zip
We will assume that we are installing this on a Linux box.
Networking: By default, InfluxDB uses the following network ports:
- TCP port 8086 is used for client-server communication over InfluxDB's HTTP API
- TCP port 8088 is used for the RPC service for backup and restore
In addition to the ports above, InfluxDB also offers multiple plugins that may require custom ports. All port mappings can be modified through the configuration file, which is located at /etc/influxdb/influxdb.conf for default installations.
The system has internal defaults for every configuration file setting. View the default configuration settings with the influxd config command.
Most of the settings in the local configuration file (/etc/influxdb/influxdb.conf) are commented out; all commented-out settings fall back to the internal defaults. Any uncommented settings in the local configuration file override the internal defaults. Note that the local configuration file does not need to include every configuration setting.
There are two ways to launch InfluxDB with your configuration file:
- Point the process to the correct configuration file by using the -config option:
influxd -config /etc/influxdb/influxdb.conf
- Set the environment variable INFLUXDB_CONFIG_PATH to the path of your configuration file and start the process. For example:
export INFLUXDB_CONFIG_PATH=/etc/influxdb/influxdb.conf
influxd
InfluxDB first checks for the -config option and then for the environment variable. You should now have InfluxDB set up on port 8086 and ready to accept incoming data.
Setting up Node-RED with InfluxDB
Once you have Node-RED set up, proceed to the Palette menu in the Node-RED settings and install the InfluxDB plugin:
via commandline:
cd ~/.node-red
npm install node-red-contrib-influxdb
We should now restart Node-RED so that it detects the new nodes.
Configuring InfluxDB databases: First we need to understand how InfluxDB databases work. InfluxDB is a time series database: it stores data as points in time, and each point has various attributes associated with it.
The other main concept to be aware of when using InfluxDB is that each record of data has a timestamp, a set of tags, and a measured value. This allows us, for example, to create a value named Temp and tag it with the source sensor:
Temp: Value=19.1 , Sensor=Room1
Temp: Value=21.9 , Sensor=Room2
This lets us process all the data, or only the data matching a certain tag or tags. Values and tags can be created on the fly without defining them beforehand.
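To make the measurement/tag/value idea concrete, here is a small Node.js sketch (toLine is a hypothetical helper, not part of any library) that builds InfluxDB line-protocol strings of the form measurement,tag=value field=value:

```javascript
// Build an InfluxDB line-protocol string: measurement,tagKey=tagVal field=value
// toLine is a hypothetical helper for illustration, not a library function.
function toLine(measurement, tags, fields) {
  const tagStr = Object.entries(tags)
    .map(([k, v]) => `,${k}=${v}`)
    .join('');
  const fieldStr = Object.entries(fields)
    .map(([k, v]) => `${k}=${v}`)
    .join(',');
  return `${measurement}${tagStr} ${fieldStr}`;
}

console.log(toLine('Temp', { Sensor: 'Room1' }, { Value: 19.1 }));
// Temp,Sensor=Room1 Value=19.1
console.log(toLine('Temp', { Sensor: 'Room2' }, { Value: 21.9 }));
// Temp,Sensor=Room2 Value=21.9
```

This is exactly the format the insert statements below use, and what the Node-RED InfluxDB node generates for us behind the scenes.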
Create an influxdb database:
Database creation is a simple process. To create the database, we need to access the machine hosting the InfluxDB server and execute the command influx:
~$ influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> create database TempData
> show databases
name: databases
name
----
_internal
TempData
>
This creates a database called TempData, to which we can now start posting values from our TTN nodes.
Before adding data to a database we need to first select it, just like we do in SQL.
> use TempData
Using database TempData >
Now the database is selected and we can write values to measurements. Let's see how that's done:
> insert Temperature,Sensor=room1 value=20.0
> insert Temperature,Sensor=room2 value=21.9
> select * from Temperature
name: Temperature
time Sensor value
---- ------ -----
1487939008959909164 room1 20.0
1487939056354678353 room2 21.9
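The time column above is a timestamp in nanoseconds since the Unix epoch. A quick sketch for turning one into a readable date in Node.js (BigInt is needed because these values exceed JavaScript's safe integer range):

```javascript
// Convert an InfluxDB nanosecond timestamp (as a string) to a JS Date.
// BigInt is used because 1487939008959909164 > Number.MAX_SAFE_INTEGER.
function nsToDate(ns) {
  return new Date(Number(BigInt(ns) / 1000000n)); // ns -> ms
}

console.log(nsToDate('1487939008959909164').toISOString());
// a timestamp in February 2017
```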
Now that our database is ready, let's connect to it from Node-RED.
Configuring the Node-RED InfluxDB palette: Since we now have a database, we can configure the InfluxDB Node-RED nodes to store data in it. There are two types of InfluxDB nodes: one that has both an input and an output, and one that only has an input. There is also an InfluxDB config node. In our example we use the InfluxDB output node and the config node to connect to the database and store data.
Add an InfluxDB output node to the flow:
Double click on the node and add the details as follows:
1) Configure the InfluxDB server:
Here, click the pencil icon and add the configuration for your InfluxDB server. If you have set a username and password for access, make sure you enter the right ones in this menu:
Now select the measurement to which you want to write your time series data. In our case it is Temperature:
If the msg.payload provided as input to the node is a single value, let's say 20.0, this is equivalent to doing:
> insert Temperature value=20.0
There are other formats for msg.payload that let you associate tags and fields. Just check the Info tab for the node.
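For reference, these are the msg.payload shapes the node accepts, as far as its Info tab describes them (sketched here as plain objects you might build in a function node; treat the exact shapes as something to verify against your installed version):

```javascript
// msg.payload shapes for the node-red-contrib-influxdb output node
// (check the node's Info tab for the authoritative list for your version).

// 1) A single value: written to a field named "value".
const single = { payload: 20.0 };

// 2) An object: each property becomes a field of the measurement.
const fieldsOnly = { payload: { value: 20.0, battery: 3.7 } };

// 3) An array of two objects: [fields, tags].
const fieldsAndTags = {
  payload: [{ value: 20.0 }, { Sensor: 'room1' }]
};

console.log(fieldsAndTags.payload[1].Sensor); // room1
```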
Now connect the TTN output node for your TTN application to the input of the influxdb node like so:
For more details on how to set up your TTN node:
https://www.hackster.io/naresh-krish/integrate-node-red-with-rak811-lora-node-using-the-ttn-c4564a
Here the msg.payload from the RAK811 device node will contain a number of fields as sent by the TTN service. You can either choose to send just the values you require or send the entire payload after a bit of parsing.
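One way to do that parsing is to place a function node between the TTN node and the InfluxDB node. The sketch below assumes the decoded readings arrive on msg.payload_fields and the device name on msg.dev_id; those names depend on your TTN node and payload decoder, so treat them as placeholders:

```javascript
// Function-node sketch: map a TTN uplink message to the [fields, tags]
// format accepted by the InfluxDB output node.
// msg.payload_fields and msg.dev_id are assumed names; adjust to your setup.
function mapTtnToInflux(msg) {
  const fields = { value: msg.payload_fields.temperature };
  const tags = { Sensor: msg.dev_id };
  msg.payload = [fields, tags];
  return msg;
}

// Example input resembling a TTN uplink message:
const out = mapTtnToInflux({
  dev_id: 'rak811-node-1',
  payload_fields: { temperature: 21.5 }
});
console.log(out.payload); // [ { value: 21.5 }, { Sensor: 'rak811-node-1' } ]
```

Inside an actual Node-RED function node you would only keep the function body (ending in return msg;), since Node-RED supplies msg for you.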
Now every message coming from TTN will be stored in the InfluxDB measurement called Temperature. We have crossed the first river: configuring our database and adding data. Now let's visualize it.
Enter Grafana: Grafana is a very powerful tool for visualizing and monitoring data streams. It supports InfluxDB out of the box, which makes it a great choice for our project. Let's dig in.
Install Grafana: Please follow the installation instructions here:
http://docs.grafana.org/installation/
Once installed, Grafana will be available on port 3000. The default username/password is admin/admin.
The primary configuration to add is the data source. Here, add the following:
- Set the type to InfluxDB.
- Set the URL to http://localhost:8086 if the two components are on the same machine; otherwise provide the correct address of the database server.
- Set the InfluxDB details: the database name (in our case TempData) and the username and password.
- Click Add.
This data source will be available for our graphs/dashboards.
Let's add some graphs: It's time to add a dashboard and create some graphs.
Go to the dashboard page in Grafana:
Click the Graph button at the top. This will create a graph like so:
Now you need to point this graph at our data. Click on the graph title and choose Edit.
Here, in the data source drop-down, select the TempData data source.
Once selected, the various measurements will be shown in a drop-down just below the data source. A sample is shown below:
Here select the Temperature measurement, and do not edit any of the other fields for now. As soon as you click Save at the top of the graph, the graph will show real-time data from InfluxDB as new messages from TTN come into the Node-RED flow.
Alerting in Grafana allows you to attach rules to your dashboard panels. When you save the dashboard Grafana will extract the alert rules into a separate alert rule storage and schedule them for evaluation.
In the alert tab of the graph panel you can configure how often the alert rule should be evaluated and the conditions that need to be met for the alert to change state and trigger its notifications.
Execution: The alert rules are evaluated in the Grafana backend by a scheduler and query execution engine that is part of core Grafana. Only some data sources are supported right now: Graphite, Prometheus, InfluxDB, OpenTSDB, MySQL, Postgres and CloudWatch.
Currently alerting supports a limited form of high availability. Since v4.2.0 of Grafana, alert notifications are deduped when running multiple servers. This means all alerts are executed on every server but no duplicate alert notifications are sent due to the deduping logic. Proper load balancing of alerts will be introduced in the future.
Rule Config: Currently only the graph panel supports alert rules, but this will be added to the Singlestat and Table panels as well in a future release.
Name & Evaluation interval: Here you can specify the name of the alert rule and how often the scheduler should evaluate it.
Conditions: Currently the only condition type is a Query condition, which lets you specify a query letter, a time range and an aggregation function. For example:
avg() OF query(A, 5m, now) IS BELOW 14
avg()
Controls how the values for each series should be reduced to a value that can be compared against the threshold. Click on the function to change it to another aggregation function.
query(A, 5m, now)
The letter defines which query from the Metrics tab to execute. The second two parameters define the time range: 5m, now means from 5 minutes ago until now. You can also write 10m, now-2m for a time range from 10 minutes ago until 2 minutes ago, which is useful if you want to ignore the last 2 minutes of data.
IS BELOW 14
Defines the type of threshold and the threshold value. You can click on IS BELOW to change the type of threshold.
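The evaluation logic behind avg() OF query(A, 5m, now) IS BELOW 14 can be sketched in a few lines (a hypothetical illustration, not Grafana's actual internals): reduce the series values with the aggregation function, then compare the result against the threshold.

```javascript
// Sketch of "avg() OF query(...) IS BELOW 14" (illustrative, not Grafana code).
function avg(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function isBelow(threshold) {
  return (reduced) => reduced < threshold;
}

const series = [12, 13, 15]; // values returned by the query for the time range
const firing = isBelow(14)(avg(series));
console.log(firing); // true: the average is ~13.33, which is below 14
```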
The query used in an alert rule cannot contain any template variables. Currently we only support AND and OR operators between conditions, and they are evaluated serially. For example, given 3 conditions in the order condition A (evaluates to TRUE) OR condition B (evaluates to FALSE) AND condition C (evaluates to TRUE), the result is calculated as ((TRUE OR FALSE) AND TRUE) = TRUE.
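Note that serial (left-to-right) evaluation differs from the usual operator precedence, where AND binds tighter than OR. A small sketch of the serial scheme described above (illustrative only, not Grafana's implementation):

```javascript
// Serial left-to-right evaluation of alert conditions, as described above.
function evaluateSerially(first, rest) {
  return rest.reduce(
    (acc, { op, value }) => (op === 'AND' ? acc && value : acc || value),
    first
  );
}

// condition A (TRUE) OR condition B (FALSE) AND condition C (TRUE)
// => ((TRUE OR FALSE) AND TRUE) = TRUE
const result = evaluateSerially(true, [
  { op: 'OR', value: false },
  { op: 'AND', value: true }
]);
console.log(result); // true

// A case where serial evaluation differs from normal precedence:
// TRUE OR FALSE AND FALSE => ((TRUE OR FALSE) AND FALSE) = FALSE serially,
// but TRUE OR (FALSE AND FALSE) = TRUE under AND-binds-tighter precedence.
const differs = evaluateSerially(true, [
  { op: 'OR', value: false },
  { op: 'AND', value: false }
]);
console.log(differs); // false
```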
We plan to add other condition types in the future, like Other Alert, where you can include the state of another alert in your conditions, and Time Of Day.
If a query returns multiple series then the aggregation function and threshold check will be evaluated for each series. What Grafana does not do currently is track alert rule state per series. This has implications that are detailed in the scenario below.
- Alert condition with a query that returns 2 series: server1 and server2
- The server1 series causes the alert rule to fire and switch to state Alerting
- Notifications are sent out with the message: load peaking (server1)
- In a subsequent evaluation of the same alert rule, the server2 series also causes the alert rule to fire
- No new notifications are sent, as the alert rule is already in state Alerting
This concludes Part 3 of our RAK811 tutorial series. We hope you liked it and were able to create awesome visualizations with these tools.