Our engineering school, Polytech Sorbonne, lends equipment (video projectors, computers, etc.) to teachers and students. Sometimes these items are not returned on time, which leads to shortages.
We designed a system that tracks the position of different objects, and sends it to a website where the live data can be consulted on a map.
Determination of the location

In order to determine the location of an object, we use a network of LoRaWAN gateways. The tracked object has an MKRWAN 1310 microcontroller attached to it, which periodically sends a LoRaWAN message. We query the gateways that receive the message and, based on several reception parameters (RSSI, reception time), we estimate where the object is.
We used two approaches to determine the position of the tracked object: a Machine Learning model trained on RSSI measurements, and a Time of Arrival (TOA) method.
Machine Learning - RSSI

The Received Signal Strength Indicator (RSSI) can be a good measurement for indoor localization. The general idea is: the closer you are to a gateway, the higher the RSSI, and the more gateways there are, the more precise the localization.
We therefore used RSSI fingerprinting to create a database of typical RSSI values per gateway and per location. We decided to locate our objects at room level in a building, so in each room we took about 15 measurements of the RSSI values for the 4 gateways that constitute our reference points.
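For illustration, each fingerprint can be stored as one row per measurement: the RSSI seen by each of the four gateways, plus the room label. The gateway names, room names and values below are hypothetical, not our actual measurements.

```python
# Hypothetical fingerprint rows (RSSI in dBm per gateway, plus the room label).
# The values are made up for illustration; the real database holds about
# 15 measurements per room.
fingerprints = [
    # gw1,  gw2,  gw3,  gw4,  room
    (-61, -78, -90, -85, "A101"),
    (-59, -80, -88, -87, "A101"),
    (-92, -65, -74, -83, "B204"),
]
```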
Once the database is complete, there are two options for locating an object in a room:
- you could analyze, for each room, the dependencies between all the RSSI measurements, extract the RSSI limits, and then compare the new point's RSSI measurements to the database to locate the object
- or you could "let a Machine Learning model do it itself"
We chose the second option. The database is split into a training set, a testing set and a validation set. Several Machine Learning classifiers are suited to our problem; we chose k-means clustering, logistic regression and random forests. We won't go into the details of these models, but they all learn features and dependencies of the data for each label. We trained the models in Python with the scikit-learn library.
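As a rough sketch of this training step with scikit-learn (the file name, column names and split proportions below are assumptions for illustration, not our exact code; k-means, being unsupervised, would additionally need its clusters mapped to rooms):

```python
# Sketch of the training step, assuming a CSV with columns rssi_gw1..rssi_gw4
# and a 'room' label, as in the hypothetical fingerprint rows shown earlier.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("fingerprints.csv")  # hypothetical file name
X = data[["rssi_gw1", "rssi_gw2", "rssi_gw3", "rssi_gw4"]]
y = data["room"]

# Split into training, validation and test sets (60/20/20 as an example).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "validation accuracy:", model.score(X_val, y_val))

# Predict the room of a new measurement (hypothetical RSSI values).
new_point = pd.DataFrame([[-61, -78, -90, -85]], columns=X.columns)
print(models["random_forest"].predict(new_point))
```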
Time of Arrival

On TTN, we retrieve the TOA from each gateway every time we send a message. This time is consistent with the distance equation if we approximate the propagation speed as the speed of light multiplied by an attenuation coefficient (different for each gateway).
This coefficient can be found experimentally. We then have our final equation:
d = time*speed_of_light*attenuation_coefficient
Here we have two possibilities. If the board sends its time of emission, we just have to compute each distance from the difference between the gateway's TOA and the time of emission. With three gateways we can then locate the board quite accurately.
If we don't have this information, we can be less precise (but precise enough to find the room) by taking the gateway with the shortest TOA as an approximate position of the board. We then compute the distances from this gateway to the others and estimate the real position. From experimentation, we know the exact distance between each pair of gateways and the average distance between each gateway and each room.
This technique needs only 3 calculations to estimate a room and reaches about 85% accuracy on average. Note that the gateways need to be of the same model and, moreover, synchronized with each other.
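A minimal sketch of one way to implement this fallback heuristic; the gateway names, attenuation coefficients and room-distance table below are placeholders, not our measured values:

```python
# Sketch of the room estimation without the time of emission.
# Coefficients and distances are placeholders; the real values come from
# our on-site measurements.
SPEED_OF_LIGHT = 299_792_458  # m/s

# Attenuation coefficient per gateway, found experimentally (placeholders).
ATTENUATION = {"gw1": 0.92, "gw2": 0.95, "gw3": 0.90}

# Average distance (m) from each gateway to each room, measured beforehand.
ROOM_DISTANCES = {
    "A101": {"gw1": 5.0, "gw2": 18.0, "gw3": 25.0},
    "B204": {"gw1": 22.0, "gw2": 7.0, "gw3": 15.0},
}

def estimate_room(toas):
    """Estimate the room from the TOA (in seconds) reported by each gateway."""
    # The gateway with the shortest TOA is taken as the approximate position
    # of the board; TOA differences then give distances to the other gateways.
    nearest = min(toas, key=toas.get)
    estimated = {
        gw: (toas[gw] - toas[nearest]) * SPEED_OF_LIGHT * ATTENUATION[gw]
        for gw in toas if gw != nearest
    }
    # Pick the room whose pre-measured gateway distances best match the estimate.
    def mismatch(room):
        ref = ROOM_DISTANCES[room]
        return sum((estimated[gw] - ref[gw]) ** 2 for gw in estimated)
    return min(ROOM_DISTANCES, key=mismatch)

# Example with made-up TOA values (seconds):
print(estimate_room({"gw1": 1.20e-7, "gw2": 1.75e-7, "gw3": 2.10e-7}))
```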
Battery Level

In order to know the battery level of the device, the tracker is equipped with a voltage divider. When fully charged, the battery is at 4.2 V; when almost dead, it is at 3.1 V. The divider halves the voltage so that it can safely be fed to an analog pin of the microcontroller (powered at 3.3 V).
The analog reading is then converted to a percentage from 0% to 100%, and this value is the only payload sent in the LoRaWAN message.
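The conversion is a simple linear mapping from the measured voltage to a percentage. The sketch below shows the logic in Python for readability; the real code runs in the MKRWAN 1310 firmware, and the 3.3 V reference and 10-bit ADC resolution are assumptions.

```python
# Sketch of the battery-percentage computation (the real code runs in the
# MKRWAN 1310 firmware; the 3.3 V reference and 10-bit ADC are assumptions).
V_FULL = 4.2         # battery voltage when fully charged (V)
V_EMPTY = 3.1        # battery voltage when almost dead (V)
DIVIDER_RATIO = 2.0  # the voltage divider halves the battery voltage

def battery_percentage(adc_value, adc_max=1023, v_ref=3.3):
    """Convert a raw ADC reading into a 0-100% battery level."""
    v_battery = (adc_value / adc_max) * v_ref * DIVIDER_RATIO
    percent = (v_battery - V_EMPTY) / (V_FULL - V_EMPTY) * 100
    return int(max(0, min(100, percent)))  # clamp to 0-100%

# Example: an ADC reading of 650 corresponds to about 4.19 V -> ~99%.
print(battery_percentage(650))
```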
The device draws 50 mA when sending a message (duration: 50 ms) and around 700 µA in deep sleep (duration: 3 minutes). With this energy usage and a battery capacity of 1 Ah, the device should last about 2 months without being recharged.
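The two-month figure can be checked with a quick back-of-the-envelope calculation, assuming one 50 ms transmission every 3 minutes and a 1 Ah (1000 mAh) battery:

```python
# Back-of-the-envelope battery-life check.
i_tx, t_tx = 0.050, 0.050        # 50 mA for 50 ms per message
i_sleep, t_sleep = 0.0007, 180   # 700 uA for 3 minutes of deep sleep

avg_current = (i_tx * t_tx + i_sleep * t_sleep) / (t_tx + t_sleep)  # ~0.71 mA
lifetime_h = 1.0 / avg_current   # 1 Ah battery -> ~1400 hours
print(lifetime_h / 24)           # ~58 days, i.e. about two months
```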
Containerization and Website

Our code is split into 3 different processes. Because we need to deploy our solution to a server so that the website can be accessed remotely, we have to containerize it.
We use Docker to isolate our processes, with one container per process. Thanks to this approach, each container has everything it needs to run its process.
The first process is dedicated to the Python code, which retrieves all the information from The Things Network, processes it with the Machine Learning and Time of Arrival algorithms, and updates the MySQL database with the latitude, longitude, room number and the probability of the estimated position for each sensor.
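A rough sketch of what this first process can look like; the MQTT broker, credentials, topic, table and column names are assumptions, and estimate_position stands in for the ML/TOA step described above:

```python
# Sketch of the first process: subscribe to TTN uplinks over MQTT and update
# the MySQL database with the estimated position. Broker, credentials, topic,
# table and column names are assumptions, not our actual configuration.
import json
import paho.mqtt.client as mqtt
import pymysql

db = pymysql.connect(host="db", user="tracker", password="secret", database="tracking")

def estimate_position(rssis):
    # Placeholder for the Machine Learning / TOA estimation described above.
    return "A101", 48.847, 2.357, 0.90

def on_message(client, userdata, msg):
    uplink = json.loads(msg.payload)
    device_id = uplink["end_device_ids"]["device_id"]
    metadata = uplink["uplink_message"]["rx_metadata"]  # one entry per gateway
    rssis = {m["gateway_ids"]["gateway_id"]: m["rssi"] for m in metadata}

    room, lat, lon, prob = estimate_position(rssis)
    with db.cursor() as cur:
        cur.execute(
            "UPDATE sensors SET room=%s, latitude=%s, longitude=%s, probability=%s"
            " WHERE device_id=%s",
            (room, lat, lon, prob, device_id),
        )
    db.commit()

client = mqtt.Client()
client.username_pw_set("app-id@ttn", "NNSXS.XXXXXXXX")  # application ID / API key (placeholders)
client.on_message = on_message
client.connect("eu1.cloud.thethings.network", 1883)
client.subscribe("v3/app-id@ttn/devices/+/up")
client.loop_forever()
```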
The second one is the website. Its goal is to display the position of all the sensors on a map in real time, from the MySQL database. To do that, we use the Leaflet JavaScript library and the free OpenStreetMap map. The website is built with PHP for the back end and the Bootstrap 5 CSS framework for the front end.
The third one is simply the MySQL database. Because the MySQL container is stateful, its data must be persisted. To do that, we create a volume and a network (you can see these lines in the Dockerfile or docker-compose.yml).
The final step is to run these three containers. We can either build and run them separately or use a docker-compose file to build and run them all with a single command.
You can visit our website at this URL: http://51.38.237.54:8081/map.php.