DISCLAIMER: This application is used for demonstrative and illustrative purposes only and does not constitute an offering that has gone through regulatory review. It is not intended to serve as a medical application. There is no representation as to the accuracy of the output of this application and it is presented without warranty.
Introduction
Vital signs monitoring for a healthy person is seldom considered, and preventive health can't even get off the ground if you have no previous health history or records to track. But then you have a huge part of the population living with diabetes. When you have diabetes, you may need to check your blood sugar throughout the day. It can help you decide what to eat and whether your medication needs to be adjusted. It can also help you steer clear of diabetes-related problems like:
- Heart disease
- Stroke
- High blood pressure
- High cholesterol
- Blindness
- Kidney disease
- Skin problems
There are currently three ways to keep track of the disease:
- The first is self-checking, the good old way: you use a glucometer and somehow record the numbers over weeks or even months, a regimen that is hard to adhere to.
- The second is the A1c test, a blood test that gives the doctor a broader picture (it is quite expensive and can be painful).
- The third is an invasive continuous glucose monitoring system, which requires surgery and is even more expensive.
If the patient sticks with the first option but uses BETTER TOOLS, they save money and avoid the complications that the other procedures may cause. If only there were an easier, more natural way of tracking it…
Market Analysis and Competitive Landscape
The existing solutions for tracking vital signs and glucose on the market today include blood tests and even IoT glucometers, which are quite expensive. They do not address the problem of keeping up the tracking and adhering to the routine. A new solution has the potential to solve this very real problem.
Some statistics:
- The number of people with diabetes has risen from 108 million in 1980 to 422 million in 2014.
- The global prevalence of diabetes among adults over 18 years of age has risen from 4.7% in 1980 to 8.5% in 2014.
- Diabetes prevalence has been rising more rapidly in middle- and low-income countries.
- Diabetes is a major cause of blindness, kidney failure, heart attacks, stroke, and lower limb amputation.
In 2016, an estimated 1.6 million deaths were directly caused by diabetes. Another 2.2 million deaths were attributable to high blood glucose in 2012. Regretfully, this is a big market, and it makes a great showcase for a MATRIX and Snips solution.
More at: https://medlineplus.gov/diabetes.html
And: https://www.who.int/news-room/fact-sheets/detail/diabetes
The main advantage that Snips has over other assistants such as Amazon Echo or Google Assistant is its superior value proposition for privacy: none of your voice gets sent to the cloud. 100% of the machine learning algorithms required to transform your voice into actionable information run fully on device. This is what Snips advertises and calls Privacy by Design.
And for healthcare, which these days is greatly influenced by insurance decisions, privacy is of the utmost importance. That, and GDPR (nervous laugh).
Solution
It will perhaps be the first vital signs and glucose monitor built entirely on the MATRIX Creator using the Snips voice assistant. With just an inexpensive regular glucose monitor, the patient can dictate their readings to Snips and have them recorded. Snips can also read that information back on request, and an additional user interface, accessible by both the patient and the medical provider, shows historical data and trends. It should be as easy as possible for the users. It will also be compatible with the IoT glucometers currently on the market (perhaps even IoT vital signs monitors, if I have time). Additionally, it will be able to send and vocally deliver reminders, which helps the patient adhere to the routine.
Step 1: Snips.ai and the Console
You can follow the very thorough guide at https://docs.snips.ai/articles/console/ to get started on your apps.
Let's begin with the console, which is the web interface you'll use to create a voice assistant and add apps to it from the app store, or create them from scratch.
You'll have to create your own account: Please go to https://console.snips.ai/signup
Sign up with your information; I recommend signing up with GitHub to speed up the process.
IMPORTANT: Snips and the Snips console are a work in continuous progress and renovation. At the time this article was published, most of the apps in the console will not work out of the box with the current version of SAM. Most apps must be tweaked to function properly, and solid knowledge of both Python and Linux is needed to do that. No wonder several of the people who received the kit were not able to design or run an assistant; the learning curve is quite steep, and if it's your first time using a Raspberry Pi, just imagine.
Let's go step by step. First, set up your Raspberry Pi:
https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up
The ONLY OS that works with Matrix+SNIPS is this one:
http://downloads.raspberrypi.org/raspbian/images/raspbian-2019-04-09/2019-04-08-raspbian-stretch.zip
Set up the microphones on the MATRIX with this guide:
https://matrix-io.github.io/matrix-documentation/matrix-creator/resources/microphone/
At this point you should be able to do the following:
Go ahead and install SAM on your computer. To install SAM you will need Node.js from nodejs.org; if you run into trouble, follow this guide:
https://docs.snips.ai/getting-started/quick-start-raspberry-pi
By this point you should have:
- An updated SAM on your computer
- A Raspberry Pi with Snips installed
- The MATRIX microphones installed and operational
- Node.js installed
- npm installed
One more thing: if you are using a speaker and running the Raspberry Pi in desktop mode, set the audio output to ANALOG and type the following:
sudo systemctl restart snips-audio-server
Step 2: Foolproof Algorithm
Now let's go through the foolproof algorithm for uploading the assistant to the Raspberry Pi.
1.- Have everything updated!
2.- Log in to your console account.
sam login
You'll have to provide the username and password from the Snips console.
3.- Then choose the assistant
sam install assistant
You'll have several options, or just one if this is your first time using the console. Just pick the assistant that has the greetings application, select it, and press Enter.
Wait a few seconds; you should get a warning that you have no access to that file. Ignore it for now.
4.- Enter your Raspberry Pi:
You'll have to check your router to get your Pi's IP address, then type the following in your terminal:
ssh pi@<HOSTNAME/IP>
Your username should be "pi" and password "raspberry".
Inside the pi type:
sudo chmod +x /var/lib/snips/skills/SnipsGreetingsandTemplate/action-greetings.py
then:
exit
5.- Now in SAM type:
sam reboot
This will reboot your Pi, and then you are done; magically, everything will work properly!
0.- If you previously had a skill or action code installed, first SSH into your Pi and type:
sudo rm -r /var/lib/snips/skills/*
This should be done before all the other steps.
Step 3: Design Sprint
Now for the fun part (the submission), but not so fast. Designing a VUI (voice user interface) is quite different from designing an app, web page, screen, or any other kind of user interface. So I will use this space to introduce some best practices for designing and developing VUIs.
This is a great introduction to that: https://developer.amazon.com/es-mx/alexa-skills-kit/vui
But, let's start with ours:
1.- Understanding, Ideation and brainstorming:
There are several points you have to cover first if you are doing the sprint with a team. The team must be familiar with the technology and how it works, and should brainstorm together to find a solution to the proposed problem. In this case, also get familiar with Snips apps that solve similar problems. Don't set any restrictions, and try to work out:
- Who their user might be.
- What voice could offer.
- In what context might people use voice.
- What the final benefit for the customer is.
The good part of all this is that we are not really doing it as a team, and most of these points have already been developed! So we can move on to the next step; that said, I leave this here for reference and good practice.
2.- Focus on an idea and start mapping the outcome.
Start by writing on paper what a conversation for your use case should look like (or sound like?). Write down a small script for everything you want to do and test it by yourselves.
For SnipsHealth I want something like this (a minimal action-code sketch based on this script follows the dialog):
SnipsHealth Dialog
S= Snips U= User
U: Hey snips! ---Master
1.- Glucose------------Intents: LastGlu, NewGlu
U - I wish to know my last Glucose level.
S - Your glucose level was: number + milligrams per deciliter
-----------------------------------------------------------------------------------------------
U - I want to log a new glucose measurement.
S - Alright, tell me your new measurement.
U - My measurement is + newNumber
S - Okay, it is now logged.
2.- Heart rate -----Intents: NewHeart, LastHeart
U - I want to take my heart rate
S - Excellent! Put your finger on the device for 15 seconds starting NOW!
Later….
U - Hey Snips…..sfx…. what was my heart rate?
S - Your heart rate was + Heart Rate Number.
3.- Oxygenation ---------Intents: NewOX, LastOX
U - I want to take my Oxygenation
S - Excellent! Put your finger on the device for 15 seconds starting NOW!
Later….
U - Hey Snips…..sfx…. what was my Oxygenation?
S - Your spO2 was + spO2 Number.
4.- Tips -----------Intents: HistoricData
U - What is my historic data predicting?
S - Depending on the glucose level, give one of the following recommendations:
If average Glu is high (more than 200): "You have to implement tight controls on your sugar levels and continue visiting your doctor to implement them. Avoid smoking and try to eat healthily. Exercise has to be approved by your doctor first, but it is highly recommended."
If average Glu is borderline high (between 100 and 200): "This is not all bad. Avoid smoking as much as you can (if possible, quit entirely), control how many carbohydrates and fats you are eating, and try to incorporate physical activity into your daily routine."
If average Glu is normal (below 100): "Staying at a healthy weight can help you prevent and manage problems like prediabetes, type 2 diabetes, heart disease, high blood pressure, and unhealthy cholesterol. Keep at it, you are doing great. Try not to smoke and exercise regularly!"
* Recommendations by the American Diabetes Association.
5.- Hello---------------Intents: Hello
U - Hello there
S - General Kenobi!
(hahaha I couldn’t resist)
6.- Bye--------------Intents: Bye
U - Bye
S - Bye, see you next time
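To make the script above concrete, here is a minimal action-code sketch for three of its intents, written with the hermes-python helper that Snips action code commonly uses. Treat it as a starting point, not the published app: the console usually prefixes intent names with your username, the "measurement" slot name and the glucose_log.json file are assumptions of mine, and the spoken answers are shortened versions of the lines in the script and the Tips section.
# Minimal SnipsHealth action-code sketch.
# Assumptions: intent names may carry a console username prefix, the
# "measurement" slot and glucose_log.json are hypothetical, and the replies
# abbreviate the dialog script above.
import json
from pathlib import Path
from hermes_python.hermes import Hermes

MQTT_ADDR = "localhost:1883"          # the broker Snips runs on the voice kit
LOG_FILE = Path("glucose_log.json")   # hypothetical local log of readings

def load_readings():
    return json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []

def save_reading(value):
    readings = load_readings()
    readings.append(value)
    LOG_FILE.write_text(json.dumps(readings))

def last_glu(hermes, intent_message):
    readings = load_readings()
    if readings:
        text = "Your glucose level was {} milligrams per deciliter.".format(readings[-1])
    else:
        text = "I have no glucose readings logged yet."
    hermes.publish_end_session(intent_message.session_id, text)

def new_glu(hermes, intent_message):
    # Assumes the NewGlu intent has a number slot called "measurement".
    value = intent_message.slots.measurement.first().value
    save_reading(float(value))
    hermes.publish_end_session(intent_message.session_id, "Okay, it is now logged.")

def historic_data(hermes, intent_message):
    readings = load_readings()
    average = sum(readings) / len(readings) if readings else 0
    if average > 200:          # thresholds taken from the Tips section above
        text = "Your average is high. Keep tight control and keep visiting your doctor."
    elif average > 100:
        text = "Your average is borderline high. Watch your diet and stay active."
    else:
        text = "Your average is normal. Keep at it, you are doing great."
    hermes.publish_end_session(intent_message.session_id, text)

with Hermes(MQTT_ADDR) as h:
    h.subscribe_intent("LastGlu", last_glu) \
     .subscribe_intent("NewGlu", new_glu) \
     .subscribe_intent("HistoricData", historic_data) \
     .start()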
3.- Prototype
While you can continue testing the idea on paper and with your own voice, several things may change when you test it on the device, so you'll have to iterate directly on the kit.
So upload your idea and run it. In this case we will do it in two steps:
The first will be just to test the conversation.
The second will be to test the IoT devices made for this PoC.
Instructions for this are provided later!
The IoT devices will be incorporated at a later point also.
4.- User Testing
Time to try it with several people to see if it feels natural! In my case I went to a couple of friends and my grandparents (who are my user personas) and tested it on them.
5.- Analysis and planning
Time to plan the next steps: we have to polish the voice interaction and, as previously stated, integrate the promised IoT medical devices to track vital signs, but first we have to build them. In this phase it is also important to start thinking about how to best sell the idea, in case of commercialization, and about the next steps for the project. But this one is not finished yet, so let's go back to prototyping for now.
Step 4: Vital Signs Devices
For this one I'm going to improve on one of the modules from my past projects (just a single module) and link it to Snips. It is a heart rate and SpO2 monitor based on the MAX30100; these are two variables that are always important to know in a biomedical setting, and they are quite useful. The other device, and the primordial part of this project, is the glucometer.
All this will be done by connecting the ESP8266 to the native MQTT provided by SAM and Snips.
Ha! You thought a beginner's guide + best practices for production + a prototype was too much? If followed correctly, this will also serve as a template for connecting any peripheral device to Snips using its native MQTT broker! And as a guide for building a heart rate + SpO2 device on a NodeMCU.
This is quite easy once you get the hang of the MAX30100 module.
If you have the green RCWL-0530 MAX30100 module, I know your pain. But here is the best solution on the whole internet: instead of desoldering the SMD resistors like some sources say, just add 4.7 kOhm pull-up resistors to the SDA and SCL pins. Always use 5 V for VCC and you are set.
For this project you will need these libraries installed in Arduino IDE:
- PubSubClient: https://github.com/knolleary/pubsubclient
- ArduinoJSON: https://github.com/bblanchon/ArduinoJson
Just go to the Library Manager at Sketch -> Include Library -> Manage Libraries.
MQTT setup
If you need help setting up the ESP8266 go to the official resource:
https://github.com/esp8266/Arduino
Perhaps the most important line of code is:
Wire.begin(D1, D2); // sda, scl
You have to set these pins explicitly, or the sensor will not send anything, because the Wire library defaults to different I2C pins.
You just have to connect the ESP8266 to the Snips MQTT broker (the host is the Raspberry Pi running Snips: "localhost" from the Pi itself, or its IP address from the ESP8266; the port is 1883, and the password can be left empty).
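Before wiring up the full sketch, it can help to confirm that the Snips broker really is listening on port 1883 and to watch what the ESP8266 publishes. Below is a small test script using paho-mqtt, meant to run on the Pi itself (hence "localhost"); the snipshealth/vitals topic is only an assumption for illustration, so match it to whatever topic your ESP8266 code publishes to.
# Quick check of the Snips MQTT broker (paho-mqtt 1.x callback style).
# "snipshealth/vitals" is a hypothetical topic; use the one from your ESP8266 sketch.
import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"            # run this on the Pi that hosts Snips
BROKER_PORT = 1883
VITALS_TOPIC = "snipshealth/vitals"

def on_connect(client, userdata, flags, rc):
    print("Connected to the Snips broker, result code", rc)
    client.subscribe(VITALS_TOPIC)

def on_message(client, userdata, msg):
    # Print whatever the ESP8266 publishes (for example heart rate / SpO2 JSON).
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)
client.loop_forever()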
If you have any more questions about this, I probably solved them in my other project: https://www.hackster.io/Edoliver/health-and-fitness-tracker-753695
The ESP8266 code, as always, is in the project's GitHub repository at the bottom.
Step 5: IoT Integration with Snips
This will be done with another Raspberry Pi 3 B running Node-RED. This gives us the flexibility to add ANY device we want to Snips, and to sense and control it without eating into the processing power of the NLU on the Snips base. Think of this architecture as having a Home Assistant or Broadlink hub. In addition, we can have a cool dashboard, which can be expanded as much as you want!
Here is a look at the architecture of the solution:
Snips runs its own MQTT broker by default, so what we will do is simply listen, subscribe, and publish to that broker from the Node-RED implementation, run a dashboard, and control devices that way.
Starting with Node-RED and running it in the Raspberry pi: https://nodered.org/docs/hardware/raspberrypi
Follow those instructions, people! Please follow them; the version of Node-RED that comes preinstalled on the Raspberry Pi is very old and practically useless.
If you need a primer or a better grasp of Node-RED, check my other guide; there's a section on Node-RED with everything explained:
https://www.hackster.io/107329/aggrofox-large-scale-and-urban-agriculture-iot-solution-8155fe
Install the dashboard nodes (we are not playing around here), and remember to always enable the service with:
sudo systemctl enable nodered.service
Once you have the Node-RED implementation running properly, you can just paste in the flow provided in the GitHub repository at the bottom of this article (really, check my GitHub; it will give you a better grasp of the action code and dependencies). Here is how it looks:
But let's explain the flow a little. What I do here is use the MQTT nodes to subscribe and publish to the native MQTT broker run by Snips.
The ESP8266 is also subscribed to this broker, so whenever you publish something to it, you can act on the result without the central voice kit doing that work. This frees up some processing power. We can also log everything to S3 buckets and have quite a nice user interface.
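If you prefer to read the flow's logic as plain code, this is roughly what it does, sketched here with paho-mqtt: listen for the intents Snips publishes under hermes/intent/ and fire a command at the ESP8266 so it starts a measurement. The snipshealth/esp8266/cmd topic and its payload are assumptions of mine; the actual Node-RED flow in the GitHub repository remains the reference.
# Sketch of the Node-RED flow's logic (paho-mqtt 1.x callback style).
# Assumption: Snips publishes recognized intents under hermes/intent/<intentName>;
# the command topic and payload below are hypothetical, pick your own.
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"                    # the broker Snips already runs
CMD_TOPIC = "snipshealth/esp8266/cmd"   # hypothetical device command topic

def on_connect(client, userdata, flags, rc):
    client.subscribe("hermes/intent/#")  # every intent Snips recognizes

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload.decode())
    intent = payload.get("intent", {}).get("intentName", "")
    # When the user asks for a new measurement, tell the ESP8266 to start one.
    if intent.endswith("NewHeart"):
        client.publish(CMD_TOPIC, json.dumps({"measure": "heart_rate"}))
    elif intent.endswith("NewOX"):
        client.publish(CMD_TOPIC, json.dumps({"measure": "spo2"}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()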
With this logic you can control any device you can imagine and also display and log everything!
Step 6: Showcase and Demo!
Using the previous steps, I implemented an assistant plus action code on Snips to create SnipsHealth for diabetics; you can find it published under that name on the Snips console. By integrating the conversation with MQTT and Node-RED to trigger the sensing device, I was able to put together the following demo:
https://drive.google.com/drive/folders/15fjIWTIK0niEJEA-xMic1WDimvz1OgXs?usp=sharing
As always, all the code is at the bottom of this article. Remember that for it to run you have to include all the dependencies as stated in the beginner's guide portion; the code presented and linked at the bottom is just the action code.
Hopefully you liked the project; if you have any questions, please ask.