Preparing Tylenol for a little kid with a fever in the middle of the night is a tedious task. Liquid medicine is easy for kids to take, but they must be given the right amount. So you need a measuring device, such as a measuring cup, and you have to pour the medicine from the bottle into the cup to measure the desired amount. This is not an easy task if you have shaky hands, weak vision, or are simply not in the mood.
To make preparing liquid medicine easier, the idea is to combine a measuring cup and a medicine bottle into one unit, as in the figure above, with an electronic pump control unit integrated into it. The user can push a button or give a voice command through the cloud to dispense a desired amount of liquid medicine from the bottle into the cup, and then take the medicine from the cup. A see-through cup with liquid level markings lets the user verify the amount before use.
The figure above shows the elements for building a prototype of the liquid medicine dispenser. The smart speaker receives the user's voice commands, converts voice to data, and sends it to the cloud. The cloud processes the command data and sends the command to the electronic pump control unit over Wi-Fi. There may be several IoT devices in the house; the cloud addresses the command to the right device, in this case the liquid medicine dispenser.
At the electronic pump control unit, the controller processes the cloud command received via Wi-Fi and executes a pump process. The controller signals the motor pump driver to start the pump motor, and liquid is conveyed from the bottle to the cup through the tube. The liquid level sensor continuously feeds the cup's liquid level back to the controller. When the level reaches the desired amount, the controller sends a stop signal to the motor pump driver to stop pumping.
The user can also press buttons to order a dispense. The buttons can likewise be used to set and control other operations. The display and speaker provide feedback, schedule reminders, and other operating conditions to the user visually and audibly.
The pump motor can be a miniature pump of various types, such as piston, rotary, piezo, or peristaltic.
The liquid level probe can be optical, capacitive, pressure, or float type.
The whole electronic pump control unit can be self-contained in an enclosure and run from a battery.
The internet connection allows the liquid medicine dispenser to connect with pharmacies, health care providers, and patient databases for dose tracking.
Plans
Based on the elements and their functions described in Figure 2, I can sort out the parts and resources to be used:
Cloud service
1. Cloud: IoT Core, Cloud function, Google Assistant, Google Dialogflow.
2. Smart speaker: Google Home Mini
Software tools
1. MPLAB for coding and programming AVR
2. Atmel START for generating code for AVR
Build Hardware
1. Controller and Wi-Fi: AVR-IoT WG board (Contest hardware award)
2. Motor pump driver: PWM motor driver module (Amazon.com)
3. Motor pump: 6V peristaltic pump (Amazon.com)
4. Liquid level probe: copper tape (Amazon.com)
5. Liquid level sensor: Microchip touch sensor QT2120
6. User buttons: Microchip touch sensor QT2120
7. Display: LEDs
8. Battery: 3.7V Lithium
9. Liquid measuring cup
10. Liquid container
11. Flexible silicon tube
Cloud Service
A cloud service stores your data and conveys messages from the internet to the device and vice versa. I followed the "Voice Control with Google Assistant" instruction document to set up and configure my cloud service:
http://ww1.microchip.com/downloads/en/DeviceDoc/Voice-Control-with-Google-Assistant-DS50002969A.pdf
Step 1: Create Cloud IoT Core
In this step, you register an account and provide the device's information, such as project name, device name, device location, authentication key, and device ID, to the cloud website. The cloud needs this information to address internet messages to and from your device. The Create Cloud IoT Core step lets you enter this registry information. Your account information is proof that you are the device owner, and yes, you have to pay for the cloud service. But you don't have to pay right away: there is a three-month free trial with $300 in credit.
The image above captures some of the registry information for creating the IoT Core. The IoT Core can be considered a gateway to an IoT device.
Step 2: Create and Set Up the Cloud Function
As I understand it, a cloud function is a function that runs in the cloud. It looks like a regular C function running on a typical local computer or microcontroller, and each function has a unique name. When called, the function executes and processes the items listed in it. For this project, the cloud function is written in Python and stored in the cloud, and its trigger is a unique URL generated by the cloud. When triggered, the function runs in the cloud and does whatever we want, especially internet-related work.
A reference "main.py" file contains the processing that runs when the Cloud Function is triggered. I modified this file with my project ID and other registry information so that messages are delivered to and from my dispenser device. I also modified the "process_voice(request)" function to pack and send my payload data.
main.py
from googleapiclient import discovery
import base64
# IMPORTANT: Change these fields to your projects settings
PROJECT_ID = "voice-to-avr-299516"
IOT_CORE_REGION = "us-central1"
IOT_CORE_REGISTRY_ID = "voice-device"
IOT_CORE_DEVICE_ID = "d01239F1C3233829AFE"
# Code obtained at https://cloud.google.com
def get_gcloud_client():
    api_version = 'v1'
    discovery_api = 'https://cloudiot.googleapis.com/$discovery/rest'
    service_name = 'cloudiotcore'
    discovery_url = '{}?version={}'.format(
        discovery_api, api_version)
    return discovery.build(
        service_name,
        api_version,
        discoveryServiceUrl=discovery_url,
        credentials=None,
        cache_discovery=False)

# Code obtained at https://cloud.google.com
def send_message_to_device(project_id, cloud_region, registry_id, device_id, payload):
    """
    Sends a message to an IoT device through the config pubsub topic.
    (The config pubsub topic is /devices/d_id/config)
    :param project_id: Google Cloud project ID
    :param cloud_region: which region the device is located in, for instance us-central1
    :param registry_id: IoT Core registry the device is located in
    :param device_id: the device ID
    :param payload: message payload to deliver to the device
    :return: the API call result
    """
    client = get_gcloud_client()
    device_path = 'projects/{}/locations/{}/registries/{}/devices/{}'.format(
        project_id, cloud_region, registry_id, device_id)
    config_body = {
        'binaryData': base64.urlsafe_b64encode(
            payload.encode('utf-8')).decode('ascii')
    }
    return client.projects(
    ).locations().registries(
    ).devices().modifyCloudToDeviceConfig(
        name=device_path, body=config_body).execute()

def process_voice(request):
    request_json = request.get_json()
    queryResult = request_json['queryResult']
    parameters = queryResult['parameters']
    unitVolume = parameters['unit-volume']
    amount = str(int(unitVolume['amount']))
    unit = unitVolume['unit']
    payload = '{{"amount":"{}","unit":"{}"}}'.format(amount, unit)
    print("Sent {} {} to device".format(amount, unit))
    send_message_to_device(PROJECT_ID, IOT_CORE_REGION, IOT_CORE_REGISTRY_ID, IOT_CORE_DEVICE_ID, payload)
Note the process_voice(request) function near the bottom of the file.
This function receives a request from Dialogflow, extracts the parameters from it, and packs them into a payload. Dialogflow is covered in the next section; the request is to dispense an amount of liquid, for instance "Please dispense 5 ml." The function extracts two parameters from the request: (1) "amount" and (2) "unit". Here the value of the amount is "5" and the unit is "ml".
Finally, the payload is combined with other information, such as the project ID, device region, registry, and device ID, and sent out to the device. This information addresses the payload to the correct device in the cloud; in this case, the message is addressed to my liquid dispenser because the device was registered when the IoT Core was created in the first step.
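The payload packing can be sanity-checked locally without any Google libraries. Below is a short standalone sketch that builds the same JSON payload string used by process_voice and base64url-encodes it the way send_message_to_device fills the binaryData field, then decodes it back as the receiving side would:

```python
import base64
import json

def pack_payload(amount, unit):
    # Same format string used in process_voice():
    payload = '{{"amount":"{}","unit":"{}"}}'.format(amount, unit)
    # Same encoding used in send_message_to_device() for 'binaryData':
    return base64.urlsafe_b64encode(payload.encode('utf-8')).decode('ascii')

def unpack_payload(binary_data):
    # Reverse step, as the device side would perform it.
    raw = base64.urlsafe_b64decode(binary_data.encode('ascii'))
    return json.loads(raw.decode('utf-8'))

encoded = pack_payload("5", "ml")
decoded = unpack_payload(encoded)
print(decoded)  # {'amount': '5', 'unit': 'ml'}
```

Running the round trip confirms the device receives exactly the amount and unit that Dialogflow extracted.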
The Cloud Function setup also includes this "requirements.txt" file:
requirements.txt
google-cloud-storage
google-auth==1.6.2
google-api-python-client==1.7.8
google-auth-httplib2==0.0.3
google-cloud-pubsub==0.39.1
paho-mqtt==1.4.0
pyjwt==1.7.1
oauth2client
When the Cloud Function configuration step is completed, the server generates a trigger URL, as seen at the bottom of the capture. This trigger URL will be used in the next step.
Step 3: Work with Google Dialogflow
Dialogflow is an online service that helps users create conversations and interactions between human and machine, or machine and machine. It involves text-to-speech, speech-to-text, voice recognition, artificial intelligence, and more. People can use Dialogflow to build chatbots ranging from simple to complex dialogs.
For this project, Dialogflow is used to recognize my liquid-dispense voice/text commands; when a command is understood, Dialogflow calls the trigger URL to run the Cloud Function mentioned in the previous step.
To have Dialogflow understand my commands, I needed to give it training phrases. I used the three phrases below:
1. “Please dispense 5 ml.”
2. “Could you please dispense ten ml?”
3. “Dispense 20 ml.”
Dialogflow identifies entity parameters in the phrases to understand them better. It needs the dispensed amounts and their unit defined in the phrases. The yellow-highlighted words in the phrases are the amount and unit keywords listed in the table below. The amount and unit are assigned to be recognized as sys.unit-volume. The amount can be a digit (5, 10, and 20) or a word (five, ten, and twenty). Dialogflow can understand other amount values even though it is trained with only three numbers (5, 10, and 20). The unit "ml" is interpreted as "milliliter" by Dialogflow.
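To see how this entity extraction feeds the Cloud Function, here is a simplified sketch of the fulfillment request Dialogflow posts to the webhook (only the fields that process_voice in main.py actually reads are shown; the real request carries many more), with the same extraction steps run on it:

```python
import json

# Simplified fulfillment request for "Please dispense 5 ml."
# Only the fields that process_voice() reads are shown here.
request_json = json.loads("""
{
  "queryResult": {
    "queryText": "Please dispense 5 ml.",
    "parameters": {
      "unit-volume": {"amount": 5.0, "unit": "ml"}
    }
  }
}
""")

# Same extraction steps as process_voice() in main.py:
query_result = request_json['queryResult']
parameters = query_result['parameters']
unit_volume = parameters['unit-volume']
amount = str(int(unit_volume['amount']))   # Dialogflow sends 5.0 -> "5"
unit = unit_volume['unit']                 # "ml"
print(amount, unit)
```

Note that Dialogflow delivers the amount as a number (5.0), which is why process_voice converts it with str(int(...)) before packing the payload.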
After entering the training phrases, a response phrase, and the quantity-unit definitions into the dialog boxes, the dialog can be tested right away.
The capture above is the dialog configuration for my liquid dispenser device. The left side of the capture shows the training phrases and response mentioned above; the right side shows a test of the dialog.
You can type, or, if you have a microphone connected to your computer, say the phrase "Please dispense five ml." You will then receive the response "Dispense 5 milliliters to the cup."
I type or say: "Please dispense 5 ml."
Dialogflow responds: "Dispense 5 ml to the cup."
Now I want to make something more useful out of Dialogflow. After Dialogflow responds to what I type or say, I want it to take an action. Remember the Cloud Function from the previous section? That function processes a request and sends a dispense command with the liquid amount to my dispenser, and I want Dialogflow to call, or trigger, that function. To do this, I hooked the Cloud Function up to Dialogflow by copying the trigger URL and pasting it into the Webhook URL field in the Fulfillment section of Dialogflow.
If everything is right, the Dialogflow test happens as below:
1. I type or say: "Please dispense 5 ml."
2. Dialogflow responds: "Dispense 5 ml to the cup."
3. Dialogflow triggers the Cloud Function and passes on the request. The Cloud Function extracts the parameters {"unit": "ml", "amount": "5"} from the request, packs them into a payload, and sends it to my liquid dispenser over the internet.
4. A program running on the AVR-IoT board of the liquid dispenser receives the payload and dispenses the desired amount of 5 ml.
Step 4: Google Smart Speaker Integration
We now have Dialogflow carrying the conversation, receiving the user's request and responding to it, as well as triggering the Cloud Function to send a payload to the dispenser device. We can go one step further. Instead of having a computer connected to the cloud and typing on a keyboard or speaking into a microphone to request an action from an IoT device such as this dispenser, we can use a Google smart speaker to do the computer's job. I'm talking about integrating the smart speaker and Google Assistant with Dialogflow.
There are a few requirements and configuration steps to integrate the smart speaker and Google Assistant with Dialogflow.
Requirements:
1. Google Mini Smart Speaker or another Google speaker device
2. Wi-Fi connection
3. A smart phone or tablet with Google Home app installed
4. Use the same account for Google Home, Dialogflow and Google Cloud
5. Power the smart speaker and go through the regular setup steps of the Google Home app on the phone or tablet
6. Now the smart speaker is connected to the cloud over Wi-Fi.
Dialogflow Integration:
From the main Dialogflow console, click the Integrations tab to start the integration setup. Select Google Assistant to be integrated with Dialogflow. When the Google Assistant integration window pops up, select "dispenseIntent" for Implicit invocation. The "dispenseIntent" was created during the Dialogflow configuration; selecting it links Google Assistant with Dialogflow. After this step, click "TEST", and you can start communicating with the Google cloud by voice through the Google smart speaker.
The diagram above summarizes the cloud operation with Google Assistant and the Google Home Mini integrated, carrying a message from the user to the dispenser. Normally, the Google Home Mini functions like a regular smart speaker: it responds to calls like "Hey Google" or "Okay Google," and the user can ask it today's weather or almost anything you can think of, through the link between the Google Home Mini and Google Assistant. But when I request "Talk to My Dispenser," Google Assistant links the Google Home Mini with Dialogflow; what I say to, and hear from, the speaker is then linked to Dialogflow, and I can order an amount of liquid to be dispensed by voice.
Here is a conversation between the Google Home Mini and me as a user:
I say: "Hey Google"
The Home Mini turns on four LEDs to indicate it's ready.
I say: “Talk to My Dispenser.”
Google Home Mini: "All right. Here is the test version of My Dispenser."
Google Home Mini (another voice): "Greetings! How can I assist?"
I say: “Dispense 5 ml.”
Google Home Mini: "Dispense 5 milliliters to the cup."
The dispenser device was named "My Dispenser" during the Dialogflow integration configuration. The exchange where the user says "Dispense 5 ml" and Google responds "Dispense 5 milliliters to the cup" is configured via a Dialogflow intent.
When Dialogflow receives and recognizes the "Dispense 5 ml" phrase, it sends the response phrase "Dispense 5 milliliters to the cup" to Google Assistant, which passes it to the Google Home Mini speaker, where it is spoken aloud. Dialogflow also triggers the Cloud Function to send a payload containing the dispensed amount and unit to the dispenser. The Cloud IoT Core addresses the payload to My Dispenser.
Coding and Programming the Microcontroller
I spent quite a lot of time at the beginning searching for and trying software development tools, but once I got used to them, everything went smoothly throughout the project.
Below are the software development tools I used for this project.
1. MPLAB
a. MPLAB IDE v5.45.
b. MPLAB XC8 Compiler v2.31
c. MPLAB Code Configurator MCC v4.02
2. Atmel START
a. Baseline of AVR-IoT WG example code
b. Generate code for PWM motor
c. Generate code for cloud configuration
The MPLAB IDE is the main software development tool; I used it to write C code, then compile, load, and debug the program on the AVR microcontroller. The IDE detects the AVR-IoT WG board when it's plugged in.
The MPLAB XC8 compiler is selected to compile my C code and generate the output file to run on the microcontroller, because the microcontroller on the AVR-IoT WG board is an 8-bit AVR ATmega4808 MCU.
The MPLAB Code Configurator (MCC) is an optional additional tool that helps build a project and generate C code and drivers by configuring the functions and parameters of a selected microcontroller. MCC visualizes the embedded peripherals and how they are linked in the microcontroller system, such as clock distribution, PWM, the I2C bus, and SPI.
Below is my programming flow chart for the dispenser.
After initialization and configuration, two processes run. Process 1 loops about every two seconds: it sends the local ambient temperature and light conditions to the cloud and checks whether a message has been received from the cloud. If a valid message is received, the amount and unit parameters of the liquid to be dispensed are extracted and used to set up the dispense process. Below is the receiveFromCloud function, which handles messages received from the cloud.
Process 2 checks whether there is a dispense command from a user button or from the cloud; if so, the dispense process starts. Process 2 continuously checks the cup's liquid level, and when the level reaches the commanded level, the dispense process is done. Process 2 also checks whether the STOP button is pressed; this button lets the user stop the dispense process at any time.
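Process 2 amounts to a small state machine. The firmware is C; purely as a sketch of the logic, with hypothetical callbacks standing in for the QT2120 level sensor, the PWM pump driver, and the STOP button:

```python
def run_dispense(target_level_ml, read_level_ml, set_pump, stop_pressed):
    """Sketch of process 2: start the pump, watch the liquid level,
    and stop at the target level or when the STOP button is pressed.
    read_level_ml/set_pump/stop_pressed are hypothetical callbacks,
    not names from the actual firmware."""
    set_pump(True)
    while True:
        if stop_pressed():                   # user can abort at any time
            set_pump(False)
            return 'stopped'
        if read_level_ml() >= target_level_ml:
            set_pump(False)                  # target reached: done
            return 'done'

# Tiny simulation: the level rises 1 ml per poll, STOP never pressed.
level = {'ml': 0}
def fake_level():
    level['ml'] += 1
    return level['ml']

result = run_dispense(5, fake_level, lambda on: None, lambda: False)
print(result, level['ml'])  # done 5
```

Checking the STOP button before the level comparison gives the user abort priority over the normal stop condition.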
Build project baseline for AVR-IoT WG board
I followed the AVR Home Automation Kit instructions, linked below, to build the basic firmware for the AVR-IoT WG board. It contains code for Wi-Fi, cloud connection, sensors, libraries, configuration, and a framework ready to compile and download onto the AVR-IoT WG board. I only needed to add the code modules that support the liquid dispenser.
The yellow blocks are the software modules I needed to develop to support the orange hardware blocks of the dispenser.
For the capacitive Qtouch sensor driver, I used driver library from https://github.com/SmartTech/AT42QT
For the PWM Driver, I used the online tool Atmel START at https://start.atmel.com/ to configure and generate code for PWM.
As seen in the capture, the PWM_0 peripheral block at the top right is added to the AVR-IoT WG example project, configured to use timer/counter TCA0 to generate the PWM waveform output (WO) at pin PA0. From the main system clock of 10 MHz, the Top Value is set to 0x3e8 (1000) to generate the 20 kHz PWM.
Atmel START generated the following code for me.
The code above initializes PWM_0 to generate 20 kHz PWM at pin PA0. To set the PWM duty cycle, use this function:
PWM_0_load_duty_cycle_ch0(pumpPWM);
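The argument to PWM_0_load_duty_cycle_ch0 is a compare value counted against the timer's Top Value, not a percentage. As a quick sketch of that conversion (assuming single-slope PWM with the Top Value of 0x3e8 configured above; the helper name here is hypothetical, not from the firmware):

```python
PWM_TOP = 0x3E8  # 1000, the TCA0 Top Value configured in Atmel START

def duty_counts(percent):
    """Convert a duty cycle in percent to the compare value passed
    to PWM_0_load_duty_cycle_ch0(). Assumption: single-slope PWM,
    compare value counted against PWM_TOP."""
    if not 0 <= percent <= 100:
        raise ValueError('duty cycle must be 0..100 %')
    return int(PWM_TOP * percent / 100)

print(duty_counts(50))   # 500  -> half-speed pump
print(duty_counts(100))  # 1000 -> full speed
```

Varying this compare value on the fly is what lets the firmware change the pumping rate while dispensing.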
Check the whole project code for more details at https://github.com/txnghia2020/liquid_dispenser
Build Hardware
Components
1. Controller and Wi-Fi: AVR-IoT WG board (Contest hardware award)
2. Motor pump driver: PWM motor driver module (Amazon.com)
3. Motor pump: 6V peristaltic pump (Amazon.com)
4. Liquid level probe: copper tape (Amazon.com)
5. Liquid level sensor: Microchip touch sensor QT2120
6. User buttons: Microchip touch sensor QT2120
7. Display: LEDs
8. Battery: 3.7V Lithium
9. Liquid measuring cup
10. Liquid container
11. Flexible silicon tube ID: 1mm, OD: 3mm
Peristaltic pump
Finding a liquid pump was my number one priority because I knew little about them. I came across some pump types used in the medical field, but they carry very high prices. Other types, like piezo pumps, seemed very good for this application but are still expensive and not easy to purchase. I finally found this peristaltic pump on Amazon.
This video records what I learned and experienced with the peristaltic liquid pump. The motor draws about 250 mA at 5 V.
PWM Motor Driver
This PWM motor driver drives the peristaltic pump motor. The driver receives the PWM input signal from the AVR microcontroller and provides a high-current PWM output to the motor. The driver can support motors up to 15 A. The peristaltic pump does not need that much current, but this driver is readily available and easy to use for prototyping.
Liquid Level Sensor
The liquid level sensor was the thing I worried about most. I had heard about using capacitance to sense liquid level, but I could not find many resources or references.
I spent time investigating one of the capacitive touch sensors from Microchip. The QT2120 can be used for finger-touch buttons and wheel or slider inputs. In communication mode, a host microcontroller can read the status of the touch input keys via the I2C bus (pins SDA and SCL on the pinout).
Touch Sensor Chip QT2120
I conducted an experiment using liquid capacitance to emulate the effect of finger capacitance on a key input of the QT2120, and it was successful. Each key input is used to detect a liquid level in the measuring cup. In the Discrete Liquid Level Sensor video, five input keys (KEY0 to KEY4) detect levels of 5 ml, 10 ml, 15 ml, 20 ml, and 25 ml.
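Reading the QT2120 key status over I2C yields a bitmask of touched keys; with the probes arranged at increasing heights, the highest wetted key indicates the level. As an illustrative sketch (the mapping and bit layout below are assumptions for this project's wiring, not taken from the datasheet):

```python
# KEY0..KEY4 detect 5, 10, 15, 20 and 25 ml respectively.
LEVELS_ML = [5, 10, 15, 20, 25]

def level_from_keys(key_status):
    """Return the liquid level in ml given a key-status bitmask
    (bit n set = KEYn wetted). The highest set bit wins, since
    lower probes stay wetted as the level rises."""
    level = 0
    for bit, ml in enumerate(LEVELS_ML):
        if key_status & (1 << bit):
            level = ml
    return level

print(level_from_keys(0b00000))  # 0  (empty cup)
print(level_from_keys(0b00111))  # 15 (KEY0..KEY2 wetted)
```

Taking the highest set bit rather than counting bits makes the reading tolerant of a single lower key dropping out momentarily.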
User Input Touch Buttons
The QT2120 is also used for the dispenser's touch input buttons. Four input keys (KEY5 to KEY8) detect finger input for the 5 ml, 10 ml, 15 ml, and STOP dispense commands. Please see the Homemade Capacitive Touch Buttons video below.
AVR-IoT WG Board
The AVR-IoT board is a good fit for my application. Besides managing the cloud connectivity, the dispenser application needs the ATmega4808 microcontroller to drive the pump motor, read the liquid level sensor, and handle a few user button inputs.
To drive the peristaltic pump motor, a PWM peripheral on the microcontroller generates a 20 kHz PWM whose duty cycle can be changed on the fly to vary the pumping or dispensing rate. Pin PA0 is selected for the PWM waveform output. (Note: generating the PWM waveform output at pin PD4 was not successful.)
For the liquid level sensor, the I2C bus is selected to interface with the QT2120 capacitive touch sensor. The QT2120 shares the I2C bus with the ATECC608A crypto authentication chip and the MCP9808 temperature sensor. The QT2120 is also used for the input buttons that let the user choose an amount of liquid to dispense and control the dispense process.
Here are the pins used for the liquid dispenser:
PA0: Generate 20kHz PWM
PA2: I2C_SDA. Two-wire interface data for touch buttons and liquid level sensor.
PA3: I2C_SCL. Two-wire interface clock for touch buttons and liquid level sensor.
Schematic
Here are the schematic details and the pin resources used for the dispenser.
The schematic diagram above can be considered as comprising three circuits:
1. Control circuit: small-current signals from the liquid level sensor and user buttons to the ATmega4808 on the AVR-IoT WG board. The control circuit also includes the PWM signal from the microcontroller to the PWM motor driver.
2. Power circuit: high-current flow from the battery to the peristaltic pump motor. The PWM signal controls the duty cycle of the current flow from battery to motor; the higher the duty cycle, the faster the dispense rate.
3. Liquid circuit: the main liquid container (medicine bottle), the secondary container (cup), and the tube conveying liquid from the bottle to the cup via the peristaltic pump. Liquid can be pumped to or extracted from the cup; in this application, liquid is only pumped to the cup.
Demonstration
Conclusion
I'm very glad that I got my first prototype of an IoT device running. It was very exciting to see my voice command routed from a smart speaker to the cloud, and from the cloud to my dispenser, and then, after a few seconds, a desired amount of liquid dispensed into the cup. The prototype seems to work fine, but it might scare a little kid at first. I wish I could make everything smaller and enclose it in a nice self-contained unit.
The AVR-IoT WG board comes with a Wi-Fi module and supporting software for connecting to the cloud and managing data transfer, and it works reliably. Every time power was applied, the board quickly connected to Wi-Fi and Google Cloud, and data could be transferred. I completed the project without touching much of the Wi-Fi and connection code, other than changing some registry information and setting an authentication key.
The AVR-IoT board may not seem to have many I/O resources available, but it can actually adapt to numerous ideas. Besides the GPIOs for digital input/output, the SPI, I2C, and UART interfaces let developers connect the board to a variety of peripheral devices, such as sensors and actuators, and extend its connections to other processors if needed. The AVR-IoT board also includes analog inputs for reading sensor values.
For coding the AVR-IoT board, once I had nailed down the MPLAB IDE and Atmel START as software tools, I felt very comfortable adding files and code to the example project to implement the dispenser's processes.
The experiment of using touch sensors to measure liquid levels is, I think, valuable for other applications as well. I hope you enjoy the videos.
Thanks to my partners, Hoa Phan and Thien Phung, for supporting and providing resources for this project.
Thanks to the contest organizer for giving us the chance to learn new things and challenge our skills. And good luck to all contestants.
References
AVR-IoT WG Liquid Dispenser source code
https://github.com/txnghia2020/liquid_dispenser
Instructions for Google Cloud and Google voice setup and configuration
http://ww1.microchip.com/downloads/en/DeviceDoc/Voice-Control-with-Google-Assistant-DS50002969A.pdf
Documentations and other guidance
https://www.microchip.com/DevelopmentTools/ProductDetails/PartNO/AC164160
Online tool for generating code
https://start.atmel.com/
More tutorial for AVR-IoT WG board
https://www.mouser.com/applications/connecting-google-cloud-iot-avr-iot-wg-eval-board/
QTouch sensor
https://microchipdeveloper.com/touch:slider-sensor-design-when-using-modular-qtouch-library