Smart cities adopt connected health and technology-enabled care to give citizens the ability to self-manage their health and well-being, to alert healthcare professionals to changes in their condition, and to support medication adherence. By leveraging digital technology, safer and more efficient social care can be delivered.
One obstacle to the adoption of technology-enabled care solutions has been the limitations of the human interface. Access to health information has traditionally depended on healthcare providers manually entering data into a centralized medical record database, which is hard to synchronize across multiple stakeholders. The result is a lack of interoperable patient records and the inaccessibility of critical health information during medical incidents.
Imagine becoming unconscious at home: paramedics arrive, but they need information such as allergies and medical history to provide the best possible treatment. Traditionally, there is no easy way to obtain that data. If all of that information is stored in a database, however, and the paramedics can access it via your home Alexa-enabled device, it could make the difference between life and death.
Convo-Care is a voice-controlled e-health platform for better health records management and accessibility. It uses a natural-language voice interface to help people manage their own health records, and it allows healthcare providers to retrieve accurate records with ease. It is especially useful for people with mobility problems, or during critical or emergency situations.
This application addresses two major safety hazards found in urban environments: difficulty accessing accurate medical information, and difficulty contacting emergency services when unable to reach a phone. These hazards are compounded when the user suffers from conditions such as dementia, Alzheimer's disease, or mobility impairments. With Convo-Care, the user can be assured that help is always available, even when living alone.
Here's a demo of the application in action: [demo video]
Example User Experience Flow
Data Entry
- Step 1: The user talks to Alexa to input profile data, medical history, emergency contacts, and so on.
- Step 2: The Alexa skill is invoked and interacts with the Convo-Care Cloud (powered by AWS) to add records to the database.
- Step 3: Alexa responds to the user with a confirmation that the record has been updated (a minimal handler sketch follows these steps).
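To make the data-entry step concrete, here is a minimal sketch of what such an intent handler could look like, using the ASK SDK for Python and boto3. The intent name, slot names, and the "ConvoCareProfiles" table name are illustrative assumptions, not the project's actual code:

```python
# Hypothetical data-entry handler for the Convo-Care Alexa skill.
import boto3
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

table = boto3.resource("dynamodb").Table("ConvoCareProfiles")  # assumed name

class UpdateProfileIntentHandler(AbstractRequestHandler):
    """Writes one profile field spoken by the user to DynamoDB."""

    def can_handle(self, handler_input):
        return is_intent_name("UpdateProfileIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        field = slots["field"].value          # e.g. "blood type"
        value = slots["value"].value          # e.g. "O negative"
        user_id = handler_input.request_envelope.session.user.user_id

        # Store the spoken value under the user's record.
        table.update_item(
            Key={"userId": user_id},
            UpdateExpression="SET #f = :v",
            ExpressionAttributeNames={"#f": field.replace(" ", "_")},
            ExpressionAttributeValues={":v": value},
        )
        speech = "Your {} has been updated.".format(field)
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(UpdateProfileIntentHandler())
lambda_handler = sb.lambda_handler()
```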
Event Trigger
- Step 1: The user asks Alexa to call emergency medical services.
- Step 2: The Alexa skill is invoked and interacts with a cloud-based e-911 service.
- Step 3: In parallel, the Alexa skill triggers AWS IoT to unlock the front door (assuming a smart lock is installed).
- Step 4: Alexa responds to the user with confirmation that help is on the way (a door-unlock sketch follows these steps).
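The door-unlock step could be implemented with a single publish to AWS IoT from the skill's Lambda function; the gateway subscribes to the topic and drives the lock. A minimal sketch, with an assumed topic name and payload:

```python
# Hypothetical fragment of the emergency handler: ask the home gateway,
# via AWS IoT, to unlock the smart lock for arriving EMS.
import json
import boto3

iot = boto3.client("iot-data")

def unlock_front_door(user_id):
    iot.publish(
        topic="convocare/door/unlock",   # assumed topic name
        qos=1,
        payload=json.dumps({"userId": user_id, "action": "unlock"}),
    )
```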
Action Scenario
- Step 1: Help (EMS) arrives and sees that the home is Convo-Care enabled.
- Step 2: A paramedic talks to Alexa to request retrieval of the medical profile.
- Step 3: The Alexa skill is invoked and relays critical medical information to EMS.
To emulate the most complex part of this service, where the Alexa skill triggers the smart lock, we used an Intel NUC gateway developer kit to illustrate the logic.
The Convo-Care Alexa skill supports dynamic voice input and output for the following fields (a sample stored record is sketched after the list):
- name
- gender
- age
- height
- weight
- date of birth
- blood type
- allergies
- medications
- existing conditions
- emergency contact
- quick medical profile
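For illustration, a profile combining these fields might be stored as a single DynamoDB item along the following lines; the attribute names and sample values are assumptions, not the project's actual schema:

```python
# Illustrative shape of one stored profile record.
sample_profile = {
    "userId": "amzn1.ask.account.EXAMPLE",
    "name": "Jane Doe",
    "gender": "female",
    "age": 72,
    "height": "5 ft 4 in",
    "weight": "140 lb",
    "date_of_birth": "1946-03-15",
    "blood_type": "O negative",
    "allergies": ["penicillin"],
    "medications": ["metformin", "lisinopril"],
    "existing_conditions": ["type 2 diabetes"],
    "emergency_contact": {"name": "John Doe", "phone": "555-0100"},
}
```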
With this skill, the user can easily update their personal health profile with new data at any time via voice control. Imagine the user has just returned from a doctor's visit with a prescription change: the voice input automatically updates the "medications" record in the database.
If a medical emergency occurs and the user has mobility or cognitive impairments, the user can say "emergency" or "911" to trigger the Alexa skill. Upon the paramedics' arrival, phrases such as the following can be used (a retrieval-handler sketch follows the list):
- What is her blood type?
- Give me the medications.
- Does he have any allergies?
- What is the medical profile?
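One way to answer such questions is a retrieval handler that maps the requested field to a profile attribute and reads it back from DynamoDB. The sketch below assumes the same table as above; the intent and slot names are hypothetical:

```python
# Hypothetical retrieval handler: answers "What is her blood type?" etc.
import boto3
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

table = boto3.resource("dynamodb").Table("ConvoCareProfiles")  # assumed name

class GetProfileFieldIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("GetProfileFieldIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        field = slots["field"].value.replace(" ", "_")   # e.g. "blood_type"
        user_id = handler_input.request_envelope.session.user.user_id

        item = table.get_item(Key={"userId": user_id}).get("Item", {})
        value = item.get(field)
        if value is None:
            speech = "I don't have that information on record."
        else:
            speech = "The {} is {}.".format(field.replace("_", " "), value)
        return handler_input.response_builder.speak(speech).response
```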
Setting up the backend database on AWS required the following configuration and dependencies (a provisioning sketch follows the list):
- 1 DynamoDB table
- 1 S3 bucket
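As a rough provisioning sketch, the two resources could be created with boto3 as follows; the table and bucket names are illustrative:

```python
# One-time backend provisioning, keyed on the Alexa user ID.
import boto3

boto3.client("dynamodb").create_table(
    TableName="ConvoCareProfiles",
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)

boto3.client("s3").create_bucket(Bucket="convocare-assets")  # assumed name
```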
The Intel gateway, with its sensors and actuators, was configured with Node-RED via the Intel Developer Hub. We used the following nodes (an equivalent Python sketch follows the list):
- Grove button - reads button presses, emulating any active input from the user
- Grove light - measures the light level, emulating any passive sensor in the environment
- Grove LED - sets an LED indicator on/off, emulating a function at work
- mqtt - sends data to AWS IoT; the output node adds a secure broker for communication between the edge and the cloud
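For readers who prefer code to a Node-RED flow, the following Python sketch approximates what the mqtt output node does: publish a gateway event to AWS IoT over mutual TLS. The endpoint, topic, and certificate paths are placeholders for your own AWS IoT thing configuration:

```python
# Approximation of the Node-RED mqtt output node on the gateway.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.tls_set(
    ca_certs="root-CA.crt",        # Amazon root CA
    certfile="device.pem.crt",     # device certificate
    keyfile="private.pem.key",     # device private key
)
client.connect("YOUR_ENDPOINT.iot.us-east-1.amazonaws.com", 8883)

# Emulate the Grove button press being forwarded to the cloud.
client.publish("convocare/gateway/events",
               json.dumps({"button": "pressed"}), qos=1)
client.disconnect()
```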
Q: Doesn't this already exist with services like Life Alert?
A: Convo-Care is a hands-free alternative that does not depend on dedicated hardware such as a wearable button. Many situations can arise in which the user is unable to reach a phone or press a button for help.
Q: Many target users may find this difficult to set up or may resist new technology. Why is that not the case here?
A: This technology would likely be purchased and set up by the children of elderly users or by family members of disabled users, for additional safety and peace of mind.