An AI assistant for people with brain damage and memory loss (e.g., after a stroke). It listens to the patient and answers their questions with speech, based on data shared by caregivers.
Why did I decide to make it?
Many people experience brain damage and partial memory loss after a stroke. Some of them partially lose the ability to form new memories, but they can still speak, are very curious, and ask many questions to fill the gaps in their memory. Answering the same questions over and over is very taxing for caregivers.
How does it work?
The solution listens to a person with memory loss and answers their questions with speech, based on a predefined profile and common knowledge. In addition, it locally transcribes conversations for later review by the caregiver. It also shares analytics with the caregiver through a dashboard, so the caregiver can review trends and anomalies in the patient's behavior and monitor the patient's progress and well-being.
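As an illustration, the caregiver-maintained profile could be a simple structured file loaded at startup and folded into the assistant's system prompt. The file name, fields, and helper functions below are hypothetical, not the project's actual format:

```python
import json

def load_profile(path="patient_profile.json"):
    """Load caregiver-maintained facts about the patient (hypothetical schema)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def profile_to_system_prompt(profile):
    """Flatten the profile facts into a system prompt the LLM can ground answers in."""
    facts = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    return (
        "You are a patient, friendly assistant for a person with memory loss. "
        "Answer briefly and kindly, using these caregiver-provided facts:\n" + facts
    )
```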
The solution runs on a MINISFORUM Venus UM790 Pro Mini PC with an AMD Ryzen 9 7940HS CPU and Radeon 780M graphics, under Windows 11 Pro.
The code is a fork of KoljaB/LocalAIVoiceChat, modified to persist dialogs into a SQLite database and to use Microsoft's Phi-3-mini-4k-instruct GGUF LLM for faster responses. An additional component is the Stroke Care Activity Dashboard ⏳, built on the Streamlit framework.
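A minimal sketch of the two modifications, assuming the llama-cpp-python bindings for the GGUF model; the table layout, file names, and function below are illustrative rather than the fork's exact code:

```python
import sqlite3
from datetime import datetime, timezone

from llama_cpp import Llama

# Load the quantized Phi-3 model; the file name is one of the published
# GGUF variants and may differ from the one used in the project.
llm = Llama(model_path="Phi-3-mini-4k-instruct-q4.gguf", n_ctx=4096)

db = sqlite3.connect("dialogs.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS dialogs (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           ts TEXT NOT NULL,          -- UTC timestamp of the exchange
           question TEXT NOT NULL,    -- transcribed patient speech
           answer TEXT NOT NULL       -- LLM response spoken back
       )"""
)

def answer_and_persist(question: str, system_prompt: str) -> str:
    """Query the LLM and record the exchange for the caregiver dashboard."""
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        max_tokens=256,
    )
    answer = result["choices"][0]["message"]["content"]
    db.execute(
        "INSERT INTO dialogs (ts, question, answer) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), question, answer),
    )
    db.commit()
    return answer
```

In this sketch, `answer_and_persist` would be called with the transcribed question and the system prompt built from the caregiver profile (see the profile sketch above).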
The high-level flow of the solution:
• A microphone captures the patient's speech as an audio signal.
• A speech recognition module (CoquiEngine) transcribes the audio signal into text.
○ The question text is converted into a vector
○ A system prompt stores patient profile data and related context
○ The question is sent to the LLM
○ The LLM produces the response as text
○ The conversation is recorded into a SQLite table
• A text-to-speech module converts the response into audio and plays it back to the patient.
• The dashboard runs SQL queries and visualizes the data as a set of charts and a table.
The charts help analyze patient interactions and compare them historically; a minimal dashboard sketch follows.
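A sketch of such a dashboard, assuming the `dialogs` table from the persistence sketch above; the query and chart choices are illustrative:

```python
import sqlite3

import pandas as pd
import streamlit as st

st.title("Stroke Care Activity Dashboard")

# Read the recorded dialogs; assumes the `dialogs` table sketched earlier.
conn = sqlite3.connect("dialogs.db")
df = pd.read_sql_query("SELECT ts, question, answer FROM dialogs", conn)
df["ts"] = pd.to_datetime(df["ts"])

# Questions per day: a simple proxy for how active or repetitive the patient is.
per_day = df.set_index("ts").resample("D").size().rename("questions")
st.subheader("Questions per day")
st.bar_chart(per_day)

# Raw conversation log for caregiver review.
st.subheader("Recent conversations")
st.dataframe(df.sort_values("ts", ascending=False).head(50))
```

Saved as, say, `dashboard.py`, it can be served locally to the caregiver with `streamlit run dashboard.py`.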