We address the following problems that visually impaired people face when using a first aid kit:
- Labelling: Traditional first aid kits usually have labels and instructions printed in text, which can be hard for visually impaired people to read.
- Organization: If the items in the first aid kit are not arranged in a clear and orderly way, it can be difficult to find what you need quickly in an emergency.
- Contents: The kit might have small items that look similar and are hard to tell apart by touch alone. For example, bandages, ointments, and antiseptics might feel the same and be hard to identify without looking.
- Emergency Procedures: If the kit includes emergency procedures or first aid steps, they may not be provided in formats that are accessible to visually impaired people.
We provide a solution to these problems in this project. These problems were identified after my idea submission, with the help of feedback from the contest masters. Thank you, contest masters, for your valuable feedback and helpful ideas!
Solution: We provide solutions for the problems listed above in this project. Each problem has been thoroughly analysed to develop a comprehensive approach, and we have implemented computer vision and Generative AI models to assist visually impaired individuals, offering a solution for each issue.
In this project, we provide solutions using automation algorithms and Generative AI. With our system, visually impaired individuals can receive the correct tablets through voice commands. For example, when the user says, "I want a tablet for cold," the first aid box will automatically open the container that holds cold-related tablets. The AI will then guide the user by voice: "The first tablet box has opened, and the tablet is on the left side of the box" or "The second tablet container has opened, and it is at the front side of the box." After the tablet is retrieved, the Generative AI will provide information about it.
If the user says, "insect bite," the appropriate container will open, and the AI will provide step-by-step instructions. The container includes 4 crepe bandages, an instant cold pack, disposable nitrile gloves, a resuscitation shield, tweezers, 5 itch relief cream sachets, and 1 cardboard splint. The AI will guide the user through each step to treat their wound.
Key Features of This Project:
- The user can obtain tablets and other first aid kit items through voice commands, with instructions provided by the AI voice.
- If the user does not know the information about a tablet, they can simply show the front and back sides of the tablet in front of the first aid box. Using the Seeed Studio XIAO ESP32S3 Sense and its camera module, the embossed text on the tablet's front and back sides is recognized and sent to the Generative AI model (Gemini or GPT-4). The model then provides information about the tablet in text format, which is converted to voice.
- All first aid kit items are well-organized in the first aid box, which has two layers. The first layer contains daily needed items and tablets. The tablets are stored in separate containers. Normally, there are five small containers on both the left and right sides, and four containers in the front.
- In the second layer, the containers are customized based on the size of the items. This layer holds the essential first aid kit items, such as scissors, bandages, ointments, creams, and more.
- In this box, the items are well organized, making it easy to retrieve items during emergencies through voice commands.
- If you need any first aid instructions, just ask, and the AI will reply with a step-by-step procedure.
- We also provide an Android app for scanning tablets and ointments to obtain information via voice. In this app, the first aid box data is displayed, including the items and their expiry dates. You can access this information through voice commands by saying, "Tell me the item details".
- In this project, three months before the expiration of an item, you and your guardian will receive a message and notification. This message will also be sent to your medicine provider.
First aid box design using Blender:
The implementation process for the solutions is described below, step by step, including data management, hardware setup, and data flow.
Data Preparation and management:
We are using MongoDB Atlas and MongoDB to store data in a database named Smart First Aid. For demo purposes, we only store the container name or number, tablet name, and expiry date in the database. This data is stored via an Android app, which is connected to this database.
- MongoDB Atlas is a cloud service that takes care of MongoDB for you. It handles things like setting up, scaling, and backing up the database. It makes deployment easier by managing the infrastructure automatically and offers extra features like global clusters and built-in security.
- MongoDB is a NoSQL database that keeps data in flexible, JSON-like documents, making it easy to manage and scale. You need to set it up and manage it yourself on your own servers.
Step 1: Visit the MongoDB Atlas website and click "Start Free" to create an account.
Step 2: After logging in for the first time, you will see a welcome page and a form. Enter your details and click "Submit."
Step 3: After you submit the form, the cluster creation page will open automatically. If you don't want to create a cluster at this time, simply click the "I'll do this later" button to close the page.
Step 4: In your dashboard, you will see a large "Create" button under the "Data Services" tab. Click the "Create" button to start creating your database and data cluster.
Step 5: After clicking the "Create" button, you will be directed to the cluster creation page. Here, you will see three types of storage options. For the demo, choose the Free M0 storage space. Next, enter your cluster name, select the provider, and click the "Create Deployment" button to create your first cluster.
Step 6: After the cluster is created, the "Connect" page will automatically open. In the "Set Up Connection Security" step, enter your new username and password, then click the "Create Database User" button.
Step 7: Next, click the "Choose a Connection Method" button. The available connectivity options will be listed. Select the "Compass" option to manage and view your data in MongoDB Compass. After selecting this option, you will see a connection string for MongoDB Compass under the "Use this connection string in your application" label.
Step 8: Copy the connection string and paste it into the "New Connection URI" field in MongoDB Compass. Click "Connect." After a successful connection, you will be able to view your database data and collections.
Note that database creation and data insertion queries are covered in the subsequent API creation steps.
Sequence of pictures for the above steps:
API Creation for Sending and Receiving Data from the Android Application:
For API creation, we are using the Vercel platform. Vercel offers a free plan that lets you deploy and host your websites. This free plan is good for personal projects, hobby sites, and small applications. With it, you get:
- Deployments: You can deploy your projects with automatic builds and global delivery.
- Serverless Functions: You get a certain amount of free time each month to run serverless functions.
- Static and Dynamic Hosting: It supports both static sites and dynamic features through serverless functions.
- Collaborative Features: You can invite team members and work together on projects.
- Custom Domains: You can use your own domain names and get HTTPS certificates automatically.
The steps below describe how to create an API using Vercel:
Step 1: Visit the Vercel platform and create an account using your GitHub account.
Step 2: After creating an account, install Vercel on your GitHub repository to start creating the API.
Step 3: Click "Add New", then choose to import the GitHub repository that contains your Vercel API code. The GitHub repository files are explained below.
Step 4: Before doing this step, read the Vercel API GitHub file creation steps below. After importing the repository, you will be redirected to the project configuration page; enter your project name and click "Deploy".
Step 5: After clicking "Deploy", you will see the deployment status and a success message. Go to your dashboard, click your project, and click the "Visit" button to open the data page, then copy the link. This link will be pasted into Notehub.io when creating the route.
Creating the GitHub files for Vercel:
First, we need to create a Python script to send and receive the data, process it, and store it in the MongoDB database. The Python script is explained below:
MongoDB connection code:
# MongoDB URI connection string
uri = f"mongodb+srv://{username}:{password}@mycluster.oilvp.mongodb.net/?retryWrites=true&w=majority&appName=MyCluster"
The above code builds the connection URI for MongoDB; the URI is obtained from the MongoDB Atlas connection option, as explained above.
import pymongo

# Create a MongoDB client
client = pymongo.MongoClient(uri)
# Define the database and collection
db = client.smartaid    # Replace with your database name
collection = db.box     # Replace with your collection name
This code connects to the MongoDB cloud and selects the database and collection; MongoDB creates them automatically on the first insert.
The JSON data parsing and other operations are implemented in api.py, which is available at the GitHub link below.
vercel.json file for Vercel connectivity:
{
  "version": 2,
  "builds": [
    {
      "src": "api.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "api.py"
    }
  ]
}
Requirements:
annotated-types==0.7.0
anyio==4.4.0
click==8.1.7
colorama==0.4.6
dnspython==2.6.1
fastapi==0.112.0
h11==0.14.0
idna==3.7
motor==3.5.1
pydantic==2.8.2
pydantic_core==2.20.1
pymongo==4.8.0
sniffio==1.3.1
starlette==0.37.2
typing_extensions==4.12.2
uvicorn==0.30.6
GitHub link:
lonely-dev04/notecard (github.com)
When the Python script runs, the database and the collection are created automatically.
This API is connected to the Android app, so the data entered through the app interface is stored in the database via this API:
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/")  # illustrative route path; the actual path is defined in api.py
async def reply(request: Request):
    data = await request.json()                # Extract the JSON body from the request
    box_id = data.get("box_id")                # Get 'box_id' from the JSON
    tablet_name = data.get("tablet_name")      # Get 'tablet_name' from the JSON
    expiry_date = data.get("expiry_date")      # Get 'expiry_date' from the JSON
    # Save the data to MongoDB (the 'collection' object is created in the connection code above)
    document = {"box_id": box_id, "tablet_name": tablet_name, "expiry_date": expiry_date}
    result = collection.insert_one(document)
    return {"message": "Saved", "id": str(result.inserted_id)}  # JSON response with the MongoDB ID
The above code inserts the data received from the app into the database.
Notecard Setup for Receiving Data from the API through Notehub.io:
I used the Notecard quick start guide to set up communication between the Notecard and Notehub.
Connect the Notecard WiFi v1.2 to the Notecarrier-A as shown in the picture.
Use a USB data cable to connect the Notecarrier to a USB port on your laptop. Visit the online Notecard CLI, click "USB Notecard", and select the COM port. After a successful connection, check the serial connection using the request below in the CLI:
{"req":"card.version"}
Connect to Wi-Fi using the request below:
{"req":"card.wifi","ssid":"<ssid name>","password":"<password>"}
Validate the Wi-Fi connection:
{"req":"card.wireless"}
This link provides a guide on how to create a Notehub account and how to configure the Notecard with a Notehub project ID: Notecard LoRa Quickstart - Blues Developers.
Create Route for Notecard using Notehub.io:
1. Navigate to Routes in the dashboard.
2. Click "Create Route".
3. After clicking "Create Route", you will see several route type options; I chose the "webget" option.
4. Next, enter your route name and paste the Vercel API link into the URL field, then click "Create Route" to finish creating the route.
Note that requests to access this route are sent by the STM32F411 via the Notecard. The Notecard and the STM32F411 communicate over a UART connection.
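For reference, a Notehub route like this is typically called from the host MCU with the Notecard's web.get request, using the route's alias. A minimal sketch, assuming a sendCommand() UART helper like the one used in the STM32F411 code later in this post (the route alias "smartaid" is a placeholder, not the actual alias used in the project):
void sendCommand(const char *json);   // UART helper that writes one JSON line to the Notecard (shown later)

// Ask the Notecard to call the Notehub route created above and return its response.
void requestBoxDataViaRoute() {
  sendCommand("{\"req\":\"web.get\",\"route\":\"smartaid\"}");   // "smartaid" = your route alias (placeholder)
}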
1. UART does not need a clock signal for sending data. It uses start and stop bits to frame the data.
2. Data is sent one bit at a time over a single wire, which makes it a simple and efficient way of communication.
3. UART can send and receive data at the same time, using separate TX (Transmit) and RX (Receive) lines.
4. Both the sending and receiving devices must use the same baud rate, which is the speed of data transfer measured in bits per second.
5. UART can include a parity bit to check for any errors during data transmission.
UART Communication pin connection between STM32F411 black pill and Notecarrier:
The Notecarrier RX pin is connected to STM32F411 PA9 (TXD1).
The STM32F411 PA10 (RXD1) pin is connected to the Notecarrier TX pin.
Through this UART connection, the STM32F411 sends a request to the Notecard, Notehub sends the response back to the Notecard, and the STM32F411 then receives the data from the Notecard over the same UART link.
Before we start the code, STMicroelectronics' STM32CubeIDE must be installed on your system for pin configuration and other operations.
STM32F411 Dev Black Pill board
STM32F411 Black Pill code for sending a request to the Notecard:
For programming and configuring the STM32F411 Black Pill, I used the content at the link below as a reference.
https://www.instructables.com/Mastering-STM32-Black-Pill-and-STM-Cube-IDE-a-Step-1/
void initializeNotecard() {
sendCommand("{\"req\": \"hub.set\", \"token\": \"" NOTE_AUTH_TOKEN "\"}");
sendCommand("{\"req\": \"card.set\", \"mode\": \"cellular\", \"pwr\": 1}");
}
The above function establishes the connection and initializes the Notecard.
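The sendCommand() helper used above is not shown in the snippets. A minimal sketch of what it could look like, assuming the Arduino STM32 core with USART1 on PA9/PA10 (the wiring described earlier) and the Notecard's default 9600 baud; the notecardSerial name and the baud rates are assumptions, not taken from the original firmware:
HardwareSerial notecardSerial(PA10, PA9);   // RX = PA10, TX = PA9 (USART1 on the Black Pill)

void setupNotecardSerial() {
  Serial.begin(115200);                     // debug console
  notecardSerial.begin(9600);               // Notecard UART defaults to 9600 baud
}

// Write one newline-terminated JSON request to the Notecard and echo it for debugging.
void sendCommand(const char *json) {
  notecardSerial.println(json);
  Serial.print("Sent to Notecard: ");
  Serial.println(json);
}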
void requestDataFromNotehub() {
String request = "{\"req\": \"note.get\", \"hub\": \"" NOTE_CARRIER_ID "\"}";
sendCommand(request.c_str());
}
The above function sends the request to Notehub via the Notecard.
void processJsonData(const char* jsonData) {
DynamicJsonDocument doc(bufferSize);
DeserializationError error = deserializeJson(doc, jsonData);
if (error) {
Serial.print("Failed to parse JSON: ");
Serial.println(error.c_str());
return;
}
String box1 = doc["box1"].as<String>();
String box2 = doc["box2"].as<String>();
String box3 = doc["box3"].as<String>();
Serial.print("Box1: ");
Serial.println(box1);
Serial.print("Box2: ");
Serial.println(box2);
Serial.print("Box3: ");
Serial.println(box3);
sendDataToNordic(box1, box2, box3);
}
This function parses the received JSON data and sends it to the Nordic Semiconductor nRF52840.
Sending the parsed data to the Nordic nrf52840 using UART communication:
STM32F411 pin number PA2(TXD2) connected to Nordic nrf52840 GPIO pin number P0.08.
STM32F411 pin number PA3(RXD2) connected to the Nordic nrf52840 GPIO pin number P0.06.
Code for Sending the data to the Nrf52840 via UART communication:
void sendDataToNordic(const String& box1, const String& box2, const String& box3) {
nordicSerial.println(box1);
nordicSerial.println(box2);
nordicSerial.println(box3);
Serial.print("Sent data to Nordic: ");
Serial.println(box1);
Serial.println(box2);
Serial.println(box3);
}
The above function sends the data to the nRF52840; the fetched box details are stored in three variables, which are transmitted to the Nordic nRF52840.
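The nordicSerial object used in sendDataToNordic() is not defined in the snippet above. A minimal sketch of how it could be set up, assuming the Arduino STM32 core and USART2 on PA2/PA3 as wired above (the object name and 115200 baud are assumptions):
HardwareSerial nordicSerial(PA3, PA2);   // RX = PA3, TX = PA2 (USART2 on the Black Pill)

void setupNordicSerial() {
  nordicSerial.begin(115200);            // must match the baud rate configured on the nRF52840 UART
}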
XIAO ESP32S3 Sense operations:
The XIAO ESP32S3 Sense is used for speech recognition and image capture to extract the text from the tablet or ointment images.
Tablet Information Gathering:
With the help of the XIAO ESP32S3 Sense camera module, tablets are scanned and their details are provided to the user.
Here we are using OCR software to extract the text from an image or video.
OCR:
Optical Character Recognition (OCR) software is designed to recognize and extract text from images, scanned documents, or photographs of objects. This technology is widely used for digitizing printed or handwritten text so that it can be edited, searched, or stored electronically. Below are some key OCR software tools and libraries.
In this project, for tablet identification, both the front and back of the tablet are captured, the embossed text is converted to digital text, and that text is forwarded to the Nordic nRF52840. The nRF52840 compares the received text, checks certain conditions, and forwards it to the GPT model via an API connection. The result from the GPT model is converted to audio and played.
Code:
Wi-Fi connection code:
void connect_wifi() {
WiFi.begin(ssid, password);
Serial.print("Connecting to WiFi");
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("Connected");
}
OCR code to extract the text from the image:
void send_image_to_server(const uint8_t *image_data, size_t size) {
if (WiFi.status() == WL_CONNECTED) {
HTTPClient http;
http.begin(ocrApiUrl);
http.addHeader("Content-Type", "image/jpeg");
int httpResponseCode = http.POST(image_data, size);
if (httpResponseCode > 0) {
String response = http.getString();
Serial.println("OCR Response:");
Serial.println(response);
mySerial.print("OCR: ");
mySerial.println(response);
} else {
Serial.println("Error on HTTP request");
}
http.end();
} else {
Serial.println("WiFi not connected");
}
}
The above function extracts the text from the captured images by sending them to the OCR API.
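For reference, here is how the function above could be called once a frame has been captured, assuming the camera has already been initialised with esp_camera_init() as in the standard ESP32 camera examples and configured for JPEG output:
#include "esp_camera.h"

// Capture one frame and pass it to the OCR upload function shown above.
void capture_and_ocr() {
  camera_fb_t *fb = esp_camera_fb_get();    // grab a frame (JPEG, per the camera config)
  if (fb == NULL) {
    Serial.println("Camera capture failed");
    return;
  }
  send_image_to_server(fb->buf, fb->len);   // upload the image buffer to the OCR API
  esp_camera_fb_return(fb);                 // release the frame buffer
}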
Sending the extracted text and the voice-recognition text via UART communication:
XIAO ESP32S3SENSE pin number D6(TX) connected to Nordic nrf52840 GPIO pin number P0.22(RX PIN).
XIAO ESP32S3SENSE pin number D7(RX) connected to the Nordic nrf52840 GPIO pin number P0.20(TX PIN).
This pin configuration and the UART communication code are attached to this blog; a minimal setup sketch is shown below.
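The mySerial port used in the OCR snippet above is not shown there. A minimal sketch of how the XIAO ESP32S3 Sense UART could be bound to the D6/D7 pins listed above, assuming the Arduino ESP32 core (the UART number and 115200 baud are assumptions):
HardwareSerial mySerial(1);                     // use UART1 of the ESP32-S3

void setupNordicLink() {
  mySerial.begin(115200, SERIAL_8N1, D7, D6);   // begin(baud, config, rxPin = D7, txPin = D6)
}

// Forward OCR or speech-recognition text to the nRF52840.
void sendTextToNordic(const String &text) {
  mySerial.println(text);
}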
For voice recognition, I am using the built-in microphone of the XIAO ESP32S3 Sense, and I used the Edge Impulse platform to create a voice recognition model.
I followed the content at this link as a reference to build my voice recognition operation, and with the help of Google's speech-to-text conversion, the user's speech is converted to text.
For the voice recognition model creation and other configurations, click the blue link text.
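For reference, this is roughly how an exported Edge Impulse model is typically invoked from an Arduino sketch once its library is installed; the header name first_aid_inferencing.h and the feature buffer are placeholders, and the real microphone-driven example in the link above is more involved:
#include <first_aid_inferencing.h>   // placeholder name for the exported Edge Impulse library
#include <string.h>

// Audio features filled from the on-board microphone (capture details omitted here).
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void classify_keyword() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Print the score for each trained keyword (e.g. "cold", "insect bite").
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      Serial.printf("%s: %.2f\n", result.classification[ix].label,
                    result.classification[ix].value);
    }
  }
}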
Nordic Semiconductor nRF52840 as the Main Controller:
In this project, the Nordic Semiconductor nRF52840 Development Kit plays a vital role as the main controller. The major operations of the system are performed by the nRF52840. The following operations are performed and processed by the nRF52840:
Note that I am using Zephyr RTOS.
Initialization of Peripherals:
- The code initializes UART devices (UART_1 and UART_2 for the XIAO ESP32S3 Sense) and the GPIO device.
- Configures GPIO pins for controlling three servos (SERVO_PIN_1, SERVO_PIN_2, SERVO_PIN_3) and a speaker (SPEAKER_PIN).
UART Communication Handling:
UART_1 (connected to the STM32F411):
- A thread (uart_thread_fn) listens for incoming data on UART_1.
- The received data is stored in uart_buffer.
- If the data contains box information (e.g., box1=, box2=, box3=), the parse_box_data function is called to extract and store the data in box1, box2, and box3.
- If the received data does not contain box information, it is passed to the control_servo function.
UART_2 (Connected to XIAO ESP32S3 Sense):
- A separate thread (uart_xiao_thread_fn) listens for incoming data on UART_2.
- The received data is stored in uart_xiao_buffer.
- The data is logged for debugging purposes and passed to the control_servo function.
Servo Motor Control:
- The control_servo function checks the received text data (from either UART_1 or UART_2) to see if it matches any of the box data (box1, box2, or box3).
If a match is found:
- The corresponding servo is activated.
- The matched box data is sent to the generate_and_play_audio function for further processing (a simplified sketch of this matching logic is shown below).
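A simplified, platform-independent sketch of this matching logic (the real firmware uses Zephyr UART and GPIO drivers; the servo and audio calls below are placeholders that only mirror the description above):
#include <string.h>

char box1[32], box2[32], box3[32];   // filled by parse_box_data() from the UART_1 data

void activate_servo(int servo_index) { /* drive the matching servo pin here */ }
void generate_and_play_audio(const char *box_data) { /* GPT request + audio playback here */ }

// Dispatch text received on UART_1 or UART_2 to the matching box.
void control_servo(const char *received) {
  if (box1[0] != '\0' && strstr(received, box1) != NULL) {
    activate_servo(1);
    generate_and_play_audio(box1);
  } else if (box2[0] != '\0' && strstr(received, box2) != NULL) {
    activate_servo(2);
    generate_and_play_audio(box2);
  } else if (box3[0] != '\0' && strstr(received, box3) != NULL) {
    activate_servo(3);
    generate_and_play_audio(box3);
  }
}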
GPT Model Request and Response Handling:
GPT Model Request:
- The generate_and_play_audio function creates a request to a GPT model API.
- The request includes a prompt that asks for details about the matched box data and specifies a maximum token count (100 tokens).
- The request is sent to the GPT model API using an HTTP POST request.
GPT Model Response:
- The function waits for the GPT model to respond.
- If the response is successful (HTTP status 200), the response data (containing the generated text) is stored in uart_buffer.
- The text from the GPT model is then passed to the play_audio function for audio playback.
Audio Playback:
- The play_audio function simulates playing the generated text as audio.
- The speaker is toggled on and off to simulate audio output for a specified duration (2 seconds).
Thread Management:
The code defines two threads:
- uart_thread for handling communication on UART_1.
- uart_xiao_thread for handling communication on UART_2.
These threads run concurrently, allowing the nRF52840 to process data from multiple sources simultaneously.
The above-mentioned functions and some of the code are attached below.
nRF52840 Coding:
Before starting the coding, we need to install and configure the nRF52840. The nRF52840 is configured by the device tree file, which is attached below.
First, install J-Link (JLink_Windows_V798d_x86_64).
Next, install the nRF Command Line Tools.
Next, download and install the nRF SDK.
Next, open Visual Studio Code, navigate to Extensions, search for "nRF Connect for VS Code", and install the extension.
Next, install the nRF DeviceTree extension.
Next, install the toolchains in Visual Studio Code.
After installing everything, connect the nRF52840 with a USB cable and power it on.
Then click "Connect" for your nRF52840 device. Next, click "Create new application" to start a fresh application with nRF Connect.
After creating the new application, the main.c file is inside the src folder; that is the main file for this project's code.
Next, create a device tree overlay file for the nRF52840 board configuration. After compiling the code, click "Flash" to flash the microcontroller.
After flashing, use the DeviceTree extension to view your device configuration.
Hardware setup:
Based on the device tree overlay, connect the three servo motors, the speaker, and the other components.
The servo motors are placed on the 3D model; the model is designed so that the motor acts as a linear actuator, and the boxes are attached to the model. The 3D model is attached below.
The attached diagram explains how to make the connections.
Link for 3D model:
This model helps to convert the motor's circular rotation into linear motion. This setup helps to open and close the box.
Servo Linear Actuator (9g) + adjustable backlash Version by Geekmakes - Thingiverse