The Dynamic Vision Assist is a wearable tool for individuals with visual impairments who wish to navigate outdoor areas such as parks, hiking trails, and walking paths with little to no assistance.
Summary
The Dynamic Vision Assist's core components are three M5StickC Plus ESP32 modules, an OAK-D AI camera, a Raspberry Pi 5, an M5Stack Core2, and a Blues Notecard (cellular) with Notehub connectivity.
It combines haptic feedback, based on the distance of objects and obstacles near the wearer, with neural networks that identify common objects. All distance data is translated into haptic feedback of varying intensity through vibration motors, while neural-network classification results are delivered to the user's ear via text-to-speech over open-ear bone-conduction headphones.
The StickC modules gather data from infrared time-of-flight (ToF) laser sensors mounted on a cross-shoulder bag/pouch, and translate distance readings into haptic feedback via vibration HATs (vibration motor modules).
The face-obstacle detection module and its sensor are worn on the upper strap of the Dynamic Vision Assist and angled upward in order to detect any obstacles that would interfere with the wearer's head or face.
The stick modules are mounted on the cross-body bag/housing of the Dynamic Vision Assist as shown in the photo above. The two ground-obstacle detection modules are worn below the face/head module and are angled toward the ground in a crossover configuration, so that each module sweeps a cone of detection across the user's center line.
The stick modules and the vibration units are mounted on the back of the DVA bag/housing.
A baseline ground-distance calibration is triggered by a long press of button A (labeled M5) on each module.
When the calibration is set, the stick module sounds two beeps, the first at a lower frequency and the second at a higher one. When the calibration is cleared by double-pressing button A, the tones play in reverse order.
Calibration stores the baseline distance to the ground; if the measured distance later increases by more than 2 inches, the lower-body vibration motors emit a rapid on/off pattern, alerting the user that there are steps or a steep decline ahead.
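Below is a minimal MicroPython sketch of this calibration and drop-detection logic. It is an assumption-laden illustration, not the shipped firmware: it presumes a third-party VL53L0X-style ToF driver, the Grove Port A pins of the M5StickC Plus, and a vibration motor driven from GPIO 26.

```python
# Sketch of baseline calibration and drop detection (MicroPython, assumed pins).
import time
from machine import Pin, PWM, I2C
from vl53l0x import VL53L0X  # assumed third-party ToF driver

DROP_THRESHOLD_MM = 51  # ~2 inches beyond the calibrated baseline

i2c = I2C(0, scl=Pin(33), sda=Pin(32))  # Grove Port A on the StickC Plus
tof = VL53L0X(i2c)
tof.start()
motor = PWM(Pin(26), freq=1000, duty=0)  # vibration HAT pin (assumed)

baseline_mm = None

def calibrate():
    """Long press of button A: store the current distance to the ground."""
    global baseline_mm
    baseline_mm = tof.read()

def check_for_drop():
    """Rapid on/off pulses when the ground falls away past the baseline."""
    if baseline_mm and tof.read() > baseline_mm + DROP_THRESHOLD_MM:
        for _ in range(4):
            motor.duty(1023)   # full intensity
            time.sleep_ms(80)
            motor.duty(0)
            time.sleep_ms(80)

while True:
    check_for_drop()
    time.sleep_ms(100)
```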
To read the battery percentage on any of the three modules, press button B: the module's vibration motor responds with one quick pulse per ten percent of charge.
If button B is pressed while the battery is under 10 percent, the module will not vibrate but will beep instead.
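A sketch of that readout logic, assuming a caller that supplies the percent charge (for example, read from the StickC's power-management chip) and a hypothetical beep helper:

```python
import time

def report_battery(pct, motor, buzzer):
    """Button B: one pulse per 10% of charge; beep instead when under 10%."""
    if pct < 10:
        buzzer.beep()                     # assumed helper: low-battery warning
        return
    for _ in range(round(pct / 10)):      # e.g. 73% -> 7 pulses
        motor.duty(1023)
        time.sleep_ms(150)
        motor.duty(0)
        time.sleep_ms(250)
```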
Face/head obstacle detection (module and sensor)
1. Vibrates at full duty if an object is within 3-4 feet of the user's face or head area.
2. The sensor must be angled properly so that, at 3-4 feet, the vibration alarm initiates for obstacles at face/head level.

Lower-body obstacle detection
1. Vibrates at different intensities when an obstacle is in front of the wearer on either hip or side of the body. This allows the wearer to determine which side of their body the obstacle is on (see the mapping sketch after this list).
2. Once the sensors are adjusted properly on the body, a long press of the A button on either sensor sets the baseline distance to the ground. This allows the wearer to detect declines/steps via a series of repetitive pulses.
3. Pressing button B returns the battery percentage via vibration pulses in tenths, as described above.
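One plausible distance-to-intensity mapping for the hip/side sensors (the actual firmware curve may differ; the range limit here is an assumption):

```python
def distance_to_duty(distance_mm, max_range_mm=2000):
    """Map 0..max_range_mm to PWM duty 1023..0: closer obstacles buzz harder."""
    if distance_mm >= max_range_mm:
        return 0                          # nothing in range: motor off
    return int(1023 * (1 - distance_mm / max_range_mm))
```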
All modules
1. Power on by pressing the side button at the lower left of the module. Power-on is confirmed by two fast vibration pulses.
2. Power off by long-pressing the same side button.
3. Each M5StickC module connects to its respective time-of-flight sensor via a Grove cable on Port A.
4. Chargers attach easily via the M5Stack magnetic charging cable.
Both sensor mounts use magnetic, adjustable ball joints and are fully adjustable and customizable to the user's needs:
1. Face/head sensor mount
2. Lower-body abdomen sensor mounts
GPS Tracking. Included in the DVA wearable is a Blues Notecarrier along with a Blues Notecard (cellular). The module is configured to push GPS coordinates to Notehub every 3 minutes, which allows an emergency contact to access the last known coordinates if the wearer encounters an issue. I have successfully tested webhooks with both IFTTT and Notehub to push GPS coordinates to an emergency contact, and a user accessing Notehub has instant access to Google Maps with the wearer's pinpointed coordinates.
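For reference, the Notecard's periodic GPS tracking can be configured with a few JSON requests through the note-python library. This is a sketch under assumed settings (the serial port and ProductUID are placeholders); the 180-second location cadence matches the 3-minute push described above.

```python
import serial
import notecard

port = serial.Serial("/dev/ttyACM0", 9600)   # Notecarrier USB port (assumed)
card = notecard.OpenSerial(port)

# Sync outbound notes to Notehub every 3 minutes
card.Transaction({"req": "hub.set",
                  "product": "com.example:dva",  # hypothetical ProductUID
                  "mode": "periodic",
                  "outbound": 3})

# Sample GPS every 180 s and log a tracking note for each fix
card.Transaction({"req": "card.location.mode", "mode": "periodic", "seconds": 180})
card.Transaction({"req": "card.location.track", "start": True, "heartbeat": True})
```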
And last, but certainly not least...
Dynamic Vision Assist Object Recognition and Optical Character Recognition
The Dynamic Vision Assist leverages a Luxonis OAK-D Pro AI camera and the DepthAI Model Zoo by way of a Raspberry Pi 5. The wearer can switch between YOLOv4 and Tesseract models to learn more about their environment through object recognition and optical character recognition (OCR).
1. WiFi access point and tactile neural network selector
Both the M5Core2 tactile controller and the Raspberry Pi 5 must be connected to the same GL.iNet network. Additionally, the user must set the correct Raspberry Pi API endpoint IP address in the included M5Stack MicroPython script.
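As a rough sketch of that wiring (not the included script itself), the Core2 only needs the Pi's address on the GL.iNet subnet and a simple HTTP call; the endpoint path and JSON shape here are placeholders:

```python
# MicroPython on the M5Core2: point the controller at the Pi's DVA API.
import urequests

PI_API = "http://192.168.8.100:5000"   # replace with your Pi's GL.iNet IP

def select_model(model):
    """Ask the DVA API to switch networks: 'yolo', 'ocr', or 'mute'."""
    resp = urequests.post(PI_API + "/mode", json={"model": model})
    resp.close()
```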
2. Object recognition mode.
The Dynamic Vision Assist boots into object recognition mode and begins describing surrounding objects. This information is processed through text-to-speech and sent to the wearer's bone-conduction headphones, allowing the user to receive critical information without losing hearing acuity.
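A minimal DepthAI sketch of this mode, assuming a YOLOv4-tiny blob from the Model Zoo (the filename is a placeholder) and pyttsx3 for text-to-speech; only a few COCO labels are listed for brevity:

```python
import depthai as dai
import pyttsx3

# A few COCO labels for illustration; the full 80-name list ships with the
# DepthAI examples.
COCO_LABELS = {0: "person", 2: "car", 13: "bench", 16: "dog"}

tts = pyttsx3.init()  # audio routes to the paired bone-conduction headphones

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)
cam.setInterleaved(False)

nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("yolov4_tiny_coco_416x416.blob")  # Model Zoo blob (assumed name)
nn.setNumClasses(80)
nn.setCoordinateSize(4)
nn.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
nn.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
nn.setIouThreshold(0.5)
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("det")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("det", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            tts.say(COCO_LABELS.get(det.label, "object"))
        tts.runAndWait()  # speak queued labels before the next frame
```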
3. OCR mode
Once the user decides to try OCR mode, they press the middle mechanical key on the tactile controller.
Pressing the center key sends an API request to the DVA API running on the Raspberry Pi 5, instructing the script to switch to Tesseract and begin optical character recognition.
This functionality allows the wearer to read signs and important trail information when hiking or navigating parks.
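A sketch of what that endpoint could look like on the Pi, assuming Flask and pytesseract; the route and field names match the controller sketch above but are placeholders, not the project's exact API:

```python
from flask import Flask, request
import pytesseract

app = Flask(__name__)
state = {"model": "yolo"}  # the DVA boots into object recognition

@app.route("/mode", methods=["POST"])
def set_mode():
    """Switch between 'yolo', 'ocr', and 'mute' on request from the Core2."""
    state["model"] = request.get_json()["model"]
    return {"ok": True, "model": state["model"]}

def read_frame_text(frame):
    """In OCR mode, run Tesseract over a camera frame and return its text."""
    return pytesseract.image_to_string(frame)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```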
4. If the wearer chooses to mute the neural networks and text-to-speech altogether, they can press the third and final key to stop neural network activity and mute the headphone volume.
5. When the user is ready to return to object recognition, they press the key on the opposite side; a unique tone plays and the green LED illuminates.
Note:
Each key plays a unique tone when pressed and lights an LED in a corresponding color, so low-vision users can tell which neural network is currently selected.
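The mapping might look something like this; the tones, and all colors other than the green object-recognition LED mentioned above, are assumptions:

```python
# Hypothetical key -> (model, tone, LED) mapping for the tactile controller.
KEY_FEEDBACK = {
    "left":   {"model": "yolo", "tone_hz": 440, "led": "green"},
    "middle": {"model": "ocr",  "tone_hz": 660, "led": "blue"},
    "right":  {"model": "mute", "tone_hz": 880, "led": "red"},
}
```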
Renewable Energy Charging
A solar panel can easily be detached in case the user needs to charge any of the wearable's batteries or the headphones. This allows the user to recharge in a remote environment, even though battery life currently stands at 8+ hours.
And now for the moment we have all been waiting for...
Blindfolded navigation demo

Conclusion
The Dynamic Vision Assist is an effective solution for trail and park navigation. By tightening my scope to trail/park navigation, I was able to hone in on the needs a wearer may have in this environment. Through the use of AI, microcontrollers, perseverance, and innovation, I believe the Dynamic Vision Assist is a boon for low-to-no-vision users. Thanks to all the judges and participants for their hard work. Together we can change the future.