In the past I worked for more than ten years in the field of artificial imaging and image recognition, and participated in several research projects, starting in the mid-1980s when I was living in Italy.
In the second half of the 1990s I had several opportunities to face the challenges of using face and eye movement detection as a support for mobility-impaired persons. We know there are degenerative illnesses that progressively reduce the mobility of patients, until they are confined to a bed in almost complete immobility.
As far as I know, nowadays patients can count on devices able to recognise small face gestures as well as eye movements. These devices have an incredibly high cost, which dramatically reduces the number of people who can access them. Another aspect is that most of them require special or adapted computers, resulting in a certain complexity of usage, installation and setup.
After reviewing for Element14 the relatively new Omron Vision component, able to detect a number of face gestures and expressions, I decided to see whether it was possible to create an easy-to-use IoT device specifically thought for this kind of mobility-impaired user.
The key objectives of the PI Vision project can be summarised in a few words:
- Easy to use
- No special computer needed: a USB port is sufficient, and it should work on tablets and smartphones too
- Low-cost and extremely affordable (price no higher than $500)
- Easy to interface to non-computer devices
- Networked
- Open source
The recognition approach of the device starts from the data acquired by the Omron Vision component. These are not final data but a series of 3D spatial measurements that give precise knowledge, along the timeline, of the user's small face gestures and eye orientation.
This step of the recognition is also the heaviest part in terms of processor usage, but the Omron component provides updates fast enough that the final data processing can be taken over by the Raspberry PI 3B+ in a reasonable amount of time.
The resulting data are converted into simple control codes interpreted by the Arduino MKR1000.
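As a rough illustration of this step, here is a minimal sketch of how the Raspberry PI side might map the processed face data into single-character control codes. The FaceSample structure, the thresholds and the code letters are all assumptions made for clarity, not the actual PI Vision implementation.

```cpp
#include <cstdio>

// Hypothetical control-code mapping on the Raspberry PI side.
// FaceSample is a placeholder for whatever the real parsing of the
// Omron data stream produces; the thresholds are illustrative only.
struct FaceSample {
    float yaw;    // head rotation left/right, in degrees
    float pitch;  // head rotation up/down, in degrees
    bool  blink;  // true when a deliberate (long) blink is detected
};

char toControlCode(const FaceSample &s) {
    if (s.blink)           return 'C';  // click
    if (s.yaw   >  10.0f)  return 'R';  // move pointer right
    if (s.yaw   < -10.0f)  return 'L';  // move pointer left
    if (s.pitch >  10.0f)  return 'U';  // move pointer up
    if (s.pitch < -10.0f)  return 'D';  // move pointer down
    return 'N';                         // no action
}

int main() {
    FaceSample sample{12.5f, 0.0f, false};  // example reading: head turned right
    std::printf("control code: %c\n", toControlCode(sample));
}
```

Keeping the codes down to single characters keeps the traffic towards the endpoint minimal and trivial to parse.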
The Arduino board, as the endpoint of the PI Vision device, covers two fundamental roles:
- Convert the control codes to emulated mouse movements and keyboard keypresses
- Generate feedback on a small RGB NeoPixel LED bar to let the user know which response is currently in action (a sketch of the endpoint follows below).
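To show the first role in practice, the fragment below is an illustrative MKR1000 sketch, not the project firmware: it turns hypothetical single-character control codes into emulated mouse movements and key presses using the standard Arduino Mouse and Keyboard libraries (supported on SAMD-based boards such as the MKR1000). The code set, the step size and the use of Serial1 as delivery channel are assumptions; the real device receives the codes over WiFi, as described further on.

```cpp
#include <Mouse.h>
#include <Keyboard.h>

const int STEP = 5;   // pointer movement per code, in pixels (assumed)

void setup() {
  Serial1.begin(115200);  // placeholder channel delivering the control codes
  Mouse.begin();
  Keyboard.begin();
}

void loop() {
  if (Serial1.available() == 0) return;

  switch (Serial1.read()) {
    case 'R': Mouse.move( STEP, 0);        break;  // pointer right
    case 'L': Mouse.move(-STEP, 0);        break;  // pointer left
    case 'U': Mouse.move(0, -STEP);        break;  // pointer up
    case 'D': Mouse.move(0,  STEP);        break;  // pointer down
    case 'C': Mouse.click();               break;  // single click
    case 'K': Keyboard.write(KEY_RETURN);  break;  // example key press
    default:                               break;  // unknown code: ignore
  }
}
```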
As you can see in the video above, the LED strip has an intuitive color coding showing when the mouse is moving, when double click is selected and other features, as well as a different color when a key is pressed.
In the PI Vision prototype the LED strip is boxed together with the Arduino MKR1000, but in a production version the strip could be attached to the monitor frame, easy to see without disturbing usage.
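For reference, one possible way to drive this kind of color coding with the Adafruit_NeoPixel library is sketched below. The pin, the number of LEDs and the colors themselves are assumptions made for illustration; the actual mapping is the one shown in the video.

```cpp
#include <Adafruit_NeoPixel.h>

#define LED_PIN    6   // NeoPixel data pin (assumed)
#define LED_COUNT  8   // number of LEDs on the bar (assumed)

Adafruit_NeoPixel bar(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);

// Light the whole bar with one color to acknowledge the current action.
void showFeedback(uint32_t color) {
  bar.fill(color);
  bar.show();
}

void setup() {
  bar.begin();
  showFeedback(bar.Color(0, 0, 0));    // start with the bar off
}

void loop() {
  // Example color coding, cycling only for demonstration purposes.
  showFeedback(bar.Color(0, 0, 64));   // blue: the mouse is moving
  delay(1000);
  showFeedback(bar.Color(0, 64, 0));   // green: double click selected
  delay(1000);
  showFeedback(bar.Color(64, 32, 0));  // amber: a key has been pressed
  delay(1000);
}
```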
The Test Platform
Thinking of the most practical usage, the Arduino-based device is connected to the computer via USB, which also powers it. This means that no wiring is needed to connect the endpoint directly to the Raspberry PI and Omron Vision: they are two separate boxes connected via WiFi.
Most of the commercial devices I have seen position the camera on the top side of the monitor frame. This limits the usage to a specific device and creates some complications in positioning the hardware with respect to the user's point of view.
This kind of solution has the advantage of freely positioning the monitor (or any other display device) independently of the camera sensor, which should be oriented correctly for detection, to grant the best experience to the mobility-impaired user.
Networking
There is one more advantage to connecting the Arduino MKR1000 endpoint via WiFi to the sensor-tracking unit, beyond the better positioning. With this configuration the Raspberry PI 3B+, taking advantage of having both WiFi and Ethernet available together, acts as the system access point as well as a router-proxy between the Ethernet and WiFi.
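To give an idea of the endpoint side of this link, the following sketch uses the WiFi101 library to join the access point exposed by the Raspberry PI and listen for control codes sent as UDP datagrams. The SSID, password, port and the choice of UDP are illustrative assumptions, not the actual PI Vision protocol.

```cpp
#include <WiFi101.h>
#include <WiFiUdp.h>

const char ssid[] = "pivision";        // assumed AP name exposed by the Raspberry PI
const char pass[] = "changeme";        // assumed AP password
const unsigned int LOCAL_PORT = 8888;  // assumed UDP port for the control codes

WiFiUDP udp;

void setup() {
  Serial.begin(115200);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(1000);                       // retry until the access point accepts us
  }
  udp.begin(LOCAL_PORT);               // start listening for control codes
}

void loop() {
  if (udp.parsePacket() > 0) {
    char code = udp.read();            // one control code per datagram
    Serial.print("received control code: ");
    Serial.println(code);
    // Here the code would be passed to the mouse/keyboard emulation shown above.
  }
}
```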