People with disabilities face many difficulties communicating with others, especially those who can neither speak nor hear. They often cannot engage in casual conversations, as society views them differently because of their uniqueness.
I have tackled this problem by creating a system that converts human breath into speech, enabling people who cannot speak or hear to hold a casual conversation without any difficulty. In this project, I explain the components used in the system and how it actually works, using diagrams and graphical representations of the data.
Introduction:
BreatheSense is a 'Breath-to-Speech' assistive device which converts an individual's breath data into speech. The system is built around a sensor called 'Walabot', manufactured by Vayyar Imaging Ltd. It is a 3D imaging sensor fitted with 15 high-accuracy radio-frequency antennas. Its primary function is in-wall imaging, but thanks to these RF antennas it can detect even the slightest movements, such as the beating of the human heart or the motion of the diaphragm. This enables it to perform heartbeat sensing and breath sensing. I used the Walabot Creator for my system, so all the explanations below refer to the Walabot Creator model.
Now, the Walabot sensor reports the breath readings as 'energy' values, on the order of 10^-5, produced by the micro-movements of the diaphragm. These readings are plotted and analysed by a Python program I wrote, which converts a particular sequence of breath data into a text message and sends it to the Android app, which speaks it out loud.
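As a sketch of how the analysis could segment that energy signal into individual breaths, consider the function below. The threshold and sampling interval here are illustrative assumptions, not values from my prototype; they would need to be calibrated per setup.

```python
def breath_durations(energies, threshold=2e-5, sample_interval=0.05):
    """Segment an energy time series into breaths.

    Each run of consecutive samples above `threshold` is treated as one
    breath; the function returns its duration in seconds, given the
    time between samples. Both parameters are assumed values.
    """
    durations = []
    run = 0  # length (in samples) of the current above-threshold run
    for e in energies:
        if e > threshold:
            run += 1
        elif run:
            durations.append(run * sample_interval)
            run = 0
    if run:  # signal ended mid-breath
        durations.append(run * sample_interval)
    return durations


# Example: quiet, a 0.5 s breath, quiet, then a 0.2 s breath.
signal = [1e-6] * 3 + [3e-5] * 10 + [1e-6] * 3 + [3e-5] * 4
print(breath_durations(signal))  # → [0.5, 0.2]
```

The resulting list of durations is what the rest of the pipeline classifies into long and short breaths.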
To explain the analysis process in simple terms, consider the following:
L – Long Breath
S – Short Breath
Now, let’s consider 4 cases for simplicity:
SS – two short breaths
SL – a short breath followed by a long breath
LS – a long breath followed by a short breath
LL – two long breaths
Now, each of these combinations represents a particular sentence or block of text, like "I need help", "I need food", and other simple sentences that a disabled person would need for a casual conversation. These sentences are sent to the Android app, which converts them into speech by speaking them out loud for the other person to hear.
Detailed Explanation:
To understand every detail of the system, we will divide it into parts, as shown by the flow diagram, and give a detailed explanation of how the system actually works. We will now go through the workflow of the system in detail.
1. Programming the Walabot: Walabot provides the Walabot SDK, which is used to program the sensor to perform various tasks. It supports two languages, Python and C++, and there are examples for both on its website. I used Python, so this paper gives a complete guide to programming the Walabot using Python.
So, to program the Walabot, we follow these steps:
· Download the Walabot SDK on your PC or Laptop.
· Connect the Walabot to your PC or Laptop using a micro USB cable.
· Run the WalabotAPITutorial to make sure that your Walabot is connected.
· Now you can program the Walabot in Python simply by importing the Walabot SDK's API module into your Python project.
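The steps above can be sketched in code roughly as follows. This is a minimal sketch based on the SDK's own breathing example, assuming the `WalabotAPI` Python module bundled with the SDK is on your path; the sample count and interval are arbitrary placeholders.

```python
import time

try:
    # Ships with the Walabot SDK; it is not installable from PyPI.
    import WalabotAPI as wlbt
    HAVE_WALABOT = True
except ImportError:
    HAVE_WALABOT = False


def read_energy_samples(n=100, interval=0.05):
    """Trigger the sensor n times and return the image-energy readings."""
    wlbt.Init()
    wlbt.SetSettingsFolder()
    wlbt.ConnectAny()
    # Profile and filter used by the SDK's breathing demo.
    wlbt.SetProfile(wlbt.PROF_SENSOR_NARROW)
    wlbt.SetDynamicImageFilter(wlbt.FILTER_TYPE_DERIVATIVE)
    wlbt.Start()
    samples = []
    for _ in range(n):
        wlbt.Trigger()
        samples.append(wlbt.GetImageEnergy())
        time.sleep(interval)
    wlbt.Stop()
    wlbt.Disconnect()
    return samples
```

With the sensor connected, each returned value is one energy reading of the kind described above, ready to be plotted and analysed.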
Next, you need to develop a program that takes the energy readings, analyses them, and sends the result to the Android app. For the communication between the program and the app, many protocols are available; I used the MQTT protocol, which is one of the easiest to use. It is explained in detail in the next step.
In your program, you need to develop a ‘decoding table’, which describes what sentence each combination of code represents, for example:
LLSS – Hi! How are you?
Where,
L – Long Breath
S – Short Breath
In this way, you need to create a table for all possible outcomes. But take care: making the code more complex has drawbacks, such as processing delay, and the disabled person can only remember a certain number of combinations.
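The decoding table and the long/short classification can be sketched as below. The table entries and the 1.5 s threshold are hypothetical examples, not the exact sentences or tuning from my prototype.

```python
# Hypothetical decoding table: breath pattern -> sentence.
DECODING_TABLE = {
    "SS": "Yes.",
    "SL": "No.",
    "LS": "I need food.",
    "LL": "I need help.",
    "LLSS": "Hi! How are you?",
}


def classify_breath(duration_s, long_threshold=1.5):
    """Label one breath as long ('L') or short ('S') by its duration.

    The 1.5 s threshold is an assumed value; tune it per user.
    """
    return "L" if duration_s >= long_threshold else "S"


def decode(durations):
    """Turn a sequence of breath durations into a sentence, if known."""
    pattern = "".join(classify_breath(d) for d in durations)
    return DECODING_TABLE.get(pattern, "")


print(decode([2.0, 2.0, 0.5, 0.5]))  # → Hi! How are you?
```

Unknown patterns decode to an empty string here, which is one simple way to keep an unrecognised breath sequence from being spoken aloud.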
2. Sending the program output to the app:
After the program produces its output, which is a short sentence, we send this data to the Android app, which speaks it out loud. I used the MQTT communication protocol to accomplish this task, but feel free to use any protocol you prefer.
When using MQTT, you have two options: host your own broker or use a pre-existing one. For the first option you can use Mosquitto, but that is more complex and time-consuming. For the second option, you can use a hosted broker; on the client side I used paho-mqtt, the open-source Python library I used while developing my prototype, as it has a very easy interface and is freely available.
So, what actually happens is:
· The generated output is published to the broker using the URL, user ID, and password provided in the program. It then appears on the MQTT dashboard of that particular user ID.
· The app continuously checks the dashboard, displays its contents on the app screen, and speaks them out loud using any text-to-speech service available for Android.
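The publishing side of the steps above can be sketched with paho-mqtt as follows. The broker host, topic, and credentials are placeholders, not the actual values from my prototype.

```python
try:
    import paho.mqtt.publish as publish  # pip install paho-mqtt
    HAVE_PAHO = True
except ImportError:
    HAVE_PAHO = False

BROKER_HOST = "broker.example.com"   # hypothetical broker URL
TOPIC = "breathesense/speech"        # hypothetical topic name


def send_sentence(sentence, username="user", password="pass"):
    """Publish one decoded sentence so the app can pick it up."""
    publish.single(
        TOPIC,
        payload=sentence,
        hostname=BROKER_HOST,
        auth={"username": username, "password": password},
    )
```

`publish.single` opens a connection, sends one message, and disconnects, which suits this use case where the program only publishes a sentence occasionally.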
3. The Android App:
The Android app for this system performs the following tasks (it is a two-activity Android app):
· Fetch the data from the MQTT Dashboard of the defined user.
· Display that data in the form of text on the App screen.
· Convert this text into speech by speaking it out loud for the other person to hear.
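Before building the Android app, the fetch step can be prototyped on a PC with a small Python subscriber like the one below; it is a stand-in, not the app itself (on Android you would use an MQTT client library plus the platform's text-to-speech service). Broker, topic, and the printed output are placeholders.

```python
try:
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    HAVE_PAHO = True
except ImportError:
    HAVE_PAHO = False

BROKER_HOST = "broker.example.com"   # hypothetical broker URL
TOPIC = "breathesense/speech"        # hypothetical topic name


def on_message(client, userdata, msg):
    """Called for each message; the app would display and speak this."""
    sentence = msg.payload.decode("utf-8")
    print(sentence)


def listen():
    # paho-mqtt 1.x style constructor; version 2.x additionally takes a
    # mqtt.CallbackAPIVersion argument as the first parameter.
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER_HOST, 1883)
    client.subscribe(TOPIC)
    client.loop_forever()  # blocks, dispatching messages to on_message
```

Once this works end to end, the same subscribe-and-speak logic is what the app's first activity implements.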
Beyond these basic features, additional ones can be added to the app, such as a developer mode that lets the user view the breath data being plotted in real time, giving an idea of how it is analysed.
Conclusion:
After researching this project, building a prototype, and testing it with actual disabled people, the response was very encouraging. Users achieved a success rate of almost 70% with my prototype, which was designed for only 8 sentences; that is quite good for a first try.
This technology has a lot of future scope, and I hope my research helps other bright minds develop something even better, moving us toward a society in which disabled people can live without bias and are treated as equals.
Future Scope:
· The system's dimensions can be reduced considerably by using a card-sized computer such as the Raspberry Pi, making the system not only compact but also portable, as these computers can easily be powered by a battery bank.
· The system's cost can be decreased by finding an alternative to the Walabot sensor, which is currently priced at 150 dollars and is quite expensive. Note that it was used here only for prototyping.
Here I have added a video showing a plotting demo using Python. I will add the full tutorial video soon; I am currently working on it.
Please do support this project as it could change the lives of thousands of people by providing them the technology required to make their lives easier.