The idea for this project was born out of the realization that many people in the Telugu-speaking states of southern India struggle to communicate effectively with existing voice assistants like Siri and Alexa. These mainstream assistants often misunderstand mixed-language input, resulting in inaccurate responses and a frustrating user experience. To address this issue and improve accessibility, I decided to create a voice assistant tailored to the linguistic needs of these users.
My project aims to make interacting with a voice assistant more comfortable, ensuring smoother and more effective communication. The assistant is designed specifically for people who mix languages when they speak, such as Telugu and English. It not only understands and processes code-switched speech but also adapts to the distinctive linguistic patterns of users from these states.
At its current state, the project is a working voice assistant that can process and respond to mixed-language input (English and Telugu). Because of the limited resources and data available, I have decided to build the current version on top of the OpenAI API.
I collected voice data from people with speech disabilities, but the dataset was not large enough. I still trained a model on it, but its performance was not up to the mark, so I switched to an API-based approach instead.
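As a rough illustration of that API-based approach, the sketch below shows how a mixed Telugu-English voice query could be transcribed and answered. It assumes the OpenAI Python SDK (openai>=1.0) with Whisper transcription followed by a chat completion; the model names, prompt, and file name are placeholders rather than the exact configuration used in the project.

```python
# Minimal sketch of the API-based pipeline (assumptions: openai>=1.0 is
# installed and OPENAI_API_KEY is set in the environment; model names are
# illustrative, not the project's final configuration).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_voice_query(audio_path: str) -> str:
    # 1. Transcribe the recording. Whisper handles code-switched speech
    #    without requiring a fixed language setting.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. Ask a chat model to respond, telling it to expect Telugu-English
    #    code-switching and to reply in the same mixed style.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a voice assistant for users who mix Telugu "
                    "and English. Reply in the same mix of languages the "
                    "user speaks."
                ),
            },
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content


# Example usage (hypothetical file name):
# print(answer_voice_query("query.wav"))
```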
I plan to integrate features that assist individuals with speech disabilities, such as stuttering, which will make the voice assistant accessible to a wider range of users and improve its inclusivity. I also intend to use a continuous feedback loop for a more personalized experience: by leveraging user data and preferences, the assistant will adapt to individual communication styles and needs, as sketched below.
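One way such a feedback loop could work is sketched below, under assumed design choices (a simple on-disk JSON preference store, a placeholder rating scale, and a hypothetical record_feedback helper; none of this is final). After each interaction, the user's rating and preferred reply language are stored and folded back into how the next response is generated and spoken.

```python
# Minimal sketch of a per-user feedback loop (assumptions: preferences are
# kept in a local JSON file; the fields and rating scale are placeholders).
import json
from pathlib import Path

PREFS_FILE = Path("user_prefs.json")


def load_prefs(user_id: str) -> dict:
    """Return stored preferences for a user, or sensible defaults."""
    prefs = json.loads(PREFS_FILE.read_text()) if PREFS_FILE.exists() else {}
    return prefs.get(user_id, {"reply_language": "mixed", "speech_rate": 1.0})


def record_feedback(user_id: str, rating: int, reply_language: str) -> None:
    """Store a rating and the reply language the user found most comfortable."""
    prefs = json.loads(PREFS_FILE.read_text()) if PREFS_FILE.exists() else {}
    user = prefs.setdefault(user_id, {})
    user["reply_language"] = reply_language
    # Slow the synthesized speech slightly after low ratings, e.g. for users
    # who prefer more time to process the response.
    user["speech_rate"] = 0.9 if rating <= 2 else 1.0
    PREFS_FILE.write_text(json.dumps(prefs, indent=2))


# Example: a low rating nudges the assistant toward slower, Telugu-leaning
# replies for this user on the next interaction.
# record_feedback("user-123", rating=2, reply_language="telugu")
# print(load_prefs("user-123"))
```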