Inspired by this idea and its use in commodity applications like gaming, I built this proof-of-concept prototype into an early startup, and this past month pitched it to several incubators and VCs. It was recently selected by the State's Department of Industries to be showcased in front of state dignitaries, the innovation was covered by national news outlets, and it was also selected by AWS/Campus Fund for a seed grant to turn it into an MVP for commercial use (they even had a veteran evaluate it on the afternoon of 30 Nov). I also have the relevant pitch decks and reports in this Drive folder: https://drive.google.com/drive/folders/1tqgCHub_3V-1H43Wiq1N0b5FoTFKDYR0?usp=sharing
"What were the needs or pain points that you attended to and identified when you were solving problems faced by the Contest Masters?"
My own grandfather suffered from Parkinson's disease and couldn't clearly express what he wanted to say, and thus couldn't call someone in times of need. Most of the time, I couldn't address his needs simply because I was unaware of them. A recent WHO survey estimated that approximately 5.6 million people are paralyzed, representing about 1.9 percent of the population, roughly 1 in 50. And this does not include the millions of patients with neurological disorders, senior citizens with deteriorating motor skills, or the numerous cases of muscular deformity/dystrophy. Prominent assistive solutions are limited in their technology and/or highly priced, especially for markets outside the U.S. and in the developing world. The NeuroPlay project aimed to tackle several challenges associated with motor-restrictive disorders:
Limited Accessibility to Gaming: Patients with conditions like stroke, palsy, muscular dystrophy, or quadriplegia often face difficulty accessing and enjoying gaming due to physical limitations.
High Cost of Existing Solutions: Commercially available EEG electrode headsets are often expensive, ranging from 1,000 to 3,000 USD, posing a financial barrier for many individuals, especially in developing countries.
Invasive Nature of Current Technologies: Existing alternatives, such as fMRI-based or EMG-based systems, require specialized procedures or invasive surgery, making them far less accessible.
Noise and Susceptibility in Alternative Approaches: Solutions relying on eye movements (EOG), facial movements, or voice input may introduce noisy data, especially in environments common for gaming, impacting accuracy and reliability.
Lack of Open-Source and Portable Solutions: There is a need for open-source and portable alternatives that empower individuals with motor-restrictive disorders to control gaming inputs confidently and independently.
Hence, the problem is… they are unable to perform actions independently.
And commercially available solutions are either technologically primitive and/or too expensive.
So, I thought, “Why can’t we just give them superpowers?”
And that's where the idea of converting thoughts into actions came from. Recently it seems to be getting a lot of traction as well: I even got a call from a lady who read my article in the news and wanted to enquire about the commercial availability of this device to help her father, who was suffering from brain cancer and couldn't really act or communicate.
NeuroPlay is 1) a headset that collects real-time data without requiring any surgery, and 2) companion software that processes it in real time with AI.
A single end-to-end architecture takes EEG data and converts it into input, with "hacked-together" electrodes providing a low-cost alternative as a jumping-off point.
And all this for less than the cost of your current smartphone!
NeuroPlay uses a no-risk, portable, 3D-printed, low-cost and non-invasive headset, which performs real-time neuroimaging on the patient, collecting data with on-board microcontrollers, sensors and low-cost reusable electrodes. It uses a single end-to-end architecture to examine Electroencephalography (EEG) data, which is comparatively easier, cheaper and smaller to work with, and integrates a real-time deep-learning infrastructure that employs generative and predictive models to map the generated alpha-wave patterns to user inputs in games, namely WASD and/or other keys.

While the major component is EEG data, we need to introduce assistive components for it to provide true and confident output, because EEG has low spatial and temporal resolution and our electrodes are not medical grade. Alongside pre- and post-processing steps like the Fast Fourier Transform (FFT) and a Support Vector Machine (SVM), we also use components like Motor Imagery (MI) or Visual Imagery (VI) for a multiclass control system.

Rather than using muscle triggers in the hand or face, a motor-restricted individual can define binary inputs with the state of the mind. That was where I started: just 1s and 0s for states like relaxed and excited. This could work in most games, as a Twitch streamer recently demonstrated, although that setup can only perform one function (attack). I have been working on using it for words instead, and that story got much media attention: I was developing an on-screen pad (similar to an accessibility keyboard) that reads concentration on certain components and assigns an emotion to try to predict different inputs. That was for text; in terms of gaming, it can be interpreted as 3-4 channels.

The text pipeline works as follows. Three electrodes attached to the band capture the EEG waves, so we have three channels of data, which are forwarded to the hardware kit. To enter a zero, the user thinks about moving the left arm; for a one, the right hand; and both hands together for a '/' or a space. The cleaned digital signal is sent as input to an ensemble deep-learning model, which extracts the important features and classifies the signal as a series of three characters: 0, 1 or '/'. This is further passed on to a Morse-code-to-text converter, whose output goes to the input field.

The dataset consists of EEG wave readings and the corresponding intent of the subject. A CNN is trained on the data and the features of its second-to-last layer are extracted; an LSTM RNN is trained on the same data and its second-to-last-layer features are extracted as well. The two feature sets are combined and used to train an ANN, whose second-to-last-layer features are extracted in turn to reduce the dimensionality. The ANN features finally train an XGBoost model, which predicts the intent behind the waves. Since I was previously working on this theory in a research paper, it is built on substantial work, but it requires some more ironing out. The three sketches below illustrate, under stated assumptions, the band-power/SVM stage, the stacked ensemble, and the Morse decoding step.
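To make the band-power idea concrete, here is a minimal sketch of the FFT-plus-SVM stage. The sampling rate, window length, band edges and the synthetic stand-in data are illustrative assumptions, not the project's exact parameters:

```python
# Minimal sketch: FFT band-power features fed to an SVM (illustrative only).
import numpy as np
from numpy.fft import rfft, rfftfreq
from sklearn.svm import SVC

FS = 250          # assumed sampling rate in Hz
WINDOW = 2 * FS   # 2-second analysis window

BANDS = {         # conventional EEG frequency bands in Hz
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
}

def band_powers(window: np.ndarray) -> np.ndarray:
    """Mean spectral power per band for one single-channel window."""
    spectrum = np.abs(rfft(window)) ** 2
    freqs = rfftfreq(len(window), d=1.0 / FS)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].mean()
        for lo, hi in BANDS.values()
    ])

# Synthetic stand-in data: 200 windows labelled relaxed (0) / excited (1).
rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.normal(size=WINDOW)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = SVC(kernel="rbf").fit(X, y)  # binary "state of mind" classifier
print(clf.predict(band_powers(rng.normal(size=WINDOW)).reshape(1, -1)))
```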
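Next, a minimal sketch of the stacked ensemble: CNN and LSTM feature extractors, an ANN that fuses and reduces their features, and XGBoost for the final intent prediction. Layer sizes, input shapes and the random training data are placeholders, not the project's actual configuration:

```python
# Minimal sketch of the stacked ensemble (CNN + LSTM -> ANN -> XGBoost).
import numpy as np
from tensorflow.keras import layers, models, Model
from xgboost import XGBClassifier

N_SAMPLES, TIMESTEPS, CHANNELS, N_CLASSES = 512, 500, 3, 3  # classes: 0, 1, '/'
rng = np.random.default_rng(0)
X = rng.normal(size=(N_SAMPLES, TIMESTEPS, CHANNELS)).astype("float32")
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)

def train_and_embed(model: Model) -> np.ndarray:
    """Train on the intent labels, then return second-to-last-layer features."""
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, y, epochs=3, batch_size=64, verbose=0)
    return Model(model.input, model.layers[-2].output).predict(X, verbose=0)

cnn = models.Sequential([
    layers.Input((TIMESTEPS, CHANNELS)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),   # features tapped from this layer
    layers.Dense(N_CLASSES, activation="softmax"),
])
lstm = models.Sequential([
    layers.Input((TIMESTEPS, CHANNELS)),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),   # features tapped from this layer
    layers.Dense(N_CLASSES, activation="softmax"),
])

# Combine the two extracted feature sets.
fused = np.concatenate([train_and_embed(cnn), train_and_embed(lstm)], axis=1)

ann = models.Sequential([                  # fuses and reduces the features
    layers.Input((fused.shape[1],)),
    layers.Dense(32, activation="relu"),   # reduced features tapped here
    layers.Dense(N_CLASSES, activation="softmax"),
])
ann.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
ann.fit(fused, y, epochs=3, batch_size=64, verbose=0)
reduced = Model(ann.input, ann.layers[-2].output).predict(fused, verbose=0)

# XGBoost makes the final intent prediction from the reduced features.
xgb = XGBClassifier(n_estimators=50).fit(reduced, y)
print(xgb.predict(reduced[:5]))            # e.g. intents among {0, 1, 2}
```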
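Finally, a small sketch of the Morse-code-to-text stage. The mapping of 0 to dot and 1 to dash, with '/' separating letters and '//' marking a word space, is my assumed convention for illustration:

```python
# Minimal sketch: decode the classifier's 0/1/'/' stream into text.
# Assumed convention: 0 = dot, 1 = dash, '/' ends a letter, '//' ends a word.
MORSE = {
    "01": "A", "1000": "B", "1010": "C", "100": "D", "0": "E",
    "0010": "F", "110": "G", "0000": "H", "00": "I", "0111": "J",
    "101": "K", "0100": "L", "11": "M", "10": "N", "111": "O",
    "0110": "P", "1101": "Q", "010": "R", "000": "S", "1": "T",
    "001": "U", "0001": "V", "011": "W", "1001": "X", "1011": "Y",
    "1100": "Z",
}

def decode(stream: str) -> str:
    """Turn a stream like '0000/00//1/0' into readable text."""
    return " ".join(
        "".join(MORSE.get(letter, "?") for letter in word.split("/") if letter)
        for word in stream.split("//")
    )

print(decode("0000/00//1/0000/0/010/0"))  # -> "HI THERE"
```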
This approach is convoluted and originally meant for more complex tasks such as text or sentences. However, there are many different ways of going about the software side of in-game use, as laid out in the folders of the linked GitHub repository below, each with its own README for that direction of usage. This is because I want this product to be plug-and-play while being modular enough that users can adapt the integration to their own needs.
For example, a simple Arduino script in the repository reads brain-wave data from a low-level EEG headset and outputs a keystroke in response to certain brain-wave patterns. It uses the Arduino Brain library to get the theta, low-alpha, high-alpha, low-beta and high-beta brain-wave bands and calculate the user's emotion according to the two-dimensional valence-arousal model commonly used in modern neuropsychology. In addition to valence and arousal, it also calculates the user's engagement level, which helps detect whether the user actually wants to output a given keystroke at a given moment. After computing these values, the program determines which emotion the user is feeling; four emotions are recognized for now (Enthusiastic, Nervous, Calm and Disappointed).
Now, there is a template function included in the program for the "Geometry Dash" video game: when the user is "Enthusiastic", the "up arrow" on a simulated keyboard is "pressed", which causes the in-game character to jump.
This program is meant to act as a template that can be applied to various games. To change the controls on a game-by-game basis, you simply change what the simulated "keyboard" outputs after a given emotion is detected; a host-side sketch of this loop follows below.
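To make the template idea concrete, here is a hedged host-side sketch in Python (the actual script is an Arduino sketch). It assumes the headset's band powers arrive as comma-separated values over serial, in the order the Arduino Brain library emits them, and the valence/arousal/engagement formulas, thresholds, port and baud rate are illustrative placeholders rather than the project's exact ones:

```python
# Host-side sketch: read band powers over serial, classify an emotion,
# and press a key. Formulas and thresholds are illustrative placeholders.
import serial                      # pip install pyserial
from pynput.keyboard import Controller, Key

PORT = "/dev/ttyUSB0"              # assumed serial port
keyboard = Controller()

def classify(theta, low_alpha, high_alpha, low_beta, high_beta):
    """Map band powers to one of four emotions via valence/arousal."""
    alpha = low_alpha + high_alpha
    beta = low_beta + high_beta
    valence = alpha - theta        # placeholder: positive vs. negative affect
    arousal = beta - alpha         # placeholder: excited vs. calm
    engagement = beta / (alpha + theta + 1e-9)
    if engagement < 0.5:           # user is not trying to emit a keystroke
        return None
    if valence >= 0:
        return "Enthusiastic" if arousal >= 0 else "Calm"
    return "Nervous" if arousal >= 0 else "Disappointed"

with serial.Serial(PORT, 9600, timeout=2) as port:
    while True:
        fields = port.readline().decode(errors="ignore").strip().split(",")
        if len(fields) < 9:        # skip malformed packets
            continue
        # Assumed CSV layout: quality, attention, meditation, delta, theta,
        # low alpha, high alpha, low beta, high beta, ...
        try:
            theta, la, ha, lb, hb = (float(v) for v in fields[4:9])
        except ValueError:
            continue
        if classify(theta, la, ha, lb, hb) == "Enthusiastic":
            keyboard.press(Key.up)     # jump in "Geometry Dash"
            keyboard.release(Key.up)
```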
Conventional DIY EEG headsets like the MindFlex Duel are only single-channel, so the data collected/calculated is NOT accurate, and the formulas for the sentiments are based on multiple channels. Good EEG hardware is also hard to get: what is commercially available are caps (mostly for medical use) that use gold-cup electrodes. I made some hacked-together electrodes instead, basically bare metal surfaces plus wet (gel/ECG) electrodes as a cheap starting point, though I am currently working on DIY active electrodes for better noise filtering. The goal is to put more emphasis on the electrodes and bring down the cost of the headset itself. I only got to know about this Twitch streamer a few days ago, and I can see she is using a commercial headset; these are more readily available in the US, with players like Emotiv, OpenBCI and g.tec and toys like MindFlex (Hackster also has a video with this brand's headset), and she probably built her own code on top of it. From what I saw in the VOD, she was controlling movement through the keyboard and using only attack as a trigger from her brain-evoked activity.