For a while now, I had been considering a project that would help paralyzed individuals read e-books on a computer using just eye movements. Around that time, Xilinx announced the Adaptive Computing Challenge, and I knew it was the perfect opportunity to build it.
First Iteration

First, I wanted a working program in a Jupyter Notebook, so I built the first iteration using OpenCV, and it ran perfectly. I referenced Akshay Chandra's project on GitHub, which does exactly what I was planning to make, but with head movements instead of eye movements; I am linking his repository here. I later realized that the camera I had would not be able to detect eye movements, so I pivoted the project to be head-movement controlled.
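To give an idea of the approach, here is a minimal sketch of the head-movement version, not my exact notebook code. It assumes a webcam at index 0, uses OpenCV's bundled Haar cascade for face detection (Akshay Chandra's project uses dlib facial landmarks instead), and sends Page Up / Page Down key presses with pyautogui to whichever e-book reader has focus. The DEAD_ZONE and COOLDOWN values are illustrative.

```python
import time

import cv2
import pyautogui

# OpenCV ships this cascade file with the opencv-python package.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # assumes the webcam is device 0
DEAD_ZONE = 40              # pixels of movement to ignore (illustrative)
COOLDOWN = 1.0              # seconds between page turns, to avoid spamming keys
neutral_y = None            # vertical face position captured at startup
last_action = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        face_y = y + h // 2
        if neutral_y is None:
            neutral_y = face_y          # calibrate on the first detection
        elif time.time() - last_action > COOLDOWN:
            if face_y - neutral_y > DEAD_ZONE:
                pyautogui.press("pagedown")   # head moved down -> next page
                last_action = time.time()
            elif neutral_y - face_y > DEAD_ZONE:
                pyautogui.press("pageup")     # head moved up -> previous page
                last_action = time.time()
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("head-reader", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Calibrating the neutral position from the first detected frame keeps the control relative to wherever the reader happens to be sitting, rather than to a fixed point in the camera frame.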
I had used a Xilinx FPGA before, during our CADD classes in college, but I am still inexperienced with FPGAs in general, so I needed to get familiar with the board first. I set up the board and ran the Smart Camera application by following the tutorial provided by Xilinx. The tutorial was brief and easy to understand, and I was able to complete the whole setup in under two hours.
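For reference, this is roughly the command sequence from the Xilinx tutorial, assuming the Kria KV260 Vision AI Starter Kit running the stock PetaLinux image (the board provided for the Adaptive Computing Challenge); exact package names can differ between image releases.

```shell
sudo xmutil getpkgs                                  # list installable accelerated-app packages
sudo dnf install packagegroup-kv260-smartcam.noarch  # install the Smart Camera app
sudo xmutil unloadapp                                # unload the currently loaded accelerator
sudo xmutil loadapp kv260-smartcam                   # load the Smart Camera firmware
sudo smartcam --mipi -W 1920 -H 1080 --target dp     # run it: MIPI camera in, DisplayPort out
```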
This project is not complete :(

Unfortunately, I wasn't able to finish this project before the contest deadline, as I faced a lot of hurdles while learning Vitis AI. I will try my best to complete it at a later date.