The world is facing a great pandemic, the COVID-19 virus. Detecting this virus is a great challenge, and protecting people from it is an even bigger one. Consider a hospital: many doctors and other medical workers are getting infected with this disease while taking care of patients. To address this, I came up with the idea of using voice-controlled robots in hospitals to prevent the spread of the disease from patients to doctors. Using voice-controlled robots reduces contact between doctors and patients, so the spread of the disease can be controlled. The bot can also measure a person's temperature using a contactless IR temperature sensor, and if the temperature is found to be high, it alerts the doctor and sends the data to them. I want to thank Hackster.io and Intel for organizing this contest.
Problems:Now the world is facing a great issue, the COVID-19 virus. Doctors are struggling against it, yet the spread can't be controlled. Doctors are also being infected and losing their precious lives. We need a solution that protects the doctors and prevents the spread of the virus from one person to another.
Idea and "Iris":I've built a Raspberry Pi based robot named "IRIS" to solve this problem. Iris is a bot that combines a rover and a robotic arm, so the design is a rover base with an arm on top. A Balena Fin or Raspberry Pi, along with the Intel Neural Compute Stick 2, acts as the brain of the robot. The bot can be controlled remotely by voice and through mobile phones. I found that robots are rarely employed in hospitals, since controlling them manually is a big deal. To make it easier, I'm planning to control the bot by voice, so that both patients and doctors can use it like an assistant. Employing robots is necessary, since in this abnormal situation patients and doctors cannot always depend on a person for assistance. So, to prevent the spread and to protect lives, such robots will be helpful in the fight against COVID-19.
CONSTRUCTING THE BOT
Robo Arm:The picture above shows the unassembled parts of the robo arm with 4 DOF. You can use any robo arm kit or make your own customized build. The arm looks like this after assembly.
The picture above shows the unassembled parts of the base with 2 DOF. You can use any base chassis kit or make your own customized build. The base chassis looks like this after assembly. I'm using just two wheels, for simple operation.
Now it's time to connect the robo arm and the base. After connecting them, the bot will look like this.
I've used two motor drivers: one for the base motors and the other for the robo arm motors. The driver images are shown below.
Here I've used motor driver shield version 1. Any motor driver can be used, depending on your custom robot design.
Since I'm implementing my idea as a small prototype, I'm using miniature parts to build the bot. When manufacturing it for real-world use in hospitals, more efficient motors and designs can be used. And don't forget to connect the Arduino Uno to a USB port of the Raspberry Pi or Balena Fin.
Simple Casing for Pi:
I'm using a simple casing-like structure built from acrylic sheets to hold the Raspberry Pi in the bot, with some studs to fix it in place. The casing looks like this:
And after fixing it to the bot, the bot looks like this:
The circuit connections are made as shown in the schematics provided in this post. The Arduino Uno with the motor shield is placed under the yellow sheet that you can see in the image above.
MLX90614 IR Temperature Sensor:I'm using a contactless IR temperature sensor to measure the temperature of people inside the hospital and those entering it. The temperature sensor looks like this:
I'm going to fix this temperature sensor on a stick, so that the sensor sits at sufficient height to read a person's temperature. Since I'm just making a mini prototype, I'm using an extender to hold the IR proximity sensor and the MLX90614 sensor. The IR proximity sensor detects the presence of a person so that their temperature can be measured. The design can be customized when building a bigger robot. The final arrangement looks like this:
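As a sketch of how the MLX90614 readings could be handled: the sensor reports temperature as a 16-bit raw value in units of 0.02 K, which is converted to Celsius before deciding whether to alert the doctor. The register constant, helper names, and the 38 °C fever threshold below are my own illustration, not values from the project's scripts; the actual I2C read on the Pi (e.g. via smbus2) is hardware-specific and omitted here.

```python
MLX90614_TOBJ1 = 0x07  # object-temperature register, per the MLX90614 datasheet

def raw_to_celsius(raw):
    """Convert a raw MLX90614 reading (units of 0.02 K) to degrees Celsius."""
    return raw * 0.02 - 273.15

def fever_alert(temp_c, threshold_c=38.0):
    """Hypothetical alert rule: flag readings at or above the threshold."""
    return temp_c >= threshold_c

if __name__ == "__main__":
    raw = 15557  # example raw value; on the bot this would come over I2C
    temp = raw_to_celsius(raw)
    print(f"Object temperature: {temp:.2f} C, fever: {fever_alert(temp)}")
```

On the real bot, a high reading would trigger the alert-and-send-to-doctor path described above.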
Thunderboard Sense 2:The Thunderboard Sense 2 is a compact, feature-packed development platform. It provides the fastest path to develop and prototype IoT products such as battery-powered wireless sensor nodes. I'm using the board to read environmental characteristics like pressure, temperature and CO2 levels and publish them to the user whenever asked. More information about the board can be found here, and the board looks like this:
I've just changed the advertising interval of the board and processed all the data in the Python scripts. Soon I'll update this project with voice over BLE, which uses the Thunderboard Sense 2's built-in mic instead of the USB mic.
Balena Fin V1.0 or Raspberry Pi 4:When it comes to the brain of the robot, an embedded Linux platform is the best choice. I'm using a Balena Fin V1.0 in the bot, and I've also tried a Raspberry Pi 4. The boards look like this:
Instructions for setting up the hardware can be found here:
Intel Neural Compute Stick 2:The Intel Neural Compute Stick 2 is a plug-and-play USB device for advanced computation and AI at the edge. We'll be using the NCS2 for face mask and object detection, and also for the speech recognition of our robot; the code is under testing and will be uploaded to the repository below shortly!
Pi Camera Module:Next, we're going to take a look at the Pi camera module. The Pi camera is used here for monitoring the patient; the video is live-streamed to a smartphone. I'm using v1.3, but newer versions work just as well.
Connecting the camera to the Balena Fin is the same as connecting it to the Raspberry Pi. If you use the latest Raspbian OS, you'll need to copy the dt-blob.bin to the /boot folder of the OS; this isn't needed with the pre-configured OS from the Balena Fin GitHub repository. Check this out for a better idea about the Pi camera.
Ultrasonic Sensor:
Ultrasonic sensors are used for the bot's obstacle avoidance; here I'm using three of them. An RPLIDAR could be used for SLAM and obstacle avoidance when building a bigger, more advanced bot, but since this is a miniature bot, ultrasonic sensors are enough. The sensor looks like this:
I'm using three ultrasonic sensors: a front sensor, a left sensor and a right sensor. These sensors are responsible for the bot's obstacle avoidance.
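The three-sensor avoidance idea can be sketched as follows. The clearance threshold, command names, and steering rule here are hypothetical, for illustration only; the project's actual logic lives in its scripts. The echo-to-distance conversion is the standard HC-SR04 calculation (pulse width times the speed of sound, halved for the round trip).

```python
SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s at room temperature

def echo_to_cm(pulse_seconds):
    """Convert an ultrasonic echo pulse width to distance in cm."""
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2  # halved: out and back

SAFE_CM = 25  # hypothetical clearance threshold for this mini bot

def avoid(front_cm, left_cm, right_cm):
    """Pick a drive command from the three ultrasonic readings."""
    if front_cm > SAFE_CM:
        return "forward"      # path ahead is clear
    if left_cm > SAFE_CM and left_cm >= right_cm:
        return "turn_left"    # more room on the left
    if right_cm > SAFE_CM:
        return "turn_right"   # more room on the right
    return "reverse"          # boxed in on all three sides
```

On the bot, the distances would come from timing the echo pins via the Arduino or GPIO, and the returned command would be sent to the motor driver.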
Wemos Mega:The Wemos Mega will be used to integrate voice-controlled automation with the bot. The Wemos Mega is an Arduino Mega compatible board with an additional ESP8266 with 32 Mb of flash. It allows flexible configuration of the connections between the ATmega2560, the ESP8266 and USB serial. Arduino sketches can be uploaded to the ATmega2560 or the ESP8266 separately via USB, and the two can either work together as one system or independently. The configurations are set by the onboard DIP switches.
The thing we need to know before starting to work with this board is the DIP switch configuration. Refer to the picture below:
An additional switch configures which serial port (Serial0 or Serial3) the ESP8266 is connected to. It is possible to connect USB to RX0/TX0 of the ATmega2560 while the ESP8266 connects to RX3/TX3 of the ATmega2560 at the same time, as follows:
While uploading the ESP8266 code, DIP switches 5, 6 and 7 should be ON and the UART switch should be set to RXD0-TXD0.
While uploading the Mega 2560 code, DIP switches 1, 2, 3 and 4 should be ON and the UART switch should be set to RXD3-TXD3.
The eighth DIP switch is NC (not connected), so it need not be considered.
After successfully uploading both the codes, proceed ahead.
After completing the above procedures, we need to install the packages and Python libraries for our bot.
Final Hardware Setup:All connections should be made according to the schematics provided; don't forget to refer to the schematics section before wiring the bot! I'm using a DC adapter as the power source. A battery can be used instead when customizing the bot.
After assembling the entire hardware, the bot will look like this:
Now let's start working on the software setup.
Setting up the Software:Before starting to install the required packages and libraries, enter
sudo raspi-config
in the terminal and, under the interfacing options, enable VNC, I2C, Serial Console, Camera, etc.
And the Python libraries used here are:
- speechrecognition
- paho-mqtt
- snowboy
- pyaudio
- bluepy
- RPi.GPIO
- Flask, etc.
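For reference, a requirements.txt covering the libraries above might look like the sketch below. The package names follow PyPI conventions but this is illustrative, not the project's exact file; pin versions as needed, and note that snowboy is typically built from source rather than installed from PyPI.

```text
SpeechRecognition
paho-mqtt
PyAudio
bluepy
RPi.GPIO
Flask
```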
The requirements.txt file can be used to install all the packages required for the operation of the robot. Before that, don't forget to create a Python 3 virtual environment. It can be created with the following command:
python3 -m venv <your env> #replace <your env> with your desired name
After creating the venv (I named mine Iris), we're going to install the required packages using the requirements.txt file. Run the following command to install them:
pip3 install -r requirements.txt #Don't forget to use "-r"
The above step will install all the Python packages needed for the bot. After that, we have to install certain Linux packages so that the USB mic can communicate with the system properly. Execute the following commands in the terminal:
sudo apt-get install libportaudio2 portaudio19-dev
sudo apt-get install python3-pyaudio sox
sudo apt-get install libatlas-base-dev pulseaudio
sudo apt-get install libasound2-dev libportaudio-dev
sudo apt-get install libportaudio0 libportaudiocpp0
sudo apt install -y mosquitto mosquitto-clients
sudo systemctl enable mosquitto.service
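After enabling the Mosquitto service, it's handy to sanity-check that the broker is actually listening. This small stdlib-only helper is my own addition, assuming the default MQTT port 1883; it just tests the TCP connection, not the MQTT protocol itself.

```python
import socket

def broker_reachable(host="localhost", port=1883, timeout=2.0):
    """Return True if a TCP connection to the MQTT broker succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

if __name__ == "__main__":
    print("broker up:", broker_reachable())
```

If this prints False on the Pi, check the service with `systemctl status mosquitto` before running the bot's scripts.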
Now execute the following commands in the terminal and note down the "card number" and "device number" for the mic and the speaker respectively.
arecord -l
The above command shows the list of input devices; note down the card and device number of the USB mic. Then execute this:
aplay -l
This command shows the list of playback devices; normally "hw:0,0" is the HDMI output. For now it's OK to use the HDMI port, since a dedicated speaker will be added in a future development of the bot.
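If you'd rather grab the card and device numbers programmatically, the `arecord -l` / `aplay -l` output can be parsed with a small helper. The helper and the sample listing below are my own illustration of the output format, not part of the project's scripts.

```python
import re

def parse_alsa_devices(listing):
    """Extract (card, device) number pairs from `arecord -l` or `aplay -l` output."""
    return [(int(card), int(dev))
            for card, dev in re.findall(r"card (\d+):.*?device (\d+):", listing)]

# Illustrative capture listing for a USB mic on card 1
sample = """**** List of CAPTURE Hardware Devices ****
card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]"""

print(parse_alsa_devices(sample))  # [(1, 0)]
```

The pairs map directly to the `hw:<card number>,<device number>` strings used in the .asoundrc file below.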
After executing the above commands, we're going to create a file that lets the Pi or Fin communicate with the USB mic. Execute this command in the terminal:
nano ~/.asoundrc
And then copy and paste this into the file:
pcm.!default {
    type asym
    capture.pcm "mic"
    playback.pcm "speaker"
}

pcm.mic {
    type plug
    slave {
        pcm "hw:<card number>,<device number>"
    }
}

pcm.speaker {
    type plug
    slave {
        pcm "hw:<card number>,<device number>"
    }
}
Change the <card number>,<device number> entries according to the numbers found in the step above. After making the changes, press Ctrl + X, then Y, then Enter to save the file.
Setting up NCS2:
The instructions for setting up the NCS2 with the Raspberry Pi 4 can be found here:
Since my project is under construction, I'll be updating the entire setup process soon. The NCS2 will be used for the face detection and voice recognition of the Iris bot.
Installing OpenVino Toolkit:
The instructions for installing the OpenVINO Toolkit on the RPi4 can be found here:
Installing OpenVino Toolkit in Rpi4
Non-root permission for Bluepy:
Now we're going to set non-root permissions for bluepy-helper, so that our Python code can run without sudo. Enter the bluepy directory inside the Python virtual environment with the first command below, then run the second:
cd <your env>/lib/python3.7/site-packages/bluepy #replace "your env" with your env name
sudo setcap 'cap_net_raw,cap_net_admin+eip' bluepy-helper
Setting up the Camera module:
Connect the Raspberry Pi camera properly to the Balena Fin or Raspberry Pi board, and make sure the board is powered off when connecting the camera to the CSI port. Connecting the camera while the board is on can damage it.
Make sure the camera is enabled using the sudo raspi-config command.
Check whether the camera is detected by using the command:
vcgencmd get_camera
If it is detected successfully, execute this command in the terminal:
raspistill -o cam.jpg
If an image is generated successfully, great! We're all set!
DON'T FORGET TO CLONE THE GITHUB REPOSITORY BEFORE PROCEEDING WITH THE STEPS BELOW!
Running the Scripts:
Start by rebooting the board using the command:
sudo reboot
Then activate your virtual environment using
source <your venv>/bin/activate
Now, don't forget to power up the Thunderboard Sense 2. Then run the script:
python Iris_bot.py resources/Iris.pmdl
And in another tab, start the server by executing:
python Iris_server.py
After the script starts, go to your browser and open a URL in the format:
<PI or FIN's IP ADDRESS>:8000 #8000 is port
Here we go! You can see the Iris dashboard, and now we can see the bot in action! You can add the execution commands to the system's bash startup file to run the scripts automatically at the next boot.
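To give a sense of how the voice control can work, here is a hypothetical sketch of keyword-based intent matching: the recognized transcript is scanned for keywords that map to robot commands. The keywords, command names, and dispatch structure here are my own illustration; the real logic lives in Iris_bot.py in the project's repository and may differ.

```python
# Hypothetical keyword-to-command table; not the project's actual mapping.
INTENTS = {
    ("forward", "ahead"): "MOVE_FORWARD",
    ("back", "reverse"): "MOVE_BACK",
    ("left",): "TURN_LEFT",
    ("right",): "TURN_RIGHT",
    ("temperature",): "READ_TEMP",
    ("stop", "halt"): "STOP",
}

def match_intent(transcript):
    """Map a recognized speech transcript to a robot command."""
    words = transcript.lower().split()
    for keywords, command in INTENTS.items():
        if any(k in words for k in keywords):
            return command
    return "UNKNOWN"

if __name__ == "__main__":
    print(match_intent("Iris move forward please"))  # MOVE_FORWARD
```

In the bot, the matched command would then be forwarded over serial or MQTT to the motor controller or sensor routines.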
IRIS bot in action!
Conclusion:I hope you all found this write-up useful. I made this project keeping in mind all the doctors who are sacrificing their lives and happiness for us. Let's fight COVID-19 together and make a better world.
If you have any queries, feel free to contact me. And I want to thank Adafruit, Amazon Web Services, Arduino, Arm, Avnet, balena, DFRobot, Google, Intel, Microsoft, Nordic Semiconductor, NVIDIA, NXP, Seeed, Silicon Labs, SORACOM, The Things Network, Ubidots, UNDP and UNICEF for making "The COVID-19 Detect & Protect Challenge" happen.
Let's Stay Home Stay Safe! Thank You!