We live in a world where many diseases spread through human interaction, and tracking their origin is difficult. COVID-19 is one such virus that is wreaking havoc on mankind. It spreads rapidly, and the majority of carriers are asymptomatic, which further increases the risk of infection. Many businesses, and especially front-line workers, have suffered from a lack of readiness to fight the virus. So far, the only proven way to fight the virus is constant disinfection of surfaces and the environment. My robot, Charlie, would be an economical and efficient way to overcome the fear of infection. UV light is highly effective at killing bacteria and inactivating viruses. The robot carries 8 UV lamps to disinfect its surroundings and a 200-LED strip of UV lights on the bottom to kill pathogens on the floor.
The robot uses efficient and robust SLAM and planning methods to navigate autonomously through its surroundings while constantly avoiding UV exposure to humans in its vicinity. When not disinfecting, the robot can stand sentry and act as a surveillance platform, since it already carries all the hardware required to do so.
The robot's layout is as follows:
Hardware
Robot Base
The robot base consists of three major segments:
- Chassis
- Actuators and Wheels
- Electronics and Power Chamber
Chassis: The chassis needs to be durable and strong enough to protect the robot from the collisions that are likely during regular use, and it should also be easy to manufacture. For prototyping, it can be made of a strong thermoplastic such as Nylon 66 using an additive manufacturing technique called Selective Laser Sintering (SLS). For mass production, the same chassis can be injection molded from engineering plastic.
Actuators and Wheels: The robot's maximum moving speed only needs to be about 50 cm/s, so the base platform does not need high-speed motors. However, the robot has to carry a significant payload and must be extremely maneuverable to move through tight spaces. High-torque DC motors were therefore chosen to ensure reliability and simplicity without compromising performance. To maximize mobility and maneuverability, mecanum wheels, which allow motion in any direction in the plane, were chosen to drive the robot.
Electronics and Power Chamber: This chamber houses the main computer, motor drivers, wireless charging circuit, UV Lamp driver circuits, and other electronics:
- Main Computer: The main computer for this robot is a Raspberry Pi 4. The board uses the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz, with 8 GB of LPDDR4-3200 SDRAM, which makes it powerful enough to handle all the computation for autonomous navigation. Its onboard 2.4/5.0 GHz IEEE 802.11ac wireless, Bluetooth 5.0 BLE, two USB 3.0 ports, and two USB 2.0 ports make it versatile enough to connect to various peripherals. It also has GPIO pins through which the PIR sensors and the motor drivers can be interfaced.
- Power: The robot uses four 6S 16,000 mAh 60C LiPo batteries to power the actuators, UV lamps, and other electronics. The batteries provide 22.2-24.8 V across a charge cycle. A step-down circuit converts 24 VDC to 5 VDC to power the main computer, the logic unit of the motor driver, the sensors, the camera, and the LIDAR. The robot also houses a 24 VDC to 24 VAC inverter, whose output is then stepped up to 120 VAC to power the UV lamp ballasts. Lamp power runs through a relay circuit that can cut power to the UV lamps very quickly if necessary. The robot uses a wireless charging coil together with a homing routine to charge at a fixed location when not in use. A rough runtime estimate is sketched after this list.
- Motor Driver: The motor driver circuit controls the direction and speed of rotation of each of the 4 DC motors, realizing the wheel speeds computed by the mecanum velocity mixing sketched below.
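As a sanity check on the battery sizing, the sketch below estimates runtime from the pack specifications quoted above. Only the lamp wattage and the battery figures come from this document; the motor and electronics loads are illustrative assumptions.

```python
# Rough runtime estimate for the quoted battery pack. Load figures other
# than the lamps are assumptions for illustration, not measurements.

PACK_VOLTAGE_V = 22.2        # 6S LiPo nominal voltage
PACK_CAPACITY_AH = 16.0      # 16,000 mAh per pack
NUM_PACKS = 4

LAMP_LOAD_W = 8 * 36         # eight TUV 36W T8 lamps
DRIVE_LOAD_W = 100           # assumed average motor draw (illustrative)
ELECTRONICS_LOAD_W = 20      # assumed Pi + LIDAR + camera + sensors (illustrative)

energy_wh = PACK_VOLTAGE_V * PACK_CAPACITY_AH * NUM_PACKS   # ~1421 Wh
total_load_w = LAMP_LOAD_W + DRIVE_LOAD_W + ELECTRONICS_LOAD_W

runtime_h = energy_wh / total_load_w
print(f"Estimated runtime: {runtime_h:.1f} h")   # ~3.5 h under these assumptions
```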
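The standard mecanum inverse-kinematics relations that such a driver chain would realize are sketched below; the wheel radius and chassis geometry are placeholder values, not taken from this design.

```python
# Minimal sketch of mecanum-wheel inverse kinematics: map a desired body
# velocity (vx forward, vy left, wz yaw rate) to the four wheel angular
# velocities the motor driver must produce. Geometry values are placeholders.

WHEEL_RADIUS = 0.05   # m (assumed)
LX, LY = 0.15, 0.20   # half wheelbase / half track width in m (assumed)

def mecanum_wheel_speeds(vx, vy, wz):
    """Return (front_left, front_right, rear_left, rear_right) in rad/s."""
    k = LX + LY
    return (
        (vx - vy - k * wz) / WHEEL_RADIUS,  # front left
        (vx + vy + k * wz) / WHEEL_RADIUS,  # front right
        (vx + vy - k * wz) / WHEEL_RADIUS,  # rear left
        (vx - vy + k * wz) / WHEEL_RADIUS,  # rear right
    )

# Pure sideways motion at the stated top speed of 0.5 m/s:
print(mecanum_wheel_speeds(0.0, 0.5, 0.0))
```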
UV Illumination Unit
The UV illumination system consists of 2 main parts:
- UV-C Lamps: The robot uses a tower of 8 UV-C lamps for disinfection. The lamp of choice is the Philips TUV 36W T8 UV-C lamp. Its technical specifications are as follows:
UVC Dose Calculation:
As shown above, the lamp tower delivers a high enough UV dose to neutralize microorganisms within a 1 m distance in about 60 seconds of exposure.
As there are 8 bulbs emitting UV radiation and 5 of them face any one direction at a time, the total estimated UV irradiance is 610 µW/cm². The 8 lamps contribute to the UV radiation field around the tower as follows:
A mirror-like tube in the center ensures maximum distribution of the UV radiation.
The lamp emits 253.7 nm UV light, for which the TLV is 60 mJ/cm². Since the dose at a 3 m distance over 600 seconds is well under the TLV, the robot has ample time to shut down the UV lamps if a human is exposed. Please refer to the Safety section for more details.
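A minimal worked version of the dose arithmetic implied above: dose is irradiance multiplied by exposure time, and the 3 m figure assumes a rough inverse-square falloff of irradiance with distance (an illustrative assumption, not a measurement).

```python
# Worked dose check from the figures quoted above.
# Dose (mJ/cm^2) = irradiance (mW/cm^2) * exposure time (s).

IRRADIANCE_UW_CM2 = 610.0    # estimated irradiance near the tower, from the text
TLV_MJ_CM2 = 60.0            # threshold limit value quoted above

def dose_mj_cm2(irradiance_uw_cm2, seconds):
    return irradiance_uw_cm2 / 1000.0 * seconds

# Disinfection: 60 s at 1 m gives ~36.6 mJ/cm^2 on nearby surfaces.
print(dose_mj_cm2(IRRADIANCE_UW_CM2, 60))

# Safety margin: assuming inverse-square falloff, a bystander at 3 m
# receives ~610/9 uW/cm^2, so even 600 s of exposure gives ~40.7 mJ/cm^2,
# which stays under the quoted TLV of 60 mJ/cm^2.
print(dose_mj_cm2(IRRADIANCE_UW_CM2 / 9.0, 600))
```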
- UV-C light strips:
Along with the UV lamps, the robot has UV-C light strips attached to the chassis at a 45-degree angle facing the ground, so it can disinfect the floor at the same time.
Navigation and Guidance
The perception unit consists of 4 major parts:
- LIDAR
- Camera
- Pan-Tilt for the Lidar and Camera
- Passive Infrared Sensors
Camera:
The robot uses an Intel RealSense Depth Camera D435, which offers one of the widest fields of view of any depth camera of its mass and form factor. It also has a global shutter, which makes it well suited to highly dynamic environments.
LIDAR:
The robot uses the Hokuyo UST-10LX, a compact, lightweight LIDAR, for obstacle detection and localization. Although extremely efficient, the LIDAR is only two-dimensional and can therefore only produce a 2D point cloud.
Pan and Tilt: Robust localization algorithms require 3D point clouds, so the LIDAR must be actuated to capture a spatial view of its surroundings. A combination of two high-precision actuators is therefore used to create a pan-and-tilt mechanism. The tilt motion allows the LIDAR to capture multiple scan planes, while the pan motion lets the camera look around and detect obstacles. The depth camera feed and the 2D LIDAR feed can be seen side by side in the figure below.
However, a single 2D scan is insufficient to interpret the surroundings. The tilt motion is therefore used to capture 32 scan planes, which results in a highly interpretable map of the environment, as shown below.
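A minimal sketch of how those tilted 2D scans become a 3D cloud, assuming the LIDAR tilts about its y axis and ignoring mounting offsets. The sweep range is an assumption; the 270° field of view at 0.25° resolution (1081 beams) matches the UST-10LX datasheet.

```python
import math

# Each 2D scan plane is rotated about the tilt axis by the servo angle at
# capture time, and the rotated points are accumulated into one cloud.

def scan_to_points(ranges, angle_min, angle_increment, tilt_rad):
    """Convert one 2D scan taken at a known tilt angle into 3D points."""
    points = []
    for i, r in enumerate(ranges):
        bearing = angle_min + i * angle_increment
        # Point in the scanner's own (tilted) plane.
        x, y = r * math.cos(bearing), r * math.sin(bearing)
        # Rotate the plane about the y (tilt) axis into the robot frame.
        points.append((x * math.cos(tilt_rad),
                       y,
                       -x * math.sin(tilt_rad)))
    return points

# Sweeping 32 evenly spaced tilt angles accumulates 32 planes into one cloud.
cloud = []
for step in range(32):
    tilt = math.radians(-45 + step * 90 / 31)        # assumed +/-45 deg sweep
    ranges = [2.0] * 1081                            # placeholder scan data
    cloud.extend(scan_to_points(ranges, math.radians(-135),
                                math.radians(0.25), tilt))
```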
Autonomous Navigation
Autonomous navigation involves 2 steps: mapping and planning.
- Mapping: The point cloud created by the LIDAR is interpreted by an algorithm that performs Simultaneous Localization and Mapping (SLAM). Many such algorithms exist, but LOAM (Lidar Odometry and Mapping) has proven to be one of the most efficient real-time, low-drift, and computationally inexpensive LIDAR-based SLAM algorithms, and it does not require high-accuracy inertial measurement or ranging.
Source: https://ri.cmu.edu/pub_files/2014/7/Ji_LidarMapping_RSS2014_v8.pdf
Using LOAM, a map of the environment can be built ahead of time; a path then needs to be planned to move through that environment.
- Planning: Many planning algorithms are available, and even reinforcement learning is now used for planning, but search-based A* has proven to be an effective planner. It searches for an optimal path (least energy consumption and no possible collisions) in any given mapped environment, as sketched below.
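A minimal A* sketch on a 2D occupancy grid derived from the SLAM map, using 4-connected motion, unit step costs, and a Manhattan-distance heuristic. A real planner would also account for the robot's footprint and inflate costs near obstacles.

```python
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]              # (f-score, cell) priority queue
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                  # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[cur] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = cur
                    g[(nr, nc)] = tentative
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```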
Safety Features
The robot is equipped with a 3D depth camera and 4 PIR motion detection sensors. The depth camera runs a fast human detection program that detects human presence in the field of view within milliseconds. When the disinfection process is initialized, the robot scans the whole room for humans and only then begins operation. The camera keeps searching for humans throughout the process; when a human is detected in the room, disinfection is immediately terminated and the UV lights are turned off.
Source: B. Choi, Ç. Meriçli, J. Biswas and M. Veloso, "Fast human detection for indoor mobile robots using depth images," 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, 2013, pp. 1108-1113, doi: 10.1109/ICRA.2013.6630711.
Complementing the vision system, the motion detection system also stays on the lookout for humans. Each PIR motion detector has a range of 7 m; with 4 such sensors, the robot creates a circular detection region 14 m in diameter. Any human or warm-blooded animal in this region (cold-blooded animals such as lizards and insects emit too little IR to be detected) triggers an immediate shutdown of the UV lamps by cutting power to their driver circuits, as sketched below.
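A minimal sketch of that relay cutoff on the Raspberry Pi, using interrupt-driven GPIO so the shutdown does not wait on a polling loop. The pin assignments are placeholder assumptions, not taken from this design.

```python
import time
import RPi.GPIO as GPIO

PIR_PINS = (17, 27, 22, 23)   # one pin per PIR sensor (assumed wiring)
LAMP_RELAY_PIN = 24           # relay feeding the lamp driver circuits (assumed)

GPIO.setmode(GPIO.BCM)
for pin in PIR_PINS:
    GPIO.setup(pin, GPIO.IN)
GPIO.setup(LAMP_RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def kill_lamps(channel):
    # Cut relay power the moment any PIR reports motion.
    GPIO.output(LAMP_RELAY_PIN, GPIO.LOW)
    print(f"Motion on pin {channel}: UV lamps off")

# Register a rising-edge interrupt on every PIR output.
for pin in PIR_PINS:
    GPIO.add_event_detect(pin, GPIO.RISING, callback=kill_lamps)

GPIO.output(LAMP_RELAY_PIN, GPIO.HIGH)  # begin disinfection
try:
    while True:
        time.sleep(1)
finally:
    GPIO.cleanup()
```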
Along with the sensing systems, the robot also carries an audio-visual alarm that continuously emits strobe light and sound while disinfection is in progress, to alert people in its surroundings.