The current pandemic has disrupted jobs and businesses and calls for urgent, low-cost solutions to protect people at workplaces by preventing the spread of SARS-CoV-2 (the causative agent of COVID-19). UV radiation warps the structure of the genetic material of pathogens and prevents the particles from making more copies of themselves. Ultraviolet germicidal irradiation (UVGI) combined with robotics is a practical solution for disinfecting surfaces that may harbour viruses or bacteria, and requires no human interaction during the sanitation of those high-risk contact areas.
Through the Micron UV Robot Design Contest platform, team Trishul has designed a cost-effective and reliable mobile robot that is capable of disinfecting exposed surfaces and objects using UV light. The design process involved a number of calculations and trade-offs to decide the functionalities of the robot in order to meet the requirements outlined by the contest organizers.
This report documents the preliminary design of UVtar (pronounced 'avatar'), a highly maneuverable, easy-to-handle robot that can operate in spaces of different sizes, such as offices, classrooms, nursing homes, and public washrooms. It is designed to be fully open-source and buildable from easily available resources. The robot is capable of delivering a kill dose of 25mJ/cm^2 in five minutes within a distance of 1.83 meters. It is a 32kg three-wheeled robot, ~1.5m in height, that uses three UV tubes, is powered by a rechargeable Lithium-ion battery, runs software developed on the Robot Operating System (ROS) platform, and uses an NVIDIA Jetson TX2 as the main computer for control. It is a semi-autonomous system operated through a supporting mobile application. It can autonomously map an area and navigate through waypoints on a preset path while avoiding obstacles on its way. The most important aspect of the robot is safety during operation, and an array of sensors helps it understand its environment. Firstly, to provide reliable safety, PIR motion sensors with a very high sensitivity are used to stop the UV tubes on any close-range motion. In addition, human detection is accomplished by four RGB-D cameras covering 360 degrees around the robot, and the UV tubes shut off if any motion is detected within a 1.5m range.
This document presents a full description of the different hardware and software that constitute the robot, as well as the deciding factors that led to the final selections. The time and distance at which the robot must shut off have been analyzed in detail and described in the report.
This document can also be used as a user manual to build and operate the robot.
1. INTRODUCTION

Ultraviolet germicidal irradiation (UVGI) is a disinfection method that uses short-wavelength ultraviolet (UV-C) light in the 200-300nm range to sanitize air, food, water, and surfaces. When absorbed by microorganisms, UV light destroys their nucleic acids and disrupts their DNA structure, leaving them unable to perform vital cellular functions. If the irradiation intensity is sufficiently high, UV disinfection is 99.99% reliable and environmentally friendly, as no chemicals are involved and no hazardous by-products are formed. Furthermore, microorganisms cannot develop resistance to UV radiation [1]. According to independent studies conducted by various research groups, UV-C radiation is highly effective at inactivating SARS-CoV-2 (coronavirus), achieving a log-6 reduction with a dosage of 22mJ/cm^2 at an exposure of 25 seconds [2].
The UV-C rays in the Sun's electromagnetic spectrum are absorbed by the ozone layer in our atmosphere long before they reach the surface of the Earth. It is therefore crucial to apply this property through artificial UV light technology to reduce the risk of infections caused by pathogens and to improve hygiene in areas such as workplaces and healthcare facilities. A number of UV disinfecting robots are on the market due to the spike in demand for such robots during this pandemic [3]. The challenge of the Micron UV Robot Contest is to design such a robot that is fully open-source and priced at no more than US$10,000, so that small businesses, clinics, schools, department stores, etc. can easily build or manufacture it [4]. The advantage of such a system is that it involves no chemicals and is environmentally friendly. However, UV rays are extremely dangerous to living cells, including human skin and eyes, and can lead to cancer. It is therefore necessary to take extreme precautions while handling such systems to avoid any kind of exposure [5].
This document outlines the concept development and design of a cost-effective and safe mobile robot developed by Team Trishul. The robot, named UVtar, is capable of UVGI and can be safely deployed to disinfect high-contact surfaces using UV-C light. It is intended to be a low-maintenance, simple-to-handle disinfecting robot with low prototyping, manufacturing, and operating costs.
UVtar is a three-wheeled robot with a 2WD/1WS (2-Wheel Drive/1-Wheel Steering) layout and a maximum operational speed of 5cm/s. The two wheels form a differential drive, with a freely rotating pilot caster wheel for support. The structure carries three UV tubes placed on a cylindrical base and is equipped with the necessary sensors, such as RGB-D cameras and ultrasonic sensors, to sense its environment. The structure and mechanical parts are made of aluminium and 3D-printed ABS. The design in its current state is capable of providing the following functions:
- Stable design to avoid tipping
- UV dose of 25mJ/cm^2 or more in 300 seconds within a distance of 1.83m
- Vertical disinfection range of up to 3m
- Power storage, regulation, and distribution.
- Mapping, proximity sensing and waypoint navigation
- Data processing and storage
- Communication with user/operator through a dedicated mobile application
- Teleoperation of the main locomotive functions
- Human detection and safety indications
- Emergency shut-off within Threshold Limit Value (TLV) when motion is sensed
The different systems and subsystems of the robot are described in detail in the following pages. Depictions of the mechanical system are included for clarity purposes. Schematics of electrical and electronic systems are also included as a reference along with the repository for the software codes. Figure 1.1 shows the isometric view of UVtar along with its main characteristics and scaled alongside a human adult.
The primary objective of this contest is to design an original open-source UVGI robot with the following requirements laid out by Micron [4]:
Functionality
- Should deliver a UV irradiation kill dose of 25mJ/cm^2 or more in five minutes within a distance of 1.83 meters
- Should detect object and motion in its environment
- Should safely shut-off within the Threshold Limit Value (TLV)
- The UV tubes should be protected from collision damage
- Should operate for a reasonable time before charging, easy charging process
- Should be either Remote-controlled or autonomous
- Should be small and maneuverable in small stall bathrooms (0.6m x 0.6m x 2.0m, WxLxH)
- Should be durable, relatively maintenance free with quality parts
Manufacturability
- Should be easy to manufacture with readily available components and software
- Should be cost-effective, with a Bill of Materials (BOM) under US$10,000
- The documentation should be easy to follow and aid in manufacturing
- Should be possible to manufacture in volumes rapidly
In addition, the robot can be designed to be innovative in doing the tasks and prove efficient in terms of functionality, manufacturability and costs.
2.2 UV Lamp Configurations

UVGI tubes differ in specifications and features based on the type of application. Air purifier systems are designed to treat air, gas, or vapor streams; UV water treatment systems are designed to treat water or liquid streams; and surface disinfection systems are designed to destroy microbes on the surfaces of parts or materials, and within transparent components. Lamp specifications cover spectral range, output power, and UV dose intensity, which are important to consider when selecting UV disinfection systems and UVGI sanitizers. Choices for spectral range include UVA, UVB, UVC, UVV, and full-spectrum. UVC, the germicidal or short-wave UV band with wavelength 200-280nm, produces germicidal irradiation that inactivates pathogens by destroying their DNA and RNA, rendering the microbes impotent [6]. The maximum absorption of UV light by the nucleic acid, DNA, occurs at a wavelength of 260nm. Therefore, a germicidal lamp emitting UV at 254nm operates very close to the optimal wavelength for maximum absorption by nucleic acids.
Various types of UV tubes are now available on the market such as Mercury Lamps, Quartz Lamps, Amalgam tubes and UV LEDs.
2.2.1 Optimum Dose Calculations

The aim of this project is to construct a model that can efficiently disinfect a 3m-high wall from a distance of 2m. The dose delivery over the target surface should be as uniform as possible in order to optimise the power consumption of the model. To achieve this, different types of UV sources were considered, ranging from UV-LEDs to low-pressure lamps. In addition, different possible orientations were evaluated to reach a fair compromise between dose delivery and practicality.
A. LEDs vs Low-Pressure Lamps:
UV-LEDs have a clear advantage over Low-Pressure (LP) tubes in terms of longevity, physical strength, and moldability, but unlike ordinary LEDs, commercially available UV-LEDs are not power efficient. Their claimed efficiency varies from 2% to 8% across popular manufacturers, whereas LP tubes have a claimed efficiency of up to 38%. This implies that even the most efficient UV-LED would require at least 4 times more power than a conventional LP tube to achieve a given intensity dose. Given that fairly durable and cost-efficient LP tubes are available on the market, they hold a considerable advantage over UV-LEDs in this trade-off.
Low-Pressure Amalgam tubes offer high power density and are designed for stable operation over a wide ambient temperature range and for long-term applications.
B. Intensity distribution around a tubular lamp:
Almost all commercially available UV tubes are tubular in shape, and it is crucial to understand their intensity distribution before proceeding with the dose-delivery analysis. A theoretical derivation and experimental verification is published as open source at [7]:
The variables used in the derivation are (also shown in figure 2.1):
- h - height of the concerned point from the base of the vertically placed tube
- d - perpendicular distance of the concerned point from the axis of the tube
- L - length of the tube
- SR - intensity quotient of the tube
The result is as follows:
In order to calculate the UV dose and verify the above result, one of the tubes from the list provided by Hackster.io [4] has been taken into consideration. The lamp data has been obtained from TEPRO e-commerce site [8]:
Table 2.1: Atlantic Ultraviolet’s Low Pressure Amalgam Lamp GPHA843T5L specification
C. Variation of Intensity over a Wall:
Considering a wall of height 3m at a distance of 2m from the lamp, the intensity distribution along the height of the wall for two tubes with 35 Watt of UV power each is shown in figure 2.2. As per the dimensions of the model, the lamp is placed vertically at an elevation of 0.5m from the ground.
To disinfect an area, a dose of 25 mJ/cm^2 is required; to deliver this dose in 5 minutes, a minimum irradiance of about 83 µW/cm^2 is required (25 mJ/cm^2 ÷ 300 s). From figure 2.2 it is observed that the irradiance at 3m height is only around 60 µW/cm^2, which is not sufficient. Therefore, in order to achieve the required irradiance, the angle between the two tubes was analyzed to verify whether an angular placement would allow a more uniform distribution of power along the height of the wall. This analysis is detailed in the next section.
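The arithmetic above can be cross-checked numerically. The sketch below uses a standard view-factor expression for a finite diffuse line source as a stand-in for equation (i), whose exact published form is at [7]; the ~0.84m arc length assumed for the GPHA843T5L and the 35 Watt UV output are taken as working figures, not quoted specifications. It reproduces the required irradiance of ~83 µW/cm^2 and a two-tube irradiance at 3m height in the same range as the ~60 µW/cm^2 read from figure 2.2:

```python
import math

def irradiance(P_uv, L, d, h):
    """Irradiance (W/m^2) at perpendicular distance d (m) from a vertical
    diffuse line source of arc length L (m) and total UV output P_uv (W).
    h is the height of the target point above the lamp base (m); h may be
    negative (below the base) or greater than L (above the tip)."""
    # Sum of point-source contributions dP = (P_uv / L) dz over the tube.
    return P_uv / (4 * math.pi * L * d) * (math.atan(h / d) - math.atan((h - L) / d))

# Scenario from the text: lamp base 0.5 m above ground, wall 2 m away,
# target point at 3 m height -> h = 3.0 - 0.5 = 2.5 m above the lamp base.
P, L_arc, d, h = 35.0, 0.84, 2.0, 2.5      # assumed lamp figures
per_tube = irradiance(P, L_arc, d, h)       # W/m^2
two_tubes_uW_cm2 = 2 * per_tube * 100       # 1 W/m^2 = 100 uW/cm^2

# Required irradiance for a 25 mJ/cm^2 dose delivered in 300 s:
required_uW_cm2 = 25e3 / 300                # ~83.3 uW/cm^2
```

With these assumptions the model gives roughly 65-70 µW/cm^2 for two tubes at 3m height, confirming that two tubes fall short of the requirement.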
D. Optimum angle of lamps
There are three possible ways to incline the tubes at some angle ‘θ’, as shown in figure 2.3 below. All three arrangements would give the same result if the horizontal distance of the tubes from the centre of the model is ignored. Since the third case (c) has the advantage of being closest to the centre of the model, it is used for further calculations.
Consider the lamp along the line BC in figure 2.4 below. Let ‘P’ be the point of concern, at an elevation ‘y’ above the ground and at a distance of 2m from the lamp. In the variables of equation (i), ‘d’ here is the length PF and ‘h’ is the length PH, taken with a negative sign since P lies below the base of the lamp.
Basic geometry then gives the irradiance at point P in terms of ‘y’ and ‘θ’.
Substituting the values of ‘d1’ and ‘h1’ from equations (v) and (vi) into equation (i) gives the irradiance from the first lamp as a function of ‘y’ and ‘θ’; substituting ‘d2’ and ‘h2’ likewise gives the irradiance from the second lamp. The total irradiance on the wall is the sum of these two results. Plotting irradiance against ‘y’ for different values of ‘θ’ then allows the optimum ‘θ’ to be found, as shown below in figure 2.5.
The graphs are plotted for θ = 45, 60, 75, and 90 degrees in figures 2.5 (a), (b), (c), and (d) respectively. It can be clearly observed that for θ = 90 degrees, both the peak irradiance and the irradiance at 3m height are the highest. It was therefore concluded that mounting the tubes vertically is the most efficient way to deliver the dose.
In order to achieve the required irradiance of ~83 µW/cm^2 at 3m height, using three tubes instead of two was the most practical option. The intensity variation at the wall for three tubes is shown in figure 2.6 below. The irradiance at 3m height in this case is around 90 µW/cm^2, which is an encouraging figure to proceed with.
E. Irradiance at some oblique angle
So far the calculations have covered only the portion of the wall directly perpendicular to the lamp, shown by line EF in figure 2.7(b) below. That is obviously not sufficient, as the requirement is to disinfect the whole surface and not just a narrow strip. Therefore, the irradiance around point ‘D’, at a distance of x units from point ‘E’, is calculated. Only points at 3m height are considered, because that is where the least irradiance falls; if the required dose is ascertained there, it is automatically ascertained for all points below.
Now according to equation (i), for irradiance at point ‘D’,
Also, since the rays fall obliquely, the cosine component of the irradiance obtained from the formula has to be taken into account, as can be seen in the top view in figure 2.7(a).
Calculating the irradiance for different values of ‘x’, the following data is obtained.
Table 2.2: Irradiance for different values of ‘x’
Here, ‘x’ is varied from 0.1m to 5m in steps of 0.1m. Beyond x = 5m, the irradiance becomes too small to be of practical use. Note that this data is consistent with the previously calculated irradiance at x = 0, which is ~90 µW/cm^2.
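The oblique-angle calculation can be reproduced with a short script. This is a sketch assuming the standard finite line-source model with the cosine correction described above; the 35 Watt UV output per tube and ~0.84m arc length are assumed working figures:

```python
import math

def irradiance(P_uv, L, d, h):
    # Finite line-source view-factor model (same assumptions as section B).
    return P_uv / (4 * math.pi * L * d) * (math.atan(h / d) - math.atan((h - L) / d))

def oblique_irradiance(x, wall_dist=2.0, P=35.0, L=0.84, h=2.5, n_tubes=3):
    """Irradiance (uW/cm^2) at 3 m wall height, offset x metres along the wall.
    The ray travels the slant distance d' = sqrt(wall_dist^2 + x^2) and strikes
    the wall obliquely, so the cosine factor wall_dist/d' is applied."""
    d_slant = math.hypot(wall_dist, x)
    cos_factor = wall_dist / d_slant
    return n_tubes * irradiance(P, L, d_slant, h) * cos_factor * 100

E_head_on = oblique_irradiance(0.0)   # comparable to the ~90 uW/cm^2 at x = 0
E_offset = oblique_irradiance(2.0)    # falls off with lateral offset
```

The irradiance decreases monotonically with x, matching the trend in Table 2.2.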
F. Variation of intensity with distance from the wall
All the calculations so far assume the lamp is 2m from the wall, but during operation this will not always be the case. The variation of irradiance along the height of the wall as the robot moves closer or farther away therefore needs to be studied. Using equation (i), the irradiance for different values of ‘d’ is obtained.
Table 2.3: Variation of intensity with distance from the wall
From the data it can be inferred that taking the lamp closer than 1m is not desirable, as the irradiance at 3m height starts to deteriorate. It should also be noted that while the peak irradiance is higher at shorter distances, the required exposure time is determined by the least-exposed area, i.e., the data in the third column.
2.2.2 Safety Considerations and Design Implications

As per the ACGIH (American Conference of Governmental Industrial Hygienists), the exposure dose limit for actinic UV radiation is 3 mJ/cm^2 over an 8-hour work day. The time in which this limit is reached at different distances from the tube has been calculated. Here the peak intensity, rather than the least intensity, is used, because human intervention can happen at any height and it is necessary to prepare for the worst case.
Table 2.4: Variation of intensity with distance from the wall and response time
From the above data it is evident that the UV tubes should not be operated within 1m of any entry point through which humans can enter, because there would not be enough response time to shut down the lamps. The model should therefore keep a distance of at least 1m from all surrounding walls, both for effective dose delivery and for safety.
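The response times behind Table 2.4 can be approximated with the same line-source model used earlier. This is a sketch under the assumed lamp figures (35W UV output, ~0.84m arc length); the report's exact table values may differ:

```python
import math

TLV_mJ_cm2 = 3.0  # ACGIH actinic UV exposure limit over an 8-hour day

def irradiance(P_uv, L, d, h):
    # Finite line-source view-factor model (assumed, see section 2.2.1).
    return P_uv / (4 * math.pi * L * d) * (math.atan(h / d) - math.atan((h - L) / d))

def time_to_tlv(d, P=35.0, L=0.84, n_tubes=3):
    """Seconds until the 3 mJ/cm^2 TLV dose is reached at perpendicular
    distance d, using the peak (mid-lamp, h = L/2) irradiance -- the worst
    case, since a person may be exposed at any height."""
    peak_uW_cm2 = n_tubes * irradiance(P, L, d, L / 2) * 100
    return TLV_mJ_cm2 * 1000 / peak_uW_cm2   # uJ/cm^2 divided by uW/cm^2 = s

t_1m = time_to_tlv(1.0)   # only a few seconds of shut-off margin at 1 m
t_2m = time_to_tlv(2.0)   # noticeably more margin at 2 m
```

The margin at 1m comes out to just a few seconds, which supports the 1m minimum standoff rule stated above.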
UVtar has been designed to operate safely in environments where human or animal intervention may occur. For this purpose, different safety sensors and indicators are implemented in the robot. The cameras and ultrasonic sensors can detect humans up to a 4m range, while the Passive Infrared (PIR) sensors detect any kind of motion within a 1.5m distance.
- If a human is sensed between 1.5m and 3m away, the robot alerts the operator by sounding an alarm buzzer
- If any kind of motion is sensed within 1.5m, the robot immediately shuts off the lights
The working of each of these safety features is described in later sections.
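The two thresholds above can be summarised as a small decision function. This is an illustrative sketch, not the onboard implementation; the actual sensor fusion is described in later sections:

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue disinfection"
    ALARM = "sound buzzer, alert operator"
    SHUTOFF = "shut off UV tubes immediately"

# Thresholds from the safety design (metres)
SHUTOFF_RANGE = 1.5
ALARM_RANGE = 3.0

def safety_action(min_detection_distance, pir_triggered=False):
    """Combine the closest camera/ultrasonic detection with the PIR motion
    flag. The PIR relay acts independently in hardware; it is shown here
    only to illustrate the layered logic."""
    if pir_triggered or min_detection_distance < SHUTOFF_RANGE:
        return Action.SHUTOFF
    if min_detection_distance < ALARM_RANGE:
        return Action.ALARM
    return Action.CONTINUE
```

For example, a detection at 2m triggers the buzzer only, while any PIR trip forces an immediate shut-off regardless of camera range data.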
2.3 Mechanical Design

This section describes the mechanical design and the locomotion mechanism of UVtar in detail.
2.3.1 Structure

The structural components have been designed so that UVtar can work in narrow and confined spaces. The robot measures 510mm x 510mm x 1465mm (LxWxH) and can easily pass through doors, which are usually 600mm wide and 2000mm high. A circular base was chosen to increase the mobility of the robot. The main structural components are the baseplate, cylindrical cover, support rods, and top plate.
The baseplate is designed to carry the load of all other components so that the center of gravity of the robot is lowered and overall stability is increased, as shown in figure 2.8. The baseplate is made of aluminium sheet metal for its high strength and light weight. A sheet-metal bracket is welded onto the baseplate to hold the electrical components, viz. the battery, inverter, charger, and electronics bay. The baseplate has holes for attaching the drivetrain elements, and cylindrical brackets for the support rods are also welded onto it. Four ultrasonic sensors are attached to the periphery of the baseplate, facing radially outwards. The wire-retractor mechanism used for charging is connected to the baseplate with brackets and spacers.
The cylindrical casing is made of ABS plastic and is manufactured by injection molding (refer figure 2.9). Four cameras and four PIR sensors are attached to the casing. The power switch for the robot, the emergency stop button, the battery level indicator, the alarm buzzer, and the antenna for Wi-Fi reception are also fitted on the cover at convenient positions. The casing has vent holes that allow passive cooling of the internal electrical components through convection. The casing is fitted onto the baseplate using M6 screws.
Three support rods are used in a triangular configuration to reduce deflection due to jerks from the motion of the robot (refer figure 2.10). The support rods have threaded ends that screw into the cylindrical brackets on the baseplate. The rods are hollow, with an external diameter of 10mm and an internal diameter of 6mm, and are made of structural carbon steel (SS400) for high strength. The hollow structure allows the passage of the wires that connect the components on the top plate, such as the LiDAR and strobe LED indicators, to the power supply unit on the baseplate.
The top plate houses components such as the LiDAR, the strobe LEDs, and also stabilises the UV tubes (refer figure 2.11). The top plate is made out of ABS resin and can be manufactured by 3D printing. The top plate is fixed onto the support rods using M6 screws.
The UV tubes are fitted so that they start 500mm above ground level. To protect the tubes from jerks during movement, they are mounted on a metal bracket (Al-5052) that connects to the baseplate on the lower side and is secured by the top plate on the upper side. For electrical connection of the UV tubes, there are holes on the lower bracket for attaching wires to the terminals; on the top side, wires pass through the hollow support rods and are fed in through holes in the top plate. The support rods also act as a cage, ensuring the tubes do not hit anything during an impact from the sides. In case of an impact from the top, the top plate takes the force and channels it through the support rods, keeping the tubes safe. For additional protection of the tubes from projectile collisions, a mesh or quartz shield was considered but rejected: firstly, such a projectile collision is highly improbable; secondly, a mesh would have compromised dose delivery, and quartz glass would itself be prone to damage. Moreover, if the tubes are damaged, they can easily be replaced.
The electronic components are placed on the baseplate using brackets. The battery, charger, and inverter are placed directly on the baseplate and secured by a bracket welded to the baseplate (refer figure 2.8). The electronics bay, which houses the power converter, PCBs, micro-controller, motor driver, and a 4-port USB hub, is placed on top of the inverter inside another bracket (refer figure 2.12). The same bracket houses the onboard processor, the NVIDIA Jetson TX2 developer kit.
The layout drawings of the robot and its key dimensions are displayed in figure 2.13
The drivetrain of UVtar comprises two motor-driven wheels and a passive, freely rotating caster wheel (refer figure 2.14). The motor-driven wheels have an outer diameter of 100mm and a shaft diameter of 6mm. They are made of cast aluminium with a rubber ring on the periphery. These wheels are mounted on a bearing holder with a hub diameter of 22mm, attached to the baseplate, which ensures that the motor shaft does not bear any radial or axial stress. The motors are mounted on motor mounts attached to the baseplate, and each motor shaft is attached directly to its wheel via a keyhole in the wheel. The caster wheel has a diameter of 50mm and an attachment height of 70mm. Overall, the drivetrain gives UVtar a ground clearance of 70mm.
The robot moves on a precise differential drive mechanism. Its direction can be changed by varying the relative rotation rate of its motor-driven wheels and hence does not require an additional steering motion. The caster wheel acts as the pilot wheel for support. This type of drive mechanism, combined with the cylindrical structure of the robot, enables it to perform 360 degree movements in constrained spaces with ease.
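The differential-drive steering described above reduces to simple kinematics: a body velocity command (forward speed v, yaw rate ω) maps directly to left and right wheel speeds. A minimal sketch; the 0.40m track width is an assumed figure for illustration, the real value follows from the baseplate drawings:

```python
def wheel_speeds(v, omega, track_width=0.40):
    """Map a body command (v m/s forward, omega rad/s yaw) to left/right
    wheel linear speeds (m/s) for a differential drive. Equal speeds give
    straight-line motion; opposite speeds spin the robot in place."""
    v_left = v - omega * track_width / 2
    v_right = v + omega * track_width / 2
    return v_left, v_right

# Straight line at the 5 cm/s operational speed:
left, right = wheel_speeds(0.05, 0.0)
# On-the-spot rotation (no steering wheel needed):
spin_left, spin_right = wheel_speeds(0.0, 0.5)
```

The zero-radius turn case is what lets the cylindrical robot manoeuvre in 0.6m-wide bathroom stalls.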
Torque Requirement Calculations for the Motor

UVtar is quite light, weighing just under 32 kg. To estimate the required torque, the following three factors need to be considered:
- Acceleration
- Rolling friction
- Slope of the operating surface
The radius of the wheel is 0.05m, as shown in figure 2.15, and the model is designed to handle any slope up to 15 degrees.
The maximum speed of UVtar is 5 cm/s. Considering that the top speed is achieved in 1 second, the required acceleration would be 0.05 m/s^2.
The rolling friction can be ignored because the coefficient of rolling friction for the concerned surfaces would be very low, i.e., in the range of 0.001 to 0.004.
Since there are two motors to counter the effect of the slope as well as to accelerate the 32 kg mass, the mass handled by each motor-wheel system can be comfortably taken as 16 kg. The equation of force can then be stated as:
For the given radius of the wheel, the torque required would be 42.4N x 0.05m = 2.12 Nm
The motor employed in UVtar can produce up to 2.5 Nm of torque, so it can easily handle such slopes without compromising acceleration.
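The torque estimate can be verified in a few lines. With g = 9.81 m/s^2 the result is slightly below the report's 42.4 N / 2.12 Nm, which appears to round g up to ~10 m/s^2; either way the figure sits under the motor's 2.5 Nm rating:

```python
import math

m_total = 32.0               # robot mass, kg
m_per_wheel = m_total / 2    # two driven wheels share the load
a = 0.05                     # m/s^2 (0 to 5 cm/s in 1 s)
slope = math.radians(15)     # worst-case operating slope
g = 9.81
r_wheel = 0.05               # wheel radius, m

# Rolling friction neglected (coefficient ~0.001-0.004 on smooth floors).
F = m_per_wheel * (a + g * math.sin(slope))   # ~41.4 N per wheel
tau = F * r_wheel                              # ~2.07 Nm per motor
```

The required torque stays comfortably inside the JGY370's 2.5 Nm rating even on the steepest design slope.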
2.4 Electronics and Sensors

This section lists all the electronics and sensors used onboard UVtar to sense, map, and navigate its environment while operating.
2.4.1 Onboard Processing

The robot is controlled by one Onboard Computer (OBC), an NVIDIA Jetson TX2 Development Kit (shown in figure 2.16), which is in charge of all high-level functionality. The Jetson TX2 supports Wi-Fi 802.11 and Bluetooth 5.0, and is appropriate for the heavy computational tasks often associated with visual data. It is fitted with two USB hubs to connect the various onboard components. The robot functionality handled by the OBC includes:
- Communication with mobile application to retrieve commands from operator
- Communication with sensors to receive and store telemetry data
- Mapping, Human-Detection and Path Planning
- Control of the UV lights based on sensor data
- Communication with the Arduino microcontroller
- Safety indicators
The OBC should run the Ubuntu operating system (preferably 18.04 LTS) and have a compatible ROS version (Melodic Morenia) installed. The wiki page for the Jetson TX2 Development Kit can be found at https://elinux.org/Jetson_TX2
Additionally, the OBC is connected to an Arduino UNO R3 microcontroller (refer figure 2.17) via a serial bus for high-level to low-level communication with the ADE for the main locomotive tasks (i.e., driving the motors). This decision was made to minimize the number of tasks handled by the OBC while utilising the General-Purpose Input/Output (GPIO) pins and Pulse Width Modulation (PWM) capability of the controller board to sense and control wheel speed and direction, an essential requirement for a differential drive system.
The ADE comprises all the modules in charge of the control and actuation of the robot mechanisms (i.e., driving the motors). The UVtar wheels are independently actuated by turbo worm geared DC motors (model number JGY370), rated at 2.5Nm and 10rpm with an auto-lock. The DC motors are interfaced to the Arduino through an L298N motor driver, a high-current, high-voltage dual H-bridge driver, as shown in figure 2.18. It can precisely control both the speed and the spinning direction of two 12V geared DC motors. The robot's maximum operational speed is 5cm/s. A simple Arduino sketch for controlling two DC motors with the L298N motor driver is available at http://qqtrading.com.my/stepper-motor-driver-module-L298N
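The command interface between the OBC and the Arduino is defined by the sketch linked above. As an illustration of how the OBC side might frame wheel commands over the serial bus, here is a minimal sketch using a hypothetical "L…R…" text protocol; the framing, port name, and baud rate are assumptions, not the actual interface:

```python
def drive_command(left_pwm, right_pwm):
    """Encode a wheel command for the Arduino over the serial link.
    The 'L<signed 3-digit>R<signed 3-digit>' framing is hypothetical,
    for illustration only. PWM duty is clamped to the signed 8-bit range
    used by the Arduino's analogWrite()."""
    clamp = lambda x: max(-255, min(255, int(x)))
    return f"L{clamp(left_pwm):+04d}R{clamp(right_pwm):+04d}\n"

# On the OBC this string would be written to the serial port, e.g. with
# pyserial (assumed): serial.Serial('/dev/ttyACM0', 115200).write(cmd.encode())
cmd = drive_command(120, -120)   # opposite signs: spin in place
```

A fixed-width framing like this keeps the Arduino-side parser trivial, which matters on an 8-bit microcontroller.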
UVtar is equipped with an array of sensors to identify features in its environment and localize itself while operating safely.
LiDAR
A 2D LiDAR (Light Detection and Ranging) sensor is mounted on top of the robot at a height of about 1.4m to scan and create a digital map of the area. The Hokuyo R360-UST-20LX scanning laser, shown in figure 2.19, was chosen for its light weight, high accuracy, and high speed. It obtains measurement data over a 270° field of view up to a distance of 20 meters and supports Simultaneous Localization And Mapping (SLAM), enabling autonomous navigation. After a navigation route has been preset, the robot can navigate to and sanitize the defined spots. The LiDAR is also used to detect cliffs, stairs, or steep drops.
Camera
RGB-D cameras provide a 360° view of the robot's surroundings. Four Intel® RealSense™ Depth Cameras D435, shown in figure 2.20, are placed at equal intervals around the body of the robot at a height of 445mm from the ground. The camera mounts are adjustable to vary the field of view (set to 20 degrees for the simulations shown in this project). The primary task of the cameras is to detect humans within a 3m range around the robot. The cameras are also used by the operator to view the area in real time and determine the direction of movement of the robot. The D435 provides a low-cost solution that is lightweight, has a wide FOV, and works well in low-light conditions, with an option to upgrade the system to visual SLAM, enabling sensing and detection solutions that understand and interact with the environment in a superior way.
Ultrasonic Range Sensor
Ultrasonic sensors are used as proximity sensors to measure the distance of humans and obstacles around the robot. Four MaxBotix HRLV-MaxSonar-EZ0 ultrasonic sensors, shown in figure 2.21, are placed at equal intervals around the body of the robot at a height of 90mm from the ground. Each has a maximum range of 6m and a resolution of 25mm. The distance data is fused with the camera detection system and used to control the UV lights as well as for collision avoidance in path planning. The low placement of these sensors helps identify bumps and raised platforms.
Motion Sensor
Pyroelectric "Passive" InfraRed (PIR) sensors are used as motion sensors to detect close-range movement around the robot. Four low-power, low-cost HC-SR501 PIR motion sensors, shown in figure 2.22, are placed at equal intervals around the body of the robot at a height of 260mm from the ground. These sensors are highly sensitive and can detect changes in thermal infrared radiation (wavelengths ranging from 7 μm to 14 μm*) within their field of view. The PIR sensors are connected in parallel and implemented solely as the first layer of the emergency safety module: they trip the relay switch and temporarily break the circuit to immediately switch off the UV lights if any kind of motion is detected within a 1.5m range. The Fresnel lens on each PIR increases its field of view, and the potentiometers set the sensitivity and the time for which the lights stay off after the agent has moved away.
*Note: The PIR sensors will not be affected by the IR beams projected from the RGB-D camera which has a wavelength of 850±10nm.
Therefore, the multiple sensors on UVtar make its design a failsafe system, enabling it to operate in varying lighting conditions and in spaces of different sizes.
2.4.4 Safety Indicators

A number of safety indicators and operations are installed on UVtar to prevent damage and accidents due to the UV lights.
- Main power switch, operated manually to start and stop the robot
- Remote safety button in the mobile app interface to stop the motion of the robot in emergency situations
- UV light switching through the mobile app interface
- Bright red LED indicator on the robot when the UV lights are working
- Buzzer that acts as a sound alarm when a human is detected within 3m of the robot
- Automatic switch-off of the UV lights when motion is detected within 1.5m
- Warning notification in the mobile app interface if the robot is close to an animal or human, or strays from the preset path
- Battery level LED indicator in the mobile app interface and on the robot body
- Notification sent in the mobile app when the task is done
A schematic of the electronics is presented in figure 2.22.
The power system of UVtar has been designed to minimize weight and cost while powering the robot for the required operational time. It guarantees that the peak power demands of all the subsystems are met and that the individual components are protected from potential power surges. It comprises the Power Supply Unit (PSU) and the Power Distribution Unit (PDU), which work together to convert input power into the required output power and redistribute it to the different components in the system. To make sure the power output is always manually controllable, an emergency off switch and a toggle switch are incorporated into the design. A series of 5A fuses are placed along the power lines to protect the different subsystems against overcurrent surges.
The general architecture of the power system is depicted in figure 2.24.
To meet our power requirements, a 10.5kg LiFePO4 rechargeable battery is chosen as the primary power supply. LiFePO4 was chosen over lead-acid due to its advantages in specific energy, weight, volume, and life span. The battery is charged via a charger and discharged through an inverter to supply power to the different components of the robot.
The specifications of the power supply components, shown in figure 2.25, are:
Battery: LiFePO4 Battery 12V/100Ah, by ExpertPower Direct
Charger: 45Amp Li-Ion by Progressive Dynamics
Inverter: 400W Car Power Inverter DC 12V to AC 110V Car Adapter by BESTEK
The battery, charger, and inverter are mounted symmetrically on top of the chassis panel to stabilize the robot. A retractable power cable enables the robot to be plugged into any standard AC power socket for charging. The battery can be fully charged by the charger in about 2.5 hours (150 minutes). The charger provides reverse-battery protection, i.e., it prevents charger damage if the battery connections are accidentally reversed. The inverter has overheating protection and 5A USB slots that can be used for additional sensors if required. The switch on the inverter is used to power up the system.
The robot can operate for 3 hours continuously. When the last 10% of battery charge remains, it gives an early low-power warning and the inverter cuts power to all subsystems except the motors so that the robot can move towards the charging point.
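The load-shedding behavior described above reduces to a simple threshold rule. A sketch in Python follows; the 10% threshold comes from the text, but the function and subsystem names are illustrative, since the real cutoff is performed in hardware by the inverter:

```python
# Illustrative model of the low-battery policy: below 10% charge, every
# load except the drive motors is shed so the robot can still reach its
# charging point, and a low-power warning is raised to the operator.

LOW_BATTERY_THRESHOLD = 0.10   # last 10% of charge, per the power budget

def powered_subsystems(charge_fraction: float) -> dict:
    """Which loads stay powered at a given state of charge (0.0-1.0)."""
    low = charge_fraction <= LOW_BATTERY_THRESHOLD
    return {
        "motors": True,               # always alive for return-to-base
        "uv_tubes": not low,
        "compute_and_sensors": not low,
        "low_power_warning": low,     # early warning to the operator
    }

print(powered_subsystems(0.55))  # normal operation: all loads powered
print(powered_subsystems(0.08))  # only motors + warning active
```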
The Power Distribution Unit (PDU) consists of three power lines that supply all robot subsystems at their required voltages: 5V, 12V, 18V and 110V. The UV tubes are connected in parallel and powered directly by the inverter at 110V, while the 5V, 12V and 18V conversions are accomplished by an AC-DC step-down converter-cum-distribution module, shown below in figure 2.26, that reroutes the power outputs from the PSU to the different components of the system.
All the software developed for UVtar is built from open-source resources. Mapping, human detection and navigation are based on ROS (Melodic), while the mobile application has been developed on the MIT App Inventor platform.
Level of Autonomy and Future Scope
The design keeps users and operators in the loop, as they are the ones who know which areas need the most disinfection. The path planning itself is fully autonomous: the planner estimates waypoints 1.8m apart, suggests them to the operator on the map shown on the mobile app screen, and the operator confirms the waypoints. The robot therefore follows a pre-programmed track, with the possibility for a human expert to take over as and when required.
The main reason for not building a fully autonomous system in the first place is that it would be less flexible in selecting the important spots to disinfect. An autonomous algorithm will usually choose the shortest path between waypoints and cannot adapt to time and circumstances. For example, during COVID-19, if a hospital needs to disinfect a space faster (say in ~15 minutes) than the 30 minutes a standard autonomous plan would take, a pre-made autonomous system would fail to accommodate the operator's decision. As a solution, we built an autonomous system with a human in the loop, where the waypoints calculated autonomously by the algorithm are presented to the operator for selection through the mobile interface. Furthermore, UVtar will collect and store important data such as the building type (e.g., hospital) and the situation type (emergency cleaning, regular cleaning, etc.), and build an AI-based model that learns the expert operator's priority waypoint selections in each situation.
UVtar thus first implements human-in-the-loop autonomy; after the necessary data collection, a fully autonomous disinfection path with priority waypoint selection can be realized, with the capability to learn different types of environments using AI-based waypoint selectors.
Basic Overview
ROS (Robot Operating System) is utilized as a common framework for communication between most onboard devices.
Gazebo is used for simulation and proof of concept.
The GitHub link is: https://github.com/sandy1618/MicronUVGIRobot
Docker Image: docker pull UVtar/trishul:UVtar_client
The software architecture is shown in figure 2.27 and the message communication flow between the various nodes are shown in figure 2.28.
UVtar uses Docker to package the entire software stack for cross-environment functionality. The current version uses Ubuntu 18.04 and ROS Melodic Morenia as its base environment. Hooks for other Ubuntu versions and ROS distros, with NVIDIA support, are available inside the source code so that future version updates can be added. Docker greatly eases deployment and helps achieve scalability of the system, thereby reducing costs.
The sections below give a detailed explanation.
2.6.1 Mapping and Recognition
Simulation Setup Overview
The purpose of the Gazebo-based simulation is to build the software architecture in a plug-and-play fashion: when the real prototype is implemented, integration amounts to renaming the communication topics between nodes. The simulated robot can then be swapped for the real one with everything else unchanged, retaining the advantages of a simulated environment for virtual prototyping and testing.
The open-source Turtlebot3 package has been used to simulate UVtar. Turtlebot3 simulation files have been modified with additional inclusion of 4 ultrasonic sensors, 4 RGB-D Realsense cameras and Human actor models for proof of concept.
The Intel Realsense original drivers and model properties have been used for ease of making the real prototype and software integrations.
The simulated model of Turtlebot3 robot with all the sensors is shown in figure 2.29.
Two types of environments have been implemented and tested for the simulations as shown in figure 2.30 and 2.31:
1. Simple space with an animated human actor.
2. Office space with different human models.
Mapping
Gmapping, a popular Simultaneous Localization and Mapping (SLAM) based package has been used for building occupancy grid type environment maps like the one shown in figure 2.32.
Logging
The time-stamped rosbag feature of ROS is used for logging. The present implementation subscribes to the map topic created by the gmapping package. The logging feature can record any type of data communicated between the robot's various sensors.
Safety
A layered architecture is proposed to ensure safety:
Layer 1
In the first layer, PIR sensors detect any movement within approximately 1.5 meters. A PIR detection triggers the relay circuit in the power line from the inverter to the UV tubes; it breaks the circuit and switches off the 3 UV tubes and the LED indicators. UVtar has 4 strategically placed PIR sensors to identify motion in various circumstances; the sudden appearance of pets such as cats or dogs has also been taken into account.
The code for this tripping circuit is written in the Arduino IDE in AVR-C (C with modifications/extensions for AVR boards) and flashed to an ATtiny85 microcontroller. The code is available in the GitHub repository https://github.com/sandy1618/MicronUVGIRobot/tree/master/PIR_Relay_code.
The schematics for the connection of PIR sensors, LED indicators, UV tubes and inverter are shown below in figure 2.32.
Layer 2
The second layer uses a combination of camera and ultrasonic sensor data. The current implementation uses the Intel RealSense RGB-D cameras and the ultrasonic sensors to confirm the distance and detection of human beings.
The current implementation uses OpenCV's CPU HOG person detector to detect a person. Layer 2 stops the UV tubes when a human is detected and the ultrasonic range is less than 1.5m from the robot. It also acts as an early warning system, alerting both the operator and the approaching person with a buzzer alarm when a human is between 1.5m and 3m away, as shown in figure 2.34. Additionally, any human detected beyond 3m is reported for early tracking and safety action by the operator.
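The layer-2 decision logic, combining the person detector's output with the ultrasonic range, reduces to a three-band rule. A minimal sketch follows; the band thresholds come from the text, while the function and action names are illustrative (the actual detection uses OpenCV's HOG person detector):

```python
# Illustrative layer-2 gating: map a person detection plus the measured
# ultrasonic range to one of the safety actions described in the text.

UV_OFF_RANGE_M = 1.5    # hard cutoff: UV tubes stop below this range
BUZZER_RANGE_M = 3.0    # early-warning band for the buzzer alarm

def layer2_action(human_detected: bool, range_m: float) -> str:
    """Return the safety action for one detection/range measurement."""
    if not human_detected:
        return "uv_on"             # no person in view: keep disinfecting
    if range_m < UV_OFF_RANGE_M:
        return "uv_off"            # person too close: kill the lamps
    if range_m <= BUZZER_RANGE_M:
        return "buzzer_alarm"      # 1.5-3 m: warn person and operator
    return "notify_operator"       # beyond 3 m: report for early tracking

print(layer2_action(False, 0.8))   # uv_on
print(layer2_action(True, 1.2))    # uv_off
print(layer2_action(True, 2.0))    # buzzer_alarm
print(layer2_action(True, 4.5))    # notify_operator
```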
To further increase the reliability of camera-based human detection, UVtar implements face detection for upper-body identification as shown in figure 2.36. A limitation of this implementation is that it also detects faces in pictures of humans on walls. Also, since the camera height is below that of an adult, backlighting can cast the face in shadow and make facial features less visible, affecting detection. A future improvement is to identify different human body parts, which would make human detection more reliable. A brief introduction to such open-source packages is given below.
Detection can be further improved in the future for better safety and more knowledge extraction, for instance by using the depth camera for semantic mapping and human position estimation with the following open-source packages.
Furthermore, as part of the future roadmap of UVtar, the OpenPTrack library along with the OpenPose library can be used to detect different human body parts such as eyes, hands, legs and faces. In addition, OpenPTrack provides multi-person tracking with distance estimates by making use of depth data from the RealSense RGB-D cameras.
OpenPtrack: https://github.com/OpenPTrack
OpenPose: https://github.com/CMU-Perceptual-Computing-Lab/openpose
With additional training data, these two libraries can be further extended to identify doors, handles and other high-touch surfaces in order to improve the efficiency and effectiveness of this sanitation solution.
Distributed Cloud Computing Architecture
Another approach is to delegate the computation load from on-board (edge) computing to a centralized cloud server, which allows cheaper on-board computing.
The advantages of enabling cloud computing are:
- Maintenance cost due to computing failure is reduced.
- More data in centralized space for more accurate detection & knowledge extraction.
- Enable future concepts like swarm robotics collaboration.
- Cheaper per-unit cost after selling sufficient UVtar units; see the cost reduction model in section 2.7.2 for more details.
It is critical to address coverage path planning (CPP), which enables the robot to efficiently disinfect the maximum area in a given room. As per the requirements, the robot should be capable of maneuvering in standard bathrooms with dimensions of 0.6m x 0.6m x 2m.
In order to disinfect all accessible areas around the robot with a UV irradiation kill dose reaching 25mJ/cm^2 in five minutes from a distance of 1.83m, rooms of different sizes were analyzed to implement the motion planning scenarios.
Optimum distance to achieve the required dose on the wall.
As shown in table 2.3, the ideal distance from a wall is 1.5m, which approaches the threshold at 3m height as well as the maximum peak dose. Therefore, with the exposure time fixed at 5 minutes, the robot can operate anywhere between 1m and 2m from the wall and deliver the required irradiance of over 80µW/cm^2, corresponding to the 25mJ/cm^2 kill dose.
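The relationship underlying these numbers is simply dose = irradiance × time. A quick unit-conversion sanity check of the 25mJ/cm^2-in-five-minutes requirement (the distances and irradiance values themselves come from table 2.3):

```python
# UV dose = irradiance x exposure time. Sanity check: an irradiance of
# ~83 uW/cm^2 held for 5 minutes accumulates the 25 mJ/cm^2 kill dose,
# consistent with the report's "over 80 uW/cm^2" figure.

def dose_mj_per_cm2(irradiance_uw_cm2: float, minutes: float) -> float:
    """Accumulated UV dose in mJ/cm^2 for a constant irradiance."""
    return irradiance_uw_cm2 * 1e-3 * minutes * 60.0  # uW->mW, min->s

# Irradiance needed to reach 25 mJ/cm^2 in 5 minutes:
required_uw_cm2 = 25.0 / (5 * 60) * 1e3
print(round(required_uw_cm2, 1))                 # 83.3 uW/cm^2
print(round(dose_mj_per_cm2(83.3, 5.0), 2))      # 24.99 ~= 25 mJ/cm^2
```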
Case 1: The most ideal case is a cylindrical room of radius 2m as shown in figure 2.38.
In this case, the robot should be kept at the center for 5 minutes to disinfect all the walls. Similarly, for a square room of 4mx4m dimensions, one robot at the center, equidistant from each wall, will disinfect the whole room in 8 minutes as per the data shown in table 2.2. The increase in time is due to the farthest points, the corners between the walls.
Case 2:
If the room is bigger, strategic points have to be defined where the robot should be placed and operated. The simplest way to decide these points is to place the robot at the center and corners of the room, as well as at the center of any region that is obstructed on 3 sides, as shown in figure 2.39.
However, if the room is asymmetric and has multiple obstacles such as in a meeting room, washroom, or a classroom, as shown in figure 2.40, the points have to be determined by the robot after mapping the room. Motion Planning is essential in such scenarios in order to disinfect the entire room while avoiding obstacles on its path.
The navigation is done using the ROS navigation stack and the motors are controlled by Arduino. The communication between Jetson TX2 and Arduino UNO is via a rosserial package that utilizes Arduino’s serial UART. The ros_lib Arduino library enables the Arduino board to communicate with ROS. The Jetson TX2 and Arduino interaction pipeline is shown in figure 2.41.
The planner sets a defined path by generating a number of waypoints based on the environment map obtained from the Gmapping package. The stops are every 1.8m, and about 1.5m away from the walls, to ensure no part of the room is left out. The robot raises an error if the operator tries to move it off these waypoints or if an area is impossible to reach. Using the RealSense cameras (with inbuilt IMU) and LiDAR data, the robot tracks its heading and follows the path, driving itself autonomously and avoiding obstacles where necessary.
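The 1.8m spacing rule can be sketched as a one-dimensional helper that divides a path into equal intervals no longer than the spacing limit. This is an illustrative reconstruction only; the actual planner operates on the 2-D occupancy grid produced by Gmapping:

```python
import math

MAX_SPACING_M = 1.8   # maximum gap between disinfection stops

def waypoint_stops(path_length_m: float, spacing_m: float = MAX_SPACING_M):
    """Evenly spaced stop positions along a path such that no two
    consecutive stops are more than spacing_m apart."""
    n_intervals = max(1, math.ceil(path_length_m / spacing_m - 1e-9))
    step = path_length_m / n_intervals
    return [i * step for i in range(n_intervals + 1)]

# A 9 m run along a wall gets six stops, exactly 1.8 m apart:
print(len(waypoint_stops(9.0)))   # 6
# A shorter 4 m run still respects the limit, with tighter ~1.33 m spacing:
print(len(waypoint_stops(4.0)))   # 4
```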
A differential drive controller package is used to subscribe to the waypoint node and drive the motors. However, this can only be tested on a working prototype.
The navigation code is available in the Github repository https://github.com/sandy1618/MicronUVGIRobot
2.6.3 Communication System
UVtar communicates with the dedicated UVtar mobile application over 802.11n WiFi in 5GHz mode, built into the OBC. The connection is established through IP reconfiguration. The design gives the robot an approachable means to communicate with a human operator within a 50m range and allows real-time H.264-encoded video streaming from the cameras on board the robot.
It is also possible to connect to the robot via Bluetooth 4.1, exclusively for the motor controls (video streaming is disabled in this mode). This is for critical situations, if the WiFi fails or when the battery level is low and the robot needs to be moved towards the charging station.
2.6.4 Control, Operations and Handling
An image of the room is provided to the app operator through the camera, and the map can be annotated to indicate all the points where the robot should stop to perform disinfecting tasks. The robot relies on SLAM to navigate and, during a run, operates on its own without human intervention. It is capable of monitoring, measuring, and tracking the performance of each sanitation stop before automatically moving to the next point on the preset path. While moving from one point to another, the UV tubes are turned off. After the robot has finished sanitizing a room, it notifies the operator that the job is complete and then shuts itself down.
The robot operates only when people are not around, using its sensors to detect motion and shutting the UV lights off if a person enters the area, as described in earlier sections. It takes about 25 minutes to disinfect a typical 9m x 6m room, with the robot spending 5 minutes at each of four to five different positions around the room to maximize the number of surfaces it disinfects. The process is more consistent, efficient and safe than a human disinfecting the area.
The mobile app helps the operator monitor progress, move or stop the robot, and turn the UV tubes on or off when required. The app displays the predetermined path of the robot and sends warning notifications so the operator can take emergency action when a human is detected within 2m or the battery charge is low. The app uses simple logic and can easily be developed using JavaScript. The GUI of the app is shown in figure 2.42.
Connection of the Mobile App and Web-Based Application to the UVtar Bot
The front end is wired to the ROS system using the ROS-supported Robot Web Tools; the roslibjs library makes the connection to the front end. The front end can stream the camera, log map data and control the robot over the local network/Internet using WiFi, as shown in figure 2.43. The current implementation uses a local server for communication and control of the simulated robot. In the future, a cloud server can be set up to communicate over the Internet.
*Note: The Web-app based front end will later be extended to match both the mobile based design for consistency and ease of usage for the operator.
RobotWebTools: http://robotwebtools.org/
The following procedure needs to be followed every time the robot is to be used:
NOTE:
In addition to all the safety features provided in UVtar, it is advisable that the operator always monitors the robot remotely through the mobile or web-based app or through a control room if CCTVs are installed in the area. It is extremely important that the operator makes sure not to come near the robot while the lights are working.
2.7 Cost Calculations
2.7.1 Bill of Materials
The total price for all the components and materials required to build UVtar is US$7000, which is well within the limit specified by the Micron UV Robotics Contest [4]. The detailed cost and mass listing is shown in figure 2.44 as well as in the GitHub repository https://github.com/sandy1618/MicronUVGIRobot
A notable feature of this robot is that it ships with all the necessary hardware, while the open-source, ROS-based software platform can be upgraded as required without any extra cost. This makes it very inexpensive to develop, build and purchase on a small scale.
2.7.2 Cost Reduction Model Versions
Using RGB-D SLAM
The first model includes a LiDAR sensor that costs around $2600. Its main purpose is building maps for SLAM and navigation. The alternative to LiDAR is to use an RGB-D camera such as the RealSense already in the UVtar model. Open-source packages are available that can be easily integrated with the present software framework for SLAM-based navigation. For instance, RGBDSLAMv2 (https://github.com/felixendres/rgbdslam_v2) can be used to create point clouds/OctoMaps. This output can then be fed to the grid_map library (https://github.com/ANYbotics/grid_map), which converts the point clouds/OctoMaps into an occupancy grid map that can be fed to the ROS navigation package.
With this implementation, the cost can be reduced below $5000 (to around ~$4400). The only drawback is the computation-intensive nature of RGB-D-based SLAM packages. As testing on the first prototype is yet to be done, the NVIDIA Jetson TX2 board's capability to handle human detection alongside RGB-D SLAM is yet to be confirmed. If feasibility is confirmed without significant heating or system slowdown, the LiDAR can be safely discarded, paving the way for a much more economical use of RealSense cameras without compromising the functionality of UVtar.
Distributed Cloud GPU server enabling cheaper onboard computation cost
The current version of UVtar is a fixed model with an NVIDIA Jetson TX2, which costs around $380. This can be replaced by the NVIDIA Jetson Nano, which costs around $110, combined with a subscription to cloud-based GPU computation servers, where each sold UVtar robot acts as a client requesting detection and RGB-D SLAM computation in the cloud. This system assumes the availability of high-speed internet and good infrastructure.
If the necessary conditions are satisfied, under the following assumptions:
1. Each UVtar life cycle is around 80,000 hours (standard FANUC robot life cycle)
2. Average cost of renting GPU powered server = $2 / hour
3. The load-balancing capacity of simultaneous UVtar client service requests per server is 1000.
Then selling more than 600 UVtar units results in a dramatic cost reduction, as shown in figure 2.45.
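Under the three assumptions above, the break-even point can be reproduced with a short calculation. This is one interpretation of the model behind figure 2.45, not the report's exact script: it assumes the sold units share rented GPU servers, each server serving up to 1000 simultaneous clients at $2/hour over the 80,000-hour life cycle.

```python
# Hedged reconstruction of the cost-reduction model: compare a flat
# $380 on-board Jetson TX2 per unit against a $110 Jetson Nano plus a
# shared cloud GPU server whose rent is amortised across all units.

TX2_COST = 380.0        # on-board computer, edge-computing baseline ($)
NANO_COST = 110.0       # cheaper client computer for the cloud model ($)
GPU_RATE = 2.0          # assumed rent of one GPU server ($/hour)
LIFETIME_H = 80_000.0   # assumed robot life cycle (hours)
CAPACITY = 1000         # simultaneous clients one server can balance

def per_unit_compute_cost(n_units: int) -> float:
    """Lifetime computation cost per robot when n_units share servers."""
    servers = -(-n_units // CAPACITY)                 # ceiling division
    cloud = LIFETIME_H * GPU_RATE * servers / n_units # amortised rent
    return NANO_COST + cloud

# Smallest fleet size at which the cloud model beats the edge baseline:
break_even = next(n for n in range(1, 2000)
                  if per_unit_compute_cost(n) < TX2_COST)
print(break_even)   # 593 -- consistent with the report's ~600-unit figure
```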
The tubes can be replaced by a person with limited hands-on experience in less than 30 minutes. The system should be professionally tested after every 6 months of operation.
If the sensors get clogged with dirt they may fail to operate effectively. To assure best performance, it is recommended to clean the sensors regularly with a lint free cloth or compressed air.
All the components used to build UVtar are of good quality and durable. Most of the components used are popular with developers and the research community to build open-source products.
2.8 Innovation
Team Trishul has strived to bring together a number of advantages and innovative design implementations that will lead to widespread adoption of UVtar by the target users. Some of the unique features of UVtar are as follows:
- Very lightweight, sleek design and highly maneuverable
- User-centric semi-autonomous functionality extended through a mobile app to control and monitor the working of UVtar remotely anytime, anywhere.
- Capability of extension to a fully AI-based autonomous system after learning from the operator and building a waypoint-selection AI model.
- Distributed software architecture using ROS for distributing computation power and resources, such as renting cloud GPUs for cost reduction.
- Use of Docker for ease of deployment and scalability.
- IoT based bot, can be operated remotely through a WebApp or a Mobile App using local area network or Internet.
- A connected cloud architecture enables regular system upgrades through OTA (Over-the-air)
- Multiple accounts support for disinfection monitoring
- Extended support for system-failure diagnosis, a unique feature in the community.
- External charging station setup not required as the charger and cable is already integrated into UVtar, enabling the robot to charge using any standard power socket.
- Optimisation of the power budget to extend the working time of the UV lamps on a single charge.
- Use of a differential drive mechanism, which consists of two powered wheels and a caster wheel, instead of four mecanum wheels enables complete maneuverability with just two motors instead of four. This helps to reduce the cost and power requirements of two extra motors, and also helps to reduce maintenance and cost of the wheels, which is otherwise high in case of mecanum wheels.
[2] “Signify and Boston University validate effectiveness of Signify’s UV-C light sources on inactivating the virus that causes COVID-19,” Signify, Eindhoven, Jun. 16, 2020.
[7] D. R. Grimes, C. Robbins, and N. J. O’Hare, “Dose modeling in ultraviolet phototherapy,” Med. Phys., vol. 37, no. 10, pp. 5251–5257, Oct. 2010, doi: 10.1118/1.3484093.
[8] TEPRO, “TEPRO Germicidal Lamp GPHA843T5L.” https://www.tnuvir.com/gpha843t5l-2.
LIST OF ACRONYMS
ABS Acrylonitrile Butadiene Styrene
AC Alternating Current
ACGIH American Conference of Governmental Industrial Hygienists
ADE Actuated Drive Electronics
BOM Bill of Materials
COVID-19 COronaVIrus Disease 2019
DC Direct Current
DNA Deoxyribonucleic acid
FOV Field of View
GPIO General-Purpose Input Output
LED Light Emitting Diode
LiFePO4 Lithium Iron Phosphate
LiDAR Light Detection and Ranging
LP Low Pressure
OBC On-board Computer
OTA Over-the-air
PCB Printed Circuit Board
PDU Power Distribution Unit
PIR Pyroelectric “Passive” InfraRed
PSU Power Supply Unit
PWM Pulse Width Modulation
RGBD Red Green Blue - Depth
ROS Robot Operating System
SARS-CoV-2 Severe Acute Respiratory Syndrome Coronavirus 2
SLAM Simultaneous Localization and Mapping
RNA Ribonucleic acid
UART Universal Asynchronous Receiver/Transmitter
USB Universal Serial Bus
UV-(A, B, C) UltraViolet A, B, C
UVGI UltraViolet Germicidal Irradiation