Due to the ongoing pandemic, it has become very difficult to move around, whether for work, for grocery shopping, or even for a small get-together with family and friends. The root of the problem is that no vaccine has been found yet, but we still need to carry on with our day-to-day tasks without a constant threat hanging over us. Hence, we need a solution that disinfects office and home spaces in advance, before people use them, to reduce the risk of contracting the very dangerous coronavirus.
The Solution
The solution we propose is an autonomous rover mounted with our custom-designed UV-C light mechanism. It is equipped with UV-C lamps and can also be controlled remotely. Its primary function is to disinfect known spaces autonomously, which it does using the LiDAR sensor mounted on the rover.
Moreover, this UV-C rover can reach into the corners of small office rooms, its UV rays spreading into every corner, along the walls, and under tables. The utmost priority is protecting people from these kinds of viruses, and the flexible mechanism helps us achieve this.
Now, let's start with the project. We will cover the whole project in 2 sections:
1. Mechanical Design Understanding
2. Electronics Part of the Project
So, let's start with the Mechanical portion of the Rover. We will discuss in detail the design of the Rover, the design of our UV-C Light controlling mechanism, and other parts too.
Mechanical Design Understanding
The design understanding consists of two main components:
- Rover-Rocker Bogie mechanism
- Lead Screw Mechanism
We will first start with the Rocker Bogie Mechanism.
The rover has six wheels, each with its own individual drive motor. The two front and two rear wheels also have individual steering motors; this steering capability allows the vehicle to turn in place, a full 360 degrees. Four-wheel steering also lets the rover swerve and make arcing turns. A rocker, one each on the left and right side of the rover, connects the front wheel to the differential and to the bogie at the rear. The bogie connects the middle and rear wheels to the rocker. The rocker-bogie mechanism also provides maximum stability. Aluminium T6 grade was selected as the material for the rover's structural members because of its very good strength-to-weight ratio and good machinability. The direction of the robot is controlled via software commands sent to the motors through the controllers and distribution boards.
Dimensions of rover (L x W x H): 0.725 x 0.460 x 0.440 m
Steering Mechanism:
All six wheels are powered by Johnson DC motors for forward and backward movement. In addition, the two front wheels and two rear wheels each have one more DoF, provided by a NEMA 23 stepper motor, for steering and on-the-spot rotation. As a result, the front and rear wheels can rotate about the vertical axis. The arrangement is shown in figures (2, 3).
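As an aside, the in-place turn angle of the corner wheels can be estimated geometrically: for the rover to rotate about its center, each steered wheel must point tangent to the circle passing through the wheel contact points. A minimal sketch in Python, assuming for simplicity that the corner wheels sit at the extremes of the rover's footprint (the actual mounting points will differ, so the real angle will too):

```python
import math

def point_turn_angle(length_m: float, track_m: float) -> float:
    """Steering angle (degrees from straight ahead) each corner wheel must
    take for an in-place rotation: the wheel is steered tangent to the
    circle through the corner wheel positions."""
    return math.degrees(math.atan2(length_m / 2, track_m / 2))

# Using the rover footprint from the dimensions above (L x W): 0.725 m x 0.460 m
print(round(point_turn_angle(0.725, 0.460), 1))  # ~57.6 degrees
```

A square footprint would give exactly 45 degrees; our longer-than-wide chassis steers the corner wheels further from straight ahead.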
Hydraulic and pneumatic mechanisms were also an option, but the rover is a small, mobile device, so hydraulics and pneumatics would obstruct its operation. Moreover, hydraulic and pneumatic circuits are bulky and difficult to maintain, which contradicts the objectives of this project, and they are very expensive, which also hurts the cost-effectiveness of the project.
The major selection criteria were compact size, simple design, and low cost, all of which the lead screw mechanism satisfies. In addition, a lead screw is self-locking, which helps if the geared DC motor fails, and it provides a large mechanical advantage. The lead screw mechanism needs few parts and has a large load-carrying capacity. It is also easy to maintain and manufacture, which reduces the overall cost of the rover and makes the product more affordable.
Lead Screw Mechanism
The lead screw mechanism fulfills the functional requirements, i.e. the speed and amount of lift for the UV source plate. Its main advantage is the gain in vertical height: the lead screw can lift the plate to a height of four and a half feet, which can almost reach the ceiling.
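For a sense of scale, the lift speed follows directly from the screw lead and motor speed: one revolution advances the nut by the lead. A quick sketch with purely hypothetical numbers (the 8 mm lead and 120 rpm are assumptions for illustration, not the actual specifications of our mechanism):

```python
def lift_speed_mm_s(lead_mm: float, rpm: float) -> float:
    """Linear travel speed of the nut: one screw revolution advances it
    by the lead, so speed = lead * revolutions per second."""
    return lead_mm * rpm / 60.0

def travel_time_s(height_mm: float, lead_mm: float, rpm: float) -> float:
    """Time for the UV plate to cover a given stroke at constant rpm."""
    return height_mm / lift_speed_mm_s(lead_mm, rpm)

# Hypothetical: 8 mm lead, 120 rpm geared DC motor, ~1370 mm (4.5 ft) stroke
print(round(lift_speed_mm_s(8, 120), 1))   # 16.0 mm/s
print(round(travel_time_s(1370, 8, 120)))  # ~86 s for the full stroke
```

This kind of back-of-the-envelope check is how the "speed and amount of lift" requirement above can be matched to a motor and screw pairing.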
The construction of the lead screw mechanism is as follows: a base plate on which the mechanism rests, and a lead screw coupled to the geared DC motor through a belt drive, which rotates the screw.
In total, three supports are attached for holding and stability, one of which serves as a guide, necessary to constrain the rotation of the top plate. As the UV plate moves linearly up and down, it gradually wears away material at the contact surface, so it is better to use a wear-resistant material for the guide. A wear-resistant material offers advantages such as longer life at a given cost, a lower noise level, reduced driving power consumption, and more uniform wear on the rod. The material selected for the guide is phosphor bronze.
The UV light is placed horizontally; the reason is that it covers more area than a vertical placement on the plate, which increases the effective exposed volume and reduces the time needed to disinfect a given area.
The rover is an autonomous device capable of acting according to different situations in its surroundings. These capabilities are achieved by attaching a number of sensors to the rover. Centrally, the rover is controlled by a Jetson Nano, a compact yet powerful computer in a card-sized form factor. To perceive its surroundings, the rover uses a LiDAR sensor placed on top of the upper plate of the lead screw, scanning 360 degrees around it. The LiDAR continuously collects data and sends it to the Nano, which makes decisions such as the speed and direction in which the rover should move. While the rover moves, obstacles are detected by ultrasonic sensors installed all around it, giving it the ability to immediately detect any obstacle, stationary or moving, and take immediate action such as stopping or changing direction. Another feature is that the rover remembers the path it takes, so it can be trained to move through a particular room by controlling it manually the first few times and then letting it navigate using the map it has built.
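The decision step can be pictured as a simple threshold rule applied to the sensor readings. The sketch below is illustrative only: the sensor names and distance thresholds are made up, and the real rover fuses LiDAR and ultrasonic data through ROS rather than a single function like this:

```python
def navigation_action(ultrasonic_cm: dict, stop_cm: float = 20.0,
                      slow_cm: float = 50.0) -> str:
    """Toy decision rule of the kind the Nano could apply to ultrasonic
    readings (mapping of sensor name -> distance in cm)."""
    # Take the closest reading among the forward-facing sensors
    front = min(ultrasonic_cm.get("front_left", float("inf")),
                ultrasonic_cm.get("front_right", float("inf")))
    if front < stop_cm:
        return "stop"            # obstacle dangerously close: halt
    if front < slow_cm:
        return "slow_and_turn"   # obstacle ahead: slow down and steer away
    return "cruise"              # path clear: keep moving

print(navigation_action({"front_left": 120, "front_right": 15}))  # stop
```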
Now, the most important process is sanitization, which is done with UV light. Why UV light? UV has a shorter wavelength and therefore higher energy, enough to kill harmful bacteria and germs. It is highly effective at decontamination because it destroys the molecular bonds that hold together the DNA of viruses and bacteria, including "superbugs," which have developed a stronger resistance to antibiotics. Using UV light for sanitation has been clinically approved for home and office use.
Since the rover has a small footprint, it can reach small, compact spaces to sanitize them. Thanks to its powerful sensors, its motion and detection are precise. The sensors can also detect the most frequently touched objects in its vicinity, which makes it easier to determine where, and for how long, a particular area needs to be sanitized. Interestingly, it also sanitizes the path it traces, so that no germs or bacteria picked up by the rover's wheels can spread. The rover is built from standard parts that are readily available in the market, for easy replacement and maintenance.
For calculating the distance between the rover and the surface to be disinfected, we will use high accuracy ultrasonic sensors.
As seen, the overall dimensions of the system ensure that it can reach each and every corner of a room, and it can also fit through a standard bathroom stall door.
This completes the Mechanical Design understanding part of this project. Now we will move on to the electronics setup: we will set up our Nano with the LiDAR sensor, connect a slave processor (an Arduino Mega), and connect all the basic sensors to it.
Electronics Part - Hardware and Software Setup
First, we will look at the details of the hardware components used for this project.
Required Hardware
1. Jetson Nano Developer Kit: The NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing, all in an easy-to-use platform that runs on as little as 5 watts.
2. SlamTec RPLIDAR A1: The RPLIDAR A1 is a low-cost 360-degree 2D laser scanner (LIDAR) developed by Slamtec. The system can perform a 360-degree scan within a 6-meter range. The resulting 2D point cloud data can be used for mapping, localization, and object/environment modelling.
The RPLIDAR A1's scanning frequency reaches 5.5 Hz when sampling 360 points per round, and it can be configured up to a maximum of 10 Hz. The RPLIDAR A1 is essentially a laser triangulation measurement system. It works well in all kinds of indoor environments, and in outdoor environments without direct sunlight.
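Each RPLIDAR sample is an (angle, distance) pair; building a map starts by converting those polar samples into Cartesian points in the sensor frame. A minimal illustration of that conversion (in practice the ROS driver publishes a LaserScan message and the SLAM node does this internally):

```python
import math

def scan_to_points(scan):
    """Convert RPLIDAR-style (angle_deg, distance_m) pairs into (x, y)
    points in the sensor frame, the form used for mapping and SLAM."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in scan]

# Two sample returns: straight ahead at 1 m, and 90 degrees left at 2 m
pts = scan_to_points([(0, 1.0), (90, 2.0)])
print(pts)  # [(1.0, 0.0), (~0.0, 2.0)]
```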
3. Li-Po battery to power the Rover
A lithium polymer battery, or more correctly lithium-ion polymer battery (abbreviated LiPo, LIP, Li-poly, lithium-poly, and others), is a rechargeable battery of lithium-ion technology that uses a polymer electrolyte instead of a liquid electrolyte. High-conductivity semisolid (gel) polymers form this electrolyte. These batteries provide higher specific energy than other lithium battery types and are used in applications where weight is critical, such as mobile devices and radio-controlled aircraft.
We will use Li-Po or lithium polymer batteries for our rover. The initial proposal is to use 2 batteries to power the whole rover, each rated at 13 V with a capacity of more than 5000 mAh.
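A back-of-the-envelope runtime check, assuming the two packs' capacities effectively add up and an average draw of 4 A (both assumptions are illustrative; real runtime will be lower because of discharge cutoffs and peak loads):

```python
def runtime_hours(capacity_mah: float, avg_current_ma: float) -> float:
    """Idealized runtime: usable capacity divided by average current draw.
    Ignores discharge cutoffs and peak loads, so treat it as an upper bound."""
    return capacity_mah / avg_current_ma

# Two 5000 mAh packs, hypothetical 4 A average draw for motors, UV lamps,
# and electronics combined
print(round(runtime_hours(2 * 5000, 4000), 1))  # 2.5 hours
```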
4. Ultrasonic Sensors for obstacle avoidance
An ultrasonic sensor is an electronic device that measures the distance to a target object by emitting ultrasonic sound waves and converting the reflected sound into an electrical signal. Ultrasonic waves have frequencies above the range of audible sound (i.e. the sound that humans can hear). Ultrasonic sensors have two main components: the transmitter (which emits the sound using piezoelectric crystals) and the receiver (which picks up the sound after it has travelled to and from the target).
To calculate the distance between the sensor and the object, the sensor measures the time between the emission of the sound by the transmitter and its arrival back at the receiver. The formula for this calculation is D = ½ × T × C (where D is the distance, T is the round-trip time, and C is the speed of sound, ~343 meters/second).
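The same formula in code, so it can be sanity-checked (the 343 m/s figure assumes air at roughly 20 °C; the actual speed varies with temperature):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def echo_to_distance_m(round_trip_s: float) -> float:
    """D = 1/2 * T * C: half the round-trip echo time times the speed of sound."""
    return 0.5 * round_trip_s * SPEED_OF_SOUND_M_S

# An object 1 m away returns an echo after 2 m / 343 m/s ~ 5.83 ms
print(round(echo_to_distance_m(0.00583), 2))  # 1.0
```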
For our rover, we will use a number of these sensors that continuously collect data; based on that data, we detect any obstacles around the rover and take the necessary actions, such as stopping the rover or changing its direction.
HC-SR04 Sensor Features
- Operating voltage: +5 V
- Theoretical Measuring Distance: 2 cm to 450 cm
- Practical Measuring Distance: 2 cm to 80 cm
- Accuracy: 3 mm
- Measuring angle covered: <15°
- Operating Current: < 15 mA
- Ultrasonic Frequency: 40 kHz
5. Arduino Mega as the slave Microcontroller
The Arduino Mega 2560 is a microcontroller board based on the ATmega2560. It has 54 digital input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started.
In our rover, the Mega is connected as a slave to the Jetson Nano. Thanks to its large number of I/O pins, it connects to all the sensors and motors: the IMU, the ultrasonic sensors, the GPS sensor, and the motor drivers. It runs a minimal ROS node, collects the sensor data, and constantly publishes it on the programmed ROS topics, so that the Jetson Nano can make decisions based on this data and instruct the Mega to control the motors accordingly.
Now, we will look at the technical specifications of the Arduino Mega in detail using the table below.
6. IMU Sensor for Direction detection
An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. IMUs are typically used to maneuver aircraft (as an attitude and heading reference system), including unmanned aerial vehicles (UAVs), as well as spacecraft, including satellites and landers. Recent developments allow for the production of IMU-enabled GPS devices: an IMU allows a GPS receiver to work when GPS signals are unavailable, such as in tunnels, inside buildings, or when electronic interference is present. A wireless IMU is known as a WIMU.
We use the IMU in our project to determine the direction in which the rover is moving, a necessity for the autonomous navigation that our rover supports.
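In its simplest form, heading comes from integrating the gyroscope's z-axis rate over time. The sketch below shows only that integration step; it drifts with time, which is why real systems correct it with a magnetometer or a sensor-fusion filter (complementary or Kalman):

```python
def integrate_yaw(yaw_deg: float, gyro_z_dps: float, dt_s: float) -> float:
    """Dead-reckon heading by integrating the gyro z-axis rate (deg/s)
    over one timestep, wrapping to [0, 360)."""
    return (yaw_deg + gyro_z_dps * dt_s) % 360.0

heading = 0.0
for _ in range(50):                       # 0.5 s of a steady 90 deg/s turn, 100 Hz updates
    heading = integrate_yaw(heading, 90.0, 0.01)
print(round(heading, 1))  # 45.0 degrees
```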
7. GPS Module to get the current location of the robot
We use the GPS sensor to add navigation features to our bot. We want to create a system in which we just provide the destination coordinates, and the rover does the rest of the work: figuring out the path from its current location to the destination, avoiding obstacles, and keeping its sense of direction.
This is only possible if the rover has a GPS sensor on board so that it can sense the distance between its current location at any given instant to the destination and also detect when it reaches the destination, which is an indication to stop.
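The distance check between the current fix and the destination is the classic haversine great-circle formula. A small sketch (the 2 m arrival tolerance is an assumption for illustration, and consumer GPS accuracy is often coarser than that):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reached(lat, lon, dest_lat, dest_lon, tol_m=2.0):
    """True when the rover is within tol_m of the destination (stop signal)."""
    return haversine_m(lat, lon, dest_lat, dest_lon) < tol_m

# 0.001 degrees of longitude at the equator is roughly 111 m
print(round(haversine_m(0.0, 0.0, 0.0, 0.001)))  # ~111
```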
8. Motors
The motors will differ between designs, so, like the motor driver, I am keeping this selection on hold for now.
Software Setup
Now comes the interesting, and rather tedious, part: the software side of the project! This setup is divided into several parts, so let us get started.
i) OS Setup on the Nano
First things first, when we get the Nano out of the box, we need to install an Operating System. Just follow the tutorial linked below.
Getting Started with Jetson Nano
ii) Install ROS on the Nano
This is the tricky part. There are a number of tutorials on ROS installation on Linux, but most of them are outdated, so I had to spend quite a lot of time figuring this out.
Some important things to note:
- Most tutorials cover the installation of ROS Kinetic, but it is outdated and no longer straightforward to install. So, we will go with the ROS Melodic Full Desktop version.
- So, just follow the tutorial below, but replace kinetic with melodic.
ROS Installation on Jetson Nano
If everything goes well, you will see that the catkin_ws folder has 3 subfolders, and you can successfully compile in it using the catkin_make command.
Moreover, you can run the command "roscore" in the terminal to start ROS service.
iii) Set up the software to run our rover
Now, we will have to install all the dependencies required to run the rover. So, just follow the tutorial below to set this up:
1. Install Dependencies
ssh into your vehicle. Use the terminal on Ubuntu or Mac, PuTTY on Windows.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential python3 python3-dev python3-pip libhdf5-serial-dev hdf5-tools nano ntp
Optionally, you can install RPi.GPIO clone for Jetson Nano from here. This is not required for default setup, but can be useful if using LED or other GPIO driven devices.
2. Setup Virtual Env
pip3 install virtualenv
python3 -m virtualenv -p python3 env --system-site-packages
echo "source env/bin/activate" >> ~/.bashrc
source ~/.bashrc
3. Install OpenCV
To install OpenCV on the Jetson Nano, you need to build it from source. Building OpenCV from source is going to take some time, so buckle up. If you get stuck, here is another great resource which will help you compile OpenCV.
Note: In some cases Python OpenCV may already be installed in your disc image. If the file exists, you can optionally copy it to your environment rather than build from source. Nvidia has said they will drop support for this, so longer term we will probably be building it. If this works:
mkdir ~/mycar
cp /usr/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so ~/mycar/
cd ~/mycar
python -c "import cv2"
Then you have a working version and can skip this portion of the guide. However, following the swapfile portion of this guide has made performance more predictable and solves memory thrashing.
The first step in building OpenCV is to define swap space on the Jetson Nano. The Jetson Nano has 4GB of RAM. This is not sufficient to build OpenCV from source. Therefore we need to define swap space on the Nano to prevent memory thrashing.
# Allocates 4G of additional swap space at /var/swapfile
sudo fallocate -l 4G /var/swapfile
# Permissions
sudo chmod 600 /var/swapfile
# Make swap space
sudo mkswap /var/swapfile
# Turn on swap
sudo swapon /var/swapfile
# Automount swap space on reboot
sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
# Reboot
sudo reboot
Now you should have enough swap space to build OpenCV. Let's set up the Jetson Nano with the prerequisites to build OpenCV.
# Update
sudo apt-get update
sudo apt-get upgrade
# Pre-requisites
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
Now you should have all the prerequisites you need. So, let's go ahead and download the source code for OpenCV.
# Create a directory for opencv
mkdir -p projects/cv2
cd projects/cv2
# Download sources
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip
# Unzip
unzip opencv.zip
unzip opencv_contrib.zip
# Rename
mv opencv-4.1.0 opencv
mv opencv_contrib-4.1.0 opencv_contrib
Let's get our virtual environment (env) ready for OpenCV.
# Install Numpy
pip install numpy==1.16.4
Now let's set up CMake correctly so that it generates the correct OpenCV bindings for our virtual environment.
# Create a build directory
cd projects/cv2/opencv
mkdir build
cd build
# Setup CMake
# Note: OPENCV_EXTRA_MODULES_PATH must point at the contrib modules you
# unzipped, and PYTHON_EXECUTABLE should match the result of `which python`
# inside your virtual environment. Comments cannot appear between the
# backslash-continued lines, or the command will break.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D INSTALL_C_EXAMPLES=OFF \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/projects/cv2/opencv_contrib/modules \
    -D PYTHON_EXECUTABLE=~/env/bin/python \
    -D BUILD_EXAMPLES=ON ../opencv
The cmake command should show a summary of the configuration. Make sure that the Interpreter is set to the Python executable associated to your virtualenv. Note: there are several paths in the CMake setup, make sure they match where you downloaded and saved the OpenCV source.
To compile the code from the build folder, issue the following command.
make -j2
This will take a while. Go grab a coffee, or watch a movie. Once the compilation is complete, you are almost done. Only a few more steps to go.
# Install OpenCV
sudo make install
sudo ldconfig
The final step is to correctly link the built OpenCV native library to your virtualenv.
The native library should now be installed in a location that looks like /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.cpython-36m-xxx-linux-gnu.so.
# Go to the folder where OpenCV's native library is built
cd /usr/local/lib/python3.6/site-packages/cv2/python-3.6
# Rename
mv cv2.cpython-36m-xxx-linux-gnu.so cv2.so
# Go to your virtual environments site-packages folder
cd ~/env/lib/python3.6/site-packages/
# Symlink the native library
ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so cv2.so
Congratulations! You are now done compiling OpenCV from source.
A quick check to see if you did everything correctly is
ls -al
You should see something that looks like
total 48
drwxr-xr-x 10 user user 4096 Jun 16 13:03 .
drwxr-xr-x 5 user user 4096 Jun 16 07:46 ..
lrwxrwxrwx 1 user user 60 Jun 16 13:03 cv2.so -> /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so
-rw-r--r-- 1 user user 126 Jun 16 07:46 easy_install.py
drwxr-xr-x 5 user user 4096 Jun 16 07:47 pip
drwxr-xr-x 2 user user 4096 Jun 16 07:47 pip-19.1.1.dist-info
drwxr-xr-x 5 user user 4096 Jun 16 07:46 pkg_resources
drwxr-xr-x 2 user user 4096 Jun 16 07:46 __pycache__
drwxr-xr-x 6 user user 4096 Jun 16 07:46 setuptools
drwxr-xr-x 2 user user 4096 Jun 16 07:46 setuptools-41.0.1.dist-info
drwxr-xr-x 4 user user 4096 Jun 16 07:47 wheel
drwxr-xr-x 2 user user 4096 Jun 16 07:47 wheel-0.33.4.dist-info
To test the OpenCV installation, run python and do the following
import cv2
# Should print 4.1.0
print(cv2.__version__)
iv) Setting up the RPLIDAR A1
Now that everything else is done, the only setup left is for our LiDAR sensor. First, let's learn a little more about the device we are using:
About RPLIDAR
RPLIDAR is a low-cost 2D LIDAR solution developed by the RoboPeak team at Slamtec. It can scan a 360° environment within a 6-meter radius. Its output is well suited to building maps, doing SLAM, or building 3D models.
You can find more information about the RPLIDAR on the SlamTec homepage (http://www.slamtec.com/en).
How to build rplidar ROS package
- Clone this project to your catkin's workspace src folder
- Run catkin_make to build rplidarNode and rplidarNodeClient
How to run rplidar ros package
Check the permissions on the rplidar's serial port:
ls -l /dev |grep ttyUSB
Grant write permission (for example, on /dev/ttyUSB0):
sudo chmod 666 /dev/ttyUSB0
There are two ways to run the rplidar ROS package:
I. Run rplidar node and view in the rviz
roslaunch rplidar_ros view_rplidar.launch
You should see rplidar's scan result in rviz.
II. Run rplidar node and view using test application
roslaunch rplidar_ros rplidar.launch
rosrun rplidar_ros rplidarNodeClient
You should see rplidar's scan result in the console
How to remap the USB serial port name
You may need to change the USB device's port permissions (read and write) and, better still, remap the port to a fixed name. To install the USB port remap rules, run: ./scripts/create_udev_rules.sh
Check the remap using the following command:
ls -l /dev | grep ttyUSB
Once you have changed the USB port remap, you can update the serial_port value in the launch file: <param name="serial_port" type="string" value="/dev/rplidar"/>
RPLidar frame
The RPLidar frame must be broadcast as shown in rplidar-frame.png.
How to install rplidar to your robot
The RPLIDAR rotates in the clockwise direction. The first range reading comes from the front (the end with the cable tail).
Please keep in mind the orientation of the Lidar sensor while installing it onto your robot. Here is what the axes look like:
Now that this is done, just try to run a sample launch file "view_rplidar" and see the scan results of the topic "/scan" in rviz. It will look something like this:
Rviz Visualization
Now, to add SLAM to our application, there are several algorithms available, but we chose Hector SLAM for our project. So, let us move on:
Hector Slam Setup for Jetson Nano
We cannot use the Hector SLAM GitHub repo on the Nano as-is, the way we did for the rplidar package. We need to make some changes to the Hector SLAM files to get it up and running. First, here are the commands you need to run:
cd catkin_ws/src/
git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam
Now, after it has been cloned, we will need to make the following changes:
In catkin_ws/src/hector_slam/hector_mapping/launch/mapping_default.launch
replace the second last line with <node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser 100" />
the third line with <arg name="base_frame" default="base_link"/>
the fourth line with <arg name="odom_frame" default="base_link"/>
In catkin_ws/src/hector_slam/hector_slam_launch/launch/tutorial.launch
replace the third line with <param name="/use_sim_time" value="false"/>
Now that these changes are done, we can go ahead and run the following:
cd ..
catkin_make
Now, if everything goes as planned, this will compile successfully, which means we are ready to run Hector SLAM on our Nano. Follow these steps in order to do so:
- Install ROS full desktop version (tested on Kinetic) from: http://wiki.ros.org/kinetic/Installation/Ubuntu
- Create a catkin workspace: http://wiki.ros.org/ROS/Tutorials/CreatingPackage
- Clone this repository into your catkin workspace
- In your catkin workspace, run source devel/setup.bash
- Run chmod 666 /dev/ttyUSB0 (or the serial path to your lidar)
- Run roslaunch rplidar_ros rplidar.launch
- Run roslaunch hector_slam_launch tutorial.launch
- RVIZ should open up with SLAM data
The RVIZ Slam Data will look something like this:
Hence, now our system can map the surroundings. This is currently a work in progress and new features will be added soon.
Now, let's have a look at the specifications of the UV-C lights we are going to use for this project:
UV-C Specification
The UV-C Rover is a sterilization system on wheels that disinfects rooms. Two 25-watt UV-C lamps with a T18 diameter and a 2-pin double terminal are installed on the UV-C Rover. Each lamp emits in the UV-C germicidal band around 254 nm, with an irradiance of 90 µW/cm² at a distance of one meter.
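The quoted irradiance lets us estimate exposure times, since the delivered UV dose is irradiance multiplied by time. The target dose below is a purely hypothetical figure for illustration; the dose actually required varies by pathogen, distance, and surface, and should come from published disinfection data, not from this sketch:

```python
def exposure_time_s(target_dose_mj_cm2: float, irradiance_uw_cm2: float) -> float:
    """Dose (mJ/cm^2) = irradiance (mW/cm^2) * time (s), so the required
    exposure time is the target dose divided by the irradiance."""
    irradiance_mw_cm2 = irradiance_uw_cm2 / 1000.0  # uW -> mW
    return target_dose_mj_cm2 / irradiance_mw_cm2

# Hypothetical 10 mJ/cm^2 target at the quoted 90 uW/cm^2 (1 m from the lamp)
print(round(exposure_time_s(10, 90)))  # ~111 s of dwell time
```

Estimates like this feed directly into how long the rover should linger near each surface it disinfects.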
Now, let's look at a rough estimate of how much building the whole project costs. As we did above, we will divide the BOM into 2 parts:
1. Mechanical Components
2. Electronic Components
Hence, from the above BOM, it is clear that we can build a fully functioning prototype for less than 800 dollars. We have already built the first prototype of the rover with the Rocker-Bogie mechanism, and images of it are shown below:
We are currently working on the Lead Screw Mechanism and conducting tests with the UV-C Lights we have used for this project. We will be updating all the details soon. The code is also under progress and the git repo will be updated soon.
This completes our project explanation. Please do like and share this project.
Thank You!