Demo video: https://youtu.be/d7OSQgakiqA
TAISIM2
TAISIM2 is a Python-based simulation library for multi-robot systems and for testing and developing computer vision applications. Its primary focus is autonomous driving systems that rely on virtual sensor inputs, and it provides a versatile platform for a variety of tasks, from lane keeping to complex navigation in agricultural environments.
I decided to build a simulator because of the lack of lightweight simulators available in Python. For TAISIM2, the focus is therefore on being: cross-platform, light on processing, easy to install, and easy to use.
Minimum System Requirements
- OS: Windows/Linux/MacOS
- CPU: Intel Core Duo
- RAM: 100 MB per simulated robot
- Programming Language: Python 3.x
Dependencies
- OpenCV
- NumPy
- Pygame
- OpenGL
Virtual Sensors
- Virtual Cameras (COLOR and DEPTH)
- Virtual GPS
- Virtual Compass
Controls
Q -> quit
1 to 9 -> select robot
W -> move forward
S -> move backward
A -> rotate left
D -> rotate right
left click and drag -> rotate simulation space horizontally
scroll up/down -> zoom in/out from robot
In addition to default maps, TAISIM2 allows for the import of custom maps. This flexibility facilitates testing across diverse environments and scenarios.
Simulator.track("path_to_your_image.png")
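Since Simulator.track() takes a path to any image file, a custom map can be generated programmatically. Below is a minimal sketch that builds a simple track image with NumPy; the layout, size, and file name are arbitrary choices for illustration, not anything TAISIM2 requires.

```python
import numpy as np

# Build a blank 800x800 white "ground" image (3-channel, uint8).
track = np.full((800, 800, 3), 255, dtype=np.uint8)

# Draw a black rectangular loop about 20 px thick as the line to follow,
# by painting the four border bands of an inner square.
t = 20  # line thickness in pixels
track[150:650, 150:150 + t] = 0      # left edge
track[150:650, 650 - t:650] = 0      # right edge
track[150:150 + t, 150:650] = 0      # top edge
track[650 - t:650, 150:650] = 0      # bottom edge
```

The array can then be saved with cv2.imwrite("my_track.png", track) and loaded with Simulator.track("my_track.png").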
Optimized & Cross-Platform
Efficient performance on single-core computers makes TAISIM2 accessible to a wide range of users and potentially suitable for real-time applications or embedded systems.
To install via pip, use:
pip install taisim2    # or pip3 on systems where pip points to Python 2
Basic Usage: Single-Robot Example
- Robot Initialization
example_robot=Robot(tag="helloOpenCV")
- Sensor Attachment
example_camera=Camera(robot=example_robot,tag="car camera",pos_x=0,pos_y=1)
example_gps=GPS(robot=example_robot,tag="t1 GPS")
example_compass=COMPASS(example_robot,"my compass")
- Sensor read
#inside a loop
frame=example_camera.read() # only the color frame
frame,depth_frame=example_camera.read(depth=True) # color and depth frame
x,y,z=example_gps.read()# localization
angle=example_compass.read()# orientation
- Robot movement
example_robot.move(linear_velocity=0.1,angular_velocity=0.1,altitude=0.1)
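The sensor reads and the move() call above can be combined into a simple go-to-waypoint behaviour. The proportional-control helper below is an illustrative sketch, not part of the TAISIM2 API; it assumes the GPS x/z values are ground-plane coordinates and the compass angle is in degrees.

```python
import math

def steering_to_waypoint(x, z, angle_deg, target_x, target_z, k_ang=0.02):
    """Proportional controller that turns toward (target_x, target_z).

    x, z      -- current position (e.g. from a GPS read)
    angle_deg -- current heading in degrees (e.g. from a compass read)
    Returns (linear_velocity, angular_velocity) suitable for robot.move().
    Hypothetical helper for illustration; not part of TAISIM2 itself.
    """
    desired = math.degrees(math.atan2(target_z - z, target_x - x))
    # Wrap the heading error into [-180, 180] degrees.
    error = (desired - angle_deg + 180.0) % 360.0 - 180.0
    angular = k_ang * error
    # Only drive forward once the robot roughly faces the target.
    linear = 0.1 if abs(error) < 45 else 0.0
    return linear, angular
```

Inside the simulation loop this would be used as: read the GPS and compass, compute lin, ang = steering_to_waypoint(x, z, angle, 5, 5), then call example_robot.move(linear_velocity=lin, angular_velocity=ang).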
Let's jump to the multi-robot & multi-sensor part. The package is designed to be easy to use: if you know OpenCV, you will be comfortable working with TAISIM2.
Every robot has a tag, initial position, initial rotation and size.
example_robot=Robot(tag="helloOpenCV",x=0,y=-10,z=0,size=0.3,rotation=0)
tank1=Robot(tag="Tornado",size=0.2)
car=Robot(tag="LineFollower",x=-5,size=0.2)
drone=Robot(tag="EAGLE BOT",x=3)
- Sensor Initialization
Every sensor's position is relative to its robot's position.
#each sensor is customizable and gets attached to a robot
#initialize camera
camera1=Camera(tank1,tag="birdy")
camera2=Camera(example_robot,tag="FPV",frame_width=600,frame_height=600)
camera3=Camera(car,tag="car camera",pos_x=0,pos_y=1)
#initialize gps
gps=GPS(tank1,tag="t1 GPS")
gps1=GPS(drone,tag="drone GPS")
gps2=GPS(example_robot,tag="GPS")
#initialize compass
compass1=COMPASS(drone,tag="compass")
compass2=COMPASS(tank1,tag="imu")
- Simulation Architecture
Based on the initialization, the simulator generates a simulation architecture at the start of the program.
The architecture hierarchy is printed to the terminal, listing every robot with all of its sensors and their tags so they can be distinguished properly.
Render the Simulator Environment
while Simulator.isRunning():
Simulator.render()
This creates a window that displays the track, the robots and their tags.
For the example above, the full code looks like this:
from taisim2.simulator import Simulator,Robot,InputHandler
from taisim2.sensors import Camera,GPS,COMPASS
import cv2

def main():
    #initialize robots
    example_robot=Robot(tag="helloOpenCV",x=0,y=-10,z=0,size=0.3,rotation=0)
    tank1=Robot(tag="Tornado",size=0.2)
    car=Robot(tag="LineFollower",x=-5,size=0.2)
    drone=Robot(tag="EAGLE BOT",x=3)
    #initialize cameras
    camera1=Camera(tank1,tag="birdy")
    camera2=Camera(example_robot,tag="FPV",frame_width=600,frame_height=600)
    camera3=Camera(car,tag="car camera",pos_x=0,pos_y=1)
    #initialize gps
    gps=GPS(tank1,"t1 GPS")
    gps1=GPS(drone,"drone GPS")
    gps2=GPS(example_robot,"GPS")
    #initialize compass
    compass1=COMPASS(drone,"compass")
    compass2=COMPASS(tank1,"imu")
    #set the track
    Simulator.track('logo.jpg')
    while Simulator.isRunning(): #check if the simulator is still running
        world=Simulator.render() # render one frame
        example_robot.move(1,1,0)

if __name__ == '__main__':
    main()
#-----------------------------------------------------------------------------
Another example:
from taisim2.simulator import Simulator,Robot,LEVEL1
from taisim2.sensors import Camera,GPS,COMPASS
import cv2

def main():
    #initialize robots
    example_robot=Robot(tag="helloOpenCV",x=0,y=-10,z=0,size=0.3,rotation=0)
    little_tank=Robot(tag="TANK",y=5)
    drone=Robot(tag="drone",z=5,x=0.2)
    #initialize cameras
    camera1=Camera(robot=example_robot,tag="Example Camera",near_clip=0.1,far_clip=100)
    camera2=Camera(robot=little_tank,tag="Left Camera",pos_x=0,pos_y=0,pos_z=0.1,rotationXY=90,fov=60,frame_width=640,frame_height=480)
    drone_camera=Camera(robot=drone,tag="BirdEyeView",pos_z=-0.2,rotationZX=90,far_clip=300)
    gps=GPS(robot=example_robot,tag="example_gps")
    compass=COMPASS(robot=example_robot,tag="example COMPASS")
    #set the track
    #Simulator.track(LEVEL1)
    Simulator.track("logo.jpg")
    while Simulator.isRunning(): #check if the simulator is still running
        Simulator.render() # render one frame
        frame=camera1.read()
        #without depth=True, read() returns only the color frame
        drone_frame=drone_camera.read()
        color_frame,depth_frame=camera2.read(depth=True)
        #with depth=True, read() returns the color and depth frames
        depth_colormap=cv2.applyColorMap(cv2.convertScaleAbs(depth_frame,alpha=255),cv2.COLORMAP_JET)
        x,y,z=gps.read() # localization
        angle=compass.read() # orientation
        drone.move(linear_velocity=0,angular_velocity=-0.5,altitude=5)
        cv2.imshow("frontal_camera",frame)
        cv2.imshow("left_frame",color_frame)
        cv2.imshow("depth_frame",depth_colormap)
        cv2.imshow("drone_view",drone_frame)
        cv2.waitKey(1) # lets OpenCV refresh its display windows

if __name__ == '__main__':
    main()
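In the example above, the depth frame is scaled to 8 bits before being colorized with cv2.applyColorMap. The snippet below is a NumPy-only approximation of what cv2.convertScaleAbs(depth_frame, alpha=255) does, run on a synthetic depth array (the real frame would come from camera2.read(depth=True)):

```python
import numpy as np

# Synthetic depth frame: values in [0, 1], shaped like a 480x640 camera image.
depth_frame = np.linspace(0.0, 1.0, 480 * 640, dtype=np.float32).reshape(480, 640)

# Approximate equivalent of cv2.convertScaleAbs(depth_frame, alpha=255):
# scale, take the absolute value, round, and saturate to the uint8 range.
scaled = np.clip(np.rint(np.abs(depth_frame) * 255.0), 0, 255).astype(np.uint8)
```

The resulting 8-bit image is what gets passed to cv2.applyColorMap(scaled, cv2.COLORMAP_JET) for display.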
Simulator Examples
TAISIM2 is suitable for a range of computer vision applications, including but not limited to:
- Lane Keeping: Test and develop algorithms for keeping a vehicle within the boundaries of a lane.
- Line Following: Test and develop the simplest algorithm for following a line.
- Maze Running: Develop and evaluate navigation algorithms capable of finding a path through complex environments.
- Agricultural Crop Following: Ideal for tasks like crop identification, health monitoring, or autonomous navigation between crop rows.
- Roadmap: the Python version of the simulator will serve as a proof of concept (POC). The plan is to migrate the code to C, build it as a shared library (.dll/.so), and make it available to C, C++, Python, and Java users.
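As a taste of the line-following task, here is a hedged sketch of the classic centroid approach: threshold the camera frame, find the dark line's column centroid, and steer proportionally to its offset from the image center. The function and the sign convention of the returned angular velocity are illustrative assumptions, not part of the TAISIM2 API; the frame here is synthetic, whereas in TAISIM2 it would come from camera.read().

```python
import numpy as np

def line_steering(frame_gray, k=0.01):
    """Return an angular velocity that steers toward a dark line.

    frame_gray -- 2D uint8 grayscale image containing a dark line.
    Illustrative helper only; not part of the TAISIM2 API.
    """
    # Binary mask of "line" pixels (dark on a light background).
    mask = frame_gray < 100
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0  # no line visible: go straight (or start a search)
    # Offset of the line centroid from the image center, in pixels.
    offset = xs.mean() - frame_gray.shape[1] / 2.0
    return -k * offset  # steer back toward the line

# Synthetic 100x200 frame: white background with a dark line near column 150.
frame = np.full((100, 200), 255, dtype=np.uint8)
frame[:, 148:152] = 0
```

The returned value would then be fed to robot.move(linear_velocity=0.1, angular_velocity=line_steering(frame)); whether positive means left or right depends on the simulator's convention, so the sign of k may need flipping.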