The Caridina and Neocaridina Shrimp Detection Project aims to develop and improve computer vision algorithms for detecting and distinguishing between different shrimp varieties. The project is aimed at aquarium-keeping hobbyists and at how computer vision can improve the care of dwarf shrimp, in this case through a zone counter/tracker that monitors breeding shrimp that are berried (pregnant). Why dwarf shrimp? They have a relatively low bio-load and show large variations in color and pattern. As bottom feeders in a heavily planted tank, they essentially create a self-contained ecosystem, which makes for a very low-maintenance hobby.
Caridina and Neocaridina shrimp are two distinct genera that require different water parameters for optimal health. Neocaridina shrimp are generally hardier and easier to keep than Caridina species, while Caridina shrimp are known for their striking, distinctive patterns. The body structure of both is similar (as can be seen from the subject mask).
The dataset for this project includes thirteen classes. The Neocaridina varieties have been grouped together to test whether the model can distinguish between Caridina and Neocaridina shrimp. The remaining classes are all different types of Caridina shrimp.
The RGalaxyPinto and BGalaxyPinto have quite similar patterns, the main difference being their color: one is wine-red while the other is dark blue-black. Both varieties have distinctive spots on the head region and stripes on their backs, making them ideal for testing the model's ability to distinguish by color.
The Crystal Red Shrimp and Crystal Black Shrimp (CRS-CBS) have patterns similar to the Panda Bee shrimp, but the hues differ: Panda shrimp tend to be a deeper, richer color than CRS-CBS, while CRS-CBS tend to have thicker white rings.
The Panda Bee variety, on the other hand, is known for its panda-like pattern of white and black/red rings. Its colored rings tend to be thicker and more pronounced than those of the Crystal Red/Black Shrimp.
Within the Caridina species there are various tiger varieties, including Fancy Tiger, Raccoon Tiger, Tangerine Tiger, and Orange Eyed Tiger (Blonde and Full Body). All of these have stripes along the sides of their bodies. Fancy Tiger shrimp have a similar color to CRS, but with a tiger-stripe pattern.
Raccoon Tiger and Orange Eyed Tiger Blonde look very similar, but the body of the Raccoon Tiger appears larger, and the Orange Eyed Tiger is known for its orange eyes. Tangerine Tigers vary in stripe pattern and can often be confused with certain Neocaridina, specifically the yellow or orange varieties.
The remaining classes are popular breeding favorites with distinct color patterns: Bluebolt, Shadow Mosura, White Bee/Golden Bee, and King Kong Bee.
Conclusion
This project was accomplished with little coding; Roboflow provides plenty of resources to get the job done. The zone counter was made possible by the Roboflow notebooks in their GitHub repository: a trigger is set so that objects within the polygon zone are counted.
Roboflow Notebooks:
https://github.com/roboflow/notebooks
YOLOv8 Tracking Zone
https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-detect-and-count-objects-in-polygon-zone.ipynb
YOLOv8 Segmentation Mask
https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb
Pre-Process:
- Upload data to Roboflow for annotation and add augmentations to increase dataset size.
- Export dataset to Yolov8 format.
- Train on the dataset to obtain weights; this can be done on Colab (https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov8-object-detection-on-custom-dataset.ipynb). A minimal training sketch is shown below.
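For reference, here is a minimal training sketch using the Ultralytics Python API. data.yaml is the file generated by the Roboflow YOLOv8 export; the epoch count and image size are placeholder values, not the exact settings used in this project.
from ultralytics import YOLO

# start from a pretrained checkpoint and fine-tune on the exported dataset
model = YOLO('yolov8s.pt')

# data.yaml comes from the Roboflow YOLOv8 export;
# epochs and imgsz are illustrative values, tune them for your dataset
model.train(data='data.yaml', epochs=100, imgsz=640)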
Processing Dataset for Zone Tracking/Counting
Install YOLOv8
!pip install ultralytics
from IPython import display
display.clear_output()
import ultralytics
ultralytics.checks()
Install Detectron2
!python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
from IPython import display
display.clear_output()
import detectron2
print("detectron2:", detectron2.__version__)
Install Supervision
!pip install supervision==0.2.0
from IPython import display
display.clear_output()
import supervision as sv
print("supervision", sv.__version__)
Load the model: Here is where you can provide your own custom model. Normally it will be stored in the runs/detect/train/weights folder. You can use either the last.pt or best.pt PyTorch weights.
from ultralytics import YOLO
model = YOLO('yolov8s.pt')  # replace with your custom weights, e.g. 'runs/detect/train/weights/best.pt'
Upload a video that contains objects matching your trained model. Tune the polygon dimensions according to your use case.
import numpy as np
import supervision as sv

VIDEO_PATH = "shrimp_tank.mp4"  # path to your uploaded video (placeholder name)

# initiate polygon zone; the corner coordinates below are placeholders,
# adjust them to the region of the frame you want to count in
polygon = np.array([
    [100, 100],
    [1800, 100],
    [1800, 1000],
    [100, 1000]
])
video_info = sv.VideoInfo.from_video_path(VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)
def process_frame(frame: np.ndarray, _) -> np.ndarray:
    # detect
    results = model(frame, imgsz=1280)[0]
    detections = sv.Detections.from_yolov8(results)
    # detections = detections[detections.class_id == 0]  # optionally filter to a single class
    zone.trigger(detections)

    # annotate
    labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
    frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
    frame = zone_annotator.annotate(scene=frame)
    return frame
import os
HOME = os.getcwd()  # directory where the annotated result video is written

sv.process_video(source_path=VIDEO_PATH, target_path=f"{HOME}/result.mp4", callback=process_frame)
from IPython import display
display.clear_output()
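If you need the raw number rather than the annotated overlay, the PolygonZone object keeps a running count that is updated on every trigger call (attribute name per supervision 0.2.0); a minimal sketch:
# after process_video has finished, current_count holds the number of
# detections that fell inside the polygon on the last processed frame
print('shrimp in zone:', zone.current_count)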
Trained Model: YOLOv8
You can try the trained model out for yourself using Roboflow Universe:
https://universe.roboflow.com/dee-dee-b9kev/aquarium-shrimp-detection-caridina_neocaridina
GitHub repository of the trained YOLOv8 models:
https://github.com/dfunkapostal/Aquarium-Shrimp-Detection/tree/main
TensorFlow.js: Web Deployment
The model can be deployed on the web using tensorflow.js or onnxruntime with the following https://github.com/Hyuto repositories:
https://github.com/Hyuto/yolov8-seg-tfjs
https://github.com/Hyuto/yolov8-seg-onnxruntime-web
The camera can be switched to the front camera by changing the webcam setting facingMode from "environment" to "user":
https://github.com/Hyuto/yolov8-seg-tfjs/blob/master/src/utils/webcam.js
open = (videoRef) => {
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        audio: false,
        video: {
          facingMode: "environment", // change to "user" for the front camera
        },
      })
      // ... (rest of the function as in the linked webcam.js)
Your model should be uploaded to the public folder:
https://github.com/Hyuto/yolov8-seg-tfjs/tree/master/public
model.json should be replaced with the .json file in the saved web folder (the tensorflow.js export output folder).
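One way to produce that web folder, assuming you are exporting the Ultralytics weights directly, is the tfjs export target; the weights path below is a placeholder:
from ultralytics import YOLO

# exports a <name>_web_model/ folder containing model.json,
# matching the path loaded in App.jsx below
model = YOLO('runs/detect/train/weights/best.pt')
model.export(format='tfjs')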
https://github.com/Hyuto/yolov8-seg-tfjs/blob/master/src/App.jsx
useEffect(() => {
  tf.ready().then(async () => {
    const yolov8 = await tf.loadGraphModel(
      `${window.location.href}/${modelName}_web_model/model.json`,
      // ... (rest as in the linked App.jsx)
labels.json should be updated with the classes present in your project (an illustrative sketch follows the path below).
src/utils/labels.json
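For illustration only, a partial sketch of the file's shape; the actual array must list all thirteen class names exactly as they appear in your training data, in the same order (the names below are assumptions abbreviated from the varieties described above):
[
  "Neocaridina",
  "RGalaxyPinto",
  "BGalaxyPinto",
  "Bluebolt"
]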
IoT Deployment
The intent was to deploy an edge-device use case on a 75-gallon fish tank; however, that is not possible at the moment. Instead, the software component was run on saved video footage of the 75-gallon tank. More about the IoT component can be read at the following link.
https://www.hackster.io/dfunkapostal/computer-vision-tinkerboard-rpi-streaming-c52dca
Links to External Resources:
Caridina Shrimp: https://en.wikipedia.org/wiki/Bee_shrimp
Neo-Caridina Shrimp: https://en.wikipedia.org/wiki/Neocaridina
Roboflow Polygon Zoning/Tracking/Counting: https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-detect-and-count-objects-in-polygon-zone.ipynb
Roboflow SAM: https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb
Ultralytics Hub: https://github.com/ultralytics/hub