Wild animal detection and monitoring has always been a challenging topic and an active research area. Most current detection and monitoring workflows rely on commercial camera traps that take pictures of wild animals when triggered by some kind of sensor (e.g., IR sensors). However, those images still need to be collected and analyzed by humans with a tremendous amount of effort. In a wild environment, the combined cost of deploying, collecting, and analyzing is significant. With the recent progress of AI, there are mature tools we can use to analyze the collected images. However, there is still no good story for how to use the AI output to really solve the wild animal detection and monitoring problem.
In this project, I'm trying to propose an end-to-end solution which could potentially solve some basic problems in the wild animal detection and monitoring process. The idea is simple yet powerful: run AI locally on a Raspberry Pi Zero to detect a wild animal and then send out the detected result (which could be just a few bytes) through Hologram cellular, with no need for an internet connection. The local AI computation and the cellular connection are the keys to this project.
Wild Animal Monitoring Challenges
There are a few notable challenges in wild animal detection and monitoring:
- Data transfer. There won't be internet access in the wild. We have cellular or radio, but we cannot afford to send/receive much data over those channels.
- An AI model on a Raspberry Pi Zero. There are sophisticated AI models running in the cloud and powerful models running on desktop machines. But an AI model running on a Raspberry Pi, or even a Pi Zero?
- Hardware reliability in the wild. Yes, the wild can be really wild.
In this post, I'll explain how my prototype addresses the first two challenges and share my thoughts on the third one (and I sincerely invite anyone who is interested in this topic to share their thoughts and feedback).
Basic Hardware Setup
This project has a basic hardware setup:
- Raspberry Pi Zero W: the brain that runs the AI model to classify/detect a wild animal;
- A Pi Camera module: I'm using a NoIR Pi Camera V2 for night vision (it also needs an IR light source);
- Hologram Nova: provides the cellular connection to send out the detected result;
- A motion sensor to trigger the camera when a wild animal passes by.
These are quite standard hardware components, so I won't elaborate on wiring them up in detail here, to keep this post at a reasonable length :-).
Software
The key to this project is the use of the Microsoft Embedded Learning Library (ELL) to classify the images/videos taken by the Pi Camera. With this running locally on the Raspberry Pi Zero, we don't need to worry about sending large image data through the Hologram Nova. Instead, we can simply send the detection result efficiently over the cellular channel.
Install ELL on Raspberry Pi Zero W
Microsoft ELL has very detailed instructions for running the library on a Raspberry Pi 3 Model B, which has a quad-core ARMv7 SoC and 1 GB of memory. However, we have a Raspberry Pi Zero, an ARMv6 board with 512 MB of RAM that is much more limited than the RPi 3, and standard ELL does not support it. Fortunately, ELL is an open-source library, so I was able to hack through the code to make it work on my Raspberry Pi Zero W. I have a very detailed tutorial for Raspberry Pi Zero support in my forked branch of ELL.
Warning: it takes tons of patience to make this work on the Raspberry Pi Zero because you have to build a number of packages from source targeting ARMv6. Here is a brief summary of how long it took to install ELL's dependencies on a Raspberry Pi Zero:
- Install OpenBLAS: ~4 hours;
- Install NumPy: ~1 hour;
- Install OpenCV: ~10 hours;
I hope you won't give up after seeing those figures. There is lots of fun to be had.
Update 1: My hack to support ELL on the Raspberry Pi Zero has been officially accepted by the ELL team!! Now you can find my tutorial in the official ELL repository. Yay!
Update 2: It's such a painful process to prepare the Raspberry Pi Zero to run ELL that I decided to create a Docker image to help anyone who wants to try ELL on a Raspberry Pi Zero. The image contains all the necessary packages such as OpenBLAS, NumPy and OpenCV, so you don't have to wait forever before your first try. If you'd rather not set up Docker on your Raspberry Pi Zero, no worries: I have wrapped up a script that streamlines all the necessary steps to run an ELL model. You can find the tutorial for running the script in my GitHub project ELL-Docker. It also contains a Dockerfile to build the image yourself if you prefer to do it on your own.
IR motion triggered PiCamera Recording
I have a very basic IR motion triggering function hooked up to my PiCamera recording functionality. The PIR motion sensor was quite noisy in my testing, so I'm planning to find an alternative so that I can trigger the camera only when needed. (One possible software mitigation is a simple debounce, sketched after the snippet below.)
Here is the code snippet for IR motion triggered PiCamera Recording:
import RPi.GPIO as GPIO
import time
import datetime
from picamera import PiCamera

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD)

PIR_PIN = 11
GPIO.setup(PIR_PIN, GPIO.IN)  # Read output from the PIR motion sensor

def trigger_camera(PIR_PIN):
    camera = PiCamera()
    camera.resolution = (640, 480)
    time_stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    video_name = "video-{}.h264".format(time_stamp)
    print("start recording for 15 seconds...")
    camera.start_recording(video_name, format='h264')
    camera.wait_recording(15)
    camera.stop_recording()
    camera.close()  # release the camera so the next trigger can open it again
    print("recording ended and saved into {}!".format(video_name))

try:
    print("add GPIO event")
    GPIO.add_event_detect(PIR_PIN, GPIO.BOTH, callback=trigger_camera)
    while True:
        time.sleep(100)
except KeyboardInterrupt:
    print("Quit")
finally:
    GPIO.cleanup()
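Since the PIR sensor is noisy, one possible software mitigation (not part of my current setup, just a sketch) is to debounce the trigger: only start recording when the sensor output stays high for a short confirmation window. The window and poll interval below are arbitrary values chosen for illustration:

def motion_confirmed(pin, window=0.5, poll_interval=0.05):
    # Return True only if the PIR output stays high for the whole window.
    deadline = time.time() + window
    while time.time() < deadline:
        if not GPIO.input(pin):
            return False  # the reading dropped, treat it as noise
        time.sleep(poll_interval)
    return True

def trigger_camera_debounced(PIR_PIN):
    if motion_confirmed(PIR_PIN):
        trigger_camera(PIR_PIN)

# Register this callback on the rising edge instead of the original one:
# GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=trigger_camera_debounced)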
Processing Video Using ELL
Now you have video recorded whenever IR motion is detected, and you have the "brain", ELL, installed on the Raspberry Pi Zero. Let's detect wild animals in that video using ELL.
In my prototype, I'm using a pre-trained ImageNet model from the ELL gallery (unfortunately I haven't had the chance to train a model on real wild animal photos due to time constraints). I use OpenCV to grab each frame from the recorded video and ELL to predict the object inside the frame. Once a result is predicted, I can send it out through Hologram.
Here is the code snippet for the core functionality of processing image frame using ELL:
def process_frame(frame, categories, frame_count, output_frame_path):
    # 'model' is the compiled ELL model's Python wrapper and 'helpers' is the
    # ELL tutorial helper module; both (plus os, cv2, time) are imported in the full script.
    if frame is None:
        print("Not a valid input frame! Skip...")
        return None

    input_shape = model.get_default_input_shape()
    output_shape = model.get_default_output_shape()
    predictions = model.FloatVector(output_shape.Size())

    input_data = helpers.prepare_image_for_model(
        frame, input_shape.columns, input_shape.rows)

    start = time.time()
    model.predict(input_data, predictions)
    end = time.time()

    # Get the value of the top 5 predictions
    top_5 = helpers.get_top_n(predictions, 5)
    if len(top_5) > 0:
        # Generate header text that represents the top 5 predictions
        header_text = ", ".join(["({:.0%}) {}".format(
            element[1], categories[element[0]]) for element in top_5])
        helpers.draw_header(frame, header_text)

        # Generate footer text that represents the evaluation time
        time_delta = end - start
        footer_text = "{:.0f}ms/frame".format(time_delta * 1000)
        helpers.draw_footer(frame, footer_text)

        # Save the processed frame
        output_file_path = os.path.join(output_frame_path, "recognized_{}.png".format(frame_count))
        cv2.imwrite(output_file_path, frame)
        print("Processed frame {}: header text: {}, footer text: {}".format(frame_count, header_text, footer_text))
        return (header_text, output_file_path)
    else:
        print("Processed frame {}: nothing recognized!".format(frame_count))
        return None
def analyze_video(input_video_path, output_frame_path):
    # Open the recorded video file (a camera index could be used instead for a live camera).
    camera = cv2.VideoCapture(input_video_path)
    output = []

    # Read the category names
    with open("categories.txt", "r") as categories_file:
        categories = categories_file.read().splitlines()

    i = 0
    while camera.isOpened():
        # Get a frame from the video.
        start = time.time()
        image = get_image_from_camera(camera)
        end = time.time()
        time_delta = end - start
        print("Getting frame {}, time: {:.0f}ms".format(i, time_delta * 1000))

        if image is not None:
            result = process_frame(image, categories, i, output_frame_path)
            if result is not None:
                output.append(result)
        else:
            # No more frames to read (end of the video file), stop processing.
            print("WARNING: no frame returned! Stopping.")
            break
        i += 1
    return output
You can find the full script here.
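For reference, the get_image_from_camera helper used above follows the pattern from the ELL tutorials; a minimal sketch of the behavior the script relies on (returning None when no more frames can be read) looks like this:

def get_image_from_camera(camera):
    # Read a single frame from an OpenCV VideoCapture.
    # Return None when the capture is closed or no frame is available.
    if camera is not None:
        ret, frame = camera.read()
        if ret:
            return frame
    return None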
Here are the outputs from sample real wild animal photos:
Hologram SDK triggered by Python 3
Now we are ready to send our recognition results through Hologram! The Hologram SDK supports Python 2.7, but unfortunately ELL requires Python 3.4 on the Raspberry Pi Zero. So I used a little trick: all the Hologram functions live in a separate module, which is triggered as a Python 2.7 subprocess from my ELL core functionality.
Here is the code for sending messages using the Hologram SDK in Python 2.7, send_to_hologram.py:
#!/usr/bin/env python2.7
import sys

from Hologram.HologramCloud import HologramCloud

credentials = {'devicekey': ''}
hologram = HologramCloud(credentials, network='cellular')

def send_messages(messages):
    if len(messages) < 1:
        return
    result = hologram.network.connect()
    if result == False:
        print ' Failed to connect to cell network'
        return
    for message in messages:
        response_code = hologram.sendMessage(message)
        print('{} : {}'.format(hologram.getResultString(response_code), message))
    hologram.network.disconnect()

def main(argv):
    if len(argv) < 1:
        print("No messages to send")
        return
    else:
        print("Messages to send: {}".format(str(argv)))
        send_messages(argv)

if __name__ == "__main__":
    main(sys.argv[1:])
Now we need another function in Python 3 to trigger the above function in a subprocess. Here is the code snippet:
import shlex
import subprocess

def send_to_hologram(messages):
    # The Hologram SDK only works in a Python 2.7 environment,
    # so we have to call its script through a subprocess.
    call_hologram_command = "sudo python2.7 send_to_hologram.py " + \
        " ".join(shlex.quote(m) for m in messages)
    with subprocess.Popen(call_hologram_command, shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True) as proc:
        for line in proc.stdout:
            print(line.strip("\n"), flush=True)
        for line in proc.stderr:
            print(line.strip("\n"), flush=True)
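To tie the pieces together, here is a sketch of how the detection results could be forwarded (the video file name below is just an example; results comes from the analyze_video function shown earlier):

results = analyze_video("video-20180101-120000.h264", "frames")
if results:
    # Each result is a (header_text, frame_path) tuple; send just the texts.
    send_to_hologram([header for header, _ in results])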
Hologram with Custom Cloud
Hologram has a very neat HologramCloud integration, so you can easily send data to HologramCloud and set up rules in the nice web console to handle your data. However, there are still cases where you need to send data to your own server or private cloud, and sometimes it's even necessary.
Consider the following architecture for the wild animal monitoring and detection project:
Once the data has been collected by the IoT device (the Raspberry Pi here), we send it to a private cloud (this could be a Microsoft IoT Edge or Amazon Greengrass core device). Why is this important? There are two obvious benefits:
- Data efficiency. With a local private cloud setup, you can send data to your own private cloud much more easily and reliably than to a public cloud. You can even build a mesh network to improve things further. In a wild environment, data transfer reliability is a critical success factor.
- Cost savings. If you have thousands of IoT devices deployed, you have a lot of data transfer going on. If all of that data is sent to a public cloud, it becomes a big financial burden. With a private cloud, you can gather, analyze, and process the data yourself before sending anything to the public cloud. This could be a life saver for your wallet.
Fortunately, Hologram supports CustomCloud, so you can send data directly to a server you define. In my project, I prototyped a solution that sends the recognition result and the image encoded as a Base64 string (in a real-world scenario you probably don't want to send the image) to my WebSocket server through Hologram cellular.
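As a side note, here is a minimal sketch of how a recognized frame could be turned into a Base64 payload before being handed to the sending script (the helper name and file name are just examples, not part of my actual scripts):

import base64

def encode_image_as_base64(image_path):
    # Read the image bytes and encode them as a Base64 ASCII string.
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("ascii")

# Example usage:
# payload = encode_image_as_base64("recognized_0.png")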
On the Raspberry Pi Zero, I changed the above send_to_hologram.py script to the following:
#!/usr/bin/env python2.7
import argparse
import os

from Hologram.HologramCloud import HologramCloud
from Hologram.CustomCloud import CustomCloud

credentials = {'devicekey': ''}
hologram_cloud = HologramCloud(credentials, network='cellular')
custom_cloud = CustomCloud(dict(), send_host='192.168.1.14', send_port=9999, network='cellular')

def send_messages(messages, is_custom_cloud=False):
    if len(messages) < 1:
        return
    if is_custom_cloud:
        cloud_obj = custom_cloud
    else:
        cloud_obj = hologram_cloud

    result = cloud_obj.network.connect()
    if result == False:
        print ' Failed to connect to cell network'
        return

    for message in messages:
        m = message + "\n"
        response = cloud_obj.sendMessage(m)
        if is_custom_cloud:
            result_string = cloud_obj.getResultString(response)
            print(result_string)
    cloud_obj.network.disconnect()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Send messages to HologramCloud or CustomCloud through Hologram Nova')
    parser.add_argument('-m', '--messages', nargs='+', help='the messages to be sent out')
    parser.add_argument('-d', '--input-dir', help='a folder of files whose contents will be sent out')
    parser.add_argument('--custom-cloud', action='store_true',
                        help='send to CustomCloud (default: send to HologramCloud)')
    args = parser.parse_args()

    if args.input_dir:
        for file_name in os.listdir(args.input_dir):
            file_path = os.path.join(args.input_dir, file_name)
            with open(file_path, 'r') as input_file:
                message_to_send = input_file.read()
            print('Sending data from file {}'.format(file_path))
            send_messages([message_to_send], args.custom_cloud)
    else:
        send_messages(args.messages, args.custom_cloud)
In the updated script, I added a CustomCloud option to send out data to my WebSocket server.
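For example (assuming a folder named encoded_frames that holds the payload files to send; the name is just an illustration), the updated script can be invoked like this:
sudo python2.7 send_to_hologram.py -d ./encoded_frames --custom-cloud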
Now on my test server, I used a small but super neat tool called websocketd to open a WebSocket for accepting data from Hologram cellular. To create the WebSocket server, run the following:
websocketd --port=8080 --staticdir=./webserver nc -k -l 9999
The above command runs a WebSocket server on port 8080 and feeds it whatever netcat receives while listening on port 9999 (the port our Hologram data is sent to). The result is shown on a web page served from the webserver root folder.
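To sanity-check the server end without the cellular link, you can mimic what the Hologram CustomCloud send would deliver by pushing a test line to the netcat listener (host and message below are just examples):

import socket

def send_test_message(host="localhost", port=9999, message="(90%) brown bear\n"):
    # Open a TCP connection to the netcat listener and send one line of text.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message.encode("utf-8"))

if __name__ == "__main__":
    send_test_message()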
Here is how it looks when everything is put together (the following is a GIF image; please click on it if you don't see it playing):
You can find all the source code for the web part in my GitHub project here.
Things Un-Done
Although this is a contest entry, I feel I should include the things I haven't had time to complete, because they are necessary for a practical wild animal detection solution. I also value this section because I believe it could inspire anyone interested in this topic to solve these problems:
- A real model trained on real wild animal photos: there are many tools you can use to train such a model, and you can also write your own if you are an ML expert familiar with frameworks such as TensorFlow, CNTK, etc.;
- A good case to hold all the hardware: this would make the prototype beautiful and practical. I'm aiming at the wild environment, so a good case should be waterproof while allowing a flexible setup for the camera and IR sensor. This is actually a step I'm working on now (YES, I'm learning 3D printing...);
- Power consumption: this is a topic I want to tackle but haven't really figured out yet. I've looked into solar panels, but I'm not convinced by their performance in a wild environment. A Raspberry Pi Zero running AI plus the Hologram modem can easily eat a lot of power, and if the batteries need replacing every few hours, the whole project becomes meaningless. If you have any good ideas on this topic, please let me know.
This is a prototype project which I personally believe is meaningful for real-world wild animal monitoring and detection. There are many projects on the internet that use a Raspberry Pi to build a DIY camera trap, but they don't really answer some of the key questions in wild animal detection/monitoring. I hope my prototype can shed some light on this topic and help people build a practical solution.
Beyond wild animal detection and monitoring, AI + IoT + cellular is such a powerful model that you could build many more applications with it. You've probably heard about the Google Clips camera and Amazon DeepLens. With the tools in this project, you could easily build your own Clips or DeepLens on a much smaller budget. Even better, with the Hologram Nova you can build a cellular version of these AI-backed smart cameras (I'll probably write another post about building such a thing), which is cooler than what Google or Amazon offer :-) I'm sure you'll try it, right?
That's all I've got so far. I hope you enjoyed my project!
Thanks!
Eric