In Colorado and other states with large wildlife populations, human-wildlife conflicts are not uncommon (bears and moose wandering into towns, traffic accidents involving elk, and so on).
I decided to help prevent some of the problems that arise when wildlife comes near people by creating a device that detects motion, takes a picture, and analyzes it with a machine learning algorithm so that some action can be taken (scaring the animal away, lighting warning signs, notifying authorities). In addition, the device records environmental data to help researchers track migration patterns and the conditions that lead to more wildlife encounters, and it can detect wildfires to limit the damage they cause.
The main focus of this project is machine learning wildlife classification. I accomplished this by training TensorFlow to identify four specific species of animal using images gathered from Google Images. All of the initial training is done on a computer using a preconfigured Docker image for TensorFlow and over a thousand wildlife pictures. Once TensorFlow has been trained, the graph file is optimized to run on mobile devices. The Android Things device then takes an image, resizes it, and runs it through TensorFlow to determine whether it contains one of the pre-trained classifications, and with what confidence.
Once an animal has been identified, something has to be done with that data. Depending on your situation, you could flash lights, play a sound, or drive any other hardware you attach. For this prototype I hooked Android Things into a Firebase backend to simply save the image and the classification information.
Once the information is stored on Firebase, you can create a helper application that receives a notification about the presence of an animal, or simply store that data for research purposes.
In addition, if you want to be able to see the results without having to check Firebase, you can add an LCD screen to this project. You can find an easy-to-use driver for the 1601-series LCD screen by Gautier Mechling here.
Creating the TensorFlow Classification Files
The first thing you will want to do is make sure TensorFlow is installed and working on your computer. Getting a working setup can be fairly complex, and the easiest way I have found to get through the entire process of generating the trained files is to install and use Docker, which lets you run preconfigured TensorFlow containers on your computer.
Once you have Docker installed and running, open its preferences and set the amount of memory it can use. I set mine to 7 GB, which may be more than you need, but I spent days trying to get TensorFlow to create the required trained graphs without crashing before I realized that Docker was simply running out of memory.
Next, open a terminal, pull down a TensorFlow image, and check out the version used in this example. I am running under macOS, so the commands may be a bit different on your platform.
docker run -it -v $HOME/tf_files:/tf_files gcr.io/tensorflow/tensorflow:latest-devel
cd /tensorflow
git pull
git checkout v1.0.1
When everything has finished setting up, you should have a prompt in your terminal like this:
root@1643721c503b:/tensorflow#
At this point you'll need a set of images to train TensorFlow with. I used the Fatkun Batch Download Image Chrome plugin to bulk-download the images returned by a Google search. With the plugin installed, you can search for whatever you want to classify and select the images you want to save.
To make naming a little easier, you may also want to go into the More Options section and let the plugin rename the images while downloading them.
Next you will need to move your images into a folder in tf_files under your home directory, which is the directory we mapped into the container when starting the Docker image. For this example, my images directory is called TensorFlowTrainingImages. Each classifiable item should have its own folder within that directory, as shown below.
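For instance, the layout might look something like this (the species folder names are only placeholders for whatever you are training against):
tf_files/
    TensorFlowTrainingImages/
        bear/
        moose/
        elk/
        mountain_lion/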
Once your directories are set up, you can start retraining with the following command from your Docker terminal:
python tensorflow/examples/image_retraining/retrain.py \
--bottleneck_dir=/tf_files/bottlenecks \
--how_many_training_steps 3000 \
--model_dir=/tf_files/inception \
--output_graph=/tf_files/graph.pb \
--output_labels=/tf_files/labels.txt \
--image_dir /tf_files/TensorFlowTrainingImages
The above command will generate bottlenecks (cached outputs of the network's next-to-last layer for each training image, which are reused by the final classification pass), along with a graph file and a labels file that are used for classification.
From this point forward, the operations we run with TensorFlow can take anywhere from a few minutes to over an hour, depending on the speed of your computer. As the retraining command runs, you should see a lot of output in your terminal similar to this:
Step 130: Train accuracy = 95.0%
2017-04-12 18:21:28.495779: Step 130: Cross entropy = 0.250339
2017-04-12 18:21:28.748928: Step 130: Validation accuracy = 92.0% (N=100)
Once retraining finishes, you will have a graph.pb file and a labels.txt file that represent your data. While these files work fine for running classification on your computer, they tend not to work when dropped straight into an Android app, so you will need to optimize the graph.
Start by running the ./configure script in the /tensorflow directory and accept all the default values.
Once configuration has finished, run the following command to set up the optimization tool. This step took about an hour to finish on my machine.
bazel build tensorflow/python/tools:optimize_for_inference
Once your optimization tool is built, you can use it to optimize your graph file with bazel.
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result
Now that your optimized graph is generated, you can find it with your labels in the tf_files folder in your home directory.
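Before copying the optimized graph into the app, it can be worth sanity-checking it from the Docker terminal. One way to do that, assuming you still have the TensorFlow source tree from earlier, is the label_image example; the image path below is only a placeholder for any test picture you have copied into tf_files:
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tf_files/optimized_graph.pb \
--labels=/tf_files/labels.txt \
--input_layer=Mul \
--output_layer=final_result \
--input_width=299 \
--input_height=299 \
--input_mean=128 \
--input_std=128 \
--image=/tf_files/test.jpg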
Once you have your optimized graph, you can include it in your Android Things project and interact with it. If you look at the source code for this project, you can see the Java code that captures an image using the Camera2 API and passes it to TensorFlowImageClassifier.java for classification (a condensed version of that classification step is sketched after the upload snippet below). You can also find the code for uploading the classified image to Firebase, like so:
private void uploadAnimal(Bitmap bitmap, final Detection detectedAnimal) {
    // Compress the captured frame to JPEG before uploading
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
    byte[] data = outputStream.toByteArray();

    // Store the image in Firebase Storage under a timestamped name
    FirebaseStorage storage = FirebaseStorage.getInstance();
    StorageReference storageReference = storage.getReferenceFromUrl(
            FIREBASE_STORAGE_URL).child(System.currentTimeMillis() + ".jpg");

    UploadTask uploadTask = storageReference.putBytes(data);
    uploadTask.addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception exception) {
            // Upload failed; nothing to clean up in this prototype
        }
    }).addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
        @Override
        public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) {
            // Pass the download URL and classification data on to the database
            handleNotificationForImage(taskSnapshot.getDownloadUrl(),
                    detectedAnimal);
        }
    });
}
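For reference, the classification call inside TensorFlowImageClassifier.java boils down to resizing the bitmap, normalizing its pixels, and feeding it through the graph. Here is a condensed sketch of that step, assuming the org.tensorflow.contrib.android inference library (its feed/run/fetch API) and the 299x299 input size of the retrained Inception graph; the method and constant names are mine, not the project's:
private static final int INPUT_SIZE = 299;
private static final float IMAGE_MEAN = 128f;
private static final float IMAGE_STD = 128f;

private float[] classify(TensorFlowInferenceInterface inference,
                         Bitmap bitmap, int numLabels) {
    // Resize the camera frame to the dimensions the graph expects
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true);

    // Convert ARGB pixels into normalized RGB floats
    int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
    scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
    float[] input = new float[INPUT_SIZE * INPUT_SIZE * 3];
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        input[i * 3] = (((p >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
        input[i * 3 + 1] = (((p >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
        input[i * 3 + 2] = ((p & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
    }

    // Feed the image into the graph and read back one confidence per label
    float[] confidences = new float[numLabels];
    inference.feed("Mul", input, 1, INPUT_SIZE, INPUT_SIZE, 3);
    inference.run(new String[] {"final_result"});
    inference.fetch("final_result", confidences);
    return confidences; // index into labels.txt with the highest value
}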
Environmental Sensors
While being able to identify wildlife is great, there's an opportunity here to do more. If one of these devices is set up in the wilderness, you can also add sensors to record weather information and report it to your backend. For my prototype, I included a humidity sensor, a combined temperature and pressure sensor, a flame detector (to theoretically report wildfires when no one is around), an air quality sensor, and a UV sensor. Some of these had their own unique challenges for attaching to Android Things, which I will cover next.
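As a rough illustration of the reporting side, readings can be pushed to the Firebase Realtime Database alongside the classification data. The "weather" node and field names below are only examples, not the project's actual schema:
private void reportWeather(float temperature, float pressure, float humidity) {
    // Bundle one snapshot of sensor readings
    Map<String, Object> reading = new HashMap<>();
    reading.put("temperature", temperature);
    reading.put("pressure", pressure);
    reading.put("humidity", humidity);
    reading.put("timestamp", System.currentTimeMillis());

    FirebaseDatabase.getInstance()
            .getReference("weather") // example node name
            .push()                  // one child entry per reading
            .setValue(reading);
}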
Digital Sensors
The simplest sensors to attach to the wildlife detector are the flame detector and the motion detector. These sensors idle low and switch to high when they detect a flame or motion. Using Android Things' peripheral I/O API, they were the easiest to support, as the device only needs to listen for changes in state.
Support for both the HC-SR501 motion detector and the flame detector follows the pattern of this class:
@SuppressWarnings({"unused", "WeakerAccess"})
public class HCSR501 implements AutoCloseable {
public enum State {
STATE_HIGH,
STATE_LOW;
}
public interface OnMotionDetectedEventListener {
void onMotionDetectedEvent(State state);
}
private Gpio mMotionDetectorGpio;
private OnMotionDetectedEventListener mOnMotionDetectedEventListener;
private boolean mLastState;
public HCSR501(String pin) throws IOException {
PeripheralManagerService pioService = new PeripheralManagerService();
Gpio HCSR501Gpio = pioService.openGpio(pin);
try {
connect(HCSR501Gpio);
} catch( IOException | RuntimeException e ) {
close();
throw e;
}
}
private void connect(Gpio HCSR501Gpio) throws IOException {
mMotionDetectorGpio = HCSR501Gpio;
mMotionDetectorGpio.setDirection(Gpio.DIRECTION_IN);
mMotionDetectorGpio.setEdgeTriggerType(Gpio.EDGE_BOTH);
mLastState = mMotionDetectorGpio.getValue();
mMotionDetectorGpio.setActiveType(mLastState ? Gpio.ACTIVE_HIGH : Gpio.ACTIVE_LOW);
mMotionDetectorGpio.registerGpioCallback(mInterruptCallback);
}
private void performMotionEvent(State state) {
if( mOnMotionDetectedEventListener != null ) {
mOnMotionDetectedEventListener.onMotionDetectedEvent(state);
}
}
private GpioCallback mInterruptCallback = new GpioCallback() {
@Override
public boolean onGpioEdge(Gpio gpio) {
try {
if( gpio.getValue() != mLastState ) {
mLastState = gpio.getValue();
performMotionEvent(mLastState ? State.STATE_HIGH : State.STATE_LOW);
}
} catch( IOException e ) {
// Ignore the failed read; the next GPIO edge will trigger another attempt
}
return true;
}
};
public void setOnMotionDetectedEventListener(OnMotionDetectedEventListener listener) {
mOnMotionDetectedEventListener = listener;
}
@Override
public void close() throws IOException {
mOnMotionDetectedEventListener = null;
if (mMotionDetectorGpio != null) {
mMotionDetectorGpio.unregisterGpioCallback(mInterruptCallback);
try {
mMotionDetectorGpio.close();
} finally {
mMotionDetectorGpio = null;
}
}
}
}
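Using the class is then just a matter of opening it on whichever GPIO pin the sensor is wired to and registering a listener. A minimal usage sketch, with "BCM21" standing in for your own pin name:
try {
    HCSR501 motionSensor = new HCSR501("BCM21"); // substitute your board's pin name
    motionSensor.setOnMotionDetectedEventListener(new HCSR501.OnMotionDetectedEventListener() {
        @Override
        public void onMotionDetectedEvent(HCSR501.State state) {
            if (state == HCSR501.State.STATE_HIGH) {
                // Motion detected: capture an image and kick off classification
            }
        }
    });
} catch (IOException e) {
    // The pin could not be opened or configured
}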
Analog Sensors
Analog sensors are a little trickier with Android Things. While a device may have an onboard analog-to-digital converter (ADC), it is not enabled on the Android Things platform. To get around this, I used an MCP3008 ADC chip to read the analog input and convert it to an int that the Android Things board can read. This is done in a quick and dirty bit-banged fashion, so you can change the pins to match whatever you have available. Using the ADC, I was able to add support for the air quality sensor and the UV sensor. The following code is what I used for the MCP3008, and it has proven valuable in multiple Android Things projects I have worked on.
public class MCP3008 {
private final String csPin;
private final String clockPin;
private final String mosiPin;
private final String misoPin;
private Gpio mCsPin;
private Gpio mClockPin;
private Gpio mMosiPin;
private Gpio mMisoPin;
public MCP3008(String csPin, String clockPin, String mosiPin, String misoPin) {
this.csPin = csPin;
this.clockPin = clockPin;
this.mosiPin = mosiPin;
this.misoPin = misoPin;
}
public void register() throws IOException {
PeripheralManagerService service = new PeripheralManagerService();
mClockPin = service.openGpio(clockPin);
mCsPin = service.openGpio(csPin);
mMosiPin = service.openGpio(mosiPin);
mMisoPin = service.openGpio(misoPin);
mClockPin.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
mCsPin.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
mMosiPin.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
mMisoPin.setDirection(Gpio.DIRECTION_IN);
}
public int readAdc(int channel) throws IOException {
if( channel < 0 || channel > 7 ) {
throw new IOException("ADC channel must be between 0 and 7");
}
initReadState();
initChannelSelect(channel);
return getValueFromSelectedChannel();
}
private int getValueFromSelectedChannel() throws IOException {
int value = 0x0;
for( int i = 0; i < 12; i++ ) {
toggleClock();
value <<= 0x1;
if( mMisoPin.getValue() ) {
value |= 0x1;
}
}
mCsPin.setValue(true);
value >>= 0x1; // first bit is 'null', so drop it
return value;
}
private void initReadState() throws IOException {
mCsPin.setValue(true);
mClockPin.setValue(false);
mCsPin.setValue(false);
}
private void initChannelSelect(int channel) throws IOException {
int commandout = channel;
commandout |= 0x18; // start bit + single-ended bit
commandout <<= 0x3; // we only need to send 5 bits
for( int i = 0; i < 5; i++ ) {
if ( ( commandout & 0x80 ) != 0x0 ) {
mMosiPin.setValue(true);
} else {
mMosiPin.setValue(false);
}
commandout <<= 0x1;
toggleClock();
}
}
private void toggleClock() throws IOException {
mClockPin.setValue(true);
mClockPin.setValue(false);
}
public void unregister() {
if( mCsPin != null ) {
try {
mCsPin.close();
} catch( IOException ignore ) {
// do nothing
}
}
if( mClockPin != null ) {
try {
mClockPin.close();
} catch( IOException ignore ) {
// do nothing
}
}
if( mMisoPin != null ) {
try {
mMisoPin.close();
} catch( IOException ignore ) {
// do nothing
}
}
if( mMosiPin != null ) {
try {
mMosiPin.close();
} catch( IOException ignore ) {
// do nothing
}
}
}
}
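Reading a sensor through the MCP3008 then looks something like the snippet below; the pin names are placeholders for whatever GPIO you wired the chip to, and each channel returns a value between 0 and 1023:
try {
    // Pin order matches the constructor: CS, clock, MOSI, MISO
    MCP3008 adc = new MCP3008("BCM12", "BCM21", "BCM16", "BCM20");
    adc.register();
    int airQuality = adc.readAdc(0); // sensor wired to channel 0
    int uvLevel = adc.readAdc(1);    // sensor wired to channel 1
    adc.unregister();
} catch (IOException e) {
    // Handle wiring or communication problems
}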
Unsupported Hardware
Unfortunately, the DHT11 humidity sensor uses a signaling protocol with timings measured in microseconds, while the Android platform only supports timing down to milliseconds, so Android Things cannot drive the DHT11 directly. However, there is a workaround. Using an ATmega328P flashed with the Arduino bootloader, I was able to read the DHT11 from the Arduino side and send that information to the Android Things device over I²C.
You can read I²C from a custom address by opening a connection to that address in Android Things like so:
PeripheralManagerService service = new PeripheralManagerService();
mHumidity = service.openI2cDevice(BoardDefaults.getHumidityI2cBus(), 0x08);
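Once the device is open, the humidity value sent by the ATmega328P can be read back as a single byte. A minimal sketch, assuming the I2cDevice read call and ignoring error handling:
byte[] buffer = new byte[1];
mHumidity.read(buffer, 1);              // request one byte from the slave at address 0x08
int humidityPercent = buffer[0] & 0xFF; // the unsigned humidity value written by the Arduino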
In the above sample, the custom address we are using is 8. In your Arduino code running on the ATmega328P, you can use that same address with the Wire library:
#include <Wire.h>
#include <dht11.h>

dht11 DHT11;
#define DHT11PIN 13

uint8_t humidity;

void setup() {
  Wire.begin(8);                // join i2c bus with address #8
  Wire.onRequest(requestEvent); // register event
}

void loop() {
  int chk = DHT11.read(DHT11PIN); // read the sensor (the status code is unused here)
  humidity = DHT11.humidity;      // cache the latest reading for I2C requests
  delay(1000);                    // sample roughly once per second
}

void requestEvent() {
  Wire.write(humidity); // reply to the Android Things master with the latest value
}
Pre-Written Drivers
While adding support for digital I/O is easy, there is one thing that is even easier: using code that's already written. For the BMP280, I was able to bring in a pre-written driver from Google's official repo of drivers, allowing me to read information from that environmental sensor without much work or time. You can find Google's pre-written drivers here; there are a few that could be useful for any project you are putting together with Android Things. These drivers are as simple to get into your project as adding them to your Gradle dependencies:
dependencies {
compile 'com.google.android.things.contrib:driver-bmx280:0.2'
provided 'com.google.android.things:androidthings:0.4-devpreview'
}
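Reading the sensor through the driver then takes only a few lines. A minimal sketch, assuming the Bmx280 class from that dependency and a Raspberry Pi I²C bus name of "I2C1" (check your own board's bus name):
try {
    Bmx280 sensor = new Bmx280("I2C1"); // bus name varies by board
    sensor.setTemperatureOversampling(Bmx280.OVERSAMPLING_1X);
    sensor.setPressureOversampling(Bmx280.OVERSAMPLING_1X);
    float temperature = sensor.readTemperature(); // degrees Celsius
    float pressure = sensor.readPressure();       // hPa
    sensor.close();
} catch (IOException e) {
    // Handle sensor communication errors
}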
For this project, I also used the GPS driver to add location data to Firebase, though you may be able to retrieve that data from whatever network connection you are using.
Network Connections
In its current state, the wildlife detector simply uses Wi-Fi for its Internet connection. While this is not the most practical option, especially since the device would most likely be deployed in an area without a readily available wireless network, it can easily be modified to use a cellular connection or whatever other connection is more appropriate.
Camera
The camera is where things get a little tricky. USB cameras are not currently fully supported by Android Things, although you can treat a serial camera as a UART device. If you don't want to essentially write a driver for your camera, you can also use a Bluetooth camera, as Android Things does support Bluetooth connections. If you are using a Raspberry Pi with Android Things, you can use the standard ribbon-cable camera with the Android camera API without much hassle.
More Details
While there is a lot going on in this project, more details can be found in the attached source project. Once you have generated your TensorFlow graph, as explained above, you should be able to download the source and swap in your own graph file for whatever you are attempting to detect. For Firebase support, you will also need to create your own free Firebase project and copy its credentials into the project.