The idea of this project is to help people find belongings they left somewhere in the house but can't seem to locate when needed.
To save you the time spent searching for these items, we've developed a system where you simply ask Alexa for the location of an object and get its precise location! Be it under a pile of clothes on your bed or inside a cupboard, Alexa comes to the rescue! This can be very useful when you're running late and can't find your mobile phone, wallet, or some other item you need right away.
The Raspberry Pi camera keeps track of these small objects (specified by the user) relative to larger fixtures in the room such as tables, chairs, and beds. It can also help you make sure your possessions are where they should be!
Setting up the system

The Raspberry Pi (RPi) module is connected to the RPi camera and an Arduino Uno, and to a server machine via the Internet. The RPi continuously transmits the video recorded by the RPi camera to the server machine, which tracks the positions of the smaller objects already marked as important and saves their last known locations in an online database. The Arduino Uno is connected to the RPi to control a servo motor that manages the orientation of the camera, ensuring that the maximum area is covered and all objects are recorded under the best possible conditions. The camera is mounted on the motor.
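As a rough illustration of the streaming half of this setup, here is a minimal Python sketch that grabs frames from the RPi camera and posts them to the detection server. The endpoint URL, port, and transport are assumptions for illustration; the actual code in the repository may use a different mechanism.

```python
# Minimal sketch: push RPi camera frames to the detection server.
# SERVER_URL is a hypothetical endpoint, not the project's actual one.
import cv2
import requests

SERVER_URL = "http://<server-ip>:8000/frame"  # placeholder address

cap = cv2.VideoCapture(0)  # RPi camera exposed as /dev/video0 via the V4L2 driver
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode each frame and POST it for YOLO detection on the server
        _, jpeg = cv2.imencode(".jpg", frame)
        requests.post(SERVER_URL, data=jpeg.tobytes(),
                      headers={"Content-Type": "image/jpeg"})
finally:
    cap.release()
```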
The Darknet implementation of YOLO, trained on the COCO dataset, has been used for object recognition; it can be set up on any x86/x64 machine by following the excellent tutorial by Joseph Redmon at https://pjreddie.com/darknet/yolo/.
In case you need to train the model on your own object classes, keep following the code, and we will try to add a well-documented version for that very soon!
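Once darknet is built per that tutorial, you can sanity-check the setup by invoking the binary from Python and reading its console output. This is just a convenience sketch; the config and weight file names follow the tutorial and may differ for the YOLO version you build.

```python
# Sketch: run a single darknet detection from Python and print the results.
# Paths follow the pjreddie tutorial; adjust them to your darknet build.
import subprocess

result = subprocess.run(
    ["./darknet", "detect", "cfg/yolov3.cfg", "yolov3.weights", "data/dog.jpg"],
    capture_output=True, text=True, cwd="/path/to/darknet")  # placeholder path
print(result.stdout)  # darknet prints detected labels and confidences here
```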
Launch the Object Finder skill on your Alexa device using the following command:
"Alexa, Open Object Finder."
You can then simply ask it about the whereabouts of your stuff. Alexa will connect to our database and answer your question with the item's last known location.
You need to alter some of the files in the darknet repository while setting up YOLO for this project (in particular, darknet/src/image.c).
The altered program can be found in the code section of this project. The modified lines simply save the coordinates of the bounding boxes of the recognized objects to a text file. The modified function within image.c has also been added separately in the code section, with useful comments wherever necessary!
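On the server side, those saved coordinates can then be read back for tracking and database updates. The sketch below assumes a simple one-detection-per-line text format (label, confidence, box corners); the file name and the exact format written by the modified image.c are assumptions here, so adjust the parsing to match.

```python
# Sketch: parse the bounding-box text file written by the modified image.c.
# File name and line format ("label conf left top right bottom") are assumed.
def read_detections(path="predictions.txt"):
    detections = []
    with open(path) as f:
        for line in f:
            label, conf, left, top, right, bottom = line.split()
            detections.append({
                "label": label,
                "confidence": float(conf),
                "box": tuple(int(v) for v in (left, top, right, bottom)),
            })
    return detections

for det in read_detections():
    print(det["label"], det["box"])
```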
Creating an Amazon Alexa Skill for this Project

Internal Implementation
- Create an Alexa Skill in the Alexa Skill Builder portal and define the interaction model as well as the intent schema. This sets up the framework for the skill's behavior.
- Create an AWS Lambda function that interfaces with Alexa and translates the various intents into requests that can be passed on. This Lambda function is written in Node.js using the Alexa Skills Kit. The Axios module is used in the Lambda back end to connect to our smart home device, and the Dashbot module is used for analytics.
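The actual handler lives in lambda/index.js (Node.js); purely to illustrate the request flow it implements, here is the core lookup sketched in Python. The database endpoint, record shape, and handler name are all hypothetical stand-ins, not the project's real code.

```python
# Illustration only: the real skill back end is Node.js (lambda/index.js).
# DB_URL and the record format are hypothetical stand-ins.
import requests

DB_URL = "https://<your-db-endpoint>/locations"  # placeholder endpoint

def handle_find_object_intent(item_name):
    # Look up the last known location saved by the tracking server
    record = requests.get(f"{DB_URL}/{item_name}").json()
    if record and "location" in record:
        return f"Your {item_name} was last seen {record['location']}."
    return f"Sorry, I haven't seen your {item_name} recently."

print(handle_find_object_intent("wallet"))
```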
How to get it up and running?
Setting up Your Alexa Skill in the Developer Portal
To link the skill with your Amazon Echo device, go to your Amazon Developer Console.
- Create a new skill. Call it "Where's my stuff??". Give the invocation name as "where's my stuff". Click Next.
- Click on the Launch Skill Builder (Beta) button. This will launch the new Skill Builder Dashboard.
- Click on the "Code Editor" item under Dashboard on the top left side of the skill builder.
- In the text field provided, replace any existing code with the code provided in speechAssets/intentSchema.json and click on "Apply Changes" or "Save Model".
- Click on the Save Model button, and then click on the Build Model button.
- If your interaction model builds successfully, click on the Configuration button to move on. We will now create our Lambda function in the AWS Developer Console, but keep this browser tab open, because we will be returning here later.
Setting Up A Lambda Function Using Amazon Web Services
- Go to http://aws.amazon.com and sign in to the console and choose the Lambda service from the search box. AWS Lambda only works with the Alexa Skills Kit in two regions: US East (N. Virginia) and EU (Ireland). Choose one of them.
- Click the "Create a Lambda function" button. Choose "Blueprints", then choose the blueprint named "alexa-skill-kit-sdk-factskill". And give your function a name.
- Set your Lambda function's role to "lambda_basic_execution" and click on Create Function.
- Configure your trigger. Look at the column on the left called "Add triggers", and select Alexa Skills Kit from the list.
- After you create the function, the ARN value appears in the top right corner. Copy this value for now.
- Scroll down to the section called "Function code" and replace any existing code with the code provided in lambda/index.js. You can also copy the code to your local machine, run npm install, and upload a .zip file containing your index.js, package.json, and node_modules using "Upload a .zip file" in the "Function code" section.
- Make sure you've copied the ARN value from the top right corner; if you haven't already, copy it now for use in the next section.
Connecting Your Voice User Interface To Your Lambda Function
- Go back to the Amazon Developer Portal and select your skill from the list.
- Open the "Configuration" tab on the left side, if you didn't keep it open as mentioned earlier, and select the "AWS Lambda ARN" option for your endpoint.
- Select "North America" or "Europe" as your geographical region and Paste your Lambda's ARN (Amazon Resource Name) into the textbox provided.
- Click Save and Next.
Your skill is up and running now! You can test it at https://echosim.io
For more details visit: https://github.com/TheDreamSaver/alexa-where-is-my-stuff
Object Tracking

Once an object has been recognized in an image, our system saves its whereabouts in real time, and then it is only a matter of keeping track of its movements!
Simply put, locating an object in successive frames of a video is called tracking.
For object tracking, there are many different types of approaches which can be used. These include:
- Dense Optical flow: These algorithms help estimate the motion vector of every pixel in a video frame.
- Sparse optical flow: These algorithms, like the Kanade-Lucas-Tomasi (KLT) feature tracker, track the location of a few feature points in an image.
- Kalman filtering: A very popular signal processing algorithm used to predict the location of a moving object based on its prior motion. One of the early applications of this algorithm was missile guidance! Famously, "the on-board computer that guided the descent of the Apollo 11 lunar module to the moon had a Kalman filter".
- Meanshift and Camshift: These are algorithms for locating the maxima of a density function. They are also used for tracking.
- Single object trackers: In this class of trackers, the first frame is marked using a rectangle to indicate the location of the object we want to track. The object is then tracked in subsequent frames using the tracking algorithm. In most real life applications, these trackers are used in conjunction with an object detector.
- Multiple object track finding algorithms: In cases when we have a fast object detector, it makes sense to detect multiple objects in each frame and then run a track finding algorithm that identifies which rectangle in one frame corresponds to a rectangle in the next frame.
For the purposes of this tutorial, we will stick to the Kalman filter (a minimal sketch follows below). Other algorithms can easily be swapped in via these lines of code in the file real_time.py:
```python
tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
tracker_type = tracker_types[2]
```
We then use the RPi camera's real-time feed along with YOLO's predicted bounding boxes to track the objects with ease! The rest of the well-commented code has been included in the GitHub repository!
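To make the Kalman filter step concrete, here is a minimal OpenCV sketch that smooths and predicts an object's center from successive YOLO detections. It assumes a constant-velocity motion model and detections supplied as (x, y) centers; the names are illustrative and not taken from real_time.py.

```python
# Minimal constant-velocity Kalman filter over detection centers (OpenCV).
# State is [x, y, vx, vy]; measurements are the detected (x, y) centers.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(cx, cy):
    """Predict the next position, then correct with the new detection."""
    predicted = kf.predict()
    kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
    return float(predicted[0, 0]), float(predicted[1, 0])

# Example: feed in centers of YOLO boxes from successive frames
for cx, cy in [(100, 200), (104, 198), (109, 195)]:
    print(track_step(cx, cy))
```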
Demo Video

The demo videos of this skill in action can be found here:
https://goo.gl/uShABj