Machine learning is one of the most exciting areas of computing, but the advanced concepts involved can be out of reach for the average non-techie. ML Mona Lisa attempts to bridge the gap by demonstrating image classification concepts in an interactive art installation that's accessible to all. 🎨✨
Getting Started 🔰 👩‍💻

The NVIDIA team has an astonishing collection of learning resources, including an excellent interactive course called Getting Started with AI on Jetson Nano, which will get you up and running with all of the required software and knowledge for this project.
NB: when flashing your microSD card, be sure to use the NVIDIA DLI AI Jetson Nano SD Card Image v1.1.1 (https://developer.download.nvidia.com/training/nano/ainano_v1-1-1_20GB_200203B.zip) specified in the course, NOT the "default" Jetson Nano Developer Kit SD Card Image from https://developer.nvidia.com/jetson-nano-sd-card-image-r3231.
I was completely blown away by how easy it was to execute my idea after completing the above Getting Started course. I was able to quickly adapt the Emotions Project to achieve my vision through a series of simple modifications to their example.
Load the Emotions Project notebook classification_interactive.ipynb as directed in Getting Started, including the changes made from the previous Thumbs Project. Add an array of images corresponding to the emotions in CATEGORIES to the Live Execution cell, along with an Image widget to display them:
# an array of images corresponding to CATEGORIES
ml_files = [
    open("images/ml-none.jpg", "rb").read(),
    open("images/ml-happy.jpg", "rb").read(),
    open("images/ml-sad.jpg", "rb").read(),
    open("images/ml-angry.jpg", "rb").read()
]

# widget to display the image for the currently-predicted category
# (ipywidgets is already imported earlier in the notebook)
ml_widget = ipywidgets.Image(value=ml_files[0], format='jpg', width=240, height=240)
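Note that ml_widget won't actually appear unless it's included in a displayed layout. How best to do this depends on how the notebook composes its widgets; as a minimal sketch, with camera_widget as a hypothetical stand-in for the image widget the notebook already uses for the camera feed:

import ipywidgets
from IPython.display import display

# show the Mona Lisa widget alongside the camera feed;
# `camera_widget` is a hypothetical stand-in for the notebook's camera image widget
display(ipywidgets.HBox([camera_widget, ml_widget]))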
Then, create an images folder using the File Browser tool, and use Upload Files to upload your images (you can grab mine from https://github.com/ishotjr/ml-mona-lisa/tree/master/images). Update ml_files as needed to match your image file paths.
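If the widget later renders as a broken image, an optional sanity check (not part of the course notebook) can confirm the uploads landed where ml_files expects them:

import os

# optional sanity check: confirm each uploaded image is where ml_files expects it
for path in ["images/ml-none.jpg", "images/ml-happy.jpg",
             "images/ml-sad.jpg", "images/ml-angry.jpg"]:
    print(path, "OK" if os.path.isfile(path) else "MISSING!")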
Lastly, add ml_widget.value = ml_files[category_index] to the live() function in order to keep the image widget updated:
def live(state_widget, model, camera, prediction_widget, score_widget):
    global dataset
    while state_widget.value == 'live':
        image = camera.value
        preprocessed = preprocess(image)
        output = model(preprocessed)
        output = F.softmax(output, dim=1).detach().cpu().numpy().flatten()
        category_index = output.argmax()
        prediction_widget.value = dataset.categories[category_index]
        for i, score in enumerate(list(output)):
            score_widgets[i].value = score
        # new: update the Mona Lisa image to match the predicted category
        ml_widget.value = ml_files[category_index]
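As an aside, the notebook runs live() on a background thread so that the Jupyter UI stays responsive while the loop executes; conceptually it's along these lines (reusing the widgets passed to the function above):

import threading

# run the inference loop in the background; setting state_widget.value
# to anything other than 'live' ends the loop
execute_thread = threading.Thread(
    target=live,
    args=(state_widget, model, camera, prediction_widget, score_widget))
execute_thread.start()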
If you have already Run each cell, just re-run Live Execution and the final cell under "Execute the cell below to create and display the full interactive widget."; otherwise, Run each cell in order.
If you've already trained emotions per the course, you're all set; if not, either train as directed by the course, or grab emo.pth from https://github.com/ishotjr/ml-mona-lisa and press evaluate to load the model (YMMV since this model was trained using my face, not yours!). Now, ensure that state is set to live, and watch the Mona Lisa mimic you!
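For the curious, loading a saved model boils down to restoring the PyTorch state dict; here's a minimal sketch, assuming the model and device objects defined in the notebook's earlier cells:

import torch

# a rough equivalent of what loading the saved model does:
# restore the trained weights into the notebook's existing `model`
model.load_state_dict(torch.load('emo.pth'))
model = model.to(device)  # `device` is defined in an earlier cell
model.eval()              # inference mode for live execution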
I'm absolutely blown away by how easy it is to harness powerful machine learning concepts using the Jetson Nano after spending just a few hours working through NVIDIA's excellent course. I can't wait to explore all of the other learning resources and unlock the real power of this amazing piece of hardware! 🤯