I applied for and received the AMD KR260 hardware kit (Thank you AMD x Hackster!).
Unboxing video:
There is plenty of useful getting-started documentation for the KR260 Robotics Starter Kit on the AMD site. I had difficulty setting up and flashing the image [Kria™ KR260 AMD Ubuntu Software Development Kit: iot-limerick-kria-classic-desktop-2204-20240304-165.img.xz] from my Ubuntu system, so I had to switch to Windows. For more troubleshooting, see Matevž's Setting Up guide (click here)!
My initial plan was to test the KR260 as the new brain for my open-source desktop Robot Study Companion (RSC), which currently uses the RPi3/4 for development. My thoughts were: perhaps we can deploy a ChatGPT-like experience to the KR260 using Ollama and OpenWebUI.
For anyone getting started with Docker, I recommend downloading Docker Desktop on your local system first to explore. Note: Docker Desktop is not needed on the KR260.
Installing the CPU-only Ollama Docker image on the KR260 was quite straightforward:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
src: https://hub.docker.com/r/ollama/ollama
The Ollama GitHub repo's Docker installation instructions were also easy to follow (ref. https://github.com/ollama/ollama, last accessed July 30th 2024).
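Once the container is up, Ollama exposes a REST API on port 11434, so you can talk to it from a script instead of the CLI. Here is a minimal sketch of that; the model name and host are assumptions from my setup, and any model you have pulled will work:

```python
import json
import urllib.request

# Default endpoint exposed by the `docker run` command above; change the
# host if you query the board from another machine on the network.
OLLAMA_URL = "http://127.0.0.1:11434"

def build_generate_request(model, prompt, stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt, base_url=OLLAMA_URL):
    """Send a one-shot prompt to the Ollama container and return its reply."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the container to be running on the KR260):
# generate("moondream", "Describe what you see in one sentence.")
```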
Success with Moondream 2 (Parameters: 1.4B; Size: 829MB) running on the KR260:
But then it started to glitch. After all, it's a vision-based model, so perhaps that's understandable?
I tried Phi3 Mini (Parameters: 3.8B; Size: 2.3GB), but when I tried to run it, the KR260 didn't have enough RAM.
Assume that not every Ollama model will run on the KR260, because of its memory limits:
You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
src: https://github.com/ollama/ollama
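This lines up with what I saw on the board. As a back-of-the-envelope check, a 4-bit quantized model takes roughly 0.6 bytes per parameter on disk (Moondream 2: 1.4B parameters is ~829MB), and I assume around 2x that in RAM for the runtime and context cache. Both factors are my own estimates, not official Ollama numbers, but they match which models fit in the KR260's 4 GB:

```python
# Very rough sizing heuristic: ~0.6 bytes per parameter on disk for a
# 4-bit quantized model, times an assumed 2x overhead in RAM for the
# runtime and KV cache. These factors are my estimates, not Ollama's.
def estimated_ram_gb(params_billion, bytes_per_param=0.6, overhead=2.0):
    """Estimate RAM (GB) needed to run a quantized model of the given size."""
    return params_billion * bytes_per_param * overhead

KR260_RAM_GB = 4  # the Kria K26 SOM ships with 4 GB of DDR4

for name, params in [("moondream", 1.4), ("phi3-mini", 3.8)]:
    need = estimated_ram_gb(params)
    print(f"{name}: needs ~{need:.1f} GB, fits on KR260: {need < KR260_RAM_GB}")
```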
Installing LocalAI was successful, but I still need to run and test models that fit within the KR260's memory. Right now my plan is to build a small customised model that has the necessary features to be the study companion.
To run ollama/moondream on the KR260:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
To run Open WebUI on your host machine:
docker run -d --network=host -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.0.30:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

(Note: OLLAMA_BASE_URL needs the http:// scheme, and with --network=host the -p mapping is ignored, so Open WebUI is served on port 8080.)
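Before pointing Open WebUI at the board, it's worth confirming that the Ollama endpoint is actually reachable from the host and that the model is pulled. A quick check via Ollama's /api/tags endpoint (the board IP is from my network; adjust it to yours):

```python
import json
import urllib.request

def parse_model_names(tags_body):
    """Extract installed model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

def list_models(base_url):
    """Ask the Ollama server which models it has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(resp.read())

# Example (same address as the OLLAMA_BASE_URL above):
# list_models("http://192.168.0.30:11434")
```

If the model name you expect (e.g. moondream) isn't in the list, Open WebUI will connect but show no models.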
In the future, I want to deploy a Vitis-compiled moondream xmodel on the FPGA and compare performance stats in btop.
KR260 + Testing other Hardware

I initially wanted full STT & TTS using the ReSpeaker and a small speaker (Figure 1), then to build an enclosure around that and continue adding different sensors, similar to the current RSC prototype.
I was unsuccessful in getting the ReSpeaker to work on the KR260 - I would have had to install Vivado (~100GB!) and learn how to prepare and export a custom overlay to make it work.
However, any USB microphone is a plug-and-play solution. I use the Samson Go Mic along with a headphone for audio, and it works great!
We will document our future progress on Hackster and our KR260 YT playlist.