Virtual Breadboard (VBB) is a Windows Store App (UWP) for creating virtual circuit prototypes to support the development of real Microcontroller firmware.
Something I have been contemplating for some time is how Machine Learning might be deployed to improve ease of use.
Symbol Completion
VBB is a visual tool, so there are lots of components and just as many icons. So many, in fact, that they can be hard to find.
When we type in the browser we get suggestions in real time, a feature praised for its ease of use and feel.
So I thought: what if you could draw the component and get suggestions as you draw?
Ideally using a stylus, but a mouse or touch screen will work too.
- In Circuit Scribble mode start drawing at the reference pin
- Draw a sketch of the component
- The icons of the best matching components are shown as suggestions
- Select the shortcut icon button to place it
From a UI perspective it's super nice because you don't need to switch context to think about how to find the component or how dragging and dropping works, and, well, it just feels cool.
There are some challenges, of course:
- How do you know what the components look like in the first place?
- How accurate can the suggestions actually be?
- How quickly can suggestions be shown?
The first challenge, how users get to know which components are available, can be turned into a feature, not a bug.
During onboarding, training users to draw components as a way to increase familiarity with what's available and how components can be used offers the chance to increase user engagement and generate the training data stream at the same time. Bonus.
The onboarding training module will:
- Show components and allow tracing them
- Flash components to be drawn from memory, to add randomness
- Use drawn components as captchas for verification before they are added to the training set
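The captcha-style verification step can be sketched as a simple gate: a drawn sample only enters the training set when the user-confirmed label matches the component the module prompted for. The class names and record shape here are illustrative assumptions, not VBB's actual schema.

```python
# Sketch of the captcha-style gate: accept a sketch into the training set
# only when the confirmed label matches the prompted component.
from dataclasses import dataclass, field

@dataclass
class TrainingSet:
    samples: list = field(default_factory=list)

    def submit(self, sketch_strokes, prompted_component, confirmed_component):
        """Accept the sketch only if the captcha confirmation matches the prompt."""
        if prompted_component != confirmed_component:
            return False  # reject: failed verification, keep it out of the set
        self.samples.append((sketch_strokes, prompted_component))
        return True

training_set = TrainingSet()
training_set.submit([(0, 0), (10, 5)], "resistor", "resistor")   # accepted
training_set.submit([(0, 0), (3, 9)], "capacitor", "resistor")   # rejected
```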
The result will be a very large set of sketches for training.
There are a large number of components in VBB and this will only grow as users add custom components of their own. Image recognition can be assisted in several ways for example:
- Use of a green dot for pin 1 provides some partitioning
- Pen colors can be used to color partition
But the size of the dataset will inevitably grow very large.
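The color-partitioning idea amounts to shrinking the candidate pool before recognition runs: the active pen color selects a subset of components, so each model only has to discriminate within its own partition. The color-to-category mapping below is a hypothetical example, not VBB's actual scheme.

```python
# Sketch of pen-color partitioning: each color maps to a smaller candidate
# pool, reducing the classes the recognizer must distinguish.
PARTITIONS = {
    "black": ["resistor", "capacitor", "inductor"],   # passives
    "blue":  ["op-amp", "comparator"],                # analog ICs
    "red":   ["AND", "OR", "NOT", "XOR"],             # logic gates
}

def candidate_components(pen_color):
    """Return the reduced candidate set for the active pen color."""
    if pen_color in PARTITIONS:
        return PARTITIONS[pen_color]
    # Unknown colors fall back to searching every partition.
    return [name for pool in PARTITIONS.values() for name in pool]

print(candidate_components("blue"))  # → ['op-amp', 'comparator']
```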
There are interesting services, like Azure Machine Learning, which can consume the data stream and generate ONNX models in an ongoing pipeline.
How accurate can the suggestions actually be?
Machine Learning was born for this: well-constrained symbol models and large datasets. I think we can expect good accuracy.
Especially since the symbol being searched only needs to make it to the top 5 candidates before the user can make the final choice.
Also, users will come to learn how to draw components such that they are better recognised. Humans can learn too!
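The top-5 cut is just a ranking over the recognizer's class scores: sort, take the five best, and surface those as shortcut icons. The scores and labels below are made-up illustration data.

```python
# Sketch of the top-5 suggestion step: rank class scores and keep the
# k best labels as shortcut-icon candidates.
def top_k(scores, labels, k=5):
    """Return the k highest-scoring labels, best first."""
    ranked = sorted(zip(scores, labels), reverse=True)
    return [label for _, label in ranked[:k]]

scores = [0.05, 0.40, 0.10, 0.02, 0.25, 0.18]
labels = ["diode", "resistor", "capacitor", "LED", "inductor", "switch"]
print(top_k(scores, labels))  # → ['resistor', 'inductor', 'switch', 'capacitor', 'diode']
```

As long as the correct symbol lands anywhere in the top five, the user makes the final choice, so the model does not need to be right on its single best guess.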
Search engine companies invest considerable effort in making sure suggestions are fast and relevant. The faster the better; ideally the suggestions would be updated continuously as you draw, even with partial sketches.
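Updating suggestions continuously while drawing usually means throttling: re-run recognition at most once per interval so partial sketches stream through without flooding the recognizer. A minimal sketch, where `recognize` is a stand-in for the actual model call:

```python
# Sketch of throttled recognition: on every stroke update, re-run the
# recognizer only if enough time has passed; otherwise return the cached
# suggestions.
import time

class ThrottledRecognizer:
    def __init__(self, recognize, min_interval=0.1):
        self.recognize = recognize          # stand-in for the real model call
        self.min_interval = min_interval    # seconds between inference runs
        self._last = float("-inf")
        self.latest = None                  # most recent suggestion list

    def on_stroke(self, strokes):
        """Called on every stroke update; recognizes when enough time has passed."""
        now = time.monotonic()
        if now - self._last >= self.min_interval:
            self._last = now
            self.latest = self.recognize(strokes)
        return self.latest

suggest = ThrottledRecognizer(lambda strokes: ["resistor"], min_interval=0.1)
print(suggest.on_stroke([(0, 0), (10, 5)]))  # → ['resistor']
```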
Cloud or Edge?
The question is how big this model will need to become. Will it in fact become too large for local inferencing to be practical? Understanding the options at scale is essential.
Edge
UWP has a nice ONNX machine learning engine. It consumes ONNX models and runs inference on them rapidly.
Computers with GPUs will likely perform well with local inference. However, not all computers have GPUs, especially in educational settings, which is a primary VBB sector.
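The GPU-or-not question maps onto ONNX Runtime's execution providers. A sketch of the fallback logic, with the availability list standing in for what `onnxruntime.get_available_providers()` would report on a given machine (provider names follow ONNX Runtime conventions):

```python
# Sketch of edge provider selection: prefer a GPU execution provider when
# present, otherwise fall back to CPU, which every machine can run.
def pick_provider(available):
    """Choose the best execution provider from an availability list."""
    for preferred in ("DmlExecutionProvider", "CUDAExecutionProvider"):
        if preferred in available:
            return preferred
    return "CPUExecutionProvider"  # universal fallback

print(pick_provider(["CPUExecutionProvider"]))  # → CPUExecutionProvider
```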
Cloud Data Center
This is the use case that caught my attention with the Hackster competition. A high-performance shared data service could be a cost-effective solution:
- The model would always be up to date
- Everyone would have the same inference performance, giving consistency
- Only the servers need the resources to load the model
- Scales
To run a cloud service you need a way for the server and client to communicate.
SignalR is a WebSocket technology with pub/sub characteristics and Python support.
Referring to the circuit-scribble-server GitHub project:
The SignalR Server
- Registers as a Server
- Receives image messages and puts them on a worker queue
- The inference engine reads images from the worker queue
- Processes the message and puts the response on the outgoing queue
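The queue loop above can be sketched with Python's standard library: image messages land on a worker queue, an inference worker consumes them, and responses go to an outgoing queue. `fake_inference` is a stand-in for the actual model call, not the project's real code.

```python
# Sketch of the server's worker-queue loop: images in, responses out.
import queue
import threading

worker_queue = queue.Queue()
outgoing_queue = queue.Queue()

def fake_inference(image_bytes):
    # Stand-in for the real inference engine call.
    return {"top5": ["resistor"], "size": len(image_bytes)}

def inference_worker():
    # Reads images from the worker queue, processes them, and puts the
    # response on the outgoing queue.
    while True:
        image = worker_queue.get()
        if image is None:      # sentinel: shut the worker down
            break
        outgoing_queue.put(fake_inference(image))

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()

worker_queue.put(b"sketch-image-bytes")  # an incoming image message
worker_queue.put(None)                   # stop the worker
worker.join()
result = outgoing_queue.get()
print(result)  # → {'top5': ['resistor'], 'size': 18}
```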
It was very exciting to read that there is a Vitis execution provider in the ONNX Runtime, because it handles a lot of the custom adaptor code and simplifies development. Even more exciting was Azure support for the Xilinx Alveo u250.
The potential of a plug-and-play solution loomed large.
Azure NP10 u250
The Azure NP series offers access to the Xilinx u250 hardware in a cloud data center. As someone familiar with Azure but new to Xilinx, the NP10 option seemed ideal. There are even prebuilt VMs to bootstrap you. However... I really struggled to get anything to work.
The initial intention was to use the ONNX Vitis runtime engine because that's what I am using in the UWP App. There are so many moving parts in a project like this that being able to use a standard like ONNX to handle the plumbing in and out of the models is a huge plus.
I had prototyped a SignalR server to use ONNX and had hoped the Azure Xilinx solution was going to be 'plug-and-play'. However, after several days of confusion I finally figured out that the ONNX Vitis engine requires the DPUV1 firmware, which is not compatible with the DPUCADF8H installed on the Azure NPs. Updating the default dpu-azure.xclbin is probably simple for a Vitis wiz, but I couldn't figure it out.
Then I tried a more 'bare bones' version, using the SignalR service to work with the PyTorch model directly. However, I still struggled to get the u250 hardware to run models.
I thought the best idea was to get the latest of everything: latest Vitis, latest VM images. But the examples would not run on the actual hardware.
Finally I was able to get a reference example to work on the Azure u250 with the following steps.
1. Don't use the Xilinx prebuilt VMs - instead, build a clean install according to the Azure doc
2. Use a prebuilt, known 1.4-compatible Docker image:
docker pull xilinx/vitis-ai:1.4.1.978
3. Follow the Vitis AI Tutorials
git clone https://github.com/Xilinx/Vitis-AI-Tutorials.git
From the Design_Tutorials/09-mnist_pyt Tutorial
In compile.sh add a new u250 target:
elif [ $1 = u250 ]; then
ARCH=/opt/vitis_ai/compiler/arch/DPUCADF8H/U250/arch.json
TARGET=u250
In target.py add a new u250 target:
ap.add_argument('-t', '--target', type=str, default='zcu102', choices=['zcu102','zcu104','u50','u250','vck190'], help='Target board type (zcu102,zcu104,u50,u250,vck190). Default is zcu102')
In setup.sh edit the overlay to point to the Azure version of the xclbin:
DPUCADF8H | dpuv3int8)
export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/DPUCADF8H/dpu-azure.xclbin
;;
Compile Error
One strange error I seem to get is that the input batch must be 4 during quantization for the compile to complete.
This is fixed by adding a --batchsize 4 parameter to the --quant_mode test quantization step:
(vitis-ai-pytorch) Vitis-AI /workspace > python -u quantize.py -d ${BUILD} --batchsize 4 --quant_mode test 2>&1 | tee ${LOG}/quant_test.log
You should then be able to follow the tutorial as written.
This was the only way I could get the example to run. I tried the latest version of Vitis and other Xilinx VMs, but they all caused the application to abort when run on real hardware.
However, something seems not quite right even with this running example.
The accuracy seems too low to be correct, even though the fps performance is amazing!
One thing that was interesting: when I compiled with the latest version of Vitis (out of curiosity), it didn't complain about batch = 4 and the model ran on the hardware, but it was still not accurate.
So at that point I decided what I really needed to do was wait for Vitis 2.0 and see if these issues are already solved before grinding further.
Learning Curve
Despite hitting significant friction, I learned a lot about the Xilinx Vitis solution.
Ideally, in Vitis 2.0 the ONNX Vitis DPU provider will be updated to support DPUCADF8H and be compatible with Azure. Stay tuned.