The project is an example of how to not only obtain telemetry data with an MT3620-based Azure Sphere device, but also how to consume and visualize that data in an extremely secure manner.
The project will track the mood of the office and report it to the cloud for further analysis.
The project will have the following features:
- Interesting form factor, so that a passer-by would take notice of the device.
- An array of buttons with which a person could indicate their mood.
- Good: Press the Green button.
- Average: Press the Yellow button.
- Bad (less than average): Press the Red button.
- A proximity detector on the device to sense motion in front of it. That way we have some data on how many times a person was in range of the device and whether a button was or was not pushed.
- Must be created using the Azure Sphere MT3620 Starter Kit https://www.element14.com/community/community/designcenter/azure-sphere-starter-kits
- Device must be secure from end to end.
- Must not require an instruction manual for the end user.
- Enough data must be collected to be useful in the cloud.
- Windows 10 Anniversary Update or later. Although later releases of the SDK do not require Windows 10, this project was built just as that update was being released, which is why it is required here.
- Visual Studio 2019 Community Edition or better: https://visualstudio.microsoft.com/
- Node.js, in order to get npm: https://nodejs.org/en/download/
- Azure Sphere SDK: http://avnet.me/Azure_Sphere_SDK_Install
- An Azure account: https://azure.microsoft.com/
- 3D printer, or access to one to be able to print the enclosure
- General electronics tools (DMM, wire cutter/stripper, etc...)
You will first need to get your development box ready to communicate with the MT3620 dev board.
Install the needed drivers by following the information on Microsoft's site: https://docs.microsoft.com/en-us/azure-sphere/install/install-sdk#azure-sphere-sdk-preview-for-visual-studio
You have a choice here of which SDK to install. Choose the "for Visual Studio" version, linked in the requirements. It doesn't matter all that much, as both SDKs install everything you need.
Next, if you don't have an Azure account, it is time to get one. So, head over to https://azure.microsoft.com/ and sign up. Yes, it does ask for a credit card. Remember to set your spending limit to 0, if you are only going to be using the free services. If you are a student or a first time Azure user there are free options available to you. Look for the "free account" link and click it.
Ok, now that you have an Azure account you can claim your device.
Depending on when your device was manufactured, you may have to update its OS before you can claim it. On newer devices this happens automatically the first time you power up and plug in the board after the SDK has been installed.
In order to update your device, search for "azure sphere" in your Start menu. You are looking for the program "Azure Sphere Developer Command Prompt". Click it and prepare to type in the new window:
azsphere device recover
Let the process run. It downloads the new firmware and installs it on your device. Unless an up-to-date OS is installed on the board, your device will not be able to talk to any Azure service.
Once the process is complete, you should be able to log in to your Azure account with your credentials:
azsphere login
and once logged in you should get something like:
If you've never logged in to Azure Sphere before or have just installed an SDK, you will need to add a parameter along with your Microsoft account email address:
azsphere login --newuser <email-address>
If the login works, you can now claim your device. That command looks like:
azsphere device claim
When that finishes, you will get a message telling you that the device is now claimed, along with the ID of that device.
If you ever wonder what your device ID is, plug in the device, launch the command window and issue the command:
azsphere dev show-attached
That gets you output like:
Now let's set up the wifi connection.
Issue the command:
azsphere device wifi show-status
You will see that you probably do not have a connection yet. You can enter that command anytime to see which access point your device is connected to and its current connection status.
To see all the wifi access points available to your device issue the command:
azsphere dev wifi scan
That will show all of the available access points that your device sees.
Did you notice I shortened "device" to "dev"? There are a number of shortcuts, so you may not have to type everything out all of the time.
To add a chosen wifi access point issue the following command:
azsphere device wifi add --ssid <yourSSID> --psk <yourNetworkKey>
This may take several seconds to complete. When it does, verify your connectivity with the "show-status" command you used above.
The last thing you need to do is put your device in development mode. If you don't, you will not be able to send applications to the device from your development box; it will only accept applications from an authorized source such as Azure. Issue the command:
azsphere dev enable-development
It will take a moment as some additional packages for local debugging are applied to the device. Once your device has rebooted, you should be ready to load some sample applications.
The Enclosure

I don't really believe a project is complete without an enclosure to make the prototype a real live product. Since it will take a while to print this enclosure, I thought I would address it now, before going into the electronics.
I had a number of possible ideas in my head. Since it is easier and faster to make virtual models of what I had envisioned, I made a few.
The first was to make models of all of the components. I didn't go to great detail on the MT3620 as I really only cared about the overall size and hole placement.
This allowed me to mock up a lot of potential designs fairly quickly.
Here are the first round of ideas for any sort of public consumption:
I wanted the object to be interesting enough that a person walking by might take a moment and give it a second glance. To test that out, I reached out to a group of people at my employer who normally critique websites. They do this during their lunch hour to stretch their critical-eye skills. I asked if they would like something different to critique, and they jumped at it.
I received a large amount of feedback. Some of that feedback came from the questions they asked, as opposed to whether they liked a feature or not.
I admit that people not liking what I did was sort of hard to handle. However, I knew they were not attacking me; rather, they were being very critical, which is exactly the information I needed to hear.
The biggest question asked was "Does this sit on a table, or hang on a wall?" I had never even thought about hanging the thing. After that and the rest of the feedback, I came back a couple of days later with this design:
That was well liked, but the feedback was that, due to the placement of the buttons, the middle lower one might get hit more than the others. Taking that in, I came up with the design I have now:
After I had the enclosure built, everyone asked about my buttons. They were actually pretty easy to make. There was enough room between the colored touchable part and the white diffuser below. I just modeled 0.2 mm models of the emojis and printed them out. Here are some expanded pictures of how that works.
In the end I needed a little more height for the buttons. I made these extensions to put between the button and the top of the enclosure. They seemed to work out well.
The wiring for this project seems more complicated than it actually is. My main challenge was that I needed access to a number of GPIO pins: four for the button and PIR inputs and three to control the LEDs independently of the buttons themselves. Those pins were already in use by various other things on the development board.
So instead of finding places to plug in on the click modules or connecting to the buttons already provided on the board, I decided to add a bank of dedicated GPIO. That bank came in the form of an MCP23017 module.
The downside is that the module didn't have a library that made it easy for me to access the GPIO, so I made my own. See the MCP23017 section below for an explanation of what I did and what code I added to make it easy to read and write that module.
The Display is on the same I2C chain as the MCP23017. There is a place on the dev board for it, but I needed to extend the chain a bit to fit the enclosure.
How you connect the buttons is up to you. I used quick disconnects with solid-core hookup wire to plug into the breadboard. I didn't want to solder these on, as I wanted as much flexibility as I could get.
The buttons have 4 connectors: 2 connect to the LED and 2 connect to the switch.
No need for a current-limiting resistor on these, as they have a 1 kΩ resistor built in series with the 2 LEDs.
That leads to another issue. I wanted the buttons to be bright, so I used the following arrangement to allow current to flow directly from the 5 V source instead of the 3.3 V GPIO pins.
So on your breadboard, insert the 3 NPN transistors. Attach each emitter to ground, the collector to the negative side of the button LED, and the base to the GPIO through a 220 ohm resistor.
Connect one side of each button switch to GND, and the other to its respective port on the MCP23017.
This setup makes the buttons active low. We need to know that when we start programming.
The proximity detector is just another button. It uses the same setup as the buttons, but with the PNP transistor. That makes the input non-inverting, which is how I decided I wanted to use that input.
Here is everything all wired up in the box:
More pictures of the build:
The software is based on the reference design from Avnet. You can get that code on github: https://github.com/Avnet/AvnetAzureSphereStarterKitReferenceDesign
This example uses only the "High Level" programming model, as real-time capability is not really needed in this project.
As with any other C program, the function main is called first. The program then initializes all of the components needed:
and then waits until an exit is requested.
At the time of this work, there were no hardware interrupts available in high-level apps, so this application polls; timers control the checking of buttons and the sending of data.
As mentioned above, the proximity sensor is effectively just a button, so it is treated that way.
Since we are polling, we need to hold on to the previous value; if nothing has changed, there is nothing to do.
Now we have to update the LEDs. They all default to ON unless pressed, so during a press we want to turn the pressed one off. We send the output bank as a single byte, and here is the code that sets that byte.
After every event we need to send telemetry to IoT Central. Whether it is a button press or the current vote count, IoT Central doesn't care; it just needs a name and a value. We will work out what the values mean once the data gets to the cloud. The button information is sent immediately when the button is pressed, in the button handler above. The periodic data also needs to be sent; here is that method:
IoT Central has a lot to offer. While I will show you my setup screens in this story, you will need to visit the Quickstart guides in the Microsoft documentation to get set up to access what you see below.
So what am I doing with all of that data? Here is a screenshot of the dashboard from my current testing.
IoT Central ingests all of the data sent to it. If it can parse the data, it makes the key/value pairs available. All you need to do is "add a value" to the stream, wait a bit, and then you should be able to add that value to your chart.
Troubleshooting

When you are sending all of this data, you need to make sure it is actually getting there. The best way to do this is with iotc-explorer, which lets you see what IoT Central is seeing.
To install it, issue the following command:
npm install -g iotc-explorer
Then go get an access token from your IoT Central application.
Copy that token and issue the command:
iotc-explorer login "<Token value>"
After that command completes successfully, you can monitor all the messages with the following command:
iotc-explorer monitor-messages
You can also get other information, but this alone saved me a lot of headaches.
Final Thoughts

Although this project didn't turn out exactly as I had envisioned, I did learn a lot from writing my own library to access the MCP23x17. You can see the two code files mcp23x17.c and mcp23x17.h in the GitHub repository.
I should have taken the button connections into account and modeled them. It would have saved me a ton of time.