Real-Time Visualization in Field Research
Ok, so nobody wants to be a "Glasshole," but where Google really missed its calling was with task-specific research tools and other business uses. In fact, it seems they are now looking at that kind of need more carefully through the Glass at Work program. "Helpful tools for researchers and industry" was the approach I took for my NSF-funded Arctic Glass project. Over time, the need to integrate with IoT became evident too, so I adjusted the app to include sensor monitoring options.
When presented with a Google Glass device that was not Arctic-specific, researchers were impressed by the technology but did not envision how they themselves would use Glass. Their feedback consisted mostly of "it would be great if" statements. The following field season, the researchers were presented with an Arctic-specific Glass app built on that feedback. Nearly every researcher who tried the targeted application not only found value in the features of the Arctic Glass app itself, but also came up with ideas for applications they would use themselves.
Glassware: Arctic Glass
The "Arctic Glass" application is an Arctic-specific software developed for researchers and their assistants. The application is primarily controlled through voice command, and includes a wildlife identification feature, retrieval of archival time-lapse imagery, and real-time sensor monitoring. Researchers also have access to native Google functionality and commercial apps that provide voice-controlled reference look-up, field notes.
Application Design
Language: Java
IDE: Android Studio + GDK Plugin
Researchers working in the field were most interested in the voice capabilities of Google Glass, so the Glassware I built is driven almost entirely by voice commands. It is also designed to be modular, where each of the features:
- Wildlife Guide
- Check a Trap
- Check a Sensor
- Image Archive
could potentially be split out into its own Glassware. Alternatively, the code (written in Java) is structured so that an additional feature can be added fairly easily by copying an existing module and adjusting specific variables as needed; a trimmed-down sketch of that layout follows. The code itself was based on Google demo code already hosted on GitHub. The storyboard / UI experience was mapped out using Google's Glassware Flow Designer.
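To illustrate the modular layout, here is a rough sketch of how a top-level activity could dispatch each voice command to its own feature Activity, following the GDK's contextual voice commands pattern. The activity class names (WildlifeGuideActivity, etc.) and the menu resource are hypothetical stand-ins, not the exact names used in the project.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import com.google.android.glass.view.WindowUtils;

public class ArcticGlassMainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Enable the "ok glass" contextual voice command menu for this activity.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // One voice command per feature; adding a module means adding one menu item.
            getMenuInflater().inflate(R.menu.main_voice_menu, menu);  // hypothetical resource
            return true;
        }
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // Each case simply launches the matching feature module.
            switch (item.getItemId()) {
                case R.id.wildlife_guide:
                    startActivity(new Intent(this, WildlifeGuideActivity.class));
                    return true;
                case R.id.check_a_trap:
                    startActivity(new Intent(this, CheckTrapActivity.class));
                    return true;
                case R.id.check_a_sensor:
                    startActivity(new Intent(this, CheckSensorActivity.class));
                    return true;
                case R.id.image_archive:
                    startActivity(new Intent(this, ImageArchiveActivity.class));
                    return true;
            }
        }
        return super.onMenuItemSelected(featureId, item);
    }
}
```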
Getting Started
The application's voice navigation is my best attempt at being self-explanatory. You can toggle the "ok glass" voice prompt on and off once you are in the Arctic Glass app by tapping the touchpad once.
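The single-tap toggle could be wired up with the GDK's touchpad GestureDetector along these lines. Treat this as a sketch only: the voicePromptEnabled flag, the menu resource name, and the idea of rebuilding the voice command panel via invalidatePanelMenu() are my assumptions about how such a toggle might work, not a copy of the project's code.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MotionEvent;
import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;
import com.google.android.glass.view.WindowUtils;

public class VoicePromptToggleSketch extends Activity {

    private GestureDetector gestureDetector;
    private boolean voicePromptEnabled = true;  // assumed flag behind the toggle

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);

        gestureDetector = new GestureDetector(this);
        gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                if (gesture == Gesture.TAP) {
                    // A single tap flips the flag and asks Glass to rebuild the
                    // voice command panel.
                    voicePromptEnabled = !voicePromptEnabled;
                    getWindow().invalidatePanelMenu(WindowUtils.FEATURE_VOICE_COMMANDS);
                    return true;
                }
                return false;
            }
        });
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        // Touchpad events arrive here on Glass; forward them to the GDK detector.
        return gestureDetector.onMotionEvent(event);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            if (!voicePromptEnabled) {
                return false;  // assumption: declining the panel hides the "ok glass" footer
            }
            getMenuInflater().inflate(R.menu.main_voice_menu, menu);  // hypothetical resource
            return true;
        }
        return super.onCreatePanelMenu(featureId, menu);
    }
}
```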
The voice command flow would go something like this:
"ok glass" -> "Arctic Glass" -> Wildlife Guide" -> "Mammals"
The user can then swipe through the collection of mammals listed. Full integration with the back-end database, including the "Tap for Info" feature, is not yet in place due to the limited availability of the data source.
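The swipe-through gallery is the kind of screen the GDK's CardScrollView is built for; below is a minimal sketch of how the mammal list might be presented. The species names are placeholder data and the adapter is a generic example, not the app's actual implementation.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import com.google.android.glass.widget.CardBuilder;
import com.google.android.glass.widget.CardScrollAdapter;
import com.google.android.glass.widget.CardScrollView;

public class MammalGuideActivity extends Activity {

    // Placeholder species; the real app would load these from its data source.
    private static final String[] MAMMALS = {
            "Arctic fox", "Caribou", "Muskox", "Polar bear"
    };

    private CardScrollView cardScroller;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        cardScroller = new CardScrollView(this);
        cardScroller.setAdapter(new MammalAdapter());
        cardScroller.activate();  // enables swipe navigation on the touchpad
        setContentView(cardScroller);
    }

    private class MammalAdapter extends CardScrollAdapter {
        @Override
        public int getCount() {
            return MAMMALS.length;
        }

        @Override
        public Object getItem(int position) {
            return MAMMALS[position];
        }

        @Override
        public int getPosition(Object item) {
            for (int i = 0; i < MAMMALS.length; i++) {
                if (MAMMALS[i].equals(item)) {
                    return i;
                }
            }
            return AdapterView.INVALID_POSITION;
        }

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            // One simple text card per species; a "Tap for Info" handler would
            // hook in here once the back-end data source is available.
            return new CardBuilder(MammalGuideActivity.this, CardBuilder.Layout.TEXT)
                    .setText(MAMMALS[position])
                    .getView(convertView, parent);
        }
    }
}
```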
Some Lessons Learned
The biggest roadblock for this project was the scalability of the Glass Development Kit (GDK) itself. I should note that I did not use the Mirror API, which has even more limitations, but used the GDK instead. My goal was to make the app entirely voice-command driven; however, when creating multiple nested menus I had to cut and paste manually laid-out sections of menu code instead of nesting the menus and pulling from variable lists as needed. So, when I wanted a user to select the image gallery, then select the year and date to view, there was way too much manual labor involved. (If anyone knows a way to make this type of functionality work, I would love feedback!)
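One direction that might cut down on the copy-paste is to build the year/date menus programmatically from a data structure using Android's standard Menu API rather than hand-writing one XML menu per level. A rough, untested sketch of that idea is below; the archive dates are placeholders and I have not verified how well the GDK's voice command panel handles submenus added this way.

```java
import android.view.Menu;
import android.view.MenuItem;
import android.view.SubMenu;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Untested sketch: build the image-archive year/date menus from data at runtime
// instead of hand-writing one XML menu per level.
public final class ArchiveMenuBuilder {

    // year -> list of capture dates available for that year (placeholder values)
    private static final Map<String, List<String>> ARCHIVE = new LinkedHashMap<>();
    static {
        ARCHIVE.put("2013", Arrays.asList("06-01", "07-15"));
        ARCHIVE.put("2014", Arrays.asList("05-20", "06-30", "08-02"));
    }

    /** Populate a menu with one submenu per year and one item per date. */
    public static void populate(Menu menu) {
        for (Map.Entry<String, List<String>> year : ARCHIVE.entrySet()) {
            SubMenu yearMenu = menu.addSubMenu(year.getKey());
            for (String date : year.getValue()) {
                yearMenu.add(year.getKey() + " " + date);
            }
        }
    }

    /** Example handler: the selected item's title carries the year and date. */
    public static boolean handleSelection(MenuItem item) {
        CharSequence title = item.getTitle();
        if (title == null) {
            return false;
        }
        // e.g. hand "2014 06-30" off to whatever loads the archived image
        return true;
    }

    private ArchiveMenuBuilder() {}
}
```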
I've also learned that the mindset required when developing an app driven by a flowing stream of spoken commands is quite a bit different from designing for a static interface, like a mobile phone, that leaves more time for user decisions.