Life today is much different than it was before COVID-19. Adults and children alike have been forced into isolation, which has proven to be not only physically harmful but also a toll on mental and emotional health. We aim to provide a means of entertainment that is safe, engaging, and challenging.
MOTIVATE is our creation: a virtual maze in which a user competes against other users and AI bots to complete the labyrinth without being captured. Completing the labyrinth requires not only physical ability but also spatial and logical reasoning.
Players begin by selecting their maze. They can either select an active game or request a newly generated maze. Active games may include other active players or bots, while new games are local and single-player. Players then select their character class: Wizard, Rogue, or Fighter. Wizards send Rogues back to the start, Rogues send Fighters back to the start, and Fighters send Wizards back (roshambo style). Players can also set the move and turn sensitivity, as well as toggle the 'back' and 'left/right slide' motions, at any time by selecting the game tab and using the sensitivity sliders. Once players have made their selections, they proceed to the game by swiping to and selecting the game tab.
Game Play
The game is played in real time, with the player using motion to navigate the maze. The player's goal is to reach the end of the maze in the shortest amount of time. There are obstacles, however: the player must figure out the shortest path while navigating around other players, who may be of a competing character class. Shortcuts do exist. Mazes contain 'high' and 'low' walls that the user can pass if the correct motion is used. High walls are green and can be passed by 'squatting', while low walls are yellow and can be passed by 'jumping'.
Game Movement
Players can navigate the maze using the following motions:
- forward - walking forward. Moves the player forward one step.
- left/right turn - pivot left or right. Changes the player's orientation 90 degrees left or right.
- *backward - walking backward. Moves the player backward one step.
- *left/right slide - side step left or right. Moves the player one step left or right.
*Optional and may be toggled on the Game Tab.
Maze Tab
The maze tab is the main tab where the game is played. Players have two maps: the main, relative map, which is centered and oriented about the player, and the smaller, absolute map, which has a static orientation and shows the entire maze along with all players and their positions. The lower-right portion of the tab holds the information pane. In this pane, players can see their step count, the active movement classification, their opponent's name, the time elapsed, and the stability and movement LEDs. The stability LED indicates that the device is being held in the correct position; if it is not, this LED will blink. The movement LED indicates that the classifier is running and processing signals from the IMU.
Train Tab
The training tab is used to collect training samples for building the MOTIVATE CNN model. Users can select their class action with the right control button and toggle collection on/off with the middle control button. Samples are collected at 30 Hz and provided to the MOTIVATE back-end via AWS IoT (MQTT).
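For illustration, the sketch below shows what publishing labeled IMU samples to AWS IoT could look like in Python (the device itself does this in C++). The endpoint, topic name, and payload layout here are assumptions; the actual topic topology is defined by the MOT MQTT middleware described under Back-end.

```python
import json
import ssl

import paho.mqtt.client as mqtt

# Hypothetical endpoint and topic -- the real values come from the MOT MQTT
# middleware and your AWS IoT device certificates.
ENDPOINT = "your-ats-endpoint.iot.us-east-1.amazonaws.com"
TOPIC = "mot/train/samples"

client = mqtt.Client(client_id="mot-train-client")
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)
client.loop_start()

def publish_sample(label, accel_xyz, gyro_xyz):
    """Publish one 30 Hz IMU reading tagged with its class action label."""
    payload = {"label": label, "accel": accel_xyz, "gyro": gyro_xyz}
    client.publish(TOPIC, json.dumps(payload), qos=1)
```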
To Build
MOTIVATE has three main components: the MOTIVATE model, the back-end, and the application. The model is a basic CNN classifier that runs locally on the Core2 and provides real-time classification of the IMU measurements to determine player movement. The back-end is used to generate and distribute mazes, collect training data, and manage multiplayer games. The application is the set of code and packages that is compiled and run locally on the Core2.
Model
Each training sample consists of 20 normalized IMU readings collected at 30 Hz (roughly 2/3 of a second of motion). Normalization scales each reading into the 0-255 range:
(Sa - Samin) / (Samax - Samin) * 255 and (Sg - Sgmin) / (Sgmax - Sgmin) * 255
where:
- Sa - Accelerometer XYZ samples
- Sg - Gyroscope XYZ samples
- Samin/max - Accelerometer min/max values (found by inspection)
- Sgmin/max - Gyroscope min/max values (found by inspection)
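A minimal sketch of this normalization in Python, assuming NumPy arrays and placeholder min/max constants (the real values are found by inspecting the recorded data):

```python
import numpy as np

# Placeholder min/max constants -- the actual values are found by inspection
# of the recorded training data.
SA_MIN, SA_MAX = -4.0, 4.0          # accelerometer range, all axes
SG_MIN, SG_MAX = -1000.0, 1000.0    # gyroscope range, all axes

def normalize_window(accel, gyro):
    """Scale a (20, 3) accelerometer and (20, 3) gyroscope window to 0-255."""
    sa = (accel - SA_MIN) / (SA_MAX - SA_MIN) * 255
    sg = (gyro - SG_MIN) / (SG_MAX - SG_MIN) * 255
    # Stack into the (20, 6) window the classifier consumes.
    return np.hstack([sa, sg]).astype(np.uint8)
```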
The model data is labeled as follows:
- 0 - Rest
- 1 - Forward
- 2 - Backward
- 3 - Left turn
- 4 - Right turn
- 5 - Up (jump)
- 6 - Down (squat)
- 7 - Left side step
- 8 - Right side step
The model is a simple Convolutional Neural Network with the following architecture:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 20, 6, 16) 208
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 6, 2, 16) 0
_________________________________________________________________
dropout (Dropout) (None, 6, 2, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 6, 2, 16) 1040
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 2, 16) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 2, 2, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 64) 0
_________________________________________________________________
dense (Dense) (None, 16) 1040
_________________________________________________________________
dropout_2 (Dropout) (None, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 9) 153
=================================================================
Total params: 2,441
Trainable params: 2,441
Non-trainable params: 0
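For reference, one possible Keras definition that reproduces this summary is sketched below. The kernel and pool sizes are inferred from the output shapes and parameter counts; the activations and dropout rates are assumptions, not the original training code.

```python
import tensorflow as tf

# Hyperparameters inferred from the summary's shapes and parameter counts;
# activations and dropout rates are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (4, 3), padding="same", activation="relu",
                           input_shape=(20, 6, 1)),   # 20 readings x 6 IMU channels
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Conv2D(16, (2, 2), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 1)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(9, activation="softmax"),   # 9 movement classes
])
model.summary()  # yields the 2,441-parameter summary above
```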
Training Results
The following are the per-label precision, recall, and F1 scores. A composite accuracy of 0.95 was achieved.
```
class          precision    recall  f1-score   support

    0               1.00      0.95      0.98        44
    1               0.90      0.94      0.92        95
    2               0.93      0.91      0.92       109
    3               1.00      1.00      1.00        52
    4               1.00      1.00      1.00        41
    5               1.00      1.00      1.00        38
    6               0.99      1.00      0.99        71
    7               0.88      0.91      0.89        54
    8               0.95      0.91      0.93        45

    accuracy                            0.95       549
   macro avg        0.96      0.96      0.96       549
weighted avg        0.95      0.95      0.95       549
```
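The report above is in scikit-learn's classification_report format; a sketch of producing it from a held-out test set (the variable names here are assumptions):

```python
import numpy as np
from sklearn.metrics import classification_report

# x_test: (N, 20, 6, 1) normalized windows; y_test: (N,) integer labels 0-8.
y_pred = np.argmax(model.predict(x_test), axis=1)
print(classification_report(y_test, y_pred))
```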
Deploy Model to Device
The final step in the provided Jupyter notebook downloads the model results. To import the model into your application, use the `xxd` tool to convert the TFLite model file to a C source file and copy the result into your project:
```sh
xxd -i mot-imu-quant.tflite > mot-imu-model.cc
sed -i 's/mot_imu_quant_tflite/g_model/g' mot-imu-model.cc  # rename the array to match the code
cp mot-imu-model.cc <to your project>
```
Back-end
The back-end consists of a handful of different AWS services as follows:
- Maze Proxy - Provides an HTTP front-end to the Maze Service.
- Maze API - RESTful HTTPS interface for maze generation and dissemination.
- Maze Generator - Lambda used to retrieve and generate mazes.
- MOT MQTT - MQTT middleware used to define the topic topology, generate MOT device keys, and disseminate MQTT messages for game play.
- Game Manager - EC2 system that runs the MOT bots and game managers.
The code for maze generation and the game bots/managers can be found in the mot-play repository. The back-end currently exists and is available; please reach out for a set of keys if you wish to build this project without standing up the back-end yourself.
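To illustrate how a client might request a maze, here is a sketch using Python's requests library. The URL, query parameters, and API-key header are assumptions for illustration; the real endpoint and credentials are provided with the keys mentioned above.

```python
import requests

# Hypothetical endpoint -- the real URL and credentials come with the keys.
MAZE_API = "https://example.execute-api.us-east-1.amazonaws.com/prod/maze"

def request_new_maze(api_key, width=16, height=16):
    """Ask the Maze API to generate and return a new maze."""
    resp = requests.get(MAZE_API,
                        params={"width": width, "height": height},
                        headers={"x-api-key": api_key},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```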
Application
The application consists of a set of C++ and C code in a PlatformIO project. Steps to recreate the application are as follows:
- Install Visual Studio Code
- Install PlatformIO
- Clone the motivate repository.
- Open Visual Studio Code at the root of the motivate repository.
- Generate an sdkconfig
- Update the sdkconfig using menuconfig:
```sh
pio run -t menuconfig
# Update the following to be unique
MOT MQTT Config -> MOT Client ID
# Update the following with your WiFi config
WiFi Configuration -> SSID
WiFi Configuration -> WiFi Password
```
- Request (or generate) MOT certificates for AWS IoT connectivity and copy to:
.../motivate/maze-app/certs
- Build and flash your device (`pio run -t upload`)