I have always been fascinated by Wii controllers and I wanted to build a project based on a controller's IMU motion. So when I got selected to participate in the Google Summer of Code (GSoC) under TensorFlow, I proposed building a Unity game that could be controlled using a microcontroller. Here's how you could also build one. Happy Tinkering!
The idea behind the project: I wanted to build a horror game with huge mountains, trees, fog, and a purple-and-black theme to set the tone. I also wanted to be able to use the controller to look around, alongside the keyboard for controlling the player's movements.
This tutorial in a nutshell:
- The first part will explain how to build a Unity game.
- The second and third parts involve building the controller and integrating it with Unity.
- The fourth part covers how I rebuilt the controller using a different library to avoid gyroscope drift.
- The fifth part talks about some improvements I made to the game to make it more immersive.
- The sixth part explains the current problems with the game and also lists out a few suggestions for taking this idea forward.
- The seventh part is for fun ;)
- The last part concludes the tutorial.
1.a Building the terrain
These are the assets that were used in the project:
Download these assets and import them into your project.
To create a terrain: Right-click in the Hierarchy view -> 3D Object -> Terrain.
To sculpt the terrain: Click on the terrain object -> Paint Terrain -> Raise or Lower Terrain.
Choose a brush of your liking and start painting the terrain.
Note: To get a wider range of tools to sculpt the terrain, go to the Package Manager and install the "Terrain Tools" package. If you can't find it there, you'll need to enable preview packages under advanced project settings in the Package Manager window.
To add texture to the terrain: Click on the terrain object -> Paint Terrain -> Paint Texture -> Add Layer, and start painting the texture.
To add trees to the terrain: Click on the terrain object -> Paint Trees -> Edit Trees -> Add Tree, and start painting the trees.
To add grass to the terrain: Click on the terrain object -> Paint Details -> Edit Details -> Add Grass Texture -> choose a detail texture, and start painting the grass.
1.b Creating the Player and Input System
To create a capsule: Right-click in the Hierarchy view -> 3D Object -> Capsule.
Assign the Player tag to this capsule over in the Inspector.
To add a character controller to this capsule, click Add Component in the Inspector -> Character Controller.
Make the main camera a child of this game object. Now, change the position of the main camera to X: 0, Y: 0.6, Z: 0.
The Input system:
To install the new input system, go to Window -> Package Manager -> Packages: Unity Registry -> Input System -> Install.
Under Assets, create a new folder called Input. Inside the folder, create an Input Actions file. Name it anything you want; in my case, I have named it PlayerInput.
Action maps: These are sets of actions a player can perform depending on the current state of the game (for example: walking, driving, etc.).
The WASD movement here is a 2D Vector composite, meaning that the values are normalized between -1 and 1.
The jump action is of the type button and takes the spacebar key as input.
The look action is of the type Vector2 and it takes in mouse delta as input.
The shoot action is also of the type button and takes in the mouse's left button as input.
1.c Coding the player movements - Look and Move
The InputManager.cs file handles all the inputs and calls the necessary functions from other files as needed. A few notes about this file:
1. Remember to add using UnityEngine.InputSystem; to be able to use the new input system.
2. The Awake function is used here. It's called before the Start function in the order of event functions.
3. The Update function runs once per frame.
4. The FixedUpdate function can run zero, one, or several times per frame, depending on how many physics frames per second are set in the time settings and how fast or slow the frame rate is.
5. The LateUpdate function is called after all Update functions have been called. This is useful for ordering script execution. For example, a follow camera should always be implemented in LateUpdate, because it tracks objects that might have moved inside Update.
6. The OnEnable function is called when the object becomes enabled and active.
7. The OnDisable function is called when the object becomes disabled or is destroyed, and can be used for any cleanup code.
Now it will be easier to understand why these functions exist here. All the required objects, to be used later, are fetched in the Awake function. The ProcessMove function is called in FixedUpdate because we are applying velocity. OnEnable and OnDisable enable/disable the input actions when the Player game object is enabled or disabled. ProcessLook, which handles looking around, is called in LateUpdate.
//InputManager.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class InputManager : MonoBehaviour
{
    private PlayerInput playerInput;
    private PlayerInput.OnFootActions onFoot;
    private PlayerMotor motor;
    private PlayerLook look;

    void Awake()
    {
        // Cache the input actions and the components we'll drive with them
        playerInput = new PlayerInput();
        onFoot = playerInput.OnFoot;
        motor = GetComponent<PlayerMotor>();
        look = GetComponent<PlayerLook>();
        onFoot.Jump.performed += ctx => motor.Jump();
    }

    // Movement applies velocity, so it runs in the physics step
    void FixedUpdate()
    {
        motor.ProcessMove(onFoot.Movement.ReadValue<Vector2>());
    }

    // The camera is updated after all Update calls have finished
    private void LateUpdate()
    {
        look.ProcessLook(onFoot.Look.ReadValue<Vector2>());
    }

    private void OnEnable()
    {
        onFoot.Enable();
    }

    private void OnDisable()
    {
        onFoot.Disable();
    }
}
The PlayerMotor.cs file handles the jump and movement of the Player. A few notes about this file:
1. It checks whether the Player is grounded every time the Update function runs. This is stored in a bool variable.
2. The X and Z values are the normalized input values, and transform.TransformDirection converts the direction from local space to world space. If an object doesn't have a parent, the local and global rotations are equivalent.
3. The Y velocity of the player depends on whether they are grounded. While grounded, it is pinned to a small downward value; otherwise gravity * deltaTime is added each frame. If the velocity were always set to gravity * deltaTime, it would be difficult for the player to move around.
4. The Jump function restricts the number of jumps to 1. If the player isn't grounded, it isn't possible to jump. Without the if condition, infinite jumps would be possible.
//PlayerMotor.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PlayerMotor : MonoBehaviour
{
    private CharacterController controller;
    private Vector3 playerVelocity;
    private bool isGrounded;
    public float gravity = -9.8f;
    public float speed = 5f;
    public float jumpHeight = 3f;
    public Camera cam;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    // Update is called once per frame
    void Update()
    {
        isGrounded = controller.isGrounded;
    }

    public void ProcessMove(Vector2 input)
    {
        // Horizontal movement, converted from local to world space
        Vector3 moveDirection = Vector3.zero;
        moveDirection.x = input.x;
        moveDirection.z = input.y;
        controller.Move(transform.TransformDirection(moveDirection) * speed * Time.deltaTime);

        // Vertical movement: integrate gravity, but pin the player to the
        // ground with a small downward velocity while grounded
        playerVelocity.y += gravity * Time.deltaTime;
        if (isGrounded && playerVelocity.y < 0)
        {
            playerVelocity.y = -2f;
        }
        controller.Move(playerVelocity * Time.deltaTime);
    }

    public void Jump()
    {
        if (isGrounded)
        {
            playerVelocity.y = Mathf.Sqrt(jumpHeight * -2.0f * gravity);
        }
    }
}
The PlayerLook.cs file handles the "look" part of the Player. A few notes about this file:
1. xSensitivity and ySensitivity can be modified to increase or decrease the sensitivity of the "look" (basically how fast the camera moves in either direction).
2. The Mathf.Clamp function clamps the given value between the minimum and maximum float values.
- It returns the given value if it is within the minimum and maximum range.
- It returns the minimum value if the float value is less than the minimum.
- It returns the maximum value if the given value exceeds the maximum value.
3. cam.transform.localRotation changes the local rotation of the camera.
4. Quaternion.Euler returns a rotation that rotates z degrees around the z-axis, x degrees around the x-axis, and y degrees around the y-axis; applied in that order.
//PlayerLook.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PlayerLook : MonoBehaviour
{
    public Camera cam;
    private float xRotation = 0f;
    private float xSensitivity = 30f;
    private float ySensitivity = 30f;

    public void ProcessLook(Vector2 input)
    {
        float mouseX = input.x;
        float mouseY = input.y;
        // Pitch the camera up/down, clamped so the view can't flip over
        xRotation -= (mouseY * Time.deltaTime) * ySensitivity;
        xRotation = Mathf.Clamp(xRotation, -80f, 80f);
        cam.transform.localRotation = Quaternion.Euler(xRotation, 0, 0);
        // Yaw the whole player left/right
        transform.Rotate(Vector3.up * (mouseX * Time.deltaTime) * xSensitivity);
    }
}
Drag these scripts onto the Player and press the play button. The player will now be able to look around, move, and jump.
Note: The input debugger is a great way to debug issues with the controls in your game.
1.d Coding the player movements - Shoot
The Gun.cs file handles the shooting. A few notes about this file:
1. Every time the Shoot function is called, it sends out a raycast from the center of the camera. If the raycast hits an object, it triggers the TakeDamage function.
2. It also applies a force in the direction of the hit.
3. Additionally, this handles the gun's muzzle flash and impact effects.
//Gun.cs
using UnityEngine;

public class Gun : MonoBehaviour
{
    public float damage = 10f;
    public float range = 100f;
    public float impactForce = 30f;
    public Camera fpsCam;
    public ParticleSystem muzzleFlash;
    public GameObject impactEffect;

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            Shoot();
        }
    }

    public void Shoot()
    {
        muzzleFlash.Play();
        // Cast a ray from the center of the camera
        RaycastHit hit;
        if (Physics.Raycast(fpsCam.transform.position, fpsCam.transform.forward, out hit, range))
        {
            Debug.Log(hit.transform.name);
            Target target = hit.transform.GetComponent<Target>();
            if (target != null)
            {
                target.TakeDamage(damage);
            }
            // Push rigidbodies away from the hit point
            if (hit.rigidbody != null)
            {
                hit.rigidbody.AddForce(-hit.normal * impactForce);
            }
            // Spawn the impact effect and clean it up after two seconds
            GameObject impactGo = Instantiate(impactEffect, hit.point, Quaternion.LookRotation(hit.normal));
            Destroy(impactGo, 2f);
        }
    }
}
The Target.cs file handles the take damage function. Some notes about it are:
1. It assigns a health variable to the objects it's attached to. Every time the TakeDamage function is called, the health value is reduced by some amount.
2. If the health value is less than or equal to zero, the game object is destroyed using the Die function.
//Target.cs
using UnityEngine;

public class Target : MonoBehaviour
{
    public float health = 50f;

    public void TakeDamage(float amount)
    {
        health -= amount;
        if (health <= 0f)
        {
            Die();
        }
    }

    void Die()
    {
        Destroy(gameObject);
    }
}
2. Making the controller
At this point, our controller has to do two things:
1. Send gyro data to orient the camera
2. Run an ML model to detect the shoot gesture. If it's detected, the controller then needs to send the same info to the game.
2.a Exploring the "Adafruit_MPU6050" library
The Adafruit_MPU6050 library interfaces with the MPU6050 sensor over an I2C bus.
The basic_demo.ino file visualizes the accelerometer and gyroscope data from the MPU6050 sensor.
//basic_demo.ino
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

// instantiates an object of the Adafruit_MPU6050 class
Adafruit_MPU6050 mpu;

void setup(void) {
  Serial.begin(115200);
  while (!Serial) {
    delay(10); // will pause Zero, Leonardo, etc until serial console opens
  }

  // Try to initialize!
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }

  mpu.setAccelerometerRange(MPU6050_RANGE_16_G);
  mpu.setGyroRange(MPU6050_RANGE_250_DEG);
  mpu.setFilterBandwidth(MPU6050_BAND_21_HZ);
  Serial.println("");
  delay(100);
}

void loop() {
  /* Get new sensor events with the readings */
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);

  /* Print out the values */
  Serial.print("AccelX:");
  Serial.print(a.acceleration.x);
  Serial.print(",");
  Serial.print("AccelY:");
  Serial.print(a.acceleration.y);
  Serial.print(",");
  Serial.print("AccelZ:");
  Serial.print(a.acceleration.z);
  Serial.print(", ");
  Serial.print("GyroX:");
  Serial.print(g.gyro.x);
  Serial.print(",");
  Serial.print("GyroY:");
  Serial.print(g.gyro.y);
  Serial.print(",");
  Serial.print("GyroZ:");
  Serial.print(g.gyro.z);
  Serial.println("");
  delay(10);
}
2.b Collecting data for the Tiny ML Model
We only need the accelerometer data to train the model. We also don't want inference running all the time, as that would stop gyro data from being sent to the game. Open up the serial plotter and use the code below to figure out a threshold value above which inference will be triggered.
The acc_visualize.ino file visualizes the accelerometer data from the MPU6050 sensor. Use it to try out your gestures and settle on a threshold value through trial and error.
//acc_visualize.ino
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

// instantiates an object of the Adafruit_MPU6050 class
Adafruit_MPU6050 mpu;

void setup(void) {
  Serial.begin(115200);
  while (!Serial) {
    delay(10); // will pause Zero, Leonardo, etc until serial console opens
  }

  // Try to initialize!
  if (!mpu.begin()) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }

  mpu.setAccelerometerRange(MPU6050_RANGE_16_G);
  mpu.setGyroRange(MPU6050_RANGE_250_DEG);
  mpu.setFilterBandwidth(MPU6050_BAND_21_HZ);
  Serial.println("");
  delay(100);
}

void loop() {
  /* Get new sensor events with the readings */
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);

  /* Print out the accelerometer values */
  Serial.print("AccelX:");
  Serial.print(a.acceleration.x);
  Serial.print(",");
  Serial.print("AccelY:");
  Serial.print(a.acceleration.y);
  Serial.print(",");
  Serial.print("AccelZ:");
  Serial.println(a.acceleration.z);
  delay(10);
}
Now that you have a threshold value to work with, let's start logging the data.
The log_data.ino file handles the logging of data to the serial monitor. Some notes about this file:
1. The calibrate_mpu function runs in the setup part of the code; it takes 10 readings and averages them.
2. The detect_motion function reads one set of readings from the MPU, and if the sum of the absolute values of the accelerations is greater than a threshold value, it triggers the read_data function.
3. The read_data function logs 40 sets of accelerometer data (this value varies for different gestures) to be used for training purposes.
//log_data.ino
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

Adafruit_MPU6050 mpu;
int count = 1;
float x_initial, y_initial, z_initial;

void setup() {
  Serial.begin(115200);
  while (!Serial)
    delay(10); // will pause Zero, Leonardo, etc until serial console opens

  // Try to initialize! (0x69 is the MPU6050's alternate I2C address)
  if (!mpu.begin(0x69)) {
    //Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  //Serial.println("MPU6050 Found!");

  mpu.setAccelerometerRange(MPU6050_RANGE_8_G);
  Serial.print("Accelerometer range set to: ");
  switch (mpu.getAccelerometerRange()) {
    case MPU6050_RANGE_2_G:
      Serial.println("+-2G");
      break;
    case MPU6050_RANGE_4_G:
      Serial.println("+-4G");
      break;
    case MPU6050_RANGE_8_G:
      Serial.println("+-8G");
      break;
    case MPU6050_RANGE_16_G:
      Serial.println("+-16G");
      break;
  }

  mpu.setGyroRange(MPU6050_RANGE_500_DEG);
  mpu.setFilterBandwidth(MPU6050_BAND_21_HZ);
  delay(100);
  calibrate_mpu();
}

void loop() {
  detect_motion();
}

void read_data() {
  // Log 40 samples of accelerometer data as CSV rows
  for (int i = 0; i < 40; i++) {
    sensors_event_t a, g, temp;
    mpu.getEvent(&a, &g, &temp);
    Serial.print(a.acceleration.x);
    Serial.print(",");
    Serial.print(a.acceleration.y);
    Serial.print(",");
    Serial.println(a.acceleration.z);
    delay(10);
  }
  Serial.println("");
  Serial.println("--------");
  Serial.println(count);
  Serial.println("--------");
  count++;
}

void calibrate_mpu() {
  float totX = 0, totY = 0, totZ = 0; // initialize the accumulators
  sensors_event_t a, g, temp;
  // Average ten readings to find the resting orientation
  for (int i = 0; i < 10; i++) {
    mpu.getEvent(&a, &g, &temp);
    totX = totX + a.acceleration.x;
    totY = totY + a.acceleration.y;
    totZ = totZ + a.acceleration.z;
  }
  x_initial = totX / 10;
  y_initial = totY / 10;
  z_initial = totZ / 10;
  Serial.println("Calibrated");
}

void detect_motion() {
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);
  if (abs(a.acceleration.x - x_initial) + abs(a.acceleration.y - y_initial) + abs(a.acceleration.z - z_initial) > 20) {
    read_data();
  } else {
    delay(5);
  }
}
Once you get data from the serial monitor, copy-paste it into a .csv file.
2.c Making the Machine Learning model
Open up this colab notebook, run the cells, and download the model.h file from the sidebar.
The model architecture:
I experimented with different hyperparameters like the number of hidden layers, units, dropout, activation functions, and batch size. I also tried different optimizers and loss functions, and finally settled on this.
I didn't use a CNN, as the fully connected layers gave good enough accuracy.
from tensorflow.keras import regularizers

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(10, activation='relu'))  # relu is used for performance
model.add(tf.keras.layers.Dense(10, activation='relu', kernel_regularizer=regularizers.L1(l1=1e-4)))
model.add(tf.keras.layers.Dense(NUM_GESTURES, activation='softmax'))  # softmax is used, because we only expect one gesture to occur per input
model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse', metrics=['mae'])
history = model.fit(inputs_train, outputs_train, epochs=300, batch_size=1, validation_data=(inputs_validate, outputs_validate))
2.d Building the application code for the ML model
The Adafruit_MPU6050_model.ino file is the application code for our model. Some notes about this file are:
1. The calibrate_sensor function works the same way as in the last file.
2. micro_error_reporter (pointed to by tflErrorReporter) provides a mechanism for logging debug information during inference. We'll call it to print debug information, and the interpreter will use it to print any errors it encounters.
3. tflInputTensor and tflOutputTensor are pointers to the input and output tensors, respectively.
4. The all_ops_resolver allows the application to access all the operations available to TFLite Micro.
5. micro_mutable_op_resolver is used in place of the all_ops_resolver to add operations selectively and reduce the size of the application.
6. run_inference() is the function where inference is run and the output is handled.
7. The detectMovement function reads one set of readings from the MPU, and if the sum of the absolute values of the accelerations is greater than a threshold value, it triggers the run_inference function.
8. micro_interpreter is the piece of code that runs our model. It takes in pointers to our model, the op resolver, the tensor arena, and the error reporter.
9. tflModel is a pointer to our model.
10. The tensor arena is the area of memory used to store the model's input, output, and intermediate tensors.
//Adafruit_MPU6050_model.ino
#include <TensorFlowLite_ESP32.h>
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
//#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "model.h"
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

#define THRESHOLD 20
#define NUM_GESTURES 1

Adafruit_MPU6050 mpu;
int samples = 0;
float ax;
float ay;
float az;
float baseAx;
float baseAy;
float baseAz;

const tflite::Model* tflModel;              // pointer to our model
tflite::ErrorReporter* tflErrorReporter;    // logs debug info and any interpreter errors
constexpr int tensorArenaSize = 102 * 1024; // memory for input, output, and intermediate tensors
uint8_t tensorArena[tensorArenaSize];
const char* GESTURES[] = {
  "shoot",
};
TfLiteTensor* tflInputTensor;               // pointer to the input tensor
TfLiteTensor* tflOutputTensor;              // pointer to the output tensor
tflite::MicroInterpreter* tflInterpreter;   // the piece of code that runs our model

void setup() {
  Serial.begin(115200);
  while (!Serial)
    delay(10); // will pause Zero, Leonardo, etc until serial console opens

  Serial.println("Adafruit MPU6050 test!");
  // Try to initialize! (0x69 is the MPU6050's alternate I2C address)
  if (!mpu.begin(0x69)) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }
  Serial.println("MPU6050 Found!");

  // Set acceleration range
  mpu.setAccelerometerRange(MPU6050_RANGE_8_G);
  calibrate_sensor();
  Serial.println("");

  static tflite::MicroErrorReporter micro_error_reporter; // NOLINT
  tflErrorReporter = &micro_error_reporter;
  tflModel = tflite::GetModel(model);
  //if (tflModel->version() != TFLITE_SCHEMA_VERSION) {
  //  tflErrorReporter->Report(
  //      "Model provided is schema version %d not equal "
  //      "to supported version %d.",
  //      tflModel->version(), TFLITE_SCHEMA_VERSION);
  //  return;
  //}

  // Register only the ops the model needs, to keep the application small
  static tflite::MicroMutableOpResolver micro_mutable_op_resolver; // NOLINT
  micro_mutable_op_resolver.AddBuiltin(
      tflite::BuiltinOperator_FULLY_CONNECTED,
      tflite::ops::micro::Register_FULLY_CONNECTED());

  static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
  tflInterpreter = &static_interpreter;
  tflInterpreter->AllocateTensors();
  Serial.print("setup complete");
  tflInputTensor = tflInterpreter->input(0);
  tflOutputTensor = tflInterpreter->output(0);
}

void loop() {
  detectMovement();
}

void run_inference() {
  sensors_event_t a, g, temp;
  // Capture 40 samples and scale them into the 0-1 range used in training
  for (int i = 0; i < 40; i++) {
    mpu.getEvent(&a, &g, &temp);
    ax = a.acceleration.x - baseAx;
    ay = a.acceleration.y - baseAy;
    az = a.acceleration.z - baseAz;
    tflInputTensor->data.f[i * 3 + 0] = (ax + 8.0) / 16.0;
    tflInputTensor->data.f[i * 3 + 1] = (ay + 8.0) / 16.0;
    tflInputTensor->data.f[i * 3 + 2] = (az + 8.0) / 16.0;
    delay(10);
  }
  TfLiteStatus invokeStatus = tflInterpreter->Invoke();
  float out = tflInterpreter->output(0)->data.f[1];
  if (out >= 0.80) {
    Serial.println("Shoot");
  } else {
    Serial.println("Unknown");
  }
}

void detectMovement() {
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);
  if (abs(a.acceleration.x - baseAx) + abs(a.acceleration.y - baseAy) + abs(a.acceleration.z - baseAz) > THRESHOLD) {
    run_inference();
  } else {
    delay(5);
  }
}

void calibrate_sensor() {
  float totX = 0, totY = 0, totZ = 0; // initialize the accumulators
  sensors_event_t a, g, temp;
  // Average ten readings to find the resting orientation
  for (int i = 0; i < 10; i++) {
    mpu.getEvent(&a, &g, &temp);
    totX = totX + a.acceleration.x;
    totY = totY + a.acceleration.y;
    totZ = totZ + a.acceleration.z;
  }
  baseAx = totX / 10;
  baseAy = totY / 10;
  baseAz = totZ / 10;
}
So, our model outputs a tensor with two elements of this form:
[[probability of class 1, probability of class 2]]
//float out1 = tflInterpreter->output(0)->data.f[0];
//float out2 = tflInterpreter->output(0)->data.f[1];
where class 1 is unknown and class 2 is shoot. We read the second value, check if it's greater than 0.80, and print whether the gesture was detected.
3.a Exploring the Uduino Plugin for Unity
The Uduino plugin is used to send data to/from a microcontroller via Serial/WiFi. The Plugin is explained in depth in these tutorials.
3.b Basics
Arduino starter code:
Here, we need to note two things:
1. The Uduino.h header file needs to be included, and a Uduino object created with a name for the board.
2. Anything inside Serial.print(), under the if statement, will be sent as a string to Unity.
#include <Uduino.h>
Uduino uduino("myArduinoName");

void setup() {
  Serial.begin(9600);
}

void loop() {
  uduino.update();
  delay(10); // Delay of your choice, or to match Unity's Read Timeout
  if (uduino.isConnected()) {
    // your code goes here
  }
}
C# starter code:
In the Start function, any data received is passed to Function1 as a string. All the processing is done inside Function1.
using UnityEngine;
using Uduino;

public class FileName : MonoBehaviour
{
    void Start()
    {
        // Any data received from the board is passed to Function1 as a string
        UduinoManager.Instance.OnDataReceived += Function1;
    }

    public void Function1(string data, UduinoDevice device)
    {
        // All the processing happens here
    }
}
3.c C# code for the game
The ReadMPU.cs file handles the processing of MPU data from the microcontroller. Some notes about the file are:
1. The string is parsed to obtain the gyro data, and the same is used to control the camera movement.
//ReadMPU.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Uduino;

public class ReadMPU : MonoBehaviour
{
    Vector3 position;
    Vector3 rotation;
    public Gun gun;
    public Vector3 rotationOffset;
    public float speedFactor = 15.0f;
    public string imuName = "r"; // You can ignore this if there is only one IMU.
    public Camera cam;

    void Start()
    {
        // Note that here, we don't use the delegate but the Events, assigned in the Inspector panel
        UduinoManager.Instance.OnDataReceived += ReadIMU;
    }

    public void ReadIMU(string data, UduinoDevice device)
    {
        //Debug.Log(data);
        string[] values = data.Split(',');
        if (values.Length == 3)
        {
            // Gyro readings arrive offset by +10 on the Arduino side,
            // so shift them back before use
            float x = float.Parse(values[0]) - 10;
            float y = float.Parse(values[1]) - 10;
            float z = float.Parse(values[2]) - 10;
            // Put these values into a Vector3 and use it to rotate the camera
        }
        else if (values.Length == 1 && values[0].Trim() == "1")
        {
            // A single "1" means the shoot gesture was detected
            gun.Shoot();
        }
        else
        {
            Debug.LogWarning(data);
        }
        //cam.transform.parent.transform.eulerAngles = rotationOffset;
    }
}
3.d Arduino code for the microcontroller
The Adafruit_MPU6050_uduino.ino file runs inference and sends data to the plugin. There isn't a lot of difference between this file and the Adafruit_MPU6050_model.ino file: all the sensor reads and inference calls now sit inside these statements in the loop function.
uduino.update();
delay(10);
if (uduino.isConnected()) {
// your code goes here
}
//Adafruit_MPU6050_uduino.ino
#include <TensorFlowLite_ESP32.h>
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
//#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "model.h"
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>
#include <Uduino.h>

Uduino uduino("mpu6050");

#define THRESHOLD 20
#define NUM_GESTURES 1

Adafruit_MPU6050 mpu;
int samples = 0;
float ax;
float ay;
float az;
float baseAx;
float baseAy;
float baseAz;

const tflite::Model* tflModel;              // pointer to our model
tflite::ErrorReporter* tflErrorReporter;    // logs debug info and any interpreter errors
constexpr int tensorArenaSize = 102 * 1024; // memory for input, output, and intermediate tensors
uint8_t tensorArena[tensorArenaSize];
const char* GESTURES[] = {
  "shoot",
};
TfLiteTensor* tflInputTensor;               // pointer to the input tensor
TfLiteTensor* tflOutputTensor;              // pointer to the output tensor
tflite::MicroInterpreter* tflInterpreter;   // the piece of code that runs our model

void setup() {
  Serial.begin(115200);
  while (!Serial)
    delay(10); // will pause Zero, Leonardo, etc until serial console opens

  // Try to initialize! (0x69 is the MPU6050's alternate I2C address)
  if (!mpu.begin(0x69)) {
    Serial.println("Failed to find MPU6050 chip");
    while (1) {
      delay(10);
    }
  }

  // Set acceleration range
  mpu.setAccelerometerRange(MPU6050_RANGE_8_G);
  mpu.setGyroRange(MPU6050_RANGE_500_DEG);
  mpu.setFilterBandwidth(MPU6050_BAND_21_HZ);
  calibrate_sensor();

  static tflite::MicroErrorReporter micro_error_reporter; // NOLINT
  tflErrorReporter = &micro_error_reporter;
  tflModel = tflite::GetModel(model);

  // Register only the ops the model needs, to keep the application small
  static tflite::MicroMutableOpResolver micro_mutable_op_resolver; // NOLINT
  micro_mutable_op_resolver.AddBuiltin(
      tflite::BuiltinOperator_FULLY_CONNECTED,
      tflite::ops::micro::Register_FULLY_CONNECTED());

  static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
  tflInterpreter = &static_interpreter;
  tflInterpreter->AllocateTensors();
  tflInputTensor = tflInterpreter->input(0);
  tflOutputTensor = tflInterpreter->output(0);
}

void loop() {
  uduino.update();
  if (uduino.isConnected()) {
    sensors_event_t a, g, temp;
    mpu.getEvent(&a, &g, &temp);
    detectMovement();
    // Send the gyro readings comma-separated (offset by +10),
    // so that ReadMPU.cs can parse them with data.Split(',')
    uduino.print(g.gyro.x + 10);
    uduino.print(",");
    uduino.print(g.gyro.y + 10);
    uduino.print(",");
    uduino.println(g.gyro.z + 10);
  }
}

void run_inference() {
  sensors_event_t a, g, temp;
  // Capture 40 samples and scale them into the 0-1 range used in training
  for (int i = 0; i < 40; i++) {
    mpu.getEvent(&a, &g, &temp);
    ax = a.acceleration.x - baseAx;
    ay = a.acceleration.y - baseAy;
    az = a.acceleration.z - baseAz;
    tflInputTensor->data.f[i * 3 + 0] = (ax + 8.0) / 16.0;
    tflInputTensor->data.f[i * 3 + 1] = (ay + 8.0) / 16.0;
    tflInputTensor->data.f[i * 3 + 2] = (az + 8.0) / 16.0;
    delay(10);
  }
  TfLiteStatus invokeStatus = tflInterpreter->Invoke();
  float out = tflInterpreter->output(0)->data.f[1];
  if (out >= 0.80) {
    uduino.println(1); // shoot gesture detected
  } else {
    uduino.println(0);
  }
}

void detectMovement() {
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);
  if (abs(a.acceleration.x - baseAx) + abs(a.acceleration.y - baseAy) + abs(a.acceleration.z - baseAz) > THRESHOLD) {
    run_inference();
  } else {
    delay(5);
  }
}

void calibrate_sensor() {
  float totX = 0, totY = 0, totZ = 0; // initialize the accumulators
  sensors_event_t a, g, temp;
  for (int i = 0; i < 10; i++) {
    mpu.getEvent(&a, &g, &temp);
    totX = totX + a.acceleration.x;
    totY = totY + a.acceleration.y;
    totZ = totZ + a.acceleration.z;
  }
  baseAx = totX / 10;
  baseAy = totY / 10;
  baseAz = totZ / 10;
}
4. Ironing out the problems
4.a Gyro drift and accelerometer noise
The MPU6050 uses a MEMS gyroscope and accelerometer. The gyroscope drifts over time, and the accelerometer is unreliable due to noisy data. The video below demonstrates the problems with both very well.
There are two different approaches to solving this problem: orientation from the DMP or a complementary filter.
The DMP is the MPU6050’s onboard processor that combines the data coming from the accelerometer and gyroscope.
4.b Sensor Fusion & Complementary filters
From the demonstration below, we can see that the DMP output has less lag than a complementary filter, so we'll go with the data from the DMP.
4.c Quaternions vs. Euler angles
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration. This can be avoided by using quaternions instead of Euler angles.
4.d Exploring the "i2cdevlib" library
The i2cdevlib library lets us retrieve data from the DMP of the MPU6050. It's explained in depth in the next section.
Here's a nice article to understand MPU6050 and the i2cdevlib.
4.e Redoing everything with the "i2cdevlib" library🙃
The log_data.ino file handles the logging of data. It reads a set of accelerations, and if the sum of their absolute values exceeds a threshold, it logs the next 40 readings.
//log_data.ino
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"
#include "Wire.h"
MPU6050 mpu(0x69);
// MPU control/status vars
bool dmpReady = false; // set true if DMP init was successful
uint8_t devStatus; // return status after each device operation (0 = success, !0 = error)
uint16_t packetSize; // expected DMP packet size (default is 42 bytes)
uint16_t fifoCount; // count of all bytes currently in FIFO
uint8_t fifoBuffer[64]; // FIFO storage buffer
int count = 1;
int count_no; // index of the sample currently being logged
// orientation/motion vars
Quaternion q; // [w, x, y, z] quaternion container
VectorInt16 aa; // [x, y, z] accel sensor measurements
VectorInt16 aaReal; // [x, y, z] gravity-free accel sensor measurements
VectorInt16 aaWorld; // [x, y, z] world-frame accel sensor measurements
VectorFloat gravity; // [x, y, z] gravity vector
float euler[3]; // [psi, theta, phi] Euler angle container
float ypr[3]; // [yaw, pitch, roll] yaw/pitch/roll container and gravity vector
void setup() {
// put your setup code here, to run once:
Wire.begin();
Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
Serial.begin(115200);
while (!Serial); // wait for Leonardo enumeration, others continue immediately
mpu.initialize();
devStatus = mpu.dmpInitialize();
mpu.setXGyroOffset(21); //++
mpu.setYGyroOffset(-22); //--
mpu.setZGyroOffset(28);
if (devStatus == 0) {
mpu.setDMPEnabled(true);
// set our DMP Ready flag so the main loop() function knows it's okay to use it
dmpReady = true;
// get expected DMP packet size for later comparison
packetSize = mpu.dmpGetFIFOPacketSize();
} else {
// Error
Serial.println("Error!");
}
}
void loop() {
// put your main code here, to run repeatedly:
if (!dmpReady) {
Serial.println("IMU not connected.");
delay(10);
return;
}
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
detect_motion();
}
}
void detect_motion(){
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
if( abs(aaWorld.x) + abs(aaWorld.y) + abs(aaWorld.z) > 15000){
read_data();
}
else{
delay(5);
}
}
void read_data(){
for (count_no = 0; count_no < 40; ) {
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
SendWorldAccel();
}
}
Serial.println("");
Serial.println("--------");
Serial.println(count);
Serial.println("--------");
count++;
}
void SendWorldAccel() {
// display initial world-frame acceleration, adjusted to remove gravity
// and rotated based on known orientation from quaternion
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
Serial.print(aaWorld.x );
Serial.print(",");
Serial.print(aaWorld.y);
Serial.print(",");
Serial.println(aaWorld.z);
count_no++;
}
The i2cdevlib_MPU6050_model.ino file is similar to the Adafruit_MPU6050_model.ino file.
#include <TensorFlowLite_ESP32.h>
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
//#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "model.h"
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"
#include <Wire.h>
int pushButton = 27;
MPU6050 mpu(0x69);
// MPU control/status vars
bool dmpReady = false; // set true if DMP init was successful
uint8_t devStatus; // return status after each device operation (0 = success, !0 = error)
uint16_t packetSize; // expected DMP packet size (default is 42 bytes)
uint16_t fifoCount; // count of all bytes currently in FIFO
uint8_t fifoBuffer[64]; // FIFO storage buffer
int count =1;
// orientation/motion vars
Quaternion q; // [w, x, y, z] quaternion container
VectorInt16 aa; // [x, y, z] accel sensor measurements
VectorInt16 aaReal; // [x, y, z] gravity-free accel sensor measurements
VectorInt16 aaWorld; // [x, y, z] world-frame accel sensor measurements
VectorFloat gravity; // [x, y, z] gravity vector
float euler[3]; // [psi, theta, phi] Euler angle container
float ypr[3]; // [yaw, pitch, roll] yaw/pitch/roll container and gravity vector
int count_no;
#define THRESHOLD 15000
const tflite::Model* tflModel; //
tflite::ErrorReporter* tflErrorReporter; //
constexpr int tensorArenaSize = 102 * 1024; //
uint8_t tensorArena[tensorArenaSize];
const char* GESTURES[] = {
"shoot",
};
TfLiteTensor* tflInputTensor; //
TfLiteTensor* tflOutputTensor; //
tflite::MicroInterpreter* tflInterpreter; //
void setup() {
// put your setup code here, to run once:
Wire.begin();
Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
Serial.begin(115200);
while (!Serial); // wait for Leonardo enumeration, others continue immediately
pinMode(pushButton, INPUT);
mpu.initialize();
devStatus = mpu.dmpInitialize();
mpu.setXGyroOffset(21); //++
mpu.setYGyroOffset(-22); //--
mpu.setZGyroOffset(28);
if (devStatus == 0) {
mpu.setDMPEnabled(true);
// set our DMP Ready flag so the main loop() function knows it's okay to use it
dmpReady = true;
// get expected DMP packet size for later comparison
packetSize = mpu.dmpGetFIFOPacketSize();
} else {
// Error
Serial.println("Error!");
}
static tflite::MicroErrorReporter micro_error_reporter; // NOLINT
tflErrorReporter = &micro_error_reporter;
tflModel = tflite::GetModel(model);
//if (model->version() != TFLITE_SCHEMA_VERSION) {
// error_reporter->Report(
// "Model provided is schema version %d not equal "
// "to supported version %d.",
// model->version(), TFLITE_SCHEMA_VERSION);
//return;
//}
static tflite::MicroMutableOpResolver micro_mutable_op_resolver; // NOLINT
micro_mutable_op_resolver.AddBuiltin(
tflite::BuiltinOperator_FULLY_CONNECTED,
tflite::ops::micro::Register_FULLY_CONNECTED());
static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
tflInterpreter = &static_interpreter;
tflInterpreter->AllocateTensors();
//Serial.print("setup complete");
tflInputTensor = tflInterpreter->input(0);
tflOutputTensor = tflInterpreter->output(0);
}
void loop() {
// put your main code here, to run repeatedly:
if (!dmpReady) {
Serial.println("IMU not connected.");
delay(10);
return;
}
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
detect_motion_and_send_gyro_data();
}
}
void detect_motion_and_send_gyro_data(){
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
if(digitalRead(pushButton) == 1){
Serial.print("r,");
Serial.print(q.w, 4); Serial.print(",");
Serial.print(q.x, 4); Serial.print(",");
Serial.print(q.y, 4); Serial.print(",");
Serial.println(q.z, 4);
}
if( abs(aaWorld.x) + abs(aaWorld.y) + abs(aaWorld.z) > 15000){
run_inference();
}
}
void run_inference(){
count_no = 0;
while(count_no<40){
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
SendWorldAccel();
}
};
TfLiteStatus invokeStatus = tflInterpreter->Invoke();
float out = tflInterpreter->output(0)->data.f[1];
if(out >= 0.8){
Serial.println(1);
}
else{
Serial.println(0);
}
}
void SendWorldAccel(){
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
if(count_no % 4 == 10){
Serial.print("r,");
Serial.print(q.w, 4); Serial.print(",");
Serial.print(q.x, 4); Serial.print(",");
Serial.print(q.y, 4); Serial.print(",");
Serial.println(q.z, 4);
}
tflInputTensor->data.f[count_no * 3 + 0] = (aaWorld.x) / 16384.0;
tflInputTensor->data.f[count_no * 3 + 1] = (aaWorld.y) / 16384.0;
tflInputTensor->data.f[count_no * 3 + 2] = (aaWorld.z) / 16384.0;
count_no++;
}
The i2cdevlib_MPU6050_model_uduino.ino file is similar to the one above. The only difference is that everything runs inside the uduino.isConnected() check.
#include <TensorFlowLite_ESP32.h>
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
//#include "tensorflow/lite/experimental/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "model.h"
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"
#include <Wire.h>
#include<Uduino.h>
int pushButton = 27;
Uduino uduino("mpu6050");
MPU6050 mpu(0x69);
// MPU control/status vars
bool dmpReady = false; // set true if DMP init was successful
uint8_t devStatus; // return status after each device operation (0 = success, !0 = error)
uint16_t packetSize; // expected DMP packet size (default is 42 bytes)
uint16_t fifoCount; // count of all bytes currently in FIFO
uint8_t fifoBuffer[64]; // FIFO storage buffer
int count =1;
// orientation/motion vars
Quaternion q; // [w, x, y, z] quaternion container
VectorInt16 aa; // [x, y, z] accel sensor measurements
VectorInt16 aaReal; // [x, y, z] gravity-free accel sensor measurements
VectorInt16 aaWorld; // [x, y, z] world-frame accel sensor measurements
VectorFloat gravity; // [x, y, z] gravity vector
float euler[3]; // [psi, theta, phi] Euler angle container
float ypr[3]; // [yaw, pitch, roll] yaw/pitch/roll container and gravity vector
int count_no;
#define THRESHOLD 15000
const tflite::Model* tflModel; //
tflite::ErrorReporter* tflErrorReporter; //
constexpr int tensorArenaSize = 102 * 1024; //
uint8_t tensorArena[tensorArenaSize];
const char* GESTURES[] = {
"shoot",
};
TfLiteTensor* tflInputTensor; //
TfLiteTensor* tflOutputTensor; //
tflite::MicroInterpreter* tflInterpreter; //
void setup() {
// put your setup code here, to run once:
Wire.begin();
Wire.setClock(400000); // 400kHz I2C clock. Comment this line if having compilation difficulties
Serial.begin(115200);
while (!Serial); // wait for Leonardo enumeration, others continue immediately
pinMode(pushButton, INPUT);
mpu.initialize();
devStatus = mpu.dmpInitialize();
mpu.setXGyroOffset(21); //++
mpu.setYGyroOffset(-22); //--
mpu.setZGyroOffset(28);
if (devStatus == 0) {
mpu.setDMPEnabled(true);
// set our DMP Ready flag so the main loop() function knows it's okay to use it
dmpReady = true;
// get expected DMP packet size for later comparison
packetSize = mpu.dmpGetFIFOPacketSize();
} else {
// Error
Serial.println("Error!");
}
static tflite::MicroErrorReporter micro_error_reporter; // NOLINT
tflErrorReporter = &micro_error_reporter;
tflModel = tflite::GetModel(model);
//if (model->version() != TFLITE_SCHEMA_VERSION) {
// error_reporter->Report(
// "Model provided is schema version %d not equal "
// "to supported version %d.",
// model->version(), TFLITE_SCHEMA_VERSION);
//return;
//}
static tflite::MicroMutableOpResolver micro_mutable_op_resolver; // NOLINT
micro_mutable_op_resolver.AddBuiltin(
tflite::BuiltinOperator_FULLY_CONNECTED,
tflite::ops::micro::Register_FULLY_CONNECTED());
static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
tflInterpreter = &static_interpreter;
tflInterpreter->AllocateTensors();
//Serial.print("setup complete");
tflInputTensor = tflInterpreter->input(0);
tflOutputTensor = tflInterpreter->output(0);
}
void loop() {
uduino.update();
if(uduino.isConnected()){
// put your main code here, to run repeatedly:
if (!dmpReady) {
Serial.println("IMU not connected.");
delay(10);
return;
}
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
detect_motion_and_send_gyro_data();
}
}
}
void detect_motion_and_send_gyro_data(){
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
if(digitalRead(pushButton) == 1){
Serial.print("r,");
Serial.print(q.w, 4); Serial.print(",");
Serial.print(q.x, 4); Serial.print(",");
Serial.print(q.y, 4); Serial.print(",");
Serial.println(q.z, 4);
}
if( abs(aaWorld.x) + abs(aaWorld.y) + abs(aaWorld.z) > 15000){
run_inference();
}
}
void run_inference(){
count_no = 0;
while(count_no<40){
int mpuIntStatus = mpu.getIntStatus();
fifoCount = mpu.getFIFOCount();
if ((mpuIntStatus & 0x10) || fifoCount == 1024) { // check if overflow
mpu.resetFIFO();
} else if (mpuIntStatus & 0x02) {
while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
mpu.getFIFOBytes(fifoBuffer, packetSize);
fifoCount -= packetSize;
SendWorldAccel();
}
};
TfLiteStatus invokeStatus = tflInterpreter->Invoke();
float out = tflInterpreter->output(0)->data.f[1];
if(out >= 0.8){
Serial.println(1);
}
else{
Serial.println(0);
}
}
void SendWorldAccel(){
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aa, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
mpu.dmpGetLinearAccel(&aaReal, &aa, &gravity);
mpu.dmpGetLinearAccelInWorld(&aaWorld, &aaReal, &q);
if(count_no % 4 == 10){
Serial.print("r,");
Serial.print(q.w, 4); Serial.print(",");
Serial.print(q.x, 4); Serial.print(",");
Serial.print(q.y, 4); Serial.print(",");
Serial.println(q.z, 4);
}
tflInputTensor->data.f[count_no * 3 + 0] = (aaWorld.x) / 16384.0;
tflInputTensor->data.f[count_no * 3 + 1] = (aaWorld.y) / 16384.0;
tflInputTensor->data.f[count_no * 3 + 2] = (aaWorld.z) / 16384.0;
count_no++;
}
The C# code for the game:
Here, the quaternion values from the IMU are parsed from the serial string, converted to floats, and used to rotate the camera smoothly via the Quaternion.Lerp function.
The cam.transform.parent.transform.eulerAngles line offsets the camera's parent by some angle on all three axes, so the microcontroller's initial physical orientation matches the camera's orientation.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Uduino;
public class ReadMPU : MonoBehaviour {
Vector3 position;
Vector3 rotation;
public Gun gun;
public Vector3 rotationOffset ;
public float speedFactor = 15.0f;
public string imuName = "r"; // You should ignore this if there is one IMU.
public Camera cam;
void Start () {
UduinoManager.Instance.OnDataReceived += ReadIMU;
// Note that here, we don't use the delegate but the Events, assigned in the Inspector Panel
}
void Update() { }
public void ReadIMU (string data, UduinoDevice device) {
//Debug.Log(data);
string[] values = data.Split(',');
if (values.Length == 5 && values[0] == imuName) // Rotation of the first one
{
float w = float.Parse(values[1]);
float x = float.Parse(values[2]);
float y = float.Parse(values[3]);
float z = float.Parse(values[4]);
cam.transform.localRotation = Quaternion.Lerp(cam.transform.localRotation, new Quaternion(w, y, x, z), Time.deltaTime * speedFactor);
} else if (values.Length ==1)
{
gun.Shoot();
}
else{
Debug.LogWarning(data);
}
cam.transform.parent.transform.eulerAngles = rotationOffset;
// Log.Debug("The new rotation is : " + transform.Find("IMU_Object").eulerAngles);
}
}
5. Improving the game:
5.a Changing the gun and adding a better muzzle flash
I'm a big fan of the look of a double-barrelled shotgun, so I swapped the gun and added a more dynamic muzzle flash.
5.b Making the WASD movement more immersive!
I'm not quite sure why this bug popped up at the end of the project, but changing the WASD movement from the global axes to the camera's axes solved it.
// Before: movement along the global axes
Vector3 moveDirection = Vector3.zero;
moveDirection.x = input.x;
moveDirection.z = input.y;
// After: movement relative to the camera's orientation
Vector3 moveDirection = (cam.transform.forward * input.y + cam.transform.right * input.x).normalized;
5.c Adding scary music to complete the theme
Finally, I added scary music to complete the theme, but you won't be able to hear it in any of the YouTube videos due to copyright issues :)
5.d Adding an empty game object in between the player and camera
cam.transform.parent.transform.eulerAngles = rotationOffset;
This line in the ReadMPU.cs file aligns the microcontroller and camera to the required position during initialization, but it interferes with the player's movement. To solve this, I added an empty game object between the player and the camera. This way, both the camera's offset and the player's rotation can be maintained.
6. Some Drawbacks/Possible Improvements to the Game
6.a Making the IMU feel more Mouse-Like
The problem with the current version is that it isn't mouse-like: you can't turn a full 360 degrees. I think this could be solved by keeping two consecutive sets of orientation data and moving the camera by the delta between them.
6.b Adding NavMesh agents to make the game more immersive
NavMesh agents are characters that can navigate the terrain by themselves, which makes them useful as enemies in games.
6.c Adding sounds for gunshots and damage
This is a simple addition to the game that can elevate the player experience to a whole new level. So if you'd like to improve the game, feel free to experiment with this.
7. 3D Printing time!
I designed and 3D-printed a gun with a removable magazine to hold the batteries and the microcontroller.
I thank my GSoC mentor, Paul Ruiz, for guiding me throughout the project!