This project rests on two main ideas, which I explain in more detail in later sections:
- Machine learning lets us use a neural network so that our "RedBoard Artemis ATP Car" drives autonomously and makes decisions at the nodes: turn left or right, move forward or backward: https://en.wikipedia.org/wiki/Machine_learning
- A PID controller keeps the car automatically centered along the track, using the infrared distance sensors: https://en.wikipedia.org/wiki/PID_controller
The goals of this project are the following:
- Apply machine learning to a robot car.
- Add a PID controller to the robot.
- Finally, add a Micro OLED Breakout display to the robot.
To make this project easier to follow, it is divided into the following sections: Hardware, Machine Learning, PID Controller, Adding the Micro OLED Breakout, Printing and Assembling the Chassis, and Conclusion.
HARDWARE (Timing: 2 hrs.)
In the figure below I show you the schematic diagram.
SparkFun RedBoard Artemis
Features: Arduino Uno R3 footprint, 1 MB Flash / 384 KB RAM, 48 MHz (96 MHz turbo available), 24 GPIO (all interrupt capable), 21 PWM channels, built-in BLE radio, 10 ADC channels with 14-bit precision, 2 UARTs, 6 I2C buses, 4 SPI buses, PDM interface, I2S interface, and a Qwiic connector. Description, features, and documents here ...
Micro OLED Breakout
This small monochrome, blue-on-black OLED screen presents incredibly clear images. Its description, features, and documents can be found at: https://www.sparkfun.com/products/14532
Finally, I have used three infrared distance sensors, one GP2Y0A41SK0F and two GP2Y0A51SK0F; their datasheets are linked here: GP2Y0A41SK0F and GP2Y0A51SK0F.
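These Sharp sensors output an analog voltage that falls roughly as the inverse of the distance. The sketches later in this article approximate the datasheet curve as distance ≈ constant / volts; as a minimal reading sketch (assuming one GP2Y0A51SK0F on pin A0, the same 3.3 V / 10-bit ADC conversion used in those sketches, and the constant 8 taken from them):
// Minimal reading sketch; the constant 8.0 is the rough power-law fit used
// later in this project and should be calibrated against your own sensor.
float readLeftDistanceCm() {
  float volts = analogRead(A0) * (3.3 / 1024.0); // raw ADC reading to volts
  return 8.0 / volts;                            // approximate distance in cm
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(readLeftDistanceCm()); // print the distance in cm
  delay(100);
}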
MACHINE LEARNING (Timing: 8 hrs.)
A very good bibliographic reference on machine learning is the following post (in Spanish); in my case it helped me understand the theory and the calculations behind neural networks: https://www.aprendemachinelearning.com/crear-una-red-neuronal-en-python-desde-cero/
In this tutorial we will train a neural network in Python and copy its weights into a forward-propagation network running on the RedBoard Artemis ATP board, which will allow the robot car to drive on its own without hitting the walls.
For this exercise the neural network will have 4 outputs, two for each motor, since each car motor is driven through the L298N driver by 2 digital outputs of the board. Each output will be either 0 or 1 (de-energize or energize the corresponding driver input).
We will have four inputs: three correspond to the 3 sensors and the fourth is the BIAS. The values will be 0 or 1 and are assigned with the following logic: the left and right sensors take the value 1 if the distance is less than 13 cm and 0 if it is greater; the center sensor takes the value 1 if the distance is less than 16.7 cm and 0 if it is greater; the BIAS is always 1. We can see these combinations in this table:
And the corresponding motor actions would be the following:
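On the L298N, each motor is driven by two of these outputs: (1,0) spins the motor in one direction, (0,1) in the other, and (0,0) stops it. Written out compactly, the mapping from the two tables above (the same one encoded in the training arrays X and y further below) looks like this; the array is only illustrative, since on the robot these outputs come from the trained network:
// Illustrative lookup: (left, center, right) wall detections -> (IN1, IN2, IN3, IN4).
// This mirrors the X and y training arrays used below; the robot itself obtains
// these outputs from the neural network, not from a table.
const int ACTIONS[8][4] = {
  {0, 0, 0, 0}, // 0 0 0: no walls detected     -> stop
  {1, 0, 1, 0}, // 0 0 1: right wall            -> go forward
  {0, 1, 0, 1}, // 0 1 0: front wall            -> go back
  {0, 1, 1, 0}, // 0 1 1: right and front walls -> turn left
  {1, 0, 1, 0}, // 1 0 0: left wall             -> go forward
  {1, 0, 1, 0}, // 1 0 1: left and right walls  -> go forward
  {1, 0, 0, 1}, // 1 1 0: left and front walls  -> turn right
  {0, 1, 0, 1}, // 1 1 1: all three walls       -> go back
};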
To create our neural network, we will use this code developed with Python 3.7.3: NeuralNetwork.py
import numpy as np

# We create the class
class NeuralNetwork:

    def __init__(self, layers, activation='tanh'):
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_derivada
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_derivada
        # Initialize the weights
        self.weights = []
        self.deltas = []
        # Random weights between the input layer and the hidden layer
        for i in range(1, len(layers) - 1):
            r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # Random weights for the output layer
        r = 2*np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)

    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # Add a column of ones to the X inputs; this adds the bias unit to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # Calculate the difference between the target output and the value obtained
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # Walk back from the last hidden layer (one layer before the output one)
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_prime(a[l]))
            self.deltas.append(deltas)
            # Reverse
            deltas.reverse()
            # Backpropagation
            # 1. Multiply the output delta by the input activations to obtain the weight gradient.
            # 2. Update the weights with a fraction of the gradient (the sign is already in the delta).
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)
            if k % 10000 == 0: print('epochs:', k)

    def predict(self, x):
        ones = np.atleast_2d(np.ones(x.shape[0]))
        a = np.concatenate((np.ones(1).T, np.array(x)), axis=0)
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a

    def print_weights(self):
        print("LIST OF CONNECTION WEIGHTS")
        for i in range(len(self.weights)):
            print(self.weights[i])

    def get_weights(self):
        return self.weights

    def get_deltas(self):
        return self.deltas

# When creating the network, we can choose between the sigmoid and tanh activation functions
def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def sigmoid_derivada(x):
    return sigmoid(x)*(1.0-sigmoid(x))

def tanh(x):
    return np.tanh(x)

def tanh_derivada(x):
    return 1.0 - x**2
########## CAR NETWORK
# Layer sizes [3, 3, 4]: 3 inputs and 3 hidden units (the class adds a bias unit
# to each of these layers itself) and 4 outputs, which matches the 4x4 weight
# arrays expected by the Arduino sketch (InputNodes = HiddenNodes = OutputNodes = 4).
nn = NeuralNetwork([3, 3, 4], activation='tanh')
X = np.array([[0,0,0], # out of the circuit and unobstructed
              [0,0,1], # detect the right wall
              [0,1,0], # detect the front wall
              [0,1,1], # detects the right and front walls
              [1,0,0], # detect the left wall
              [1,0,1], # detect the left and right walls
              [1,1,0], # detects the left and front walls
              [1,1,1], # detect the three walls
              ])
# the outputs correspond to starting (or not) the motors
y = np.array([[0,0,0,0], # stop
              [1,0,1,0], # go forward
              [0,1,0,1], # go back
              [0,1,1,0], # turn to the left
              [1,0,1,0], # go forward
              [1,0,1,0], # go forward
              [1,0,0,1], # turn to the right
              [0,1,0,1], # go back
              ])
nn.fit(X, y, learning_rate=0.03, epochs=40001)

def valNN(x):
    return (int)(abs(round(x)))

index = 0
for e in X:
    prediccion = nn.predict(e)
    print("X:", e, "expected:", y[index], "obtained:", valNN(prediccion[0]), valNN(prediccion[1]), valNN(prediccion[2]), valNN(prediccion[3]))
    index = index + 1
What would we see?
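If training has converged, each check line printed by the loop above should show the obtained values equal to the expected row, for example a line of the form: X: [0 0 1] expected: [1 0 1 0] obtained: 1 0 1 0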
The code we add to graph the cost as the network trains over the 40,000 iterations is the following: Graphic.py
########## WE GRAPH THE COST FUNCTION
import matplotlib.pyplot as plt

deltas = nn.get_deltas()
valores = []
index = 0
for arreglo in deltas:
    valores.append(arreglo[1][0] + arreglo[1][1])
    index = index + 1

plt.plot(range(len(valores)), valores, color='b')
plt.ylim([0, 1])
plt.ylabel('Cost')
plt.xlabel('Epochs')
plt.tight_layout()
plt.show()
And now we can print the trained connection weights, which are the ones we will use in the Arduino code: Generate_Arduino_Code.py
########## WE GENERATE THE ARDUINO CODE
def to_str(name, W):
    s = str(W.tolist()).replace('[', '{').replace(']', '}')
    return 'float ' + name + '[' + str(W.shape[0]) + '][' + str(W.shape[1]) + '] = ' + s + ';'

# We get the trained weights so we can use them in the Arduino code
pesos = nn.get_weights()
print('// Replace these lines in your arduino code:')
print('// float HiddenWeights ...')
print('// float OutputWeights ...')
print('// With trained weights.')
print('\n')
print(to_str('HiddenWeights', pesos[0]))
print(to_str('OutputWeights', pesos[1]))
The Arduino code with the neural network configuration is then loaded onto the Artemis board: Self_Driving_Car.ino
// AUTHOR: GUILLERMO PEREZ GUILLEN
// SELF DRIVING CAR USING REDBOARD ARTEMIS ATP
#define ENA 3
#define ENB 5
#define IN1 9
#define IN2 8
#define IN3 11
#define IN4 12
/******************************************************************
NETWORK CONFIGURATION
******************************************************************/
const int InputNodes = 4; // includes BIAS neuron
const int HiddenNodes = 4; //includes BIAS neuron
const int OutputNodes = 4;
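// Trained network weights: paste here the two 4x4 arrays printed by
// Generate_Arduino_Code.py. The zeros below are placeholders only; the
// car will not steer correctly until they are replaced with trained values.
float HiddenWeights[4][4] = {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}};
float OutputWeights[4][4] = {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}};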
int i, j;
double Accum;
double Hidden[HiddenNodes];
double Output[OutputNodes];
int error=0;
int dif,difAnt=0;
const float Kp=0.1;
const float Kd=0.1;
void setup() {
  Serial.begin(9600);
  pinMode(A0, INPUT); // left sensor
  pinMode(A1, INPUT); // center sensor
  pinMode(A3, INPUT); // right sensor
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT);
  pinMode(ENB, OUTPUT);
}
void loop()
{
  double TestInput[] = {0, 0, 0, 0};
  double input1=0, input2=0, input3=0;
  float volts0 = analogRead(A0)*0.00322265625; // value from sensor * (3.3/1024)
  float volts1 = analogRead(A1)*0.00322265625; // value from sensor * (3.3/1024)
  float volts2 = analogRead(A3)*0.00322265625; // value from sensor * (3.3/1024)
  dif = analogRead(A3) - analogRead(A0); // PID CONTROL
  error = floor(Kp*(dif)+Kd*(difAnt-dif)); // PID CONTROL
  difAnt=dif; // PID CONTROL
  int d0 = constrain(255 - error, 0, 255); // left speed - PID CONTROL
  int d1 = constrain(255 + error, 0, 255); // right speed - PID CONTROL
  float sensor_left = 8*pow(volts0, -1); // worked out from datasheet graph // GP2Y0A51SK0F - 2 to 15 cm
  float sensor_center = 21*pow(volts1, -1); // worked out from datasheet graph // GP2Y0A41SK0F - 4 to 30 cm
  float sensor_right = 9*pow(volts2, -1); // worked out from datasheet graph // GP2Y0A51SK0F - 2 to 15 cm
  if (sensor_left<13){input1=1;}
  else {input1=0;}
  if (sensor_center<16.7){input2=1;}
  else {input2=0;}
  if (sensor_right<13){input3=1;}
  else {input3=0;}
  /******************************************************************
  WE CALL THE FEEDFORWARD NETWORK WITH THE INPUTS
  ******************************************************************/
  Serial.print("Input1:");
  Serial.println(input1);
  Serial.print("Input2:");
  Serial.println(input2);
  Serial.print("Input3:");
  Serial.println(input3);
  // THESE ARE THE THREE INPUTS WITH VALUES OF 0 TO 1, PLUS THE BIAS ********************
  TestInput[0] = 1.0; // BIAS UNIT
  TestInput[1] = input1;
  TestInput[2] = input2;
  TestInput[3] = input3;
  // THIS FUNCTION IS TO GET THE OUTPUTS **********************************
  InputToOutput(TestInput[0], TestInput[1], TestInput[2], TestInput[3]); // INPUT to ANN to obtain OUTPUT
  int out1 = round(abs(Output[0]));
  int out2 = round(abs(Output[1]));
  int out3 = round(abs(Output[2]));
  int out4 = round(abs(Output[3]));
  Serial.print("Output1:");
  Serial.println(out1);
  Serial.print("Output2:");
  Serial.println(out2);
  Serial.println(Output[1]);
  Serial.print("Output3:");
  Serial.println(out3);
  Serial.print("Output4:");
  Serial.println(out4);
  /******************************************************************
  DRIVE MOTORS WITH THE NETWORK OUTPUT
  ******************************************************************/
  analogWrite(ENA, d0);
  analogWrite(ENB, d1);
  digitalWrite(IN1, out1 * HIGH);
  digitalWrite(IN2, out2 * HIGH);
  digitalWrite(IN3, out3 * HIGH);
  digitalWrite(IN4, out4 * HIGH);
  delay(50);
}
void InputToOutput(double In1, double In2, double In3, double In4)
{
  double TestInput[] = {0, 0, 0, 0};
  TestInput[0] = In1;
  TestInput[1] = In2;
  TestInput[2] = In3;
  TestInput[3] = In4;
  /******************************************************************
  CALCULATE ACTIVITIES IN THE HIDDEN LAYER
  ******************************************************************/
  for ( i = 0 ; i < HiddenNodes ; i++ ) { // go through the four columns of the hidden weights
    Accum = 0;
    for ( j = 0 ; j < InputNodes ; j++ ) { // each input value times its column of the hidden weights
      Accum += TestInput[j] * HiddenWeights[j][i] ;
    }
    Hidden[i] = tanh(Accum) ; // we obtain a row vector with four values
  }
  /******************************************************************
  CALCULATE ACTIVATION IN THE OUTPUT LAYER
  ******************************************************************/
  for ( i = 0 ; i < OutputNodes ; i++ ) {
    Accum = 0;
    for ( j = 0 ; j < HiddenNodes ; j++ ) {
      Accum += Hidden[j] * OutputWeights[j][i] ;
    }
    Output[i] = tanh(Accum) ; // tanh
  }
}
/******************************************************************
END NETWORK CONFIGURATION
******************************************************************/
void stop() { // we deactivate the motors
  digitalWrite(ENA, LOW);
  digitalWrite(ENB, LOW);
  Serial.println("Stop!");
}
PID CONTROLLER (Timing: 8 hrs.)
The robot is equipped with 3 analog infrared sensors that detect the distance to the walls: one in front and two on the left and right sides. To calibrate the GP2Y0A41SK0F and GP2Y0A51SK0F infrared sensors with greater precision, you can see this post: https://www.instructables.com/id/How-to-Use-the-Sharp-IR-Sensor-GP2Y0A41SK0F-Arduin/
The robot uses a PID-type controller (in this sketch only the proportional and derivative terms, Kp and Kd, are used) to keep itself centered between the left and right walls. If the robot gets close to the left wall, it decreases the speed of the right motor and increases the speed of the left motor, which makes the robot move to the right, away from the left wall, and vice versa.
The speed d0 of the left motor and the speed d1 of the right motor are calculated with the following code:
dif = analogRead(A3) - analogRead(A0); // PID CONTROL
error = floor(Kp*(dif)+Kd*(difAnt-dif)); // PID CONTROL
difAnt=dif; // PID CONTROL
int d0 = constrain(255 - error, 0, 255);//left speed - PID CONTROL
int d1 = constrain(255 + error, 0, 255);//right speed - PID CONTROL
However, the proportional correction alone can leave the movement somewhat unstable (the robot tends to oscillate around the center line), so a second, derivative correction factor is added to make the movement smoother; the line difAnt = dif; simply stores the previous difference used by that term. The speeds are then applied as PWM signals to the two gearmotors:
analogWrite(ENA, d0);
analogWrite(ENB, d1);
digitalWrite(IN1, out1 * HIGH);
digitalWrite(IN2, out2 * HIGH);
digitalWrite(IN3, out3 * HIGH);
digitalWrite(IN4, out4 * HIGH);
delay(50);
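As a quick worked example of the correction (with illustrative ADC readings, not measured values): if analogRead(A3) = 400, analogRead(A0) = 300 and the previous difference difAnt = 50, then dif = 100 and error = floor(0.1*100 + 0.1*(50 - 100)) = 5, so d0 = 250 and d1 = 255. The left wheel slows slightly and the car steers left, back toward the center and away from the closer right wall.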
Test
My intention is to run a test after each change made to the robot, so first we test this version and afterwards we add the Micro OLED Breakout. Below, I show you the tests performed with this prototype.
ADDING THE MICRO OLED BREAKOUT (Timing: 4 hrs.)
The small monochrome, blue-on-black OLED screen presents incredibly clear images. I added this device to give a visual signal for each of the situations the robot car can be in: stop, forward, back, turn left and turn right. The icons (64x48 pixels) that I used and their functions are the following:
The software used to convert the monochrome images into byte arrays is LCD Assistant, which you can download at: http://en.radzio.dxp.pl/bitmap_converter/
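LCD Assistant exports each 64x48 icon as a C array of 384 bytes (64 × 48 / 8). As a rough sketch of what one entry in bitmaps.h looks like (the array name matches the sketch below; the byte values here are placeholders only):
// Illustrative form of one icon in bitmaps.h (placeholder bytes, truncated):
uint8_t stop2 [] = {
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  // ... 384 bytes in total, as exported by LCD Assistant ...
};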
The example code that I used from the library is: MicroOLED_Rick_and_Morty_I2C.ino
The bitmaps.h file that I obtained can be downloaded in the downloads section. Below I show the modified version of the Arduino code from the previous section: Self_Driving_Car_microOLED.ino
// AUTHOR: GUILLERMO PEREZ GUILLEN
// SELF DRIVING CAR USING REDBOARD ARTEMIS ATP
// HERE I'VE ADDED CHANGES TO USE THE MICRO OLED BREAKOUT
#include <Wire.h> // Include Wire if you're using I2C
#include <SFE_MicroOLED.h> // Include the SFE_MicroOLED library
#include "bitmaps.h"
#define ENA 3
#define ENB 5
#define IN1 9
#define IN2 8
#define IN3 11
#define IN4 12
//The library assumes a reset pin is necessary. The Qwiic OLED has RST hard-wired, so pick an arbitrary IO pin that is not being used
#define PIN_RESET 9 // note: pin 9 is also defined as IN1 above
//The DC_JUMPER is the I2C Address Select jumper. Set to 1 if the jumper is open (Default), or set to 0 if it's closed.
#define DC_JUMPER 1
MicroOLED oled(PIN_RESET, DC_JUMPER); // I2C declaration
/******************************************************************
NETWORK CONFIGURATION
******************************************************************/
const int InputNodes = 4; // includes BIAS neuron
const int HiddenNodes = 4; //includes BIAS neuron
const int OutputNodes = 4;
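// Trained network weights: paste here the two 4x4 arrays printed by
// Generate_Arduino_Code.py. The zeros below are placeholders only; the
// car will not steer correctly until they are replaced with trained values.
float HiddenWeights[4][4] = {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}};
float OutputWeights[4][4] = {{0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}};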
int i, j;
double Accum;
double Hidden[HiddenNodes];
double Output[OutputNodes];
int error=0;
int dif,difAnt=0;
const float Kp=0.1;
const float Kd=0.1;
void setup() {
  delay(100);
  Wire.begin();
  oled.begin();     // Initialize the OLED
  oled.clear(ALL);  // Clear the display's internal memory
  oled.display();   // Display what's in the buffer (splashscreen)
  delay(100);       // Delay 100 ms
  oled.clear(PAGE); // Clear the buffer.
  Serial.begin(9600);
  pinMode(A0, INPUT); // left sensor
  pinMode(A1, INPUT); // center sensor
  pinMode(A3, INPUT); // right sensor
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT);
  pinMode(ENB, OUTPUT);
}
void loop()
{
  double TestInput[] = {0, 0, 0, 0};
  double input1=0, input2=0, input3=0;
  float volts0 = analogRead(A0)*0.00322265625; // value from sensor * (3.3/1024)
  float volts1 = analogRead(A1)*0.00322265625; // value from sensor * (3.3/1024)
  float volts2 = analogRead(A3)*0.00322265625; // value from sensor * (3.3/1024)
  dif = analogRead(A3) - analogRead(A0); // PID CONTROL
  error = floor(Kp*(dif)+Kd*(difAnt-dif)); // PID CONTROL
  difAnt=dif; // PID CONTROL
  int d0 = constrain(255 - error, 0, 255); // left speed - PID CONTROL
  int d1 = constrain(255 + error, 0, 255); // right speed - PID CONTROL
  float sensor_left = 8*pow(volts0, -1); // worked out from datasheet graph // GP2Y0A51SK0F - 2 to 15 cm
  float sensor_center = 21*pow(volts1, -1); // worked out from datasheet graph // GP2Y0A41SK0F - 4 to 30 cm
  float sensor_right = 9*pow(volts2, -1); // worked out from datasheet graph // GP2Y0A51SK0F - 2 to 15 cm
  if (sensor_left<13){input1=1;}
  else {input1=0;}
  if (sensor_center<16.7){input2=1;}
  else {input2=0;}
  if (sensor_right<13){input3=1;}
  else {input3=0;}
  /******************************************************************
  WE CALL THE FEEDFORWARD NETWORK WITH THE INPUTS
  ******************************************************************/
  Serial.print("Input1:");
  Serial.println(input1);
  Serial.print("Input2:");
  Serial.println(input2);
  Serial.print("Input3:");
  Serial.println(input3);
  // THESE ARE THE THREE INPUTS WITH VALUES OF 0 TO 1, PLUS THE BIAS ********************
  TestInput[0] = 1.0; // BIAS UNIT
  TestInput[1] = input1;
  TestInput[2] = input2;
  TestInput[3] = input3;
  // THIS FUNCTION IS TO GET THE OUTPUTS **********************************
  InputToOutput(TestInput[0], TestInput[1], TestInput[2], TestInput[3]); // INPUT to ANN to obtain OUTPUT
  int out1 = round(abs(Output[0]));
  int out2 = round(abs(Output[1]));
  int out3 = round(abs(Output[2]));
  int out4 = round(abs(Output[3]));
  if (out1==0 && out2==0 && out3==0 && out4==0){
    drawStop();
  }
  if (out1==1 && out2==0 && out3==1 && out4==0){
    drawForward();
  }
  if (out1==0 && out2==1 && out3==0 && out4==1){
    drawBack();
  }
  if (out1==0 && out2==1 && out3==1 && out4==0){
    drawLeft();
  }
  if (out1==1 && out2==0 && out3==0 && out4==1){
    drawRight();
  }
  Serial.print("Output1:");
  Serial.println(out1);
  Serial.print("Output2:");
  Serial.println(out2);
  Serial.println(Output[1]);
  Serial.print("Output3:");
  Serial.println(out3);
  Serial.print("Output4:");
  Serial.println(out4);
  /******************************************************************
  DRIVE MOTORS WITH THE NETWORK OUTPUT
  ******************************************************************/
  analogWrite(ENA, d0);
  analogWrite(ENB, d1);
  digitalWrite(IN1, out1 * HIGH);
  digitalWrite(IN2, out2 * HIGH);
  digitalWrite(IN3, out3 * HIGH);
  digitalWrite(IN4, out4 * HIGH);
  delay(50);
}
void InputToOutput(double In1, double In2, double In3, double In4)
{
  double TestInput[] = {0, 0, 0, 0};
  TestInput[0] = In1;
  TestInput[1] = In2;
  TestInput[2] = In3;
  TestInput[3] = In4;
  /******************************************************************
  CALCULATE ACTIVITIES IN THE HIDDEN LAYER
  ******************************************************************/
  for ( i = 0 ; i < HiddenNodes ; i++ ) { // go through the four columns of the hidden weights
    Accum = 0;
    for ( j = 0 ; j < InputNodes ; j++ ) { // each input value times its column of the hidden weights
      Accum += TestInput[j] * HiddenWeights[j][i] ;
    }
    Hidden[i] = tanh(Accum) ; // we obtain a row vector with four values
  }
  /******************************************************************
  CALCULATE ACTIVATION IN THE OUTPUT LAYER
  ******************************************************************/
  for ( i = 0 ; i < OutputNodes ; i++ ) {
    Accum = 0;
    for ( j = 0 ; j < HiddenNodes ; j++ ) {
      Accum += Hidden[j] * OutputWeights[j][i] ;
    }
    Output[i] = tanh(Accum) ; // tanh
  }
}
/******************************************************************
END NETWORK CONFIGURATION
******************************************************************/
void stop() { // we deactivate the motors
  digitalWrite(ENA, LOW);
  digitalWrite(ENB, LOW);
  Serial.println("Stop!");
}
//---------------------------------------------------------------
void drawLeft()
{
  oled.clear(ALL);
  oled.clear(PAGE);
  oled.drawBitmap(left); // display the "turn left" icon
  oled.display();
}
//---------------------------------------------------------------
void drawRight()
{
  oled.clear(ALL);
  oled.clear(PAGE);
  oled.drawBitmap(right); // display the "turn right" icon
  oled.display();
}
//---------------------------------------------------------------
void drawForward()
{
  oled.clear(ALL);
  oled.clear(PAGE);
  oled.drawBitmap(forward); // display the "forward" icon
  oled.display();
}
//---------------------------------------------------------------
void drawBack()
{
  oled.clear(ALL);
  oled.clear(PAGE);
  oled.drawBitmap(back); // display the "back" icon
  oled.display();
}
//---------------------------------------------------------------
void drawStop()
{
  oled.clear(ALL);
  oled.clear(PAGE);
  oled.drawBitmap(stop2); // display the "stop" icon
  oled.display();
}
Test
Below I show you some tests done with the Micro OLED Breakout:
PRINTING AND ASSEMBLING THE CHASSIS (Timing: 4 hrs.)
There are two pieces that must be printed in PLA plastic. You may have to adjust the measurements somewhat, or file the plastic a little, so that everything fits well. The 3D files are available in the downloads section.
On the chassis bottom, you have to mount the following devices: 1) the gearmotors are secured with plastic gauze; 2) the GP2Y0A51SK0F infrared sensors are attached with screws on the left and right front ends of the chassis; 3) the GP2Y0A41SK0F infrared sensor is attached with screws at the front center of the chassis; 4) the battery is held with plastic straps; and 5) the L298N driver and the ball caster are attached with screws. Finally, the RedBoard Artemis ATP board is fastened to the lower chassis with screws. The figures below show several views of this assembly.
CONCLUSION
This project was a great personal challenge, because it is the first time I have combined machine learning with a PID controller, and I am satisfied with the prototype I developed with my RedBoard Artemis ATP.
The first video shows the two main tests I performed: in the first, I made many adjustments so that the robot could go around the object (a cardboard box) in both directions without problems. In the second test, I added some small diagonal walls at the nodes to improve the robot's turns and make the changes of direction less abrupt.
The distances I assigned to the infrared sensors were calibrated experimentally so that the robot could center itself as quickly as possible. In the case of the central infrared sensor, the 16.5 cm distance I settled on allowed me to slow the robot down in time to turn without hitting the opposite wall.
Finally, the second video shows the usefulness of the Micro OLED Breakout on the robot car: now we can see the current task on the display.