Electric vehicles are becoming popular and the market is growing. Soon there will be a large number of electric cars all around us, and we will need to charge our cars quite often. The need for charging stations will increase accordingly, and information about nearby charging stations, including the availability of parking space for charging, will play a vital role. At present there are many electric vehicle charging stations in Jaipur, India, as seen in the following Google map.
But none of these charging stations provides details about the availability of a parking spot for charging the vehicle.
Therefore, it is vital to have information about the availability of charging points, as illustrated in the following figure.
AiSpark is an AI-powered smart parking system designed to check parking space availability at electric car charging stations. Using the system, any user can remotely check whether a parking space is available for charging and then bring the car in to charge.
The AiSpark system continuously monitors the parking spaces at an electric car charging station and updates an IoT server; the user can then use this information to find an available parking spot for charging via the web app.
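In this build, for example, the KV260 publishes the detected car count as a plain integer to the MQTT topic object_count (see the firmware below), and the web app derives the number of free charging spots from that count.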
To build this project I used the following components:
1. Kria KV260 Vision AI Starter Kit
2. USB Web Camera
3. 4G WiFi router with a LAN port and 4G SIM support
For connecting the components, refer to the Schematics and circuit diagrams section.
Connect the KV260 to Edge Impulse
First, create a project on Edge Impulse, then connect the KV260 to the project with the following command:
$ edge-impulse-linux
The Kria KV260 will connect and appear in the list of devices in your Edge Impulse project.
Now collect the training and test images and label the objects in them.
Then build the model.
The model I built works accurately: it counts the number of cars in the image.
I use this count to find the available parking space for charging with a simple formula:
Available Space = Total Parking Space - Object Count in Image
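As a minimal sketch of this calculation (the lot size of 8 and the helper name available_spaces are my own illustrative assumptions, not part of the AiSpark firmware):

#include <cstdio>

// Illustrative only: compute free charging spots from the detected car count.
static int available_spaces(int total_spaces, int object_count) {
    int available = total_spaces - object_count;
    return available < 0 ? 0 : available; // clamp in case of spurious detections
}

int main() {
    int total = 8;    // assumed number of parking spots at the station
    int detected = 5; // car count reported by the model
    printf("Available spaces: %d\n", available_spaces(total, detected));
    return 0;
}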
Run the Edge Impulse model on the KV260 using the following command:
$ edge-impulse-linux-runner
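The runner downloads the model and prints the live classification results in the terminal; it can also serve a live view of the camera feed that you can open in a browser at the URL it prints.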
The following videos show the output on the KV260.
Building Firmware
To build the firmware for the KV260, deploy the C++ library on Edge Impulse.
The library will be downloaded as a zip file. On the KV260's Ubuntu, install the compiler and build tools with the following commands.
Start by updating the package list:
$ sudo apt update
Install the build-essential package by typing:
$ sudo apt install build-essential
Then install git:
$ sudo apt-get install git
Install the Paho MQTT library, as the data is sent to an MQTT server:
$ git clone https://github.com/eclipse/paho.mqtt.c.git
$ cd paho.mqtt.c
Install the OpenSSL dependencies:
$ sudo apt-get install libssl-dev
and build the MQTT library with the make command:
$ make
After building the MQTT library, install it:
$ sudo make install
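If the linker cannot find the library in a later step, refreshing the shared library cache usually helps:
$ sudo ldconfig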
Now exit the paho.mqtt.c directory and clone the example-standalone-inferencing-linux example into your home directory with the following command:
$ git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
Initialize the submodules:
$ cd example-standalone-inferencing-linux && git submodule update --init --recursive
Install the required libraries and run the OpenCV build script:
$ sudo apt install libasound2
$ sh build-opencv-linux.sh
The OpenCV installation takes time, so be patient. After this, unzip and copy your Edge Impulse model (C++ library) into the example-standalone-inferencing-linux directory, as shown in the following figure.
Then open the camera.cpp source file in the source directory and replace its old code with the following code.
#include <unistd.h>
#include "opencv2/opencv.hpp"
#include "opencv2/videoio/videoio_c.h"
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
#include "iostream"
/*-------------------------------------- MQTT --------------------------------------*/
#include "stdlib.h"
#include "string.h"
#include "MQTTClient.h"
#define ADDRESS "tcp://mqtt.eclipseprojects.io:1883"
#define CLIENTID "AiSpark_0001"
#define TOPIC "object_count"
#define QOS 1
#define TIMEOUT 10000L
int MQTT_Connect_and_Publish(char PAYLOAD[]) {
    // Connect to the broker, publish PAYLOAD on TOPIC, then disconnect.
    MQTTClient client;
    MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message pubmsg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;
    int rc;
    MQTTClient_create(&client, ADDRESS, CLIENTID,
                      MQTTCLIENT_PERSISTENCE_NONE, NULL);
    conn_opts.keepAliveInterval = 20;
    conn_opts.cleansession = 1;
    if ((rc = MQTTClient_connect(client, &conn_opts)) != MQTTCLIENT_SUCCESS) {
        printf("Failed to connect, return code %d\n", rc);
        exit(-1);
    }
    pubmsg.payload = PAYLOAD;
    pubmsg.payloadlen = strlen(PAYLOAD);
    pubmsg.qos = QOS;
    pubmsg.retained = 0;
    MQTTClient_publishMessage(client, TOPIC, &pubmsg, &token);
    printf("Waiting for up to %d seconds for publication of %s\n"
           "on topic %s for client with ClientID: %s\n",
           (int)(TIMEOUT/1000), PAYLOAD, TOPIC, CLIENTID);
    rc = MQTTClient_waitForCompletion(client, token, TIMEOUT);
    printf("Message with delivery token %d delivered\n", token);
    MQTTClient_disconnect(client, 10000);
    MQTTClient_destroy(&client);
    return rc;
}
/*--------------------------------------MQTT------------------------------------*/
static bool use_debug = false;

// If you don't want to allocate this much memory you can use a signal_t structure as well
// and read directly from a cv::Mat object, but on Linux this should be OK
static float features[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

/**
 * Resize and crop to the set width/height from model_metadata.h
 */
void resize_and_crop(cv::Mat *in_frame, cv::Mat *out_frame) {
    // to resize... we first need to know the factor
    float factor_w = static_cast<float>(EI_CLASSIFIER_INPUT_WIDTH) / static_cast<float>(in_frame->cols);
    float factor_h = static_cast<float>(EI_CLASSIFIER_INPUT_HEIGHT) / static_cast<float>(in_frame->rows);
    float largest_factor = factor_w > factor_h ? factor_w : factor_h;
    cv::Size resize_size(static_cast<int>(largest_factor * static_cast<float>(in_frame->cols)),
                         static_cast<int>(largest_factor * static_cast<float>(in_frame->rows)));
    cv::Mat resized;
    cv::resize(*in_frame, resized, resize_size);
    int crop_x = resize_size.width > resize_size.height ?
        (resize_size.width - resize_size.height) / 2 :
        0;
    int crop_y = resize_size.height > resize_size.width ?
        (resize_size.height - resize_size.width) / 2 :
        0;
    cv::Rect crop_region(crop_x, crop_y, EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT);
    if (use_debug) {
        printf("crop_region x=%d y=%d width=%d height=%d\n", crop_x, crop_y, EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT);
    }
    *out_frame = resized(crop_region);
}
int main(int argc, char** argv) {
    // If you see: OpenCV: not authorized to capture video (status 0), requesting... Abort trap: 6
    // This might be a permissions issue. Are you running this command from a simulated shell (like in Visual Studio Code)?
    // Try it from a real terminal.
    if (argc < 2) {
        printf("Requires one parameter (ID of the webcam).\n");
        printf("You can find these via `v4l2-ctl --list-devices`.\n");
        printf("E.g. for:\n");
        printf("    C922 Pro Stream Webcam (usb-70090000.xusb-2.1):\n");
        printf("        /dev/video0\n");
        printf("The ID of the webcam is 0\n");
        exit(1);
    }

    for (int ix = 2; ix < argc; ix++) {
        if (strcmp(argv[ix], "--debug") == 0) {
            printf("Enabling debug mode\n");
            use_debug = true;
        }
    }

    // open the webcam...
    cv::VideoCapture camera(atoi(argv[1]));
    if (!camera.isOpened()) {
        std::cerr << "ERROR: Could not open camera" << std::endl;
        return 1;
    }

    if (use_debug) {
        // create a window to display the images from the webcam
        cv::namedWindow("Webcam", cv::WINDOW_AUTOSIZE);
    }

    // this will contain the image from the webcam
    cv::Mat frame;

    // display the frame until you press a key
    while (1) {
        // 100ms. between inference
        int64_t next_frame = (int64_t)(ei_read_timer_ms() + 100);

        // capture the next frame from the webcam
        camera >> frame;

        cv::Mat cropped;
        resize_and_crop(&frame, &cropped);

        size_t feature_ix = 0;
        for (int rx = 0; rx < (int)cropped.rows; rx++) {
            for (int cx = 0; cx < (int)cropped.cols; cx++) {
                cv::Vec3b pixel = cropped.at<cv::Vec3b>(rx, cx);
                uint8_t b = pixel.val[0];
                uint8_t g = pixel.val[1];
                uint8_t r = pixel.val[2];
                features[feature_ix++] = (r << 16) + (g << 8) + b;
            }
        }

        ei_impulse_result_t result;

        // construct a signal from the features buffer
        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT, &signal);

        // and run the classifier
        EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
        if (res != 0) {
            printf("ERR: Failed to run classifier (%d)\n", res);
            return 1;
        }

#if EI_CLASSIFIER_OBJECT_DETECTION == 1
        printf("Classification result (%d ms.):\n", result.timing.dsp + result.timing.classification);
        bool found_bb = false;
        int num_bboxes = 0;
        for (size_t ix = 0; ix < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; ix++) {
            auto bb = result.bounding_boxes[ix];
            if (bb.value == 0) {
                continue;
            }
            found_bb = true;
            printf("    %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\n", bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
            num_bboxes++; // counting bounding boxes
        }
        printf("\nTotal Objects Detected = %d\n", num_bboxes);
        char payload[10];
        sprintf(payload, "%d", num_bboxes);
        MQTT_Connect_and_Publish(payload); // sending object count to MQTT server
        if (!found_bb) {
            printf("    no objects found\n");
        }
#else
        printf("%d ms. ", result.timing.dsp + result.timing.classification);
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%s: %.05f", result.classification[ix].label, result.classification[ix].value);
            if (ix != EI_CLASSIFIER_LABEL_COUNT - 1) {
                printf(", ");
            }
        }
        printf("\n");
#endif

        // show the image on the window
        if (use_debug) {
            cv::imshow("Webcam", cropped);
            // wait (10ms) for a key to be pressed
            if (cv::waitKey(10) >= 0)
                break;
        }

        int64_t sleep_ms = next_frame > (int64_t)ei_read_timer_ms() ? next_frame - (int64_t)ei_read_timer_ms() : 0;
        if (sleep_ms > 0) {
            usleep(sleep_ms * 1000);
        }
    }
    return 0;
}

#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_CAMERA
#error "Invalid model for current sensor."
#endif
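In short, each loop iteration captures a frame, resizes and crops it to the model's input size, runs the classifier, counts the bounding boxes returned by the object detection model, and publishes that count to the MQTT broker.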
Also link the MQTT library by adding the following line to the Makefile:
LDFLAGS += -lpaho-mqtt3c
Now build the firmware application on the Kria KV260 with the following command (APP_CAMERA=1 selects the camera example target):
$ APP_CAMERA=1 make -j2
and run it:
$ ./build/camera /dev/video0
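Note that camera.cpp parses this argument with atoi(), so the numeric webcam ID is what is actually used; passing /dev/video0 works here only because atoi() evaluates it to 0.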
The results are shown below.
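To verify that the count is reaching the broker, you can subscribe to the topic from any machine with an MQTT client, for example with the Mosquitto client tools (assuming the mosquitto-clients package is installed):
$ mosquitto_sub -h mqtt.eclipseprojects.io -t object_count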
The model runs successfully on the Kria KV260 without quantization; however, it can be quantized as instructed in the tutorial here.