There is huge demand for fast and effective detection of COVID-19. Many studies suggest that respiratory sounds, such as coughing and breathing, are among the most effective signals for detecting COVID-19. The aim of this project is to show how a common respiratory symptom such as cough can be detected from voice data, and to differentiate between healthy breathing and coughs. The same technique can be used to identify other serious respiratory diseases such as pneumonia and COVID-19. Many studies show that COVID-19 can be identified easily and effectively using cough recordings. Following are links to such findings:
- Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs
- COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings
- AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app
- Identifying COVID-19 by using spectral analysis of cough recordings: a distinctive classification study
- Robust Detection of COVID-19 in Cough Sounds: Using Recurrence Dynamics and Variable Markov Model
- COVID-19 cough classification using machine learning and global smartphone recordings
- COVID-19 Detection in Cough, Breath and Speech using Deep Transfer Learning and Bottleneck Features
Above are a few of the findings. In this project I have developed an ML model using Edge Impulse Studio that uses the mel-frequency cepstrum (MFCC) to extract voice features from sound recordings and classifies between healthy breath and cough. Building a Machine Learning (ML) model that detects COVID-19 requires a large amount of accurately labeled voice recordings, which is not currently available. But this project shows that such a system could easily be built using the technique given here. The data for building such a system can be collected using the technique described in the following article:
Utilize the Power of the Crowd for Data Collection
But this also requires a lot of time and people's contribution. Apart from analyzing sound data, there is also a need to maintain proper distance in public places to minimize the spread of respiratory infections. The sensor node in this project also alerts its user to maintain proper distance when they come close to other people in public. Thus the system helps in creating and maintaining Healthy Spaces.
The Project Name
The name of the project is Respiratory Health Analyzer, abbreviated as RoHA. RoHA is an Urdu word which means soul or life. The aim of this project is also to save human life by creating Healthy Spaces.
The Architecture
The following figure shows the architecture of the RoHA project. The tinyML model is developed using Edge Impulse. This model processes human voice using MFCC to extract features from it, which are used to analyze patterns in the voice data. The model takes input from the built-in microphone on the AWS IoT Edu Kit and analyzes respiratory health based on the sound recording. The inference result is displayed on the sensor node, and the data is also sent to AWS IoT Core using the MQTT protocol. The data is then forwarded to DynamoDB for permanent storage. A custom web app, built using PHP, gathers the data from DynamoDB and displays the respiratory health status, cough count, and healthy breath count. The sensor node also detects human distancing using a PIR sensor and alerts the person about it on the sensor node screen.
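Edge Impulse's MFCC block is a standard implementation, but the core idea behind it can be illustrated: features are computed on the mel scale, which mirrors how humans perceive pitch, so low frequencies (where coughs and breaths carry most of their energy) get finer resolution. A minimal Python sketch of the Hz-to-mel mapping; the 8 kHz upper limit and 13-filter count here are illustrative assumptions, not the block's exact settings:

```python
import math

def hz_to_mel(f_hz):
    """Convert a frequency in Hz to the mel scale (HTK formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping: mel value back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Mel filter banks space their center frequencies evenly on the mel
# scale, which packs more filters into the low-frequency range.
low, high, n_filters = hz_to_mel(0.0), hz_to_mel(8000.0), 13
centers_hz = [mel_to_hz(low + i * (high - low) / (n_filters + 1))
              for i in range(1, n_filters + 1)]
```

The MFCC block then takes the log of each filter's energy and applies a discrete cosine transform to produce the cepstral coefficients used as features.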
Following are the steps involved in developing this project:
- Create a tinyML model using Edge Impulse
- Setup environment for AWS IoT Edu Kit
- Configure AWS IoT core
- Configure IAM
- Configure DynamoDB service
- Create AWS IoT rule for DynamoDB
- Test the AWS IoT rule and DynamoDB table
- Build the firmware
- Install the Web Server
- Get AWS SDK for PHP & Build Web App
- Run the Web Application
- Video Demo
- Conclusion & Future Scope
Create a tinyML model using Edge Impulse
In this step we will build the tinyML model using Edge Impulse to analyze the respiratory health of a person. I am building a tinyML model with two labels, cough and healthy breath, for testing purposes. However, anyone can easily build a model to identify any other respiratory illness using the same technique, provided a data-set is available for the particular disease one wants to identify.
I am continuously working on gathering and finding data-sets for COVID-19, pneumonia, etc., so that I can update my model to work for these diseases as well.
Before you can start developing the tinyML model, you need an account on Edge Impulse, which is free for developers. Once you have the account, log in, choose Create new project, give your project a name, and then click the Create new project button.
After this, the next step is gathering data for your project. Go to the dashboard and click the Let's collect some data button.
When you click the button the following screen will appear.
In this screen, if you already have a data-set in WAV format, you can use the Upload data option; otherwise choose the Use your mobile phone option to record data for the model. If you choose Upload data, the following screen will appear; from here you can choose one or multiple files, specify a label for that group of files, and then click the Begin upload button to upload the data.
I have used the Use your mobile phone option, which opens the following window.
You need to scan this QR code to upload data from your mobile phone. Once you scan it and open the URL, go through the steps illustrated in the following figure.
In STEP 1, once your phone is connected, click the Collecting audio button. In STEP 2, click the Give access to the microphone button. In STEP 3, specify the label (cough or healthy breathing in this example) and the length value, and click the Start recording button. Then record the data as in STEP 4; the data will be automatically uploaded to Edge Impulse after recording is done. You can record data for different labels this way. The uploaded data is visible on your project's dashboard in Edge Impulse.
After this choose Create Impulse under Impulse Design in the left navigation and add processing and learning blocks as per following figure.
Then go to MFCC and verify your data. You can filter out data at this step.
Once you are done, click the Save parameters option.
After this, the following window will appear; in this window click the Generate Features button.
Then choose NN Classifier from left navigation.
You can change the settings or leave them at their defaults, then click Start training.
Your model will be trained and you will see the result.
From the above figures we can see that our model is well built and trained on the data. You can choose Live classification in the left navigation to test the model. You can then use your phone to get data for live sampling by clicking the Start sampling button, or load existing test data by choosing a sample from the drop-down list and clicking the Load sample button.
When you load the sample, the live classification will be done and you can see the model performance.
Live classification tests your model against a single sample at a time. You can choose the Model testing option to test your model against multiple samples from the test data. For this, click the Classify all button and you will see the model performance. As you can see in the following figure, my model is working well.
Once you are done with testing, choose Deployment from the left navigation to create the library that we will use later to build the firmware. In the window that appears, choose Arduino library. Now scroll down and click the Build button. Edge Impulse automatically builds the library and downloads it as a ZIP file. Keep this ZIP file for the next step.
Setup environment for AWS IoT Edu Kit
Now in this step we are going to set up the environment for the AWS IoT Edu Kit.
First, download the official Arduino IDE. We then need to install the ESP32 boards manager. Open the Arduino IDE and navigate to File -> Preferences -> Settings.
Add the following ESP32 boards manager URL under Additional Boards Manager URLs: https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/arduino/package_m5stack_index.json
Then go to Tools -> Board -> Boards Manager, search for m5stack in the Boards Manager window, and click Install.
Once the board is installed, go to Tools -> Board: -> M5Stack Arduino -> M5Stack-Core2 to select your board, as shown in the following figure.
Now we also need the M5Stack library to work with. Go to Sketch -> Include Library -> Manage Libraries..., search for M5Core2, find it, and click Install.
You will also need the ArduinoJson library, which we will use later on.
Now we will add the ZIP library for the model that we developed using Edge Impulse. Go to Sketch -> Include Library -> Add .ZIP Library... and select the ZIP file. Once the library is installed, you can see it in the Arduino IDE.
Configure AWS IoT core
For this step you first need an AWS account and to sign in to the AWS Console. If you don't have an account, click Create a new AWS account; otherwise sign in. After you sign in you will see the AWS Management Console. We will use the search box to access different services.
In the search box type AWS IoT and select IoT Core.
You will see the following page.
Now go to Manage in the left navigation, choose Things, and in the window on the right click the Create things button.
In the next window choose Create single thing option and click on Next button.
Next, specify your Thing name.
Leave the other settings as they are and click the Next button.
In the next window choose option Auto-generate a new certificate and click next.
In the next window click Create policy. It will open in a new window. You can also create and attach a policy later, but it is good to create it here.
In the window that appears in a new browser tab, specify a unique name for the policy, set Action to * and Resource ARN to *, and under Effect select Allow. After this, click the Create button.
Now go back to the previous browser tab, where you will see the name of the policy you created. Select the policy to attach it to your Thing, and then click the Create thing button.
When the thing is created, you will be prompted to download the certificate and keys. Download them and keep them in a safe place. After you have downloaded them, click the Done button.
You will see your thing created with the policy attached to it.
Configure IAM
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources. We need this step because we will be using the DynamoDB service from a PHP application; to access DynamoDB programmatically or through the AWS Command Line Interface (AWS CLI), we must have an AWS access key. Access keys consist of an access key ID and a secret access key, which are used to sign the programmatic requests that you make to AWS. For this, we first need to create an IAM user and then create the access key. To access the IAM service, type IAM in the search box of the AWS console and choose IAM.
In the dashboard click Users in the left navigation, and then click the Add user button on the right side.
Specify a unique user name such as db_user, select Programmatic access, and click the Next button.
In the next window click the Attach existing policies directly tab, type 'dy' in Filter policies, and select AmazonDynamoDBFullAccess. After this click the Next button.
The Add tags window will appear. This step is optional, so just go to the bottom of the page and click Next.
Next, review the settings and click Create user button at bottom.
The user will be created. You can now get the Access key ID and Secret access key from the page, and you can also download them as a CSV file for later use. These will be required in our PHP program.
Configure DynamoDB service
In this step we are going to configure the DynamoDB database service, where our data will be stored permanently. To access the DynamoDB service, go to the AWS Console, type DynamoDB in the search box, and select DynamoDB from the list.
The dashboard of DynamoDB will open.
In the dashboard click the Create table button. We'll create a DynamoDB table to record data from the sensor node.
- In Table name, enter the table name: wx_data.
- In Primary key, in Partition key, enter sample_time, and in the option list next to the field, choose Number.
- Check Add sort key. In the field that appears below Add sort key, enter device_id, and in the option list next to the field, choose Number.
- At the bottom of the page, choose Create.
We'll define the device_data column later, when we configure the DynamoDB rule action. The device_data field will be used to store the actual data from the sensor node.
Create AWS IoT rule for DynamoDB
Now we will create an AWS IoT rule to send data to the DynamoDB table. This rule will be used to store data from the sensor node in DynamoDB via AWS IoT Core.
- Open the Rules hub of the AWS IoT console.
- To start creating your new rule in Rules, choose Create.
- In Name, enter the rule's name, wx_data_ddb. In Using SQL version, select 2016-03-23.
- In the Rule query statement edit box, enter the statement: SELECT status FROM 'device/+/data' and then click Add action.
- In Select an action, choose Insert a message into a DynamoDB table, and then click Configure action.
Now the configure action window will open.
- In Table name, choose the name of the DynamoDB table you created in a previous step: wx_data.
- In Partition key value, enter ${timestamp()}.
- In Sort key value, enter ${cast(topic(2) AS DECIMAL)}.
- In Write message data to this column, enter device_data. This will create the device_data column in the DynamoDB table to store data from the sensor node.
- Leave Operation blank.
- In Choose or create a role to grant AWS IoT access to perform this action, choose Create Role.
- In Create a new role, enter wx_ddb_role, and choose Create role. At the bottom of Configure action, choose Add action.
- To create the rule, at the bottom of Create a rule, choose Create rule.
Your rule will be created.
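The two substitution templates in the rule deserve a note: ${timestamp()} is the message arrival time in epoch milliseconds, and topic(2) is the second segment of the publish topic, i.e. the device ID. A small Python sketch of how those values are derived (the helper names here are mine, for illustration, not part of AWS):

```python
import time

def topic(full_topic, index):
    """Mimic AWS IoT SQL's topic(n): return the nth topic segment (1-based)."""
    return full_topic.split("/")[index - 1]

def timestamp_ms():
    """Mimic AWS IoT SQL's timestamp(): epoch time in milliseconds."""
    return int(time.time() * 1000)

# For a message published on device/33/data, the rule's templates yield:
partition_key = timestamp_ms()               # ${timestamp()}
sort_key = int(topic("device/33/data", 2))   # ${cast(topic(2) AS DECIMAL)}
```

This is why the table's sort key can hold the device ID even though the payload itself only carries the status field.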
Test the AWS IoT rule and DynamoDB table
Now we will test whether the rule is working. To test the new rule, we'll use the MQTT test client to publish and subscribe to the MQTT messages used in this test. Go to AWS IoT Core and, in the left navigation, go to Test -> MQTT test client.
In the MQTT client, choose Subscribe to a topic. For Topic filter, enter the input topic filter, device/+/data, and choose Subscribe.
Now, publish a message to the input topic with a specific device ID, device/33/data. You can't publish to MQTT topics that contain wildcard characters.
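The reason you subscribe with device/+/data but publish to a concrete topic is that + is a single-level wildcard: it matches exactly one topic segment and is only legal in filters, never in published topics. A minimal Python sketch of MQTT-style filter matching (an illustration of the semantics, not a real MQTT client):

```python
def matches(topic_filter, topic):
    """Minimal MQTT topic-filter matcher supporting '+' (exactly one
    level) and '#' (all remaining levels)."""
    f_parts, t_parts = topic_filter.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True              # '#' matches everything below this level
        if i >= len(t_parts):
            return False             # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False             # literal segment mismatch
    return len(f_parts) == len(t_parts)
```

So device/+/data matches device/33/data and device/22/data alike, which is how one rule serves every device ID.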
In the MQTT client, choose Publish to a topic. For Topic name, enter the input topic name, device/33/data. For Message payload, enter the following sample data.
{
"status" : "cough"
}
When you click the Publish button, you will see the data published, and when you open the DynamoDB table you will see the same data stored there.
To view the published data, go to the DynamoDB dashboard, click Tables in the left navigation, and in the window on the right click on your table name. In the next window, click View items. Scroll down and you can see the data.
In the next step we will build the firmware.
Build the firmware
Open the Arduino IDE and create a new sketch named RoHA_Firmware.ino. Create a header file named secrets.h in the same folder, along with a subfolder named data. The secrets.h header file stores the device certificate and keys, and the data folder will contain an image that we will load on the sensor node when it boots up. Your directory structure will look like the following.
Following is the structure of the secrets.h file:
#include <pgmspace.h>
#define SECRET
#define THINGNAME "RoHA"
const char WIFI_SSID[] = "WIFI SSID";
const char WIFI_PASSWORD[] = "WIFI PASSWORD";
const char AWS_IOT_ENDPOINT[] = "YOUR AWS END POINT HERE";
// Amazon Root CA 1
static const char AWS_CERT_CA[] PROGMEM = R"EOF(
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
)EOF";
// Device Certificate
static const char AWS_CERT_CRT[] PROGMEM = R"KEY(
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
)KEY";
// Device Private Key
static const char AWS_CERT_PRIVATE[] PROGMEM = R"KEY(
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
)KEY";
In this file you need to paste the Amazon Root CA 1, Device Certificate, and Device Private Key which we downloaded in the Configure AWS IoT core step of this project documentation. You also need to specify your WiFi SSID, WiFi password, and AWS endpoint. It will look something like the following.
You can get the values for Amazon Root CA 1, Device Certificate, and Device Private Key from the files downloaded in the Configure AWS IoT core step. See the following figure.
You can find the AWS IoT endpoint in the AWS IoT Core console: scroll down the left navigation to find Settings. Click Settings; the following page will open. Copy the endpoint from there.
Now you can use the firmware code attached with this documentation in the RoHA_Firmware.ino file. Before uploading the firmware, we are first going to store the image on the AWS IoT EduKit. This image will be used as the startup image that appears on the sensor node screen when the firmware loads. To upload the image we need the Arduino ESP32 filesystem uploader plugin. Download the ESP32FS-1.0.zip file and extract it to the ../Arduino/tools folder. The directory structure should look like the following. If there is no tools folder under the Arduino folder, create it.
Restart the Arduino IDE and you should be able to see the option to upload data as shown below.
Now store the image in the data folder within the main firmware folder. The directory structure will be as follows. The resolution of the image should be 320x240.
Now first upload the data using the ESP32 data upload tool in the Arduino IDE, and then compile and upload the firmware. Make sure that you have included the correct model library and the other required libraries.
We will now look at a few important lines in the firmware code. In the code, the following variable is used to store data from the microphone.
static signed short sampleBuffer[2048];
The topics to which the sensor node publishes and subscribes are the following.
// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC "device/33/data"
#define AWS_IOT_SUBSCRIBE_TOPIC "device/33/data"
The connectAWS() function connects to AWS IoT Core. It uses the AWS IoT device credentials and endpoint from the secrets.h header file.
void connectAWS(){
WiFi.mode(WIFI_STA);
WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
Serial.println("Connecting to Wi-Fi");
while (WiFi.status() != WL_CONNECTED){
delay(500);
Serial.print(".");
}
// Configure WiFiClientSecure to use the AWS IoT device credentials
net.setCACert(AWS_CERT_CA);
net.setCertificate(AWS_CERT_CRT);
net.setPrivateKey(AWS_CERT_PRIVATE);
// Connect to the MQTT broker on the AWS endpoint we defined earlier
client.begin(AWS_IOT_ENDPOINT, 8883, net);
// Create a message handler
client.onMessage(messageHandler);
Serial.print("Connecting to AWS IOT");
while (!client.connect(THINGNAME)) {
Serial.print(".");
delay(100);
}
if(!client.connected()){
Serial.println("AWS IoT Timeout!");
return;
}
// Subscribe to a topic
client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
Serial.println("AWS IoT Connected!");
}
The sensor node publishes messages to the topic on AWS IoT Core using the publishMessage() function.
void publishMessage(){
StaticJsonDocument<200> doc;
doc["status"] = payload;
char jsonBuffer[512];
serializeJson(doc, jsonBuffer); // print to client
client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
}
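For desktop-side testing it can help to reproduce the exact payload shape the firmware sends; a small Python sketch (the helper name is mine, for illustration) that builds the same one-field JSON document:

```python
import json

def build_payload(status):
    """Build the same JSON document the firmware's publishMessage() sends,
    e.g. {"status": "cough"}."""
    return json.dumps({"status": status})

# The rule's SELECT status FROM 'device/+/data' picks this field out
# before the message reaches DynamoDB.
msg = build_payload("cough")
```

Anything published with this shape to device/33/data should land in the wx_data table exactly like the firmware's own messages.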
In the setup()
function the PIR sensor is configured on pin 36 on PORT B.
pinMode(36, INPUT);
The following code in setup() displays the startup image on the device LCD when the firmware loads.
M5.Lcd.drawPngFile(SPIFFS, "/RoHA_Poster.png", 0, 0);
The main loop() displays the firmware's main screen and the PIR sensor data, and waits for the user's input to record vocal data. When the user presses the left button on the AWS IoT Edu Kit, voice recording starts; after recording is done, inference is run and the output is displayed on screen. Afterwards the data is published to AWS IoT Core and the main screen is displayed again.
void loop() {
M5.update();
display_title(); //Display main screen
display_pir(); //Display person contact alert
//Enables touch on LCD
TouchPoint_t pos= M5.Touch.getPressPoint();
//Detects left button click position
if(pos.y > 240)
if(pos.x < 109)
{
//when left button is pressed
if(M5.Touch.ispressed() == true){
//Recording starts and then inference is run
record_and_inference();
delay(2000);
//send the inference data to AWS IoT Core
send_data();
delay(2000);
}
}
}
In the record_and_inference() function, the following code displays the inference result on the LCD of the device.
void record_and_inference(){
...
//Printing inferencing result on M5Stack LCD
M5.Lcd.setTextColor(BLACK);
M5.Lcd.setCursor(10, 126);
M5.Lcd.printf("%s: %.5f\n", result.classification[0].label, result.classification[0].value);
M5.Lcd.setTextColor(BLACK);
M5.Lcd.setCursor(10, 146);
M5.Lcd.printf("%s: %.5f\n", result.classification[1].label, result.classification[1].value);
//check whether this label's confidence exceeds the threshold
if(result.classification[ix].value > 0.50000){
//storing the label text in payload
strcpy(payload, result.classification[ix].label);
//check what is the status and set text and background color accordingly
if (strcmp(result.classification[ix].label, "cough")==0){
M5.Lcd.setTextColor(WHITE, RED);
}else{
M5.Lcd.setTextColor(WHITE, BLUE);
}
M5.Lcd.setCursor(10, 166);
M5.Lcd.printf("STATUS: %s", result.classification[ix].label);
}
...
}
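The selection logic above — take the class whose confidence crosses 0.5 and report its label — can be mirrored in a few lines of Python (a sketch of the same idea for reference, not the firmware itself):

```python
def pick_status(classification, threshold=0.5):
    """Mirror the firmware's label selection: take the class with the
    highest confidence, but only report it above the threshold."""
    label, value = max(classification.items(), key=lambda kv: kv[1])
    return label if value > threshold else None

# Example two-label output from the MFCC + NN classifier:
scores = {"cough": 0.93, "healthy breath": 0.07}
status = pick_status(scores)   # "cough"
```

With only two labels, one of them always exceeds 0.5 unless the scores are nearly tied, in which case nothing is reported.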
The following function is invoked within pdm_data_ready_inference_callback() to read data from the microphone.
i2s_read(Speak_I2S_NUMBER, (char *)(sampleBuffer), DATA_SIZE, &bytesRead, (100 / portTICK_RATE_MS));
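The I2S driver fills sampleBuffer with signed 16-bit PCM samples. Feature-extraction front ends typically want these scaled into [-1.0, 1.0); the Edge Impulse SDK handles that internally, but the usual scaling is simple enough to sketch in Python (the function name is mine):

```python
def pcm16_to_float(samples):
    """Scale signed 16-bit PCM samples into [-1.0, 1.0), the range
    most audio feature-extraction front ends expect."""
    return [s / 32768.0 for s in samples]

# e.g. full-scale negative, silence, and half-scale positive samples:
scaled = pcm16_to_float([-32768, 0, 16384])
```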
The full firmware source code is attached to this document. Read the instructions provided with the attached file before you upload it.
As you can see in the above figure, an alert message is displayed whenever proper distancing is not maintained. The display_pir() function displays the message.
/* This function detects human presence and displays PIR data on the LCD as well as the serial monitor */
void display_pir(){
if(digitalRead(36)==1){
M5.Lcd.setTextColor(WHITE,RED);
M5.Lcd.setCursor(10, 56);
/* Enable haptic feedback */
M5.Axp.SetLDOEnable(3,true);
delay(100);
/* Disable haptic feedback */
M5.Axp.SetLDOEnable(3,false);
//Display Alert on LCD
M5.Lcd.print("Alert! Keep Distance.");
//Debugging message on serial terminal
Serial.println("PIR Status: Sensing");
Serial.println(" value: 1");
}
else{
M5.Lcd.setTextColor(BLACK,WHITE);
M5.Lcd.setCursor(10, 56);
M5.Lcd.print("You are doing good. ");
//Debugging message on serial terminal
Serial.println("PIR Status: Not Sensed");
Serial.println(" value: 0");
}
delay(500);
}
Whenever the alert is displayed, the sensor node also gives haptic feedback. The following lines in the display_pir() function enable it.
/* Enable haptic feedback */
M5.Axp.SetLDOEnable(3,true);
delay(100);
/* Disable haptic feedback */
M5.Axp.SetLDOEnable(3,false);
The following images show the result of inferencing.
The sensor node activity can be tracked on AWS IoT Core using the Monitor option in the left navigation.
The uploaded data can also be viewed in DynamoDB.
Here is a video that shows the booting process of the sensor node.
Install the Web Server
Now the next step is to install a web server. The web server for this project can be used in two ways:
- Webserver on local computer
- Webserver on hosting space on Internet
If you buy hosting space, you do not need to install a web server; you just need to upload the code provided with this project to the web server. If you are planning to install a web server on your local computer, I prefer the XAMPP web server.
Download the XAMPP server to your local computer and install it. The steps are very simple; you can follow this guide.
Get AWS SDK for PHP & Build Web App
Navigate to the htdocs folder in the XAMPP web server's root directory and copy the entire RoHA_WebApp folder there. This is your web application. It contains the files and directories shown in the following figure.
On my system I have installed the AWS SDK for PHP in the RoHA_WebApp folder. Since the SDK is already included in the web application, you do not need to reinstall it. However, you can find the installation steps here for reference.
The directory structure of the RoHA_WebApp folder is as follows.
The web application has two main files:
- index.html - displays the user interface
- getData.php - extracts data from DynamoDB
In index.html, the following JavaScript code gets data from the getData.php file.
<script type="text/javascript">
function doRefresh(){
//Set DIV text with the cough count
$("#cough").load("getData.php?id=cough");
//Set DIV text with the healthy breath count
$("#healthy").load("getData.php?id=healthy");
//Set DIV text with recent respiratory rate status
$("#latest").load("getData.php?id=latest");
//Set button text with recent respiratory rate status date
$("#latestdate").load("getData.php?id=latestdate");
}
//Calls doRefresh function every second to update data on dashboard
$(function() {
doRefresh();
setInterval(doRefresh, 1000);
});
</script>
The getData.php file uses the AWS SDK for PHP, included using the following statements.
require 'vendor/autoload.php';
...
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
Then the credentials are specified using the following code.
$credentials = new Aws\Credentials\Credentials('YOUR_ACCESS_KEY_ID', 'YOUR_SECRET_KEY');
$sdk = new Aws\Sdk([
'region' => 'us-east-1',
'version' => 'latest',
'credentials' => $credentials
]);
In the above code, replace YOUR_ACCESS_KEY_ID with your access key ID and YOUR_SECRET_KEY with your secret key from the Configure IAM step of this documentation. I used the scan operation to get data from DynamoDB.
$response = $dynamodb->scan(array(
'TableName' => 'wx_data'
));
The following code fetches data from the array returned by scan and saves it in the defined variables.
$latest = "";
$coughCount = 0;
$healthyCount = 0;
//set the latest date from the first record fetched
$latestdate = $response['Items'][0]['sample_time']['N'];
//iterate through all data
foreach ($response['Items'] as $item) {
//check if latest date is < current date fetched
//this step will finally get the latest date and status from database
if($latestdate < $item['sample_time']['N']){
$latestdate = $item['sample_time']['N'];
$latest = $item['device_data']['M']['status']['S'];
}
//calculating the cough and healthy breath count
if ($item['device_data']['M']['status']['S'] == "cough"){
$coughCount++;
} else if ($item['device_data']['M']['status']['S'] == "healthy breath"){
$healthyCount++;
}
}
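The same aggregation can be sketched in Python against the scan result's item shape (DynamoDB's attribute-value maps: N for numbers, S for strings, M for maps). This is an illustrative mirror of the PHP loop, not part of the project code, and it assumes at least one item was returned:

```python
def summarize(items):
    """Aggregate DynamoDB scan items the way getData.php does:
    count cough / healthy-breath records and track the latest status."""
    latest_time = int(items[0]["sample_time"]["N"])
    latest = items[0]["device_data"]["M"]["status"]["S"]
    cough = healthy = 0
    for item in items:
        t = int(item["sample_time"]["N"])
        status = item["device_data"]["M"]["status"]["S"]
        if t >= latest_time:            # keep the most recent status
            latest_time, latest = t, status
        if status == "cough":
            cough += 1
        elif status == "healthy breath":
            healthy += 1
    return latest, latest_time, cough, healthy

# A sample item in DynamoDB's attribute-value format:
sample = [{"sample_time": {"N": "1631297772331"},
           "device_data": {"M": {"status": {"S": "cough"}}}}]
result = summarize(sample)
```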
The DynamoDB scan operation returns the result as an array, as shown below. Therefore, in order to display meaningful information to the user, we must extract it from this data. For example, the following line extracts the first date-time value from the result set.
$latestdate = $response['Items'][0]['sample_time']['N'];
The first date value is extracted as shown in the image below and stored in the $latestdate variable. Similarly, the other values can be extracted as shown in the following image.
In the getData.php file, the following line of code
echo "<script>$('#latestdate').html(new Date(".$latestdate.").toString());</script>";
converts the long date value 1631297772331 to the user-readable date-time format: Fri Sep 10 2021 23:46:12 GMT+0530 (India Standard Time).
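The same conversion JavaScript's Date performs can be checked in Python. Note the divide-by-1000: ${timestamp()} stores epoch milliseconds, while datetime expects seconds. Shown here in UTC rather than the browser's local zone:

```python
from datetime import datetime, timezone

# DynamoDB stores ${timestamp()} as epoch milliseconds; divide by 1000
# before handing it to datetime, which expects seconds.
ms = 1631297772331
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
readable = dt.strftime("%a %b %d %Y %H:%M:%S UTC")
# e.g. "Fri Sep 10 2021 18:16:12 UTC" (23:46:12 IST minus the +05:30 offset)
```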
Run the Web Application
Once all the settings are done and the web application is uploaded, you can connect the sensor node and then open the web app in the browser using the following URL:
http://localhost/RoHA_WebApp/
or
http://192.168.157.130/RoHA_WebApp/
Remember that 192.168.157.130 is the IP address of my system, which you need to change to your own. The IP address is needed when you access the web app from a machine other than the one where it is installed, i.e. a remote system on the network. On Windows you can use the ipconfig command, and on Unix/Linux the ifconfig command, to find the IP address of your system.
The web application shows the following output.
The web application displays the respiratory health status, the cough count, and the healthy breath count. It shows the latest inference result, i.e. the respiratory health status, and at the top right corner it displays the date and time of that result (i.e. the date and time of the most recent respiratory analysis) as stored in DynamoDB.
Video Demo
The following narrated videos show the full working of the project.
Conclusion & Future Scope
The RoHA project shows how we can build a system to analyze the respiratory health of a person using the AWS IoT Edu Kit and AWS services (IoT Core + DynamoDB). The analysis of respiratory health is very important in fighting diseases like COVID-19. Using such a system, early signs can be identified and necessary action taken. Building such a system requires a good data-set, and researchers around the world are working on it. At present, this project only gives an example of how to build such a system. The tinyML model in the project works on two labels and a small data-set; for an industrial-grade system, a sufficiently large data-set is required, on which the tinyML model would have to be rebuilt. That is the only update required in this project. I am also planning to include other services provided by AWS that could enrich the functionality of the system.
Thank you for taking time to read this project.