Ralph Yamamoto
Created June 29, 2024

Face Recognition Controlled Entry Lock

A face-recognition-controlled smart lock that helps mobility-impaired individuals open a locked entry door.


Things used in this project

Hardware components

SwitchBot Lock Pro + Keypad Touch
×1
DFRobot UNIHIKER - IoT Python Programming Single Board Computer with Touchscreen
×1
M5Stack CoreS3 - ESP32S3 - IoT Development Kit
×1
Blues Notecard (Cellular)
×1
Blues Notecarrier A
×1
Seeed Studio XIAO ESP32S3 Sense
×1
M5Stack Mini RFID Unit RC522 Module Sensor
×1
M5Stack Finger Print Unit FPC1020A
×1
Adafruit Ultra Tiny USB Camera with GC0307 Sensor
×1
SparkFun Qwiic micro:bit Breakout (with Headers)
×1
Seeed Studio Grove Shield for Seeeduino XIAO - with embedded battery management chip
×1

Software apps and online services

Arduino IDE
Microsoft VS Code

Hand tools and fabrication machines

Anycubic Kobra 2

Story


Schematics

Board Interconnect

Wiring between the UNIHIKER, XIAO, and Notecard

Code

faceRegistration.py

Python
Collects and labels 50 face images per ID
'''
Run the program, enter an ID in the terminal, and press Enter. The screen will display the camera image. Adjust your position until a green box appears, indicating that photos are being taken. When "Done" is displayed, the process is complete.
The captured images are saved in the "/root/image/project14/new1" folder. This code captures 50 images, but you can modify the code parameters to capture more.
'''
 
import cv2  # Import the OpenCV library
import os  # Import the os library
 
img_src = '/root/image/project14'  # Define the image path (Specify to a fixed location in Mind+)
os.system('mkdir -p ' + img_src + '/new1/')  # Create a folder named "new1" in the specified path (the folder must exist before images can be stored)
# img_src = os.getcwd()  # Specify the current working directory if running directly
 
cap = cv2.VideoCapture(0)  # Open and initialize the camera
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # Set the buffer size to 1 frame to reduce latency
cv2.namedWindow('frame', cv2.WND_PROP_FULLSCREEN)  # Create a window named 'frame' with the default property of being able to go fullscreen
cv2.setWindowProperty('frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)  # Set the 'frame' window to fullscreen
 
font = cv2.FONT_HERSHEY_SIMPLEX  # Set the font type to a normal-sized sans-serif font
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')  # Load the face detection classifier
sampleNum = 0  # Initialize the sample number to 0
ID = input('enter your id: ')  # Enter the ID number for the face image data
 
while True:
    ret, img = cap.read()  # Read the image frame by frame
    # img = cv2.flip(img, 1)  # Mirror the image (horizontally flip the img image)
    #img = cv2.flip(img, 0)  # Mirror the image (vertically flip the img image)
    if ret:  # If an image is successfully read
        h, w, c = img.shape  # Record the shape of the image, which includes the height, width, and channels
        w1 = h * 240 // 320
        x1 = (w - w1) // 2
        img = img[:, x1:x1 + w1]  # Crop the image
        img = cv2.resize(img, (240, 320))  # Resize the image to match the UNIHIKER display
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the image to grayscale
        faces = detector.detectMultiScale(gray, 1.3, 5)  # Detect and obtain face recognition data
        for (x, y, w, h) in faces:  # If a face is detected
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # Draw a rectangle box around the face
            sampleNum = sampleNum + 1  # Increment the sample number
            if sampleNum <= 50:  # If the number of samples is less than or equal to 50
                cv2.putText(img, 'shooting', (10, 50), font, 0.6, (0, 255, 0), 2)  # Display the text "shooting" on the image, indicating that the face is being captured
                # Save the cropped face image with a name format of sampleNum.UserID.jpg
                cv2.imwrite(img_src + '/new1/' + str(sampleNum) + '.' + str(ID) + ".jpg", gray[y:y + h, x:x + w])
            else:  # If the number of images exceeds 50
                cv2.putText(img, 'Done, Please quit', (10, 50), font, 0.6, (0, 255, 0), 2)  # Display the text "Done, Please quit" to indicate completion
                break
        cv2.imshow('frame', img)  # Display the image on the 'frame' window
 
        key = cv2.waitKey(2)  # Wait 2 ms between frames; the delay cannot be 0, otherwise the display freezes on a static frame
#        if key & 0xFF == ord('b'):  # Press 'b' to exit
        if key & 0xFF == ord('q'):  # Press 'q' to exit
            break
 
cap.release()  # Release the camera
cv2.destroyAllWindows()  # Close all windows
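
To confirm the capture step produced what training expects, you can count the saved images per ID before moving on. A minimal sketch, assuming the same folder layout as faceRegistration.py (files named <sampleNum>.<ID>.jpg under /root/image/project14/new1):

# Count captured images per user ID (a sketch; assumes the folder layout
# used by faceRegistration.py: <sampleNum>.<ID>.jpg)
import os
from collections import Counter

img_dir = '/root/image/project14/new1'
counts = Counter()
for name in os.listdir(img_dir):
    parts = name.split('.')
    if parts[-1] == 'jpg' and len(parts) >= 3:
        counts[parts[1]] += 1  # parts[1] is the user ID

for user_id, n in sorted(counts.items()):
    print(f'ID {user_id}: {n} images')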

faceModelTraining.py

Python
Builds face recognition model using labeled images
# Train the Captured Face Images and Generate the model.yml (Located at the same level as the 'new' folder)
import cv2  # Import the OpenCV library
import os  # Import the os library
import numpy as np  # Import the numpy library
from PIL import Image  # Import the Image module from the PIL library
 
img_src = '/root/image/project14'  # Define the save path (Specify to a fixed location in Mind+)
 
# Initialize the face detector and recognizer
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')  # Load the face detection classifier
recognizer = cv2.face.LBPHFaceRecognizer_create()  # Create an instance of the LBPH recognizer model (empty)
 
'''
Traverse the image path, import images and IDs, and add them to the list
'''
def get_images_and_labels(path):
    # Concatenate the image path to the specific image name, such as '/root/image/project14/new1/50.1.jpg', and store it in the 'image_paths' list
    image_paths = [os.path.join(path, f) for f in os.listdir(path)]
    # Create an empty list for face samples
    face_samples = []
    # Create an empty list for IDs
    ids = []
    for image_path in image_paths:  # Traverse the image path
        # Print the name of each image (image name with path)
        print(image_path)
        # Convert the color image to grayscale
        image = Image.open(image_path).convert('L')
        # Convert the grayscale image format to a Numpy array
        image_np = np.array(image, 'uint8')
        '''To get the ID, we split the image path and retrieve the relevant information'''
        # Split the filename by "." and skip any file whose extension is not "jpg"
        if os.path.split(image_path)[-1].split(".")[-1] != 'jpg':
            continue
        # Extract the ID number from the complete path name of the image, which is the ID number we set when capturing the image
        image_id = int(os.path.split(image_path)[-1].split(".")[1])
        # Detect the face in the array format and store the results in 'faces'
        faces = detector.detectMultiScale(image_np)  # Detect faces
        # Crop out the face portion from the array format face image and store them in the face samples list, and store the ID number of the image in the ID samples list
        for (x, y, w, h) in faces:
            face_samples.append(image_np[y:y + h, x:x + w])
            ids.append(image_id)
    return face_samples, ids  # Return the face samples list and ID samples list
 
# Train the LBPH recognizer model with the face samples and ID samples, and output the corresponding face model (.yml format file)
faces, Ids = get_images_and_labels(img_src+'/new1/')  # Pass in the path and the folder name of the image to get the face samples list and ID samples list
recognizer.train(faces, np.array(Ids))  # Pass the face samples list and ID samples list to the empty LBPH recognizer model to get a complete face model
recognizer.save(img_src+'/model.yml')  # Save the generated model
print("generate model done")  # Print "Model has been generated"

faceRecognition.py

Python
Display ID labels on detected faces
'''
Run this program to predict faces using the trained face model. When a trained face is detected, display the ID number and rotate the servo to 170° to open the door. When an unfamiliar face is detected, display "unknown". The more images collected, the higher the recognition accuracy.
'''
import cv2  # Import the OpenCV library
from pinpong.board import Board, Pin, Servo  # Import the Pinpong library
import time  # Import the time library
 
Board("UNIHIKER").begin()  # Initialize and select the board type, if not specified, it will be automatically recognized
 
s1 = Servo(Pin(Pin.P23))  # Initialize the servo pin by passing it to the Servo class
s1.angle(10)  # Control the servo to rotate to the initial position of 10 degrees (closed door)
time.sleep(1)
 
img_src = '/root/image/project14'  # Define the save path (Specify to a fixed location in Mind+)
 
cap = cv2.VideoCapture(0)  # Open the camera with index 0 and initialize
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # Set the buffer to 1 frame to reduce latency
cv2.namedWindow('frame', cv2.WND_PROP_FULLSCREEN)  # Create a window with the name 'frame', and set its property to fullscreen
cv2.setWindowProperty('frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)  # Set the 'frame' window to fullscreen
font = cv2.FONT_HERSHEY_SIMPLEX  # Set the font type to normal-sized sans-serif
 
'''Initialize the face detector and recognizer, and use the previously trained .yml file to recognize faces'''
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')  # Load the face detection classifier
recognizer = cv2.face.LBPHFaceRecognizer_create()  # Create an instance of the LBPH recognizer model
recognizer.read(img_src + '/model.yml')  # Read the trained face model from the specified path
 
count = 0  # Define a count flag
 
while True:
    ret, img = cap.read()  # Read the image frame by frame
    # img = cv2.flip(img, 1)  # Mirror the image (horizontally flip the img image)
    if ret:  # If an image is read successfully
        h, w, c = img.shape  # Record the shape of the image, including height, width, and channels
        w1 = h * 240 // 320
        x1 = (w - w1) // 2
        img = img[:, x1:x1 + w1]  # Crop the image
        img = cv2.resize(img, (240, 320))  # Resize the image to match the UNIHIKER display
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the image to grayscale
        faces = detector.detectMultiScale(gray, 1.2, 5)  # Detect and obtain face recognition data
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x - 50, y - 50), (x + w + 50, y + h + 50), (0, 255, 0), 2)  # Draw a rectangular box around the face
            img_id, confidence = recognizer.predict(gray[y:y + h, x:x + w])  # Predict on the grayscale image within the specified rectangular region (used for face presentation)
            if confidence < 90:  # If confidence < 90, a trained face was detected (lower LBPH confidence means a closer match)
                if img_id == 1:
                    img_id = "Ralph"
                cv2.putText(img, "Welcome Home", (x, y), font, 0.6, (0, 255, 0), 2)  # Add the text "Welcome Home" to the specified position on the image
                cv2.putText(img, str(img_id), (x, y + h), font, 0.6, (0, 255, 0), 2)  # Add the corresponding face ID label to the specified position on the image
                if count == 2:  # On the third frame
                    #s1.angle(170)  # Control the servo to rotate to the position of 170 degrees (open the door)
                    print("The door is open")
                    time.sleep(5)  # Wait for five seconds
                    #s1.angle(10)  # Control the servo to rotate to the position of 10 degrees (close the door)
                    print("The door is closed")
                    count = 0  # Reset the count
                else:
                    #s1.angle(10)
                    count = count + 1  # Increment the frame count
            #else:  # If an unknown face is detected
                #img_id = "unknown"
                #cv2.putText(img, str(img_id), (x, y + h), font, 0.6, (0, 255, 0), 2)  # Add the text "unknown" to the specified position on the image
        cv2.imshow('frame', img)  # Display the image
        key = cv2.waitKey(1)  # Wait 1 ms between frames; the delay cannot be 0, otherwise the display freezes on a static frame
        if key & 0xFF == ord('b'):  # Press 'b' to exit
            break
 
cap.release()  # Release the camera
cv2.destroyAllWindows()  # Close all windows

faceRecognitionUnlock.py

Python
Locks or unlocks depending on whether detected face is recognized
'''
Run this program to predict faces using the trained face model. When a trained face is detected, display the ID number and unlock the door through the SwitchBot API. When an unfamiliar face is detected, display "unknown" and lock the door. The more images collected, the higher the recognition accuracy.
'''
import cv2  # Import the OpenCV library
from pinpong.board import Board  # Import the Pinpong library (only Board is needed in this script)
import time  # Import the time library
import json
import hashlib
import hmac
import base64
import uuid
import requests

# Replace with your actual device ID
DEVICE_ID = 'FA25793937EC'
# Base URL for the SwitchBot API
BASE_URL = 'https://api.switch-bot.com/v1.1/devices'
# Declare empty header dictionary
apiHeader = {}
# open token
token = '84df5ca672695c6f7549769dece8861f26c3dc316e207db940137241fecdf8851d7520486cdbc53080e8c1ce7a41eafd' # copy and paste from the SwitchBot app V6.14 or later
# secret key
secret = '609f728a30cc777d698f9e43f1501b9f' # copy and paste from the SwitchBot app V6.14 or later
nonce = uuid.uuid4()
t = int(round(time.time() * 1000))
string_to_sign = '{}{}{}'.format(token, t, nonce)

string_to_sign = bytes(string_to_sign, 'utf-8')
secret = bytes(secret, 'utf-8')

sign = base64.b64encode(hmac.new(secret, msg=string_to_sign, digestmod=hashlib.sha256).digest())
print ('Authorization: {}'.format(token))
print ('t: {}'.format(t))
print ('sign: {}'.format(str(sign, 'utf-8')))
print ('nonce: {}'.format(nonce))

#Build api header JSON
apiHeader['Authorization']=token
apiHeader['Content-Type']='application/json'
apiHeader['charset']='utf8'
apiHeader['t']=str(t)
apiHeader['sign']=str(sign, 'utf-8')
apiHeader['nonce']=str(nonce)

def lock_device():
    url = f'{BASE_URL}/{DEVICE_ID}/commands'
    payload = {
        'command': 'lock',
        'parameter': 'default',
        'commandType': 'command'
    }
    response = requests.post(url, headers=apiHeader, json=payload)
    return response.json()

def unlock_device():
    url = f'{BASE_URL}/{DEVICE_ID}/commands'
    payload = {
        'command': 'unlock',
        'parameter': 'default',
        'commandType': 'command'
    }
    response = requests.post(url, headers=apiHeader, json=payload)
    return response.json()

def list_devices():
    response = requests.get(BASE_URL, headers=apiHeader)
    return response.json()
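
# To find your DEVICE_ID, you can call list_devices() once and inspect the
# response; per the SwitchBot v1.1 API, devices are listed under
# body.deviceList (a sketch):
#   for d in list_devices().get('body', {}).get('deviceList', []):
#       print(d.get('deviceId'), d.get('deviceName'), d.get('deviceType'))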
 
Board("UNIHIKER").begin()  # Initialize and select the board type, if not specified, it will be automatically recognized
 
img_src = '/root/image/project14'  # Define the save path (Specify to a fixed location in Mind+)
 
cap = cv2.VideoCapture(0)  # Open the camera with index 0 and initialize
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # Set the buffer to 1 frame to reduce latency
cv2.namedWindow('frame', cv2.WND_PROP_FULLSCREEN)  # Create a window with the name 'frame', and set its property to fullscreen
cv2.setWindowProperty('frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)  # Set the 'frame' window to fullscreen
font = cv2.FONT_HERSHEY_SIMPLEX  # Set the font type to normal-sized sans-serif
 
'''Initialize the face detector and recognizer, and use the previously trained .yml file to recognize faces'''
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')  # Load the face detection classifier
recognizer = cv2.face.LBPHFaceRecognizer_create()  # Create an instance of the LBPH recognizer model
recognizer.read(img_src + '/model.yml')  # Read the trained face model from the specified path
 
count = 0  # Define a count flag
 
while True:
    ret, img = cap.read()  # Read the image frame by frame
    # img = cv2.flip(img, 1)  # Mirror the image (horizontally flip the img image)
    if ret:  # If an image is read successfully
        h, w, c = img.shape  # Record the shape of the image, including height, width, and channels
        w1 = h * 240 // 320
        x1 = (w - w1) // 2
        img = img[:, x1:x1 + w1]  # Crop the image
        img = cv2.resize(img, (240, 320))  # Resize the image to match the UNIHIKER display
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the image to grayscale
        faces = detector.detectMultiScale(gray, 1.2, 5)  # Detect and obtain face recognition data
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x - 50, y - 50), (x + w + 50, y + h + 50), (0, 255, 0), 2)  # Draw a rectangular box around the face
            img_id, confidence = recognizer.predict(gray[y:y + h, x:x + w])  # Predict on the grayscale image within the specified rectangular region (used for face presentation)
            if confidence < 90:  # If confidence < 90, a trained face was detected (lower LBPH confidence means a closer match)
                if img_id == 1:
                    img_id = "Ralph"
                cv2.putText(img, "Welcome Home", (x, y), font, 0.6, (0, 255, 0), 2)  # Add the text "Welcome Home" to the specified position on the image
                cv2.putText(img, str(img_id), (x, y + h), font, 0.6, (0, 255, 0), 2)  # Add the corresponding face ID label to the specified position on the image
                if count == 5:  # On the sixth recognized frame
                    cv2.imshow('frame', img)  # Display the image
                    #time.sleep(5)  # Wait for five seconds
                    unlock_response = unlock_device()
                    print('Unlock Response:', unlock_response)
                    print("The door is unlocked")
                    time.sleep(5)  # Wait for five seconds
                    count = 0  # Reset the count
                else:
                    count = count + 1  # Increment the frame count
            else:  # If an unknown face is detected
                img_id = "unknown"
                cv2.putText(img, str(img_id), (x, y + h), font, 0.6, (0, 255, 0), 2)  # Add the text "unknown" to the specified position on the image
                cv2.imshow('frame', img)  # Display the image
                #time.sleep(5)  # Wait for five seconds
                lock_response = lock_device()
                print('Lock Response:', lock_response)
                print("The door is locked")
                time.sleep(5)  # Wait for five seconds
                #cv2.putText(img, str(img_id), (x, y + h), font, 0.6, (0, 255, 0), 2)  # Add the text "unknown" to the specified position on the image
        cv2.imshow('frame', img)  # Display the image
        key = cv2.waitKey(1) & 0xFF  # Wait 1 ms between frames; the delay cannot be 0, otherwise the display freezes on a static frame
        if key == ord('q'):  # Press 'q' to exit
            break
        #time.sleep(5)  # Wait for five seconds
 
cap.release()  # Release the camera
cv2.destroyAllWindows()  # Close all windows
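
The XIAO sketch below expects a rising edge on its D0 pin whenever the door is unlocked. The unlock script above could raise that edge over the board interconnect by pulsing a UNIHIKER GPIO; a minimal sketch using the pinpong library (P24 is an assumed pin choice, use whichever pin you actually wired to the XIAO):

# Pulse a UNIHIKER GPIO to trigger the XIAO's interrupt after an unlock
# (a sketch; P24 is an assumption, use the pin you actually wired)
from pinpong.board import Board, Pin
import time

Board("UNIHIKER").begin()
notify_pin = Pin(Pin.P24, Pin.OUT)
notify_pin.write_digital(0)  # Idle low so the next pulse is a clean rising edge

def notify_unlock():
    notify_pin.write_digital(1)  # Rising edge fires the XIAO ISR
    time.sleep(0.1)
    notify_pin.write_digital(0)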

Xiao_ESP32S3_NotecardInterrupt.ino

C/C++
Adds and syncs note to Notecard when interrupt is received from Unihiker
// Copyright 2022 Blues Inc.  All rights reserved.
//
// Use of this source code is governed by licenses granted by the
// copyright holder including that found in the LICENSE file.
//
// This example does the same function as the "basic" example, but demonstrates
// how easy it is to use the Notecard libraries to construct JSON commands and
// also to extract responses.
//
// Using the Notecard library, you can also easily set up your Arduino
// environment to "watch" JSON request and response traffic going to/from the
// Notecard on your Arduino debug port.
//
// Note that by using the Notecard library, it is also quite easy to connect the
// Notecard to a Microcontroller's I2C ports (SDA and SCL) rather than using
// Serial, in case there is no unused serial port available to use for the
// Notecard.

// Include the Arduino library for the Notecard
#include <Notecard.h>
const int interruptPin = D0;  // GPIO pin for the interrupt
volatile bool interruptFlag = false;  // Flag to indicate the interrupt

// If the Notecard is connected to a serial port, define it here.  For example,
// if you are using the Adafruit Feather NRF52840 Express, the RX/TX pins (and
// thus the Notecard) are on Serial1. However, if you are using an M5Stack Basic
// Core IoT Development Kit, you would connect the R2 pin to the Notecard's TX
// pin, and the M5Stack's T2 pin to the Notecard's RX pin, and then would use
// Serial2.
//
// Also, you may define a debug output port where you can watch transaction as
// they are sent to and from the Notecard.  When using the Arduino IDE this is
// typically "Serial", but you can use any available port.
//
// Note that both of these definitions are optional; just prefix either line
// with `//` to remove it.
//
// - Remove `txRxPinsSerial` if you wired your Notecard using I2C SDA/SCL pins,
//   instead of serial RX/TX.
// - Remove `usbSerial` if you don't want the Notecard library to output debug
//   information.

// #define txRxPinsSerial Serial1
#define usbSerial Serial

// This is the unique Product Identifier for your device.  This Product ID tells
// the Notecard what type of device has embedded the Notecard, and by extension
// which vendor or customer is in charge of "managing" it.  In order to set this
// value, you must first register with notehub.io and "claim" a unique product
// ID for your device.  It could be something as simple as as your email address
// in reverse, such as "com.gmail.smith.lisa:test-device" or
// "com.outlook.gates.bill.demo"

// This is the unique Product Identifier for your device
#ifndef PRODUCT_UID
#define PRODUCT_UID "com.hotmail.ralphjy:build2gether_2.0" // "com.my-company.my-name:my-project"
#endif

#define myProductID PRODUCT_UID
Notecard notecard;

void IRAM_ATTR handleInterrupt() {
  // Keep the ISR minimal: just set the flag. The Notecard transaction is
  // performed in loop(), because serial/I2C I/O inside an interrupt handler
  // can crash the ESP32.
  interruptFlag = true;
}

// One-time Arduino initialization
void setup()
{
  pinMode(interruptPin, INPUT);  // Set the pin as input (driven high by the UNIHIKER on unlock)
  attachInterrupt(digitalPinToInterrupt(interruptPin), handleInterrupt, RISING);  // Attach interrupt on rising edge

    // Set up for debug output (if available).
#ifdef usbSerial
    // If you open Arduino's serial terminal window, you'll be able to watch
    // JSON objects being transferred to and from the Notecard for each request.
    usbSerial.begin(115200);
    const size_t usb_timeout_ms = 3000;
    for (const size_t start_ms = millis(); !usbSerial && (millis() - start_ms) < usb_timeout_ms;)
        ;

    // For low-memory platforms, don't turn on internal Notecard logs.
#ifndef NOTE_C_LOW_MEM
    notecard.setDebugOutputStream(usbSerial);
#else
#pragma message("INFO: Notecard debug logs disabled. (non-fatal)")
#endif // !NOTE_C_LOW_MEM
#endif // usbSerial

    // Initialize the physical I/O channel to the Notecard
#ifdef txRxPinsSerial
    notecard.begin(txRxPinsSerial, 9600);
#else
    notecard.begin();
#endif

    // "newRequest()" uses the bundled "J" json package to allocate a "req",
    // which is a JSON object for the request to which we will then add Request
    // arguments.  The function allocates a "req" request structure using
    // malloc() and initializes its "req" field with the type of request.
    J *req = notecard.newRequest("hub.set");

    // This command (required) causes the data to be delivered to the Project
    // on notehub.io that has claimed this Product ID (see above).
    if (myProductID[0])
    {
        JAddStringToObject(req, "product", myProductID);
    }

    // This command determines how often the Notecard connects to the service.
    // If "continuous", the Notecard immediately establishes a session with the
    // service at notehub.io, and keeps it active continuously. Due to the power
    // requirements of a continuous connection, a battery powered device would
    // instead only sample its sensors occasionally, and would only upload to
    // the service on a "periodic" basis.
    JAddStringToObject(req, "mode", "continuous");

    // Issue the request, telling the Notecard how and how often to access the
    // service.
    // This results in a JSON message to Notecard formatted like:
    //     {
    //       "req"     : "service.set",
    //       "product" : myProductID,
    //       "mode"    : "continuous"
    //     }
    // Note that `notecard.sendRequestWithRetry()` always frees the request data
    // structure, and it returns "true" if success or "false" if there is any
    // failure. It is important to use `sendRequestWithRetry()` on the first
    // message from the MCU to the Notecard, because there will always be a
    // hardware race condition on cold boot and the Notecard must be ready to
    // receive and process the message.
    notecard.sendRequestWithRetry(req, 5); // 5 seconds
}

void loop()
{
    if (interruptFlag)
    {
        interruptFlag = false;

        // Enqueue the event to the Notecard for transmission to Notehub,
        // adding the "sync" flag to upload the data immediately. If you are
        // looking at this on notehub.io you will see the data appearing 'live'.
        J *req = notecard.newRequest("note.add");
        if (req != NULL)
        {
            JAddBoolToObject(req, "sync", true);
            J *body = JAddObjectToObject(req, "body");
            if (body != NULL)
            {
                JAddStringToObject(body, "lockStatus", "Unlocked");
                JAddStringToObject(body, "faceID", "Ralph");
            }
            notecard.sendRequest(req);
        }
        Serial.println("Interrupt detected!");
    }

    // Brief delay between checks of the interrupt flag
    delay(100);
}
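
Once this sketch is running and connected to Notehub, each unlock event should appear in the project's event list as a note whose body carries the lockStatus and faceID fields added above (for example, {"lockStatus": "Unlocked", "faceID": "Ralph"}).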

Credits

Ralph Yamamoto

11 projects • 21 followers
