Colonel Panic

Dynamic Vision Assist

The Dynamic Vision Assist is a wearable designed to allow a user with low to no vision to navigate trails, parks, and hiking paths.

Advanced · Full instructions provided · Over 6 days · 336 views

Things used in this project

Hardware components

Raspberry Pi 5
Runs Debian with DVA scripts.
×1
OpenCV AI Kit: OAK-D
For Object Recognition and Optical Character Recognition
×1
Blues Notecard (Cellular)
To monitor the GPS coordinates of the wearer and push them to the cloud.
×1
M5Stack M5StickC ESP32-PICO Mini IoT Development Board
To control vibration motors for haptic feedback and to manage data from ToF sensors
×3
M5Stack Core2 ESP32 IoT Development Kit
To receive tactile input from the M5Stack mechanical key buttons and send requests to the respective DVA API endpoints, as determined by one of the three mechanical keys.
×1
M5Stack Mechanical Key Button Unit
To communicate with the M5Stack Core2 in order to switch between object recognition (YOLOv4) and OCR (Tesseract), or to mute the NNs.
×5
Blues Notecarrier A
To support the Blues Notecard Cellular
×1
M5Stack Unbuckled Grove Cable 1m/2m/50cm/20cm/10cm
To connect the ToF sensors to the M5Stack StickC modules, as well as to connect the mechanical keys to the M5Stack Core2.
×6
M5Stack HAT-Vibrator
To vibrate with varying intensity depending on the distances reported by the ToF sensors.
×3
GL Inet 300M Smart Router
To connect the Raspberry Pi 5 to the M5Stack Core2 tactile NN selector.
×1
Portable power bank
×1
Portable power bank
To power the Raspberry Pi 5 and the OAK-D camera
×1
Velcro Cable Ties
Cable management
×4
Crossbody Bag
Waterproof bag to house Dynamic Vision Assist components.
×1
Magnetic Ball Joints
To adjust ToF sensors that are magnetically attached to the Dynamic Vision Assist or DVA housing.
×3
Magnet Mount Kit
×1
Qaekie Bone Conducting Headphones
×1
4-port USB hub
×1
M5Stack magnetic USB-C charger
×5
Solar Panel, 2.5 W
×1

Software apps and online services

Raspberry Pi Raspbian
Blues Notehub.io
IFTTT Maker service

Hand tools and fabrication machines

3D Printer (generic)
Ender 3 Pro.

Story


Schematics

StickC Wiring

Shows the StickC and ToF sensor wiring.

Core2 Grove Wiring

M5Stack Core2 Grove wires.

StickC Wiring (additional views)

Further demos of the StickC wiring.

Project without housing

Notehub

Core2

Magnetic charger

Code

DVA_API startup bash script.

Bash
Bash script that runs as a cron job to start the camera and the DVA API on each Raspberry Pi boot. Includes echo statements for debugging.
#!/bin/bash
# Log output to a file for debugging
exec &> /home/pi/start_ai_cam.log

# Pause for 5 seconds to ensure the environment is ready
sleep  5

# Print current date and time to the log
echo "Script started at: $(date)"

# Activate the virtual environment
source /home/pi/Desktop/luxonis/envDepthAI/bin/activate

# Run the Python script
python /home/pi/Desktop/autorun_ai_cam_script/DVA_API.py

# Print script completion to the log
echo "Script completed at: $(date)"

Hip haptic feedback unit

MicroPython
This code is flashed to an M5Stack StickC. It handles time-of-flight distance detection as well as haptic feedback, using a vibration motor HAT, for the user's hips/lower body. When an object comes within range of the wearer's lower body, the hip sensor(s) vibrate at an intensity that varies with the distance to the object. Additionally, the user can hold the A button to calibrate a baseline distance to the ground, enabling decline detection.
from m5stack import *
from m5ui import *
from uiflow import *
import time
import i2c_bus
from easyIO import *
import unit
import hat


setScreenColor(0x111111)
tof_0 = unit.get(unit.TOF, unit.PORTA)

hat_vibrator_0 = hat.get(hat.VIBRATOR)

distance = None
calibrate = None
batt = None



label1 = M5TextBox(7, 99, "label1", lcd.FONT_DejaVu40, 0xFFFFFF, rotate=0)


# Map the ToF distance to vibration intensity; when a ground baseline has been
# calibrated, pulse if the reading exceeds it (decline detected).
def hip_distance():
  global distance, calibrate, batt, i2c0
  distance = tof_0.distance
  if calibrate == None:
    if distance < 400:
      hat_vibrator_0.set_duty(100)
    elif distance < 800:
      hat_vibrator_0.set_duty(80)
    elif distance < 1200:
      hat_vibrator_0.set_duty(60)
    elif distance < 1600:
      hat_vibrator_0.set_duty(40)
    elif distance < 1800:
      hat_vibrator_0.set_duty(30)
    elif distance < 2000:
      hat_vibrator_0.set_duty(20)
    else:
      hat_vibrator_0.turn_off()
  else:
    if distance >= calibrate + 101:
      hat_vibrator_0.set_duty(100)
      wait_ms(100)
      hat_vibrator_0.set_duty(0)
      wait_ms(50)
    elif distance < 400:
      hat_vibrator_0.set_duty(100)
    elif distance < 800:
      hat_vibrator_0.set_duty(80)
    elif distance < 1200:
      hat_vibrator_0.set_duty(60)
    elif distance < 1600:
      hat_vibrator_0.set_duty(40)
    elif distance < 1800:
      hat_vibrator_0.set_duty(30)
    elif distance < 2000:
      hat_vibrator_0.set_duty(20)
    else:
      hat_vibrator_0.turn_off()

# Single short vibration pulse
def one_pulse():
  global distance, calibrate, batt, i2c0
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)

# Two short vibration pulses
def two_pulse():
  global distance, calibrate, batt, i2c0
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)


def buttonA_pressFor():
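  # Hold A for 0.8 s: save the current ToF reading as the ground-distance baseline for decline detection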
  global distance, calibrate, batt, i2c0
  hat_vibrator_0.turn_off()
  label1.setColor(0xcc33cc)
  calibrate = distance
  axp.setLcdBrightness(100)
  speaker.tone(800, 200)
  speaker.tone(1200, 200)
  axp.setLcdBrightness(0)
  pass
btnA.pressFor(0.8, buttonA_pressFor)

def buttonB_wasPressed():
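  # Button B: report battery charge as a pattern of vibration pulses (more pulses = higher charge)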
  global distance, calibrate, batt, i2c0
  hat_vibrator_0.turn_off()
  if batt < 10:
    hat_vibrator_0.turn_off()
    speaker.setVolume(25)
    speaker.tone(900, 200)
  elif batt < 20:
    one_pulse()
  elif batt < 30:
    two_pulse()
  elif batt < 40:
    one_pulse()
    two_pulse()
  elif batt < 50:
    two_pulse()
    two_pulse()
  elif batt < 60:
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt < 70:
    two_pulse()
    two_pulse()
    two_pulse()
  elif batt < 80:
    two_pulse()
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt < 90:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
  elif batt < 95:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt <= 100:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
  else:
    hat_vibrator_0.turn_off()
  pass
btnB.wasPressed(buttonB_wasPressed)

def buttonA_wasDoublePress():
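  # Double-press A: clear the calibrated ground-distance baseline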
  global distance, calibrate, batt, i2c0
  hat_vibrator_0.turn_off()
  label1.setColor(0xcc33cc)
  calibrate = None
  label1.setText('clear')
  axp.setLcdBrightness(100)
  speaker.tone(1200, 200)
  speaker.tone(800, 200)
  axp.setLcdBrightness(0)
  pass
btnA.wasDoublePress(buttonA_wasDoublePress)


speaker.setVolume(50)
hat_vibrator_0.set_duty(0)
axp.setLcdBrightness(0)
i2c0 = i2c_bus.easyI2C(i2c_bus.PORTA, 0x00, freq=400000)
label1.setColor(0x000000)
hat_vibrator_0.set_duty(100)
wait_ms(100)
hat_vibrator_0.set_duty(0)
wait_ms(50)
hat_vibrator_0.set_duty(100)
wait_ms(100)
hat_vibrator_0.set_duty(0)
while True:
  batt = map_value((axp.getBatVoltage()), 3.7, 4.1, 0, 100)
  hip_distance()
  label1.setText(str(calibrate))
  wait_ms(2)

Raspberry Pi DVA API

Python
This script runs on the Raspberry Pi and provides a framework to interact with both YOLOv4 for object recognition and Tesseract for optical character recognition. It creates a Flask API with endpoints that accept POST requests from the Dynamic Vision Assist tactile controller, allowing the user either to mute the aforementioned neural networks or to toggle between OCR and object recognition, as suits the wearer's use case.
# curl -X POST http://localhost:5000/switch_endpoint -H "Content-Type: application/json" -d '{"endpoint": "OR_OA"}'
# curl -X POST http://localhost:5000/switch_endpoint -H "Content-Type: application/json" -d '{"endpoint": "OCR"}'
# curl -X POST http://localhost:5000/switch_endpoint -H "Content-Type: application/json" -d '{"endpoint": "IDLE"}'


import os
import cv2
import depthai as dai
import numpy as np
import requests
import pytesseract
from flask import Flask, request, jsonify
from threading import Thread
from gtts import gTTS
import pygame
import time

# Start with object detection instead of OCR
current_endpoint = "OR_OA"

# Model path
model_path = "/home/pi/Desktop/luxonis/depthai-python/examples/models/yolo-v4-tiny-tf_openvino_2021.4_6shave.blob"

# Class labels for Tiny YOLOv4
class_labels = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
    "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
    "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
    "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
    "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
    "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
    "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard",
    "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
    "scissors", "teddy bear", "hair drier", "toothbrush"
]

# Initialize Flask app
app = Flask(__name__)

# Dictionary to keep track of the last time each label was spoken
last_spoken_time = {label: 0 for label in class_labels}

@app.route('/switch_endpoint', methods=['POST'])
def switch_endpoint():
    global current_endpoint
    new_endpoint = request.json.get('endpoint')
    if new_endpoint in ["OCR", "OR_OA", "IDLE"]:
        current_endpoint = new_endpoint
        return jsonify({"status": "success", "current_endpoint": current_endpoint})
    else:
        return jsonify({"status": "error", "message": "Invalid endpoint"}), 400

@app.route('/ocr', methods=['POST'])
def handle_ocr():
    data = request.json
    text = data.get("ocr_results", "")
    if text.strip():
        print(f"OCR Results: {text}")  # Logging OCR results for debugging
        send_to_gtts(text)
    return jsonify({"status": "success"})

@app.route('/or_oa', methods=['POST'])
def handle_or_oa():
    data = request.json
    results = data.get("or_oa_results", [])
    if results:
        print(f"OR_OA Results: {results}")  # Logging OR_OA results for debugging
        labels = [class_labels[result['label']] for result in results if result['label'] is not None and result['label'] < len(class_labels)]
        current_time = time.time()
        for label in labels:
            if current_time - last_spoken_time[label] > 5:
                print(f"Detected: {label}")  # Logging detected labels for debugging
                send_to_gtts(label)
                last_spoken_time[label] = current_time
    return jsonify({"status": "success"})

def send_to_gtts(text):
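    # Convert the text to speech with gTTS and play it via pygame; blocks until playback finishes, then deletes the temp file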
    tts = gTTS(text)
    tts.save("response.mp3")
    pygame.mixer.init()
    pygame.mixer.music.load("response.mp3")
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        pygame.time.Clock().tick(10)
    os.remove("response.mp3")

# Initialize pipeline with both OCR and Object Detection
def create_pipeline():
    pipeline = dai.Pipeline()

    # Create and configure color camera node
    cam_rgb = pipeline.create(dai.node.ColorCamera)
    cam_rgb.setPreviewSize(416, 416)
    cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    cam_rgb.setInterleaved(False)
    cam_rgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    cam_rgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
    
    # Adjust camera settings for better brightness and enable auto-exposure
    control = dai.CameraControl()
    control.setAutoExposureEnable()
    control.setAutoWhiteBalanceMode(dai.CameraControl.AutoWhiteBalanceMode.AUTO)
    control.setAutoExposureCompensation(2)  # Increase exposure compensation
    cam_rgb.initialControl.setManualExposure(20000, 800)
    cam_rgb.initialControl.setAutoWhiteBalanceMode(dai.CameraControl.AutoWhiteBalanceMode.WARM_FLUORESCENT)

    # Create XLink output for video
    xout_video = pipeline.create(dai.node.XLinkOut)
    xout_video.setStreamName("video")
    cam_rgb.preview.link(xout_video.input)

    # Create ImageManip node to resize the image
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setResize(416, 416)
    manip.initialConfig.setFrameType(dai.RawImgFrame.Type.BGR888p)
    cam_rgb.preview.link(manip.inputImage)

    # Initialize object detection
    object_detection_nn = pipeline.create(dai.node.YoloDetectionNetwork)
    object_detection_nn.setBlobPath(model_path)
    object_detection_nn.setConfidenceThreshold(0.5)
    object_detection_nn.setNumClasses(80)  # Adjust based on your model
    object_detection_nn.setCoordinateSize(4)
    
    # Add appropriate anchors
    anchors = [
        10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319
    ]
    object_detection_nn.setAnchors(anchors)
    
    object_detection_nn.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
    object_detection_nn.setIouThreshold(0.5)
    
    manip.out.link(object_detection_nn.input)

    xout_nn = pipeline.create(dai.node.XLinkOut)
    xout_nn.setStreamName("detections")
    object_detection_nn.out.link(xout_nn.input)
    
    return pipeline

# Function to handle OCR
def perform_ocr(frame):
    return pytesseract.image_to_string(frame)

# Function to perform object recognition and obstacle avoidance
def perform_object_recognition_and_obstacle_avoidance(detections):
    detection_results = detections.detections
    return [{"label": det.label, "confidence": det.confidence, "bbox": (det.xmin, det.ymin, det.xmax, det.ymax)} for det in detection_results]

# Main processing function
def main():
    pipeline = create_pipeline()

    with dai.Device(pipeline) as device:
        video = device.getOutputQueue(name="video", maxSize=4, blocking=False)
        detections = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
        
        while True:
            video_frame = video.get()
            frame = video_frame.getCvFrame()

            if current_endpoint == "OCR":
                ocr_results = perform_ocr(frame)
                if ocr_results.strip():  # Only send non-empty OCR results
                    requests.post('http://localhost:5000/ocr', json={"ocr_results": ocr_results})
            
            elif current_endpoint == "OR_OA":
                dets = detections.get()
                or_oa_results = perform_object_recognition_and_obstacle_avoidance(dets)
                if or_oa_results:
                    requests.post('http://localhost:5000/or_oa', json={"or_oa_results": or_oa_results})
                    # Read detected labels
                    labels = [class_labels[result['label']] for result in or_oa_results if result['label'] is not None and result['label'] < len(class_labels)]
                    current_time = time.time()
                    for label in labels:
                        if current_time - last_spoken_time[label] > 5:
                            print(f"Detected: {label}")  # Logging detected labels for debugging
                            send_to_gtts(label)
                            last_spoken_time[label] = current_time

                    # Draw bounding boxes and labels on the frame
                    for detection in or_oa_results:
                        bbox = detection['bbox']
                        label = class_labels[detection['label']]
                        confidence = detection['confidence']
                        cv2.rectangle(frame, (int(bbox[0] * frame.shape[1]), int(bbox[1] * frame.shape[0])), 
                                      (int(bbox[2] * frame.shape[1]), int(bbox[3] * frame.shape[0])), (0, 255, 0), 2)
                        cv2.putText(frame, f"{label}: {confidence:.2f}", (int(bbox[0] * frame.shape[1]), int(bbox[1] * frame.shape[0]) - 10), 
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

            elif current_endpoint == "IDLE":
                # If in IDLE mode, do nothing
                pass

            # Display the frame
            cv2.imshow("Video", frame)

            if cv2.waitKey(1) == ord('q'):
                break

            # Add a small delay to reduce the frequency of requests
            time.sleep(0.1)  # Reduced delay to improve responsiveness

    cv2.destroyAllWindows()

if __name__ == "__main__":
    # Start the Flask server in a separate thread
    flask_thread = Thread(target=lambda: app.run(host='0.0.0.0', port=5000, debug=True, use_reloader=False))
    flask_thread.start()

    # Run the main pipeline function
    main()

DVA API tactile controller

MicroPython
This code is flashed to an M5Stack Core2. A user must add their WiFi credentials to the script; I recommend using a GL Inet WiFi network/AP. The script controls the selection of neural networks via the DVA API and lights the mechanical keys connected to the Core2 to denote which NN is selected or whether the NNs are muted. Note that the API address hard-coded in the script (http://192.168.1.241:5000) must match the Raspberry Pi's IP on that network.
from m5stack import *
from m5stack_ui import *
from uiflow import *
import wifiCfg
import time
import urequests
import unit
import gc


screen = M5Screen()
screen.clean_screen()
screen.set_screen_bg_color(0x000000)
key_0 = unit.get(unit.KEY, unit.PORTA)
key_1 = unit.get(unit.KEY, unit.PORTB)
key_2 = unit.get(unit.KEY, unit.PORTC)






label0 = M5Label('Dynamic', x=75, y=20, color=0xff02ec, font=FONT_MONT_38, parent=None)
label1 = M5Label('Vision', x=103, y=95, color=0xff02ec, font=FONT_MONT_38, parent=None)
label2 = M5Label('Assist', x=106, y=160, color=0xff02ec, font=FONT_MONT_38, parent=None)


wifiCfg.doConnect('YOUR SSID', 'YOUR PASSWORD')
key_0.set_color(0xff0000, 1)
key_1.set_color(0x33ff33, 1)
key_2.set_color(0xffffff, 1)
screen.set_screen_brightness(30)
wait(3)
label0.set_hidden(True)
label1.set_hidden(True)
label2.set_hidden(True)
while True:
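  # Poll the three mechanical keys: key 0 (red) mutes the NNs (IDLE), key 1 (green) selects object recognition (OR_OA), key 2 (white) selects OCR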
  if (key_0.get_switch_status()) == 0:
    try:
      req = urequests.request(method='POST', url='http://192.168.1.241:5000/switch_endpoint',json={'endpoint':'IDLE'}, headers={'Content-Type':'application/json'})
      gc.collect()
      req.close()
    except:
      pass
    screen.set_screen_bg_color(0xff0000)
    label0.set_text_color(0x000000)
    key_0.set_color(0xff0000, 100)
    key_1.set_color(0x33ff33, 1)
    key_2.set_color(0xffffff, 1)
    speaker.playTone(523, 1/4, volume=4)
    speaker.playTone(392, 1/2, volume=4)
  if (key_1.get_switch_status()) == 0:
    try:
      req = urequests.request(method='POST', url='http://192.168.1.241:5000/switch_endpoint',json={'endpoint':'OR_OA'}, headers={'Content-Type':'application/json'})
      gc.collect()
      req.close()
    except:
      pass
    screen.set_screen_bg_color(0x33ff33)
    label0.set_text_color(0x33ff33)
    key_1.set_color(0x33ff33, 100)
    key_2.set_color(0xff0000, 1)
    key_0.set_color(0xffffff, 1)
    speaker.playTone(440, 1/2, volume=4)
  if (key_2.get_switch_status()) == 0:
    try:
      req = urequests.request(method='POST', url='http://192.168.1.241:5000/switch_endpoint',json={'endpoint':'OCR'}, headers={'Content-Type':'application/json'})
      gc.collect()
      req.close()
    except:
      pass
    screen.set_screen_bg_color(0xffffff)
    label0.set_text_color(0xffffff)
    key_2.set_color(0xffffff, 100)
    key_1.set_color(0x33ff33, 1)
    key_0.set_color(0xff0000, 1)
    speaker.playTone(587, 1/2, volume=4)
  wait_ms(2)

Head haptic feedback unit

MicroPython
This code is flashed to an M5Stack StickC. It handles time-of-flight distance detection as well as haptic feedback using a vibration motor HAT. When an object comes within range of the wearer's face, the face/head detection unit vibrates and even beeps as an alarm for objects that are in danger of hitting the user's face.
from m5stack import *
from m5ui import *
from uiflow import *
import i2c_bus
import time
from easyIO import *
import unit
import hat


setScreenColor(0x111111)
tof4m_0 = unit.get(unit.TOF4M, unit.PORTA)

hat_vibrator_0 = hat.get(hat.VIBRATOR)

distance = None
batt = None



label1 = M5TextBox(13, 39, "Dynamic", lcd.FONT_DejaVu24, 0xe600ff, rotate=0)
label2 = M5TextBox(29, 134, "Assist", lcd.FONT_DejaVu24, 0x9900ff, rotate=0)
label3 = M5TextBox(46, 186, "1.0", lcd.FONT_DejaVu24, 0x0041ff, rotate=0)
label0 = M5TextBox(29, 86, "Vision", lcd.FONT_DejaVu24, 0x4d00ff, rotate=0)


# Vibrate at full intensity when anything comes within 1 m of the wearer's head
def head_distance():
  global distance, batt, i2c0
  distance = tof4m_0.get_single_distance_value
  if distance < 1000:
    hat_vibrator_0.set_duty(100)
  else:
    hat_vibrator_0.turn_off()

# Single short vibration pulse
def one_pulse():
  global distance, batt, i2c0
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)

# Two short vibration pulses
def two_pulse():
  global distance, batt, i2c0
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)
  hat_vibrator_0.set_duty(50)
  wait_ms(100)
  hat_vibrator_0.turn_off()
  wait_ms(50)


def buttonA_wasPressed():
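  # Button A: report battery charge as a pattern of vibration pulses (more pulses = higher charge)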
  global distance, batt, i2c0
  hat_vibrator_0.turn_off()
  if batt < 10:
    hat_vibrator_0.turn_off()
    speaker.setVolume(25)
    speaker.tone(900, 200)
  elif batt < 20:
    one_pulse()
  elif batt < 30:
    two_pulse()
  elif batt < 40:
    one_pulse()
    two_pulse()
  elif batt < 50:
    two_pulse()
    two_pulse()
  elif batt < 60:
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt < 70:
    two_pulse()
    two_pulse()
    two_pulse()
  elif batt < 80:
    two_pulse()
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt < 90:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
  elif batt < 95:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
    one_pulse()
  elif batt <= 100:
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
    two_pulse()
  else:
    hat_vibrator_0.turn_off()
  pass
btnA.wasPressed(buttonA_wasPressed)


axp.setLcdBrightness(0)
i2c0 = i2c_bus.easyI2C(i2c_bus.PORTA, 0x00, freq=400000)
hat_vibrator_0.turn_off()
axp.setLcdBrightness(50)
wait(3)
axp.setLcdBrightness(0)
hat_vibrator_0.set_duty(100)
wait_ms(100)
hat_vibrator_0.set_duty(0)
wait_ms(50)
hat_vibrator_0.set_duty(100)
wait_ms(100)
hat_vibrator_0.set_duty(0)
while True:
  batt = map_value((axp.getBatVoltage()), 3.7, 4.1, 0, 100)
  head_distance()
  wait_ms(2)
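
The GPS side of the build (the Blues Notecard pushing the wearer's coordinates to Notehub, where routes such as the IFTTT Maker service can act on them) is not shown above. As a minimal sketch only, using the note-python library on the Raspberry Pi: the Product UID, I2C bus, and tracking intervals below are placeholders, not the project's actual settings.

import notecard
from periphery import I2C

# Assumed wiring: Notecarrier A on the Pi's I2C bus 1
port = I2C("/dev/i2c-1")
card = notecard.OpenI2C(port, 0, 0)

# Point the Notecard at a Notehub project (Product UID is a placeholder)
card.Transaction({"req": "hub.set", "product": "com.example.dva", "mode": "periodic", "outbound": 10})

# Sample GPS periodically and log location heartbeats to Notehub
card.Transaction({"req": "card.location.mode", "mode": "periodic", "seconds": 300})
card.Transaction({"req": "card.location.track", "start": True, "heartbeat": True, "hours": 1})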

Credits

Colonel Panic

Thanks to Friedelino and Caban.
