First and foremost, I want to thank Seeed Studio for selecting me as an Alpha Tester for their new device, the SenseCap Watcher.
Story

Throughout my research career, I have specialized in developing IoT agents focused on monitoring and assisting elderly individuals. I have published numerous scientific articles documenting my research in this area and have presented various projects on the Hackster platform. These projects feature a range of capabilities, including the measurement of physiological parameters such as ECG and PPG, and most recently, the integration of Seeed Studio's Grove Vision V2. That latest project stands out for its ability to monitor upper-limb exercises using YOLOv8 pose estimation.
Introduction to IoT Agents in Healthcare

The integration of Internet of Things (IoT) technology into the healthcare sector has revolutionized how medical services are managed and delivered. IoT agents in healthcare are intelligent, network-connected devices and systems that collect, transmit, and analyze health data in real time. These agents play a crucial role in improving the quality of medical care, facilitating remote patient monitoring, and optimizing resource management in medical environments.
What Are IoT Agents?

IoT agents are interconnected devices capable of interacting with their environment by collecting data through sensors, processing information, and communicating this data to other systems or devices. In the healthcare field, these agents can include:
1. Wearable Devices: Such as smartwatches and fitness bands that monitor heart rate, activity levels, sleep quality, and other vital parameters.
2. Implantable Sensors: Devices that can be implanted in the body to monitor specific conditions, like pacemakers and glucose sensors.
3. Connected Medical Equipment: Such as vital sign monitors, insulin pumps, and imaging devices that transmit data directly to hospital management systems.
4. Mobile Applications and Platforms: That collect and analyze health data from various sources to provide useful information to both patients and healthcare professionals.
Benefits of IoT Agents in Healthcare

1. Continuous and Real-Time Monitoring: IoT agents enable constant patient monitoring, facilitating early detection of health issues and allowing for rapid responses to critical situations.
2. Personalized Care: Detailed data collection allows healthcare professionals to offer personalized treatments and real-time adjustments according to each patient's specific needs.
3. Cost Reduction: Remote monitoring and efficient resource management reduce the need for frequent hospitalizations and unnecessary doctor visits, lowering associated costs.
4. Improved Operational Efficiency: Integrating IoT agents into hospital systems optimizes resource management and logistics, enhancing operational efficiency in medical settings.
SenseCap Watcher as a Health Agent

The SenseCap Watcher (Figure 1) is a new device introduced by Seeed Studio, featuring a range of functionalities that enable technology enthusiasts and researchers to develop new applications based on embedded AI.

Its architecture (Figure 2) showcases the advanced capabilities of this device, integrating audio capture and playback, embedded AI models, peripheral communication, and more.
As I mentioned earlier, in a previous project using the Grove Vision V2 system from Seeed Studio, I developed an assistant for exercise monitoring. This assistant features a robotic torso that demonstrates exercises for the user to replicate. The assistant analyzes the user's position and determines whether the exercises are being performed correctly.
The goal of this project is to perform the same tasks, but using the SenseCap Watcher and a simulated robot model built with MuJoCo. The first step is to create an account on the SenseCap website.
Once logged in, the next step is to register the SenseCap Watcher device. You can follow the steps provided on the official website.
Once your account is created and the Watcher device is registered, go to the Security tab and click on "Access with API Key." The website will then allow you to create a key that is later used to access the Watcher's data from external applications.
After completing this, it's time to connect to the device from outside the SenseCap platform. The connection is made through the HTTP API, whose host URL appears in the configuration below.
You can do this from various platforms, such as NodeJS, curl, or the Java SDK. Personally, I prefer Python, so we will access the Watcher’s data using Python.
First, we need to set up the configuration parameters:
host = "https://sensecap.seeed.cc/openapi"
url = f"{host}/list_telemetry_data"

username = "api_id"           # Your API ID
password = "access_api_keys"  # Your Access API Key
params = {
    "device_eui": "Put your Device ID",  # The device EUI of your Watcher
    "channel_index": 1
}
Once that is done, the next step is to connect and retrieve the information that our Watcher has sent to the SenseCap website.
import requests
from requests.auth import HTTPBasicAuth
import json

def get_sensecap_watcher_image(url, username, password, params):
    # Make a GET request with basic authentication
    response = requests.get(url, params=params, auth=HTTPBasicAuth(username, password))

    # Retrieve the response headers and content
    headers = response.headers
    content = response.content

    # Check the status of the response
    if response.status_code == 200:
        # Parse the JSON content
        data = json.loads(content)
        # Return the parsed data
        return data
    else:
        # Print the status code and response content in case of an error
        print(f"Error: {response.status_code}")
        print("Content:", content.decode('utf-8'))
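With the configuration and the function above in place, fetching the latest telemetry is a single call. This is a minimal usage sketch, assuming the credentials and device EUI in the configuration have been filled in:

data = get_sensecap_watcher_image(url, username, password, params)

if data is not None:
    # Inspect the raw telemetry response returned by the API
    print(json.dumps(data, indent=2))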
This information is sent when we add a task to our Watcher. In my case, I’ve added a person recognition task, and each time the device detects a person, it sends a notification to the SenseCap website.
Using the previous script, we can extract this information and download the image for further processing and analysis.
{
  "code": "0",
  "data": {
    "list": [
      [
        [
          [
            [
              [
                {
                  "tlid": 3,
                  "tn": "Local Human Detection",
                  "content": "human detected",
                  "image_url": "https://sensecraft-statics.seeed.cc/mperdoidau/Image.jpeg"
                }
              ],
              "2024-08-03T09:31:15.971Z"
            ]
          ]
        ]
      ]
    ]
  }
}
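As the example shows, the detection record and its image URL sit several lists deep inside data["data"]["list"]. The helper below is my own sketch, with the indexing based purely on the example response above; the nesting may differ for other tasks or time ranges:

def extract_image_url(data):
    # Walk the nested "list" structure shown above and return the first
    # image URL found, printing its timestamp along the way.
    try:
        entry = data["data"]["list"][0][0][0][0]   # [[detection, ...], timestamp]
        detections, timestamp = entry[0], entry[1]
        print("Detected at:", timestamp)
        return detections[0]["image_url"]
    except (KeyError, IndexError, TypeError):
        return None

image_url = extract_image_url(data)
print(image_url)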
The next step is to read the image from the URL. To do this, we use the following code:
import requests
import numpy as np
import cv2
from io import BytesIO

def read_image(image_url):
    global image
    # Make a GET request to fetch the image
    response = requests.get(image_url)

    # Check if the request was successful
    if response.status_code == 200:
        # Read the image from the obtained bytes
        image_bytes = BytesIO(response.content)
        # Convert the bytes into a numpy array
        image_np = np.frombuffer(image_bytes.read(), np.uint8)
        # Decode the numpy array into an image using OpenCV
        image = cv2.imdecode(image_np, cv2.IMREAD_COLOR)

        # Verify that the image has been loaded correctly
        if image is not None:
            # Process the image with pose estimation
            pose_estimation(image)
            # Save the image to a file
            cv2.imwrite("Image.jpg", image)
            # Display the image in a window (optional)
            # cv2.imshow('Image', image)
            # cv2.waitKey(0)  # Wait for a key press
            # cv2.destroyAllWindows()  # Close the image window
        else:
            print("Error: The image could not be decoded.")
    else:
        print(f"Error: Could not retrieve the image. Status code {response.status_code}")
As part of this process, the function above calls the AI model; in my case, it performs pose estimation.
from ultralytics import YOLO  # Make sure the ultralytics package is installed

def pose_estimation(image):
    global pose_data

    # Load a model
    model = YOLO("models/yolov8n-pose.pt")  # Pretrained YOLOv8 pose estimation model
    # model = YOLO("yolov8n.pt")  # Plain detection model (no keypoints)

    # Perform pose estimation
    results = model(source=image, show=True, conf=0.3)

    # Process each result
    for result in results:
        boxes = result.boxes  # Bounding box outputs
        masks = result.masks  # Segmentation mask outputs (None for pose models)
        probs = result.probs  # Class probabilities (None for pose models)
        # print(boxes)
        # print(masks)
        # print(probs)

    # Access bounding box data
    boxes = results[0].boxes
    box = boxes[0]   # Get the first bounding box
    box.xyxy         # Bounding box coordinates in (N, 4) format
    boxes.xyxy       # Bounding boxes in (x1, y1, x2, y2) format
    boxes.xywh       # Bounding boxes in (x_center, y_center, width, height) format
    boxes.xyxyn      # Normalized (x1, y1, x2, y2) boxes
    boxes.xywhn      # Normalized (x_center, y_center, width, height) boxes
    boxes.conf       # Confidence scores for each box
    boxes.cls        # Class labels for each box
    boxes.data       # Raw bounding box tensor

    # Convert results to JSON
    for r in results:
        pose_data = r.tojson(normalize=False)
        # Optionally save crops and plot the results
        # r.save_crop(save_dir='sample')
        # image_array = r.plot(conf=True, boxes=True)
        # Optionally display the image
        # plt.imshow(image_array)
        # plt.axis('off')  # Turn off axis numbers and ticks
        # plt.show()
This function returns the joint keypoints as JSON; the following function extracts them and organizes them into an array of (x, y) coordinates.
import json
import numpy as np

def get_points(data_yolo):
    global points_1

    # Parse the JSON data produced by pose_estimation()
    data_yolo = json.loads(data_yolo)

    # Extract keypoints from the JSON data
    keypoints = data_yolo[0]['keypoints']
    keypoint_tuples = list(zip(keypoints['x'], keypoints['y']))

    # Print a separator for clarity
    print("-" * 50)

    # Convert keypoints to a list of integer [x, y] pairs
    points = [[int(x), int(y)] for x, y in keypoint_tuples]

    # Convert the list of points to a NumPy array
    points_1 = np.array(points)

    # Print the array of points
    print(points_1)
Once this is done, the next step is to display the joint points on the image.
import cv2

def show_points(image, points):
    """
    Draw keypoints and connections on an image.

    Parameters:
    - image: The image on which to draw the keypoints and connections.
    - points: An array of (x, y) pairs representing keypoints.
    """
    # Define the connections between keypoints
    connections = [
        (0, 1), (1, 3), (0, 2), (2, 4), (4, 6),
        (3, 5), (6, 8), (8, 10), (6, 12), (5, 7),
        (7, 9), (5, 11), (12, 11), (5, 6)
    ]

    # Draw the connections between keypoints
    for connection in connections:
        point1 = tuple(points[connection[0]])
        point2 = tuple(points[connection[1]])
        # Check if either point is at the origin (0, 0); skip drawing if so
        if point1 != (0, 0) and point2 != (0, 0):
            cv2.line(image, point1, point2, (255, 255, 255), 2)

    # Draw circles for the arm keypoints
    left_arm_points = [12, 6, 8, 10]
    right_arm_points = [11, 5, 7, 9]

    for point in left_arm_points:
        center = tuple(points[point])
        if center != (0, 0):
            cv2.circle(image, center, 5, (255, 0, 0), 2)  # Drawn in blue (BGR)
    for point in right_arm_points:
        center = tuple(points[point])
        if center != (0, 0):
            cv2.circle(image, center, 5, (0, 0, 255), 2)  # Drawn in red (BGR)

    # Display the image
    cv2.imshow('Image', image)
    cv2.waitKey(0)  # Wait for a key press
    cv2.destroyAllWindows()  # Close the image window
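None of the snippets above show the helper functions being called together, so here is a sketch of how the whole pipeline could be wired up. It relies on the module-level globals (image, pose_data, points_1) used by the functions above and on the extract_image_url helper sketched earlier:

# End-to-end sketch: fetch telemetry, download the image, run pose
# estimation, and draw the detected keypoints.
data = get_sensecap_watcher_image(url, username, password, params)
image_url = extract_image_url(data)

if image_url is not None:
    read_image(image_url)         # Downloads the image and calls pose_estimation()
    get_points(pose_data)         # Fills the global points_1 array
    show_points(image, points_1)  # Draws the skeleton and keypoints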
The result is shown in the following image:
Once this is done, the angles are extracted and used by the virtual robot. Currently, the calculated angles are manually sent to the virtual robot, so the process is not fully automated yet.
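The angle-calculation code is not included here, so the snippet below is only an illustrative sketch: it measures the orientation of the segment between two keypoints with atan2, using the same keypoint indices as the left_arm_points list in show_points(). The formulas actually used in the project may differ.

import numpy as np

def segment_angle(p1, p2):
    # Angle (in degrees) of the segment p1 -> p2 relative to the horizontal axis
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(np.degrees(np.arctan2(dy, dx)))

# Example for one arm, using indices from left_arm_points in show_points()
shoulder, elbow, wrist, hip = points_1[6], points_1[8], points_1[10], points_1[12]
print("Angle between elbow and wrist:", segment_angle(elbow, wrist))
print("Angle between shoulder and elbow:", segment_angle(shoulder, elbow))
print("Angle between shoulder and wrist:", segment_angle(shoulder, wrist))
print("Angle between hip and shoulder:", segment_angle(hip, shoulder))
print("Angle between hip and wrist:", segment_angle(hip, wrist))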
Calculate Left Arm Angles:
- Angle between elbow and wrist: 74.50934819803324
- Angle between shoulder and elbow: 35.66130521381716
- Angle between shoulder and wrist: 101.09777648021138
- Angle between hip and shoulder: 39.69438442964209
- Angle between hip and wrist: 96.93745533054904
Calculate Right Arm Angles:
- Angle between elbow and wrist: 54.69779657924642
- Angle between shoulder and elbow: 43.32246430554557
- Angle between shoulder and wrist: 31.586736737291748
- Angle between hip and shoulder: 20.79273156354667
- Angle between hip and wrist: 3.8244065667499907
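On the virtual-robot side, these angles are currently entered by hand. As a rough illustration of where they end up, here is a minimal MuJoCo sketch with a hypothetical two-joint arm; the XML, joint names, and angle mapping are my own assumptions, not the actual robot model used in this project:

import numpy as np
import mujoco

# Hypothetical two-joint arm (not the real robot model used in the project)
ARM_XML = """
<mujoco>
  <worldbody>
    <body name="upper_arm" pos="0 0 1">
      <joint name="shoulder" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.3" size="0.03"/>
      <body name="forearm" pos="0 0 -0.3">
        <joint name="elbow" type="hinge" axis="0 1 0"/>
        <geom type="capsule" fromto="0 0 0 0 0 -0.25" size="0.025"/>
      </body>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(ARM_XML)
data = mujoco.MjData(model)

# Manually feed in angles taken from the pose-estimation output (degrees -> radians)
data.joint("shoulder").qpos[0] = np.deg2rad(35.66)
data.joint("elbow").qpos[0] = np.deg2rad(74.51)

# Update the kinematics so the new joint configuration takes effect
mujoco.mj_forward(model, data)
print(data.body("forearm").xpos)  # World position of the forearm after the update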
Note: This project will be updated in the coming months. Currently, I am on vacation and don't have access to my full mini-laboratory. At the moment, I only have the SenseCap Watcher, my laptop, and some free time.
Acknowledgments

I would like to express my sincere gratitude to Seeed Studio and Alison Yang.