Falls among elderly individuals represent a critical health issue, particularly as global demographics shift. By 2050, over 1.5 billion people worldwide will be aged 65 and older, with a substantial portion of them living independently. However, this independence brings risks; studies show that around 28-35% of individuals over 65 experience a fall each year, with this number rising to nearly 42% for those over 70. Fall-related injuries, particularly hip fractures and traumatic brain injuries, account for a high percentage of emergency room visits and hospitalizations among seniors. Repeated falls and the fear of falling again are common, impacting quality of life and mobility.
The first hour following a fall—often referred to as the "golden hour"—is crucial for receiving timely care. Delays in assistance can significantly raise risks of severe injury, chronic disability, and even mortality. Fall detection systems have emerged as a solution, using sensor-based alerts to immediately notify caregivers, reducing emergency response times. These systems have high acceptance among seniors, who view them as a way to maintain independence while reducing the psychological burden of fall anxiety. In this project, we aim to create an affordable yet highly accurate wearable fall detector by combining deep learning techniques with the Particle Photon 2 and the ADXL362 accelerometer.
Dataset
A quality dataset is essential for building an accurate fall detection system, especially for wearable devices targeting elderly users. Fall detection systems require data that includes a variety of real-world scenarios, such as different types of falls and daily non-fall activities, to ensure that the model can reliably distinguish between actual falls and ordinary movements. Gathering such a dataset, however, is highly challenging. It involves carefully orchestrated data collection sessions with volunteers simulating various falls (like forward, backward, and lateral falls) and common activities, which are recorded using specialized motion sensors. This process is not only labor-intensive but also requires high standards of safety and consistency, especially if the goal is to represent different types of falls with realistic intensity and context.
The SisFall dataset addresses this need comprehensively. It consists of 19 ADLs (Activities of Daily Living) and 15 types of falls, with data collected from 38 volunteers—15 elderly (ages 60-75) and 23 young adults (ages 19-30). The different classes of falls included in the dataset are shown in the table below.
Code   Activity                                                            Trials   Duration
---------------------------------------------------------------------------------------------
F01    Fall forward while walking caused by a slip                           5        15s
F02    Fall backward while walking caused by a slip                          5        15s
F03    Lateral fall while walking caused by a slip                           5        15s
F04    Fall forward while walking caused by a trip                           5        15s
F05    Fall forward while jogging caused by a trip                           5        15s
F06    Vertical fall while walking caused by fainting                        5        15s
F07    Fall while walking, with use of hands in a table to dampen fall,
       caused by fainting                                                    5        15s
F08    Fall forward when trying to get up                                    5        15s
F09    Lateral fall when trying to get up                                    5        15s
F10    Fall forward when trying to sit down                                  5        15s
F11    Fall backward when trying to sit down                                 5        15s
F12    Lateral fall when trying to sit down                                  5        15s
F13    Fall forward while sitting, caused by fainting or falling asleep      5        15s
F14    Fall backward while sitting, caused by fainting or falling asleep     5        15s
F15    Lateral fall while sitting, caused by fainting or falling asleep      5        15s
All ADL classes included in the dataset are shown in the table below.
Code   Activity                                                            Trials   Duration
---------------------------------------------------------------------------------------------
D01    Walking slowly                                                         1       100s
D02    Walking quickly                                                        1       100s
D03    Jogging slowly                                                         1       100s
D04    Jogging quickly                                                        1       100s
D05    Walking upstairs and downstairs slowly                                 5        25s
D06    Walking upstairs and downstairs quickly                                5        25s
D07    Slowly sit in a half height chair, wait a moment, and up slowly        5        12s
D08    Quickly sit in a half height chair, wait a moment, and up quickly      5        12s
D09    Slowly sit in a low height chair, wait a moment, and up slowly         5        12s
D10    Quickly sit in a low height chair, wait a moment, and up quickly       5        12s
D11    Sitting a moment, trying to get up, and collapse into a chair          5        12s
D12    Sitting a moment, lying slowly, wait a moment, and sit again           5        12s
D13    Sitting a moment, lying quickly, wait a moment, and sit again          5        12s
D14    Being on one's back, change to lateral position, wait a moment,
       and change to one's back                                               5        12s
D15    Standing, slowly bending at knees, and getting up                      5        12s
D16    Standing, slowly bending without bending knees, and getting up         5        12s
D17    Standing, get into a car, remain seated, and get out of the car        5        25s
D18    Stumble while walking                                                  5        12s
D19    Gently jump without falling (trying to reach a high object)            5        12s
As noted above, the 38 volunteers are divided into two groups: the elderly group consists of 15 participants (8 male and 7 female), and the young adult group consists of 23 participants (11 male and 12 female).
The table below shows the age, height, and weight ranges of each group.
Group     Sex      Age      Height (m)    Weight (kg)
-------------------------------------------------------
Elderly   Female   62–75    1.50–1.69     50–72
Elderly   Male     60–71    1.63–1.71     56–102
Adult     Female   19–30    1.49–1.69     42–63
Adult     Male     19–30    1.65–1.83     58–81
Preparing The Dataset
The dataset was captured at a sampling frequency of 200 Hz using three sensors: an ADXL345 accelerometer (configured for ±16g, 13-bit ADC), a Freescale MMA8451Q accelerometer (±8g, 14-bit ADC), and an ITG3200 gyroscope (±2000 °/s, 16-bit ADC). Each row of a recording therefore contains nine values, three axes per sensor. Sample data from a fall recording is shown below:
-9, -257, -25, 84, 247, 27, -120, -987, 63;
-3, -263, -23, 99, 258, 35, -110, -1016, 68;
-1, -270, -22, 114, 272, 45, -94, -1037, 69;
1, -277, -24, 127, 286, 57, -81, -1062, 69;
2, -281, -25, 134, 303, 70, -71, -1079, 63;
11, -290, -24, 135, 322, 83, -51, -1097, 59;
12, -296, -29, 134, 342, 96, -43, -1114, 56;
13, -296, -29, 125, 364, 113, -33, -1131, 57;
17, -300, -23, 119, 384, 130, -15, -1143, 59;
16, -302, -21, 117, 408, 152, -1, -1159, 65;
For this project, we utilize the raw acceleration data captured by the ADXL345 accelerometer, focusing on the first three columns of each data entry corresponding to the X, Y, and Z axes.
Now the accelerometer data (x, y, z) is converted to gravitational units using the following formula:
Acceleration [g] = (2 * Range / 2^Resolution) * raw_acceleration
We’re using the ADXL362 accelerometer for this project, which provides 12-bit resolution and can measure up to a ±8g range.
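As a quick sanity check, here is a minimal sketch of this conversion applied to the two sensors relevant here: the ADXL345 used to record SisFall (±16g, 13-bit) and the ADXL362 on our wearable (±8g, 12-bit). The raw_to_g helper is our own illustration, not part of the dataset tooling.
G_TO_MS2 = 9.80665  # standard gravity, used later to convert g to m/s²

def raw_to_g(raw, g_range, resolution_bits):
    # Convert a raw ADC count to gravitational units (g) using the formula above
    return (2 * g_range / (2 ** resolution_bits)) * raw

# SisFall recordings: ADXL345 configured for ±16 g with a 13-bit ADC
print(raw_to_g(-257, g_range=16, resolution_bits=13))   # ≈ -1.00 g (gravity on the Y axis while standing)

# Our wearable: ADXL362 configured for ±8 g with a 12-bit ADC
print(raw_to_g(-257, g_range=8, resolution_bits=12))    # same ≈3.9 mg/count scale, so ≈ -1.00 g as well

# Converting to m/s², as done before uploading to Edge Impulse
print(raw_to_g(-257, 16, 13) * G_TO_MS2)                # ≈ -9.84 m/s²
Both configurations happen to work out to roughly 3.9 mg per count, so readings from the live ADXL362 can be scaled with the same formula used for the dataset.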
Uploading Dataset To Edge Impulse
To upload our accelerometer data to Edge Impulse for machine learning model training, we first need to prepare it in Edge Impulse's data acquisition format. The dataset, which contains accelerometer readings, is categorized into two classes: ADL (Activities of Daily Living) and FALL. Each reading includes acceleration values in three axes, which we convert from raw sensor units to m/s² to standardize the measurements.
To ensure secure data transfer, we need the API and HMAC keys from our Edge Impulse project. These keys are accessible from the Dashboard > Keys tab in the Edge Impulse Studio, allowing us to generate signatures for data validation. Once the data is formatted and signed, it’s ready for upload to the Edge Impulse Studio, where it will be used to train our machine-learning model for fall detection.
Below is a Python script that facilitates this conversion and packaging, transforming the raw accelerometer data into the required JSON format compatible with Edge Impulse Studio’s data ingestion API.
import json
import hmac
import hashlib
import glob
import os
import time
import sys

# Define constants
HMAC_KEY = "<hmac_key>"     # Replace with your HMAC key from the Edge Impulse project dashboard
OUTPUT_FOLDER = "./data/"
CONVERT_G_TO_MS2 = 9.80665  # Conversion factor from g to m/s²
INTERVAL_MS = 20            # Sample interval after downsampling (50 Hz)

# ADXL345 configuration used in SisFall: ±16 g range, 13-bit ADC
RAW_TO_G = (2 * 16) / (2 ** 13)

# HMAC signature placeholder (filled in after signing)
empty_signature = ''.join(['0'] * 64)

# Label mapping based on filename prefix
ADL_CODES = {f"D{i:02}" for i in range(1, 20)}
FALL_CODES = {f"F{i:02}" for i in range(1, 16)}

# Generate label from filename prefix
def get_label_from_filename(filename):
    prefix = filename.split('_')[0]
    if prefix in ADL_CODES:
        return "ADL"
    elif prefix in FALL_CODES:
        return "FALL"
    else:
        raise ValueError(f"Unknown prefix in filename: {filename}")

# Convert data to the Edge Impulse data acquisition JSON format with an HMAC signature
def create_json_data(filename, values):
    data = {
        "protected": {
            "ver": "v1",
            "alg": "HS256",
            "iat": int(time.time())
        },
        "signature": empty_signature,
        "payload": {
            "device_name": "aa:bb:cc:dd:ee:ff",
            "device_type": "generic",
            "interval_ms": INTERVAL_MS,
            "sensors": [
                {"name": "ax", "units": "m/s2"},
                {"name": "ay", "units": "m/s2"},
                {"name": "az", "units": "m/s2"}
            ],
            "values": values
        }
    }
    # Sign the message
    encoded = json.dumps(data)
    signature = hmac.new(HMAC_KEY.encode('utf-8'), msg=encoded.encode('utf-8'),
                         digestmod=hashlib.sha256).hexdigest()
    data['signature'] = signature
    return data

# Main function to process files
def process_files(data_folder):
    files = glob.glob(os.path.join(data_folder, "*/*.txt"))
    for path in files:
        filename = os.path.basename(path)
        label = get_label_from_filename(filename)
        output_filename = os.path.join(OUTPUT_FOLDER, f"{label}.{os.path.splitext(filename)[0]}.json")
        values = []
        with open(path) as file:
            for i, line in enumerate(file):
                if line.strip() and i % 4 == 0:  # Keep every 4th line (200 Hz -> 50 Hz)
                    columns = line.strip().split(',')
                    # Extract only the first three values (ADXL345 ax, ay, az) and convert to m/s²
                    ax, ay, az = (float(c) * RAW_TO_G * CONVERT_G_TO_MS2 for c in columns[:3])
                    values.append([ax, ay, az])
        if values:
            json_data = create_json_data(filename, values)
            with open(output_filename, 'w') as outfile:
                json.dump(json_data, outfile, indent=4)

if __name__ == "__main__":
    # Ensure the output folder exists
    os.makedirs(OUTPUT_FOLDER, exist_ok=True)

    # Check for data folder argument
    if len(sys.argv) < 2:
        print("Usage: python preprocess.py <input_data_folder>")
        sys.exit(1)

    # Get data folder from command-line argument
    data_folder = sys.argv[1]

    # Run the file processing
    process_files(data_folder)
    print("Data processing complete. JSON files saved in:", OUTPUT_FOLDER)
Once you have successfully downloaded the SisFall dataset, you can convert the raw data into the Edge Impulse Data Acquisition Format by running the following command:
python3 preprocess.py SisFall_Dataset
Here, 'SisFall_Dataset' refers to the name of the input folder. After executing this command, an output folder named 'data' will be created, containing all the data files. Now upload the dataset to Edge Impulse from the Data Acquisition tab.
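If you prefer to script the upload instead of dragging files into the Data Acquisition tab, Edge Impulse's ingestion service accepts the signed JSON files directly. The sketch below is an illustration only: it assumes the files produced above sit in ./data/ and that EI_API_KEY holds your project's API key from the Dashboard > Keys tab.
import glob
import os
import requests

EI_API_KEY = "<api_key>"  # project API key from Dashboard > Keys

for path in glob.glob("./data/*.json"):
    label = os.path.basename(path).split('.')[0]  # "ADL" or "FALL", taken from the filename prefix
    with open(path, 'rb') as f:
        res = requests.post(
            "https://ingestion.edgeimpulse.com/api/training/data",
            headers={
                "x-api-key": EI_API_KEY,
                "x-file-name": os.path.basename(path),
                "x-label": label,
                "Content-Type": "application/json",
            },
            data=f,
        )
    print(path, res.status_code)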
After uploading the dataset, resample the data using resample.py with the following command:
python3 resample.py
After resampling is done, perform a Train/Test split, which divides the dataset into training and testing sets in an 80/20 ratio.
Create Impulse
To build an ML model in Edge Impulse, you start by creating an impulse. This is the starting point where you define the entire pipeline for processing and analyzing your sensor data.
To create an impulse in Edge Impulse, follow these steps:
1. Create A New Impulse
- Navigate to the Impulse Design section.
- Click on Create Impulse to start the process of setting up your impulse pipeline.
2. Add Processing Block
- Click on Add a processing block.
- Select Raw Data from the list of available processing blocks. The Raw Data block passes the sensor data through without any pre-processing, allowing the deep learning model to learn features directly from the raw signal and automatically identify patterns relevant for classification. You could also use Spectral Analysis, which is well suited to repetitive motion such as accelerometer data because it extracts the frequency and power characteristics of a signal over time. Since we are building a deep learning model, however, we proceeded without any pre-processing.
3. Add a Learning Block
- Click on Add a Learning Block.
- Choose Classification as the learning block. The Classification block is responsible for learning from the features in the raw data and applying this knowledge to classify new, unseen data. It identifies patterns in the data that correspond to different classes (e.g., ADL, FALL).
4. Configure Window Size and Window Increase
- Set both the Window size and Window increase to 4000 ms.
- This means the data will be divided into frames of 4000 milliseconds (4 seconds) that do not overlap, resulting in distinct, independent windows of data for classification.
- A 4000 ms window is chosen because it provides sufficient context to capture meaningful movement patterns around a fall; the quick calculation after these steps shows how this window maps onto the model's input size.
5. Save Impulse
After configuring the processing and learning blocks, click Save Impulse.
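To see how this window setting translates into the model's input, here is a quick back-of-the-envelope calculation: a 4000 ms window sampled every 20 ms yields 200 readings of 3 axes each, i.e. the 600-value input vector that appears later in the model summary.
WINDOW_MS = 4000     # window size configured in the impulse
INTERVAL_MS = 20     # 50 Hz sampling interval used when preparing the data
AXES = 3             # ax, ay, az

samples_per_window = WINDOW_MS // INTERVAL_MS   # 200 time steps per window
model_input_length = samples_per_window * AXES  # 600 values, matching input_1 [(None, 600)]

print(samples_per_window, model_input_length)   # 200 600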
Feature Generation
At this stage, we are ready to proceed to the Raw Data tab and begin the feature generation process. The Raw Data tab offers various options for manipulating the data, including adjusting axis scales and applying filters. However, for this project, we have opted to retain the default settings and move directly to generating the features.
To generate features, we will utilize a range of algorithms and techniques designed to identify key patterns and characteristics within the data. These extracted features will be employed by the learning block of our impulse to categorize the accelerometer data into one of two predefined classes. By carefully selecting and extracting relevant features, we aim to develop a more accurate and robust model for classifying accelerometer data.
Model Training
Having extracted and prepared our features, we are now ready to proceed to the Classifier tab to begin training our model. The Classifier tab offers various options for fine-tuning the model's behavior, such as adjusting the number of neurons in the hidden layers, setting the learning rate, and determining the number of training epochs.
In our case, however, the default model is not enough, so we switch to Expert Mode, which gives us the space to build our own deep learning model. For this project, we're using a Temporal CNN.
A Temporal Convolutional Neural Network (Temporal CNN or TCN) is a type of deep learning model that is designed to handle sequential data, such as time-series data, by applying convolutional layers in a way that captures temporal dependencies within the data. Unlike traditional convolutional neural networks (CNNs), which are typically used for image processing, Temporal CNNs are specifically optimized to deal with the sequential nature of time-series data, making them suitable for tasks like speech recognition, video processing, and sensor data classification.
How Temporal CNN Works:
- Convolutional Layers: Temporal CNNs use 1D convolutional layers that slide over time-series data, applying filters to capture important patterns or features over time. These filters can capture local features like trends, spikes, or periodicity, which are essential for understanding time-based data.
- Dilation: One key feature of Temporal CNNs is the use of dilated convolutions. These allow the model to capture long-range dependencies across time steps without requiring an excessively deep network. By skipping some time steps between the filter taps, dilated convolutions enable the model to process wider temporal contexts efficiently (see the quick receptive-field calculation after this list).
- Residual Connections: Temporal CNNs often use residual connections, which help to prevent vanishing gradients and allow the network to learn deeper representations without losing important features from earlier layers.
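To make the effect of dilation concrete, here is a small back-of-the-envelope sketch using the configuration of the model built later in this section (kernel size 3, dilation rates 1, 2, 4, 8): the stack of four causal convolutions sees about 31 consecutive samples, roughly 0.6 s of accelerometer data at 50 Hz, which comfortably covers the sharp acceleration spike of a fall.
KERNEL_SIZE = 3
DILATION_RATES = [1, 2, 4, 8]   # same settings as the model defined below
INTERVAL_MS = 20                # 50 Hz data

# Receptive field of a stack of causal 1D convolutions:
# each layer adds (kernel_size - 1) * dilation_rate extra time steps of context.
receptive_field = 1 + sum((KERNEL_SIZE - 1) * d for d in DILATION_RATES)

print(receptive_field)                        # 31 samples
print(receptive_field * INTERVAL_MS / 1000)   # ≈ 0.62 seconds of context per output step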
Why Is It Useful
Accelerometer data, such as that provided by the SisFall dataset, is sequential by nature—it consists of measurements taken at regular time intervals. Falls are events that occur suddenly and exhibit distinct patterns in accelerometer data, such as rapid changes in acceleration, sudden peaks, or sharp changes in orientation. These features are crucial for fall detection systems, but identifying them requires analyzing temporal dependencies and patterns across multiple time steps.
Temporal CNNs are well-suited for Fall Detection because:
- Capturing Temporal Patterns: Falls are typically characterized by rapid and abrupt changes in acceleration. Temporal CNNs excel at identifying these temporal patterns, such as spikes or trends in accelerometer signals, over time. The convolutional filters capture these temporal dependencies effectively.
- Scalability to Large Datasets: The SisFall dataset, which contains a large collection of labeled accelerometer data for both activities of daily living (ADL) and fall events, is large and highly varied. Temporal CNNs are well-suited to handle large datasets because of their ability to extract hierarchical features at different temporal scales. By processing large amounts of data efficiently, they can learn to distinguish between subtle differences in data across many classes (like different types of movements or falls).
- Efficiency with Long Sequences: Temporal CNNs are particularly good at processing long sequences of data. With datasets like SisFall, which may contain long time-series data from accelerometers, Temporal CNNs can capture long-range dependencies between events, like the buildup to a fall or the post-fall behavior, without needing excessively deep models.
- Effective Generalization: Temporal CNNs can generalize well to unseen fall events because they focus on learning robust features from the raw data. This is critical in fall detection, as real-world fall events can vary widely in terms of intensity, direction, and body orientation.
- Dimensionality Reduction: The model can learn to focus on the most relevant features by learning temporal dependencies. This reduces the need for extensive manual feature engineering, which is especially useful when dealing with large datasets like SisFall.
Our TCN model summary is as follows:
_________________________________________________________________
 Layer (type)                                 Output Shape        Param #
=================================================================
 input_1 (InputLayer)                         [(None, 600)]       0
 reshape (Reshape)                            (None, 200, 3)      0
 normalization (Normalization)                (None, 200, 3)      0
 conv1d (Conv1D)                              (None, 200, 64)     640
 dropout (Dropout)                            (None, 200, 64)     0
 conv1d_1 (Conv1D)                            (None, 200, 64)     12352
 dropout_1 (Dropout)                          (None, 200, 64)     0
 conv1d_2 (Conv1D)                            (None, 200, 64)     12352
 dropout_2 (Dropout)                          (None, 200, 64)     0
 conv1d_3 (Conv1D)                            (None, 200, 64)     12352
 dropout_3 (Dropout)                          (None, 200, 64)     0
 global_average_pooling1d
 (GlobalAveragePooling1D)                     (None, 64)          0
 dense (Dense)                                (None, 32)          2080
 dropout_4 (Dropout)                          (None, 32)          0
 dense_1 (Dense)                              (None, 2)           66
=================================================================
Total params: 39,842
Trainable params: 39,842
Non-trainable params: 0
The model processes the input data through several layers, including:
- Reshaping and Normalization: The input data is reshaped, and a normalization layer is applied with predefined mean and variance values for the accelerometer data, ensuring the data is standardized before being passed through the network.
- Temporal CNN Block: Multiple 1D convolutional layers with increasing dilation rates (1, 2, 4, 8) capture temporal dependencies in the sequential data. These layers use ReLU activations and dropout for regularization.
- Global Average Pooling: After processing through convolutional layers, global average pooling reduces the temporal dimension, retaining only the most important features.
- Fully Connected MLP Layers: The pooled features are passed through a multi-layer perceptron (MLP) with ReLU activations and dropout for further processing.
- Output Layer: The final output layer uses a softmax activation to classify the data into predefined categories (such as FALL or ADL).
The model is trained for 20 epochs with a learning rate of 0.0005 and a batch size of 32, using the Adam optimizer. The training process incorporates callbacks to monitor progress. The full model training code is given below; note that input_length, classes, train_dataset, validation_dataset, train_sample_count, and BatchLoggerCallback are provided by the Edge Impulse Expert Mode environment.
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Input, Conv1D, Dropout, GlobalAveragePooling1D, Normalization, Reshape
from tensorflow.keras.optimizers import Adam

EPOCHS = 20
LEARNING_RATE = 0.0005
BATCH_SIZE = 32

# TCN block: a causal, dilated 1D convolution followed by dropout
def temporal_cnn_block(inputs, filters, kernel_size, dilation_rate, dropout=0):
    x = Conv1D(filters=filters, kernel_size=kernel_size, padding="causal",
               dilation_rate=dilation_rate, activation="relu")(inputs)
    x = Dropout(dropout)(x)
    return x

def build_model(
    input_shape,
    filters,
    kernel_size,
    dilation_rates,
    mlp_units,
    dropout=0,
    mlp_dropout=0,
):
    inputs = Input(shape=input_shape)
    # Reshape the flat 600-value window into (200 time steps, 3 axes)
    x = Reshape([int(input_length / 3), 3])(inputs)

    # Normalization layer with per-axis mean and variance of the accelerometer data
    x = Normalization(axis=-1,
                      mean=[-0.047443, -6.846333, -1.057524],
                      variance=[16.179484, 33.019396, 22.892909])(x)

    # Temporal CNN layers with increasing dilation rates
    for dilation_rate in dilation_rates:
        x = temporal_cnn_block(x, filters=filters, kernel_size=kernel_size,
                               dilation_rate=dilation_rate, dropout=dropout)

    x = GlobalAveragePooling1D()(x)

    for dim in mlp_units:
        x = Dense(dim, activation="relu")(x)
        x = Dropout(mlp_dropout)(x)

    outputs = Dense(classes, activation="softmax")(x)
    return Model(inputs, outputs)

# input_length, classes, train_dataset, validation_dataset, train_sample_count and
# BatchLoggerCallback are provided by the Edge Impulse Expert Mode environment.
input_shape = (input_length, )

model = build_model(
    input_shape,
    filters=64,
    kernel_size=3,
    dilation_rates=[1, 2, 4, 8],
    mlp_units=[32],
    mlp_dropout=0.40,
    dropout=0.25,
)

# Optimizer and compilation
opt = Adam(learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999)

# Add any callbacks you might have
callbacks = []
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count, epochs=EPOCHS))

# Train the neural network
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.summary()

train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)
model.fit(train_dataset, epochs=EPOCHS, validation_data=validation_dataset, verbose=2, callbacks=callbacks)
After training the model, we achieved an accuracy of 98.9%, which is particularly encouraging given the size and variety of the dataset.
After training and fine-tuning our model, we proceeded to test its performance on unseen data using the Model Testing tab and the Classify All feature. This step allows us to evaluate the model's ability to accurately detect falls. The model's strong performance on the test data, achieving high classification accuracy, suggests that it is reliable and capable of providing valuable insights for real-world applications.
On the Deployment page, select the "Create Library" option and choose "Particle Library", which packages the trained impulse as a Particle library.
1. Connect the Photon to Your Computer: Start by connecting the Photon device to your computer using the included micro USB cable. Confirm that the device powers up, indicated by the LED turning on.
2. Create and Access Your Particle Account: If you do not already have a Particle account, visit the Particle website to sign up. After registering, log in to your account to proceed with the device setup and manage your devices.
3. Setup Photon2
- Open your browser and go to setup.particle.io to begin configuring your Photon device. The setup wizard will guide you through linking the device to your Particle account.
- Ensure that your Photon is connected to your computer and powered on before continuing.
- If the Photon is detected, it will appear as "P2" in the setup interface. Select this option to move forward.
- Select P2 and continue the setup.
- When you select "P2," the device will automatically switch to DFU mode, where the LED will blink yellow, signaling it's ready for firmware flashing.
- With the Photon now in DFU mode, the next step is to flash the device with the necessary firmware. This step ensures that your Photon is ready to function properly and communicate with the cloud platform.
- At this point, you can either create a new product or assign the Photon to an existing one.
- It is essential to assign a unique name to the device so it can be easily identified within your Particle account.
- After flashing, you will need to configure the Photon’s Wi-Fi settings. Enter the required credentials for your Wi-Fi network so that the device can connect to the internet and become operational.
- And, you're all set!
Setting Up Particle Workbench
If you want to develop the firmware and flash it locally without using Particle Web IDE, you can follow this guide to set up Particle Workbench in VS Code.
Integrating Twilio For SMS Alerts
Integrating Twilio with a Particle Photon 2 device allows you to send SMS alerts directly from the Photon 2, making it ideal for applications where immediate notifications are crucial, such as in fall detection systems. Follow these steps to set up and integrate Twilio with your Photon 2 for sending SMS alerts.
Set Up a Twilio Account and Get Your Credentials
- If you don’t already have one, sign up for a Twilio account at Twilio's website.
- Once your account is set up, obtain your Account SID and Auth Token from the Twilio Console, as well as a Twilio phone number. You’ll need these to authorize and send SMS messages through Twilio.
Set Up a Twilio Integration in the Particle Console
- Log in to the Particle Console and navigate to the Integrations section.
- Click on New Integration and select Twilio.
- Event Name: Choose an event name, like twilio_sms_alert, which the Photon 2 will trigger when it needs to send an SMS.
- Parameters: Set up the parameter fields as follows:
  - Username: Your Twilio Account SID.
  - Password: Your Twilio Auth Token.
  - Twilio SID: Your Twilio Account SID.
- Form Data: Set up the form data fields as follows:
  - From: Your Twilio phone number.
  - To: The recipient's phone number (the one to which SMS alerts will be sent).
  - Body: The message text, which can include dynamic values such as {{PARTICLE_EVENT_VALUE}} if you want to customize the message from the Photon 2 code.
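Before wiring this into the firmware, you can verify the integration end to end by publishing a test event from your computer through the Particle Cloud API. The sketch below is an illustration only: it assumes PARTICLE_TOKEN holds a valid Particle access token and that the integration above listens for the event name twilio_sms_alert. If everything is configured correctly, the recipient number should receive the test message.
import requests

PARTICLE_TOKEN = "<particle_access_token>"  # generate an access token from your Particle account

# Publish an event with the same name the Twilio integration listens for.
# The event data becomes {{PARTICLE_EVENT_VALUE}} in the SMS body.
res = requests.post(
    "https://api.particle.io/v1/devices/events",
    data={
        "name": "twilio_sms_alert",
        "data": "Test alert from the fall detector",
        "private": "true",
        "access_token": PARTICLE_TOKEN,
    },
)
print(res.status_code, res.json())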
If you choose to deploy your project as a Particle Library rather than a pre-built binary, follow these steps to flash your firmware from Particle Workbench:
- Open a new VS Code window and ensure that Particle Workbench is installed.
- Open the VS Code Command Palette and run Particle: Import Project.
- Select the project.properties file in the directory that you downloaded from Edge Impulse.
- Open the VS Code Command Palette and run Particle: Configure Project for Device.
- Select deviceOS@5.9.0.
- Choose a target (e.g. P2; this option is also used for the Photon 2).
- Compile and flash in one command with Particle: Flash application & DeviceOS (local).
The core of this project is the Particle Photon 2 microcontroller, a lightweight and powerful device ideal for real-time fall detection. It supports 2.4 GHz and 5 GHz Wi-Fi, ensuring reliable connectivity in various network environments. Powered by a Realtek RTL8721DM processor with an ARM Cortex M33 CPU running at 200 MHz, the Photon 2 provides the processing power needed for complex, high-speed applications. Its compact form factor and IoT compatibility make it easy to integrate into wearable devices.
Its onboard RGB LED is utilized in this project to visually indicate device status. The RGB LED changes color to signal different states, such as alerting mode during a detected fall or normal operation during routine monitoring. This built-in LED provides a quick, at-a-glance way to understand the device's status without needing additional display components, enhancing user awareness in a compact and efficient manner.
The ADXL362 3-axis accelerometer is used in this project to capture movement data. This high-performance sensor detects sudden accelerations along the X, Y, and Z axes, which is essential for recognizing falls. Its ultra-low power consumption, drawing just 1.8 µA in measurement mode and 300 nA in standby, is a significant advantage: it ensures minimal battery drain, making it ideal for wearable and battery-powered applications.
A momentary tactile push button module is used in this project for user input. This component allows users to interact directly with the system, providing a simple way to control the device without additional software interfaces.
Breadboard Prototype
First, we connected everything on a breadboard to test out the project.
The enclosure for the fall detection system was modeled using Fusion 360 to achieve a compact, wearable watch design, allowing the device to be comfortably worn on the wrist. This configuration enables close monitoring of movement, enhancing fall detection accuracy.
The watch enclosure consists of an upper part and a lower part that securely holds the internal components. These two parts are fastened together using M3 x 10 mm screws. A small button case is also designed to press the push button.
We began assembly by attaching the accelerometer to the back section of the Photon2, carefully soldering the wires in place.
To power the device, we used a 400 mAh LiPo battery with compatible connectors, placing it above the accelerometer on the Photon2.
Then we secured the push button module.
Finally, we completed all the connections and finished the soldering work.
Next, we secured the upper and lower sections with M3 screws and positioned the push button case in place.
Here is our completed device with the straps attached.
The fall detector system continuously monitors the accelerometer to identify potential falls. When a fall is detected, the onboard RGB LED changes color to notify the wearer. If it’s a false detection, the user can press a push button to cancel the alert, signaling that they are safe. However, if the button is not pressed within 5 seconds, the system interprets the event as a legitimate fall and automatically sends an alert to a designated contact, ensuring timely assistance if needed.
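The firmware runs on the Photon 2 itself, but the decision flow is simple enough to sketch as Python-style pseudologic. Everything below, including the helper names classify_window, button_pressed, set_led, and publish_event and the 5-second cancellation window, mirrors the behaviour described above rather than the actual device code, so treat it purely as an illustration.
import time

CANCEL_WINDOW_S = 5  # seconds the wearer has to dismiss a false detection

def handle_detection(classify_window, button_pressed, set_led, publish_event):
    # Illustrative decision flow: classify a window, wait for cancellation, then alert
    label, confidence = classify_window()          # e.g. ("FALL", 0.97) from the deployed impulse
    if label != "FALL":
        set_led("normal")                          # routine monitoring colour
        return

    set_led("alert")                               # notify the wearer that a fall was detected
    deadline = time.monotonic() + CANCEL_WINDOW_S
    while time.monotonic() < deadline:
        if button_pressed():                       # wearer confirms they are safe
            set_led("normal")
            return
        time.sleep(0.05)

    # No cancellation within the window: treat it as a real fall and raise the alarm
    publish_event("twilio_sms_alert", "Fall detected! Please check on the wearer.")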
This fall detection wearable redefines independence for seniors, combining practicality with powerful, precise technology to create a safety net where it matters most. By seamlessly integrating the ADXL362 accelerometer with the Particle Photon 2 and leveraging advanced algorithms, this device provides rapid, reliable detection while reducing false alarms through an intuitive push-button feature.
Beyond simple monitoring, this wearable ensures seniors are connected to timely assistance in the critical minutes following a fall when immediate intervention can be life-saving. For caregivers and loved ones, this device offers much-needed peace of mind, fostering a sense of security without compromising the wearer's independence. This fall detector wearable is more than a device; it's a step toward safer, more dignified aging, underscoring the vital role of technology in supporting independence and resilience in the years to come.