The use of radio-frequency (RF) sensors in human-centered systems, such as human-computer interfaces and smart environments, is an emerging field that aims to recognize human motion in real time. While various RF sensors are used in this area of research, only a few studies have directly compared Radar and Wi-Fi technologies. To address this gap, this study collects datasets using Radar and Wi-Fi, creates spectrograms, and conducts a side-by-side comparison to assess the efficiency of both systems.
In order to collect raw radar samples, one needs to use SPI Mode (1.1.3 SPI communication, AN650, page 4). According to the Infineon team, SPI Mode is not supported on the S2GO RADAR BGT60LTR11, so the project moved forward with the INRAS Radarbook2. Likewise, given the limited time available to explore firmware modification of the PSoC™ 62S2 Wi-Fi BT Pioneer Kit to obtain Channel State Information (CSI) from Wi-Fi, a Raspberry Pi 3B+ is used instead; the PSoC™ 62S2 will be explored for CSI extraction in future work. The dataset is obtained using the Radar and a Pi flashed with Nexmon firmware, and both the datasets and the associated code are made publicly accessible. The findings reveal that Radar outperforms Wi-Fi on a 7-class human activity recognition task by 32.7%. These results underscore the superiority of radar technology in the field of Human Activity Recognition (HAR) while also highlighting the potential of Wi-Fi for indoor activity monitoring.
I. INTRODUCTION
Due to advancements in solid-state transceiver technology, the cost of radio frequency (RF) sensors has become affordable, making them easily accessible for a wide range of applications such as human activity recognition (HAR) [1], [2], defense and security [3], [4], mini-UAV classification [5], advanced driver assistance systems (ADAS) [6]–[8], indoor monitoring [9], and health monitoring [10]. Furthermore, the introduction of low-cost software-defined RF sensors has spurred new research in areas such as interpretable radar-based activity recognition [11] and continuous RF gait analysis [12]. Unlike cameras, these sensors do not capture biometric imagery, making them an excellent choice for human activity recognition. RF sensors can detect human motion through backscattered signals, independent of clothing, making them suitable for environments where constant video camera surveillance is not desirable.
With the growing popularity of research in human activity recognition using Wi-Fi, we embarked on a comparative exploration of two distinct RF technologies: Wi-Fi Channel State Information (CSI) and Radar. Human activity recognition using Wi-Fi and Radar comprises the detection and categorization of physical activities by analyzing signals from Wi-Fi devices, such as smartphones and laptops, and from Radar transceivers. By training machine learning algorithms, we can discover distinctive patterns that correlate with various human activities. Wi-Fi and Radar operate on different principles and thus offer different perspectives. CSI, in particular, provides a more comprehensive view of the channel state: beyond the frequency response, it encompasses the phase response and furnishes information on channel gain, delay, and Doppler shift. CSI has been used for device fingerprinting [13] and location fingerprinting [14], [15], among many other use cases. Radar, by contrast, uses a transceiver to capture the reflected signal, whose micro-Doppler shifts reveal activity. Our examination of these two technologies illuminates their unique potential and limitations in the domain of human activity recognition.
While there are wearable devices capable of identifying motion, particularly falls, they are battery-operated and can be turned off at any time. They either rely on accelerometers to detect motion or require the wearer to press a button, which is not always reliable. In contrast, the two sensors used in this project, Radar and Wi-Fi, offer non-obtrusive passive motion-sensing technology, providing real-time notifications to caregivers and first responders about critical events related to the health and well-being of the monitored individual.
Previous work [16] has compared three sensing modalities: camera, Radar, and Wi-Fi. The survey in [17] provides a thorough overview of different sensor types, including Radar, Wi-Fi, and cameras. However, the literature reviewed during this writing lacked a direct, experimental comparison of these sensors. Our proposed work aims to evaluate and compare the efficiency of two distinct technologies, Radar and Wi-Fi, for human activity recognition. This comparative analysis sheds light on the distinctive features and advantages of each approach, contributing to the uniqueness and novelty of this research. By examining these two methods, we seek to provide valuable insights into their respective capabilities, helping researchers and practitioners make informed decisions when choosing the most suitable technology for their specific applications.
In our research, we have made significant contributions to the field of human activity recognition by utilizing Wi-Fi and Radar technology and introducing novel approaches. Our contributions include:
- Open Access Datasets and Code: We have collected and shared Radar and Wi-Fi datasets [18], along with the corresponding code, to help further research in this domain, making these valuable resources publicly accessible.
- Spectrogram Comparison: Our paper is the first to compare Radar and Wi-Fi technologies using spectrogram analysis, providing a unique perspective on their performance and potential applications in human activity recognition.
In the upcoming sections, we will explore the essential components of our study. In Section II, we discuss the Wi-Fi signal model and its application in human activity recognition, specifically through CSI. Similarly, Section III introduces the Radar signal model, providing details on how Radar data can be effectively used in our research. Section IV outlines the experimental setup employed for the collection of both Radar and Wi-Fi data. Section V presents our performance comparison results, offering a comprehensive evaluation of these two technologies. Finally, in Section VI, we draw conclusions based on our findings and outline potential future work.
II. WI-FI SIGNAL MODEL
In the presence of Wi-Fi, human activity affects the propagating signals. When a person moves, the body reflects and scatters the Wi-Fi signals, causing a slight change in the frequency of the received signal, i.e., a Doppler shift. In other words, the signal reflected from a moving person carries a Doppler shift relative to the original signal, and this shift can be observed in the CSI. CSI is represented as a complex-valued matrix, with one matrix element corresponding to each subcarrier of the Wi-Fi signal.
The Doppler effect is the main component of this system. It is the change in frequency or wavelength of a wave in relation to an observer who is moving relative to the wave source, and it can be detected by measuring the change in the wavelength or frequency of the wave. The Doppler shift is given by

$$\Delta f = \pm \frac{v}{c} f,$$

where $f$ is the frequency of the wave in Hz, $v$ is the relative velocity of the source and observer in meters per second, $c$ is the speed of the wave, and the sign is positive when the source moves towards the observer and negative when it moves away. For example, at $f = 5$ GHz, a body moving at $v = 1$ m/s induces a shift of roughly 17 Hz. Wi-Fi uses Orthogonal Frequency Division Multiplexing (OFDM), in which the bandwidth is divided equally into subcarriers as shown in Figure 1. Each subcarrier carries either user data, a pilot (for phase synchronization), or a null tone (reference signal). In a hardware configuration with $t$ transmit antennas and $r$ receive antennas, the CSI takes the form

$$\mathbf{H} = \begin{bmatrix} H_{1,1} & \cdots & H_{1,r} \\ \vdots & \ddots & \vdots \\ H_{t,1} & \cdots & H_{t,r} \end{bmatrix},$$
where $H_{t,r}$ represents a vector containing one complex pair per subcarrier. The number of subcarriers depends on the hardware configuration as well as the bandwidth of the Wi-Fi protocol. For this experiment, 5 GHz Wi-Fi with 80 MHz bandwidth is used, and CSI is extracted on a Raspberry Pi running Nexmon, as discussed in Section IV-A.
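To make the Doppler observation concrete, the following Python sketch forms a Doppler spectrogram from the time series of a single subcarrier. This is our own illustration, not the released code: the array shapes, packet rate, and STFT parameters are assumptions, and random data stands in for a real capture.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical capture for one TX-RX pair: n_pkt packets of CSI with
# n_sub subcarriers (80 MHz 802.11ac gives 256), at fs packets/second.
n_pkt, n_sub, fs = 2000, 256, 200
H = (np.random.randn(n_pkt, n_sub)
     + 1j * np.random.randn(n_pkt, n_sub))  # stand-in for decoded CSI

# Time series of a single subcarrier; body motion modulates its complex
# value from packet to packet, which appears as a Doppler shift.
x = H[:, 100]

# Short-time Fourier transform over the packet (slow-time) axis; the
# input is complex, so we keep the two-sided (+/- Doppler) spectrum.
f, t, Z = stft(x, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram = np.abs(np.fft.fftshift(Z, axes=0)) ** 2  # power, 0 Hz centered
```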
III. RADAR SIGNAL MODEL
The INRAS Radarbook2 functions as a frequency-modulated continuous wave (FMCW) radar system, with an operating frequency range spanning from 76 GHz to 80 GHz. This system emits chirp signals directed towards the radar's field of view. These transmitted signals bounce off the target, in our case humans, so the radar receives a signal that has undergone frequency shifts and time delays relative to the initially transmitted signal. The kinematic characteristics of each human target movement give rise to a dynamic sequence of micro-motions, such as vibrations and rotations, as outlined in Chen et al.'s work [19]. Each unique gesture generates its own distinct patterns, which can be analyzed effectively through time-frequency analysis techniques. The µ-D spectrogram is then calculated from the square modulus of the Short-Time Fourier Transform (STFT) of the discrete-time input signal $x[k]$ and may be described in terms of the window function $h[k]$ as

$$S(m, \omega) = \left| \sum_{k=-\infty}^{\infty} x[k]\, h[k-m]\, e^{-j\omega k} \right|^{2}.$$
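As a sketch of this processing chain (with assumed array shapes and STFT parameters rather than the Radarbook2 API), the µ-D spectrogram can be formed by a range FFT over fast time, summing the range bins containing the target, and an STFT over slow time:

```python
import numpy as np
from scipy.signal import stft

# Assumed raw FMCW cube for one TX-RX pair: n_chirps chirps of
# n_samples ADC samples; prf is the chirp repetition frequency.
n_chirps, n_samples, prf = 10000, 256, 1000   # ~10 s at 1 kHz PRF
raw = np.random.randn(n_chirps, n_samples)    # stand-in for real data

# Range profile per chirp: FFT over fast time (within each chirp).
rp = np.fft.fft(raw, axis=1)

# Sum the range bins around the target (~2 m in our setup, bin indices
# here are illustrative) to obtain a single slow-time signal x[k].
x = rp[:, 10:20].sum(axis=1)

# mu-D spectrogram: squared modulus of the STFT with window h[k].
f, t, Z = stft(x, fs=prf, window="hann", nperseg=256, noverlap=192,
               return_onesided=False)
mu_d = np.abs(np.fft.fftshift(Z, axes=0)) ** 2  # center 0 Hz Doppler
```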
Figure 2 illustrates the process of creating a μ-D spectrogram from 2D raw radar data. Figure 3 presents examples of μ-D signatures for various activities, represented as color-scaled images. Positive Doppler frequencies are visualized above the horizontal axis and negative Doppler frequencies below it, with 0 Hz at the center of the frequency axis.
IV. EXPERIMENTAL SETUP AND DATASET
To perform the data collection, three different sensors, the INRAS Radarbook2 radar, a Raspberry Pi 3B+, and an Azure Kinect camera, are used to capture both kinematic movement and visual data. The Raspberry Pi 3B+ is used for Wi-Fi CSI data collection, while the Azure Kinect camera serves as a reference for each sample collected from the Radar and the Raspberry Pi. The following subsections discuss the setup for radar and Wi-Fi CSI data collection.
A. Experimental Setup for Wi-Fi
It is unfortunate that Wi-Fi chip manufacturers have not made CSI accessible to researchers. To fill this gap, tools such as the Atheros CSI tool [20], the 802.11n Linux CSI tool [21], and Nexmon [22] were developed, and they have been used in much prior research. Among these, we used the Nexmon firmware flashed onto a Raspberry Pi 3B+ to collect CSI data. Our data collection setup is shown in Figure 4.
Nexmon is a C-based firmware patching framework that supports several Broadcom/Cypress Wi-Fi chips. It has gained considerable popularity due to its support for the bcm43455c0 chip found in the Raspberry Pi 3B+/4B. The Raspberry Pi is a very low-cost single-board computer, making it an excellent platform for flashing this firmware to collect CSI data [23]. This method allows one to extract CSI of 802.11a/g/n/ac channels up to 80 MHz bandwidth. At the time of writing, the Nexmon GitHub repository supports kernel version 5.10.92, whereas the current Raspberry Pi OS kernel is 5.15, which is not supported. So, in addition to the steps in the creators' repository, we had to hold the kernel package at the supported version to compile the firmware successfully. Wi-Fi CSI data is collected on a 5 GHz channel with 80 MHz bandwidth, which provides 256 subcarriers. Subcarrier indices 90–120 are used, generating one spectrogram per index.
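The per-index spectrogram generation can be sketched as follows. This is a minimal illustration that assumes the CSI has already been decoded from the Nexmon capture into a complex packets-by-subcarriers array; the capture and decoding steps, the sampling rate, and the STFT parameters are not specified by the paper and are assumptions here.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import stft

def save_subcarrier_spectrograms(csi, fs, out_prefix="sample"):
    """csi: complex array of shape (n_packets, 256) decoded from a
    Nexmon capture; writes one grayscale PNG per subcarrier index in
    the 90-120 range used in our experiments."""
    for idx in range(90, 121):
        f, t, Z = stft(csi[:, idx], fs=fs, nperseg=128, noverlap=96,
                       return_onesided=False)
        # Log-magnitude image, zero Doppler centered vertically.
        s = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
        plt.imsave(f"{out_prefix}_sc{idx}.png", s, cmap="gray")
```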
B. Experimental Setup for Radar
The INRAS Radarbook2 is used for radar data collection. The radar operates from 76 to 80 GHz, with two transmitters (TX) and 16 receivers (RX). Since the radar is used solely to capture the movements of the targets, a single TX-RX pair is used for the experiment. The device can be initialized with different parameters depending on the situation. Table I shows the parameters set for the INRAS Radarbook2 radar for the experiment.
The radar was positioned on a table at a height of 1 meter prior to data collection. Participants stood at a distance of 2 meters (6.5 ft) in front of the radar. Data was collected for 10 seconds per sample.
C. Dataset
A dataset of 7 activities (fall, lie down, pick up, run, sit down, stand up, and walk) is developed for this study. The activities are chosen to be distinct from one another. Figure 3 depicts each gesture made by a participant. In total, 700 samples were collected across the 7 classes; each class comprises 100 samples collected from 5 subjects, i.e., each subject provided 20 samples per class. The first 16 samples (80%) from each subject were used for training and the last 4 (20%) for testing, as sketched below. The upcoming section delves into the classification performance achieved with both Wi-Fi and FMCW radar data and provides an in-depth explanation of the 2D Convolutional Neural Network (CNN) architecture used for the classification task.
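For concreteness, the per-subject split described above can be sketched as follows; the data structure is our own illustration, and the actual file organization of the released dataset may differ.

```python
def per_subject_split(samples, n_train=16):
    """samples: dict mapping (subject, activity) to the ordered list of
    20 recordings for that pair. Returns the 80/20 train/test split
    described above: first 16 recordings train, last 4 test."""
    train, test = [], []
    for recordings in samples.values():
        train.extend(recordings[:n_train])  # first 16 samples (80%)
        test.extend(recordings[n_train:])   # last 4 samples (20%)
    return train, test
```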
V. PERFORMANCE COMPARISON
A. 2D CNN Architecture
For the classification of Wi-Fi and radar μ-D spectrograms, a 2D CNN structure has been designed. As depicted in Fig. 5, this CNN architecture is composed of three convolution blocks (CB), with each block featuring two convolution layers. The convolution layers of the first two CBs are equipped with 32 filters, while the last CB employs 64 filters for its two convolution layers. Each convolution layer utilizes a 3×3 kernel size and a 1×1 stride. Following the two convolution layers in each block, there is a sequence of operations: a 3×3 max-pooling, batch normalization, ReLU activation, and dropout with a rate of 0.3. Subsequently, the tensor is flattened and fed into a dense layer with a size of 128. Then a dropout operation with a rate of 0.3 and ReLU activation are applied. Finally, the network concludes with a softmax classifier.
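One possible rendering of the described architecture in Keras is shown below; the framework choice, "same" padding mode, and training settings are assumptions, as the paper does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_2d_cnn(input_shape=(128, 128, 1), n_classes=7):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 32, 64):          # three convolution blocks (CB)
        for _ in range(2):                # two convolution layers per CB
            x = layers.Conv2D(filters, kernel_size=3, strides=1,
                              padding="same")(x)  # padding is assumed
        x = layers.MaxPooling2D(pool_size=3)(x)   # 3x3 max-pooling
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.Dropout(0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128)(x)
    x = layers.Dropout(0.3)(x)
    x = layers.ReLU()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_2d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])  # training settings are assumptions
```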
B. Performance Evaluation
For performance evaluation, the dataset was divided into 80% for training and 20% for testing, as mentioned in Section IV-C. Spectrogram images derived from radar and Wi-Fi data were saved as 128x128 grayscale PNG images. The comparison between radar-based and Wi-Fi-based signatures is presented in Table II. It is evident from the table that radar is more effective for human activity classification: radar achieves a classification accuracy of 97.78%, while Wi-Fi achieves only 65.09%, meaning radar outperforms Wi-Fi-based activity classification by an absolute margin of 32.7%.
Despite the dataset's challenging nature, given the variability in performed gestures among individuals, the confusion matrix depicted in Fig. 6b illustrates how accurately the 2D CNN distinguished between classes for radar-based spectrograms. Conversely, the confusion matrix in Fig. 6a shows the performance of Wi-Fi-based human activity classification. While Wi-Fi lags significantly behind radar in accuracy, the results suggest substantial potential for Wi-Fi in this domain; further advancements could open up new opportunities across various applications.
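For reference, the reported metrics can be reproduced from a trained model along these lines; this is a sketch assuming scikit-learn for the accuracy and confusion-matrix computations, which the paper does not name.

```python
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate(model, x_test, y_test):
    """Return test accuracy and the 7x7 confusion matrix (cf. Fig. 6)
    for a trained classifier on the held-out 20% split."""
    y_pred = model.predict(x_test).argmax(axis=1)
    return accuracy_score(y_test, y_pred), confusion_matrix(y_test, y_pred)
```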
VI. CONCLUSION AND FUTURE WORK
The aim of this research is to conduct an initial comparative analysis of the effectiveness of Wi-Fi and Radar in recognizing human gestures. The findings indicate that radar-based activity recognition outperformed Wi-Fi-based recognition by a substantial margin of 32.7%. With a testing accuracy of 97.78%, radar has proven highly effective in this context. While Wi-Fi yielded a more modest accuracy of 65.09%, it holds promise for future improvements in indoor monitoring as the field advances. It is worth noting that this initial study was conducted within a controlled laboratory environment and under specific guidance, and we have yet to analyze the data collected from cameras. In future work, we plan a more extensive analysis that includes radar, cameras, and Wi-Fi, using a larger and more diverse dataset encompassing a wider range of typical indoor activities. Implementing real-time classification across these sensor modalities is expected to open new opportunities for investigating and monitoring human body movements in indoor settings. In addition, once the S2GO RADAR BGT60LTR11 gains SPI mode support, its raw radar data can be used to generate µ-D signatures, enabling many further experiments; for example, the PSoC™ 62S2 Wi-Fi BT Pioneer Kit may be modified to extract Wi-Fi CSI data, which can then be fused with radar data.
Lastly, an important step for development is the adaptation of these systems for real-time applications through the creation of a real-time notification system using the PSoC™ 62S2 Wi-Fi BT Pioneer Kit. This goal involves optimizing algorithms and models for low-latency, resource-efficient deployment. Such enhancements would enable the system to recognize human activities in real time, offering potentially life-saving applications (e.g., fall detection), particularly in environments like nursing homes.
REFERENCES
[1] S. Gurbuz, Ed., Deep Neural Network Design for Radar Applications. London: IET, 2020.
[2] E. Kurtoğlu, S. Biswas, A. C. Gurbuz, and S. Z. Gurbuz, "Boosting multi-target recognition performance with multi-input multi-output radar-based angular subspace projection and multi-view deep neural network," IET Radar, Sonar & Navigation, vol. 17, no. 7, pp. 1115–1128, 2023.
[3] F. Fioranelli, M. Ritchie, and H. Griffiths, "Classification of unarmed/armed personnel using the netrad multistatic radar for micro-Doppler and singular value decomposition features," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 9, pp. 1933–1937, 2015.
[4] Z. Ni and B. Huang, “Gait-based person identification and intruder detection using mm-wave sensing in multi-person scenario,” IEEE Sensors Journal, vol. 22, no. 10, pp. 9713–9723, 2022.
[5] A. Huizing, M. Heiligers, B. Dekker, J. de Wit, L. Cifola, and R. Harmanny, "Deep learning for classification of mini-uavs using micro-doppler spectrograms in cognitive radar," IEEE Aerospace and Electronic Systems Magazine, vol. 34, no. 11, pp. 46–56, 2019.
[6] S. Biswas, B. Bartlett, J. E. Ball, and A. C. Gurbuz, “Classification of traffic signaling motion in automotive applications using fmcw radar,” in 2023 IEEE Radar Conference (RadarConf23), 2023, pp. 1–6.
[7] G. Hakobyan and B. Yang, “High-performance automotive radar: A review of signal processing algorithms and modulation schemes,” IEEE Signal Processing Magazine, vol. 36, no. 5, pp. 32–44, 2019.
[8] S. Biswas, J. E. Ball, and A. C. Gurbuz, "Radar-lidar fusion for classification of traffic signaling motion in automotive applications," in 2023 IEEE International Radar Conference, Sydney, Australia, 2023.
[9] S. Z. Gurbuz and M. G. Amin, “Radar-based human-motion recognition with deep learning: Promising applications for indoor monitoring,” IEEE Signal Processing Magazine, vol. 36, no. 4, pp. 16–28, 2019.
[10] F. Fioranelli and J. L. Kernec, “Contactless radar sensing for health monitoring,” in Engineering and Technology for Healthcare, 2021, pp. 29–59.
[11] S. Biswas, C. O. Ayna, S. Z. Gurbuz, and A. C. Gurbuz, “Complex sincnet for more interpretable radar based activity recognition,” in 2023 IEEE Radar Conference (RadarConf23), 2023, pp. 1–6.
[12] S. Z. Gurbuz, E. Kurtoglu, M. M. Rahman, and D. Martelli, “Gait variability analysis using continuous rf data streams of human activity,” Smart Health, vol. 26, p. 100334, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S235264832200068X
[13] L. Smith, N. Smith, S. Kodipaka, A. Dahal, B. Tang, J. E. Ball, and M. Young, "Effect of the short time fourier transform on the classification of complex-valued mobile signals," in Proc. SPIE 11756, Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, May 2021, p. 117560Y.
[14] N. Smith, L. Smith, S. Kodipaka, A. Dahal, B. Tang, J. E. Ball, and M. Young, "Real-time location fingerprinting for mobile devices in an indoor prison setting," in Proc. SPIE 11756, Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, April 2021, p. 1175612.
[15] F. Islam, J. Farmer, A. Dahal, B. Tang, J. E. Ball, and M. Young, "Wi-Fi fingerprinting-based room-level classification: Combining short term fourier transform and imbalanced learning method," in Proc. SPIE 12122, Signal Processing, Sensor/Information Fusion, and Target Recognition XXXI, 2022, p. 121220Y.
[16] R. Zhang, X. Jing, S. Wu, C. Jiang, J. Mu, and F. R. Yu, “Device-free wireless sensing for human detection: The deep learning perspective,” IEEE Internet of Things Journal, vol. 8, no. 4, pp. 2517–2539, 2021.
[17] B. Fu, N. Damer, F. Kirchbuchner, and A. Kuijper, "Sensing technology for human activity recognition: A comprehensive survey," IEEE Access, vol. 8, 2020.
[18] A. Dahal, S. Biswas, S. Gurbuz, and A. Gurbuz. (2023) Wi-Fi radar comparison. GitHub repository. [Online]. Available: https://github.com/AjayaDahal/Wi-Fi-Radar-Comparison
[19] V. C. Chen, D. Tahmoush, and W. J. Miceli, Radar micro-Doppler signatures. Institution of Engineering and Technology, 2014.
[20] "Atheros CSI tool." [Online]. Available: https://wands.sg/research/wifi/AtherosCSI/
[21] "Linux 802.11n CSI tool." [Online]. Available: https://dhalperi.github.io/linux-80211n-csitool/
[22] SEEMOO Lab, "seemoo-lab/nexmon," Jan 2019. [Online]. Available: https://github.com/seemoo-lab/nexmon
[23] F. Gringoli, M. Schulz, J. Link, and M. Hollick, "Free your csi: A channel state information extraction platform for modern wi-fi chipsets," in Proceedings of the 13th International Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization, ser. WiNTECH '19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 21–28. [Online]. Available: https://doi.org/10.1145/3349623.3355477