Let's Understand Menopause.
Menopause is the stage of life that follows the end of the menstrual cycles.
Menopause affects a person’s health and well-being, but it is not a disease, and it does not mean the body is failing or that the person is getting old. The hormonal changes that precede menopause begin when a person is in their 30s or 40s. In 2017, the average 50-year-old female could expect to live to at least 83 years of age. Perimenopause starts less than halfway through the average lifespan. As life expectancy increases and attitudes to aging evolve, people are starting to see menopause as a new beginning rather than an end.
The average age of menopause is 51.
According to Hinduism, this stage of life falls under Vanaprastha, the retirement stage, in which a person hands over household responsibilities to the next generation, takes an advisory role, and gradually withdraws from the world. Vanaprastha was a transition from a householder's life, with its greater emphasis on Artha and Kama (wealth, security, pleasure and sexual pursuits), to one with greater emphasis on Moksha (spiritual liberation).
Problems Related To Menopause:
Women have historically had to “suffer in silence” when it came to their health concerns around menopause, the symptoms of which can last from 4 to 20 years or more. And, not only will nearly every woman in the world experience menopause at some point, but the people in their lives may also be impacted as they navigate this important phase.
Moreover, according to the AARP Menopause Survey, 42 percent of women between the ages of 50 and 59 who participated say they've never discussed menopause with a health care provider. The scale of the problem keeps growing: an estimated 1 billion women globally will be experiencing menopause symptoms by 2025, and three out of four women who seek help for symptoms don't receive it.
With children grown or on their way to independence and a career that's well established, women in menopause have more time to take care of themselves, but they need a proper diet and brain food to manage stress and fatigue.
"There is no better time for a health makeover. Many women in menopause are receptive to making changes that will maintain or improve their health." - Dr. Santoro
Each woman is different and will respond in her own way - both physically and emotionally - to the changes that menopause brings. Many femtech companies have directed their efforts at problems like hot flashes, sleep disorders, and more.
According to the AARP menopause studies, mood swings, stress, and sleep problems are among the biggest concerns for a woman and her partner. Technology like ECG can pick up some signs of this distress but is not always accurate, whereas an interactive assistant that responds to the user and motivates her helps maintain her mood and presence of mind. A proper balance of sound/music and light therapy (such as red light therapy) can help manage these symptoms.
Imagine an assistant that can respond interactively by showing GIFs, videos, or music according to the user's mood and commands. That is what I set out to build: an assistant that gives motivation and encourages spirituality. It will not only reduce women's anxiety but also give them a sense of self-control.
Sound Therapy has given me more than I could have asked for or expected, I have suffered serious Post Natal Depression for almost 18 months involving hospital and loads of medication but in the last couple of months I can confidently say that my mental health has been free from the dark and frightening choke hold on my daily life and I have felt fantastically normal for the first time in what feels like years. Where has Sound Therapy been all my life!? -Yasmin Hibbins, Victoria
Mental distress can be eased by stimulating the senses: touch, vision, sound, smell, temperature, and so on, and the results are much more significant when these exercises become part of a daily routine. Notice how we react when we smell something pleasant: we forget our thoughts, try to guess what it might be, and are drawn to find out where it came from. A scent feature can therefore be added to the device so that it distracts the mind and shifts it into a calm, peaceful mode.
The MATRIX Creator comes with an on-board microphone array, an LED array, and various sensors, so the device can easily be trained as an assistant using Rhasspy.
The LED array provides the basis for light therapy, since we can control the frequency and pattern of the lights from the data provided; the connected screen can even show visuals/illusions.
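For a sense of how simple that control is, here is a minimal sketch using the MATRIX Lite Python library installed later in this guide; it just fills the whole LED ring with a single colour and is not part of the final light-therapy patterns:
from matrix_lite import led
led.set('blue')  # fill every LED in the ring with one color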
What inspired me? I had very little knowledge about menopause and all the troubles women go through. I was shocked that such an important topic is kept so well hidden: we get little or no education about menopause in school (not even in high school).
Recent studies show that stress, depression, and fatigue are the areas where women most want tech gadgets to help them cope with this new phase. For many menopausal women the root problem is anxiety, which keeps them from sleeping, makes them irritable, reduces sexual interest, and leaves them blaming themselves much of the time.
These problems affect family relationships the most, but the good news is that many women take a positive view of this phase and consider it the best time to make the most of themselves, and their families are encouraged by the new attitude.
Menopause Unveils Itself As The Next Big Opportunity In Femtech - Forbes
This opportunity reminds me of Susan B. Anthony.
The day will come when men will recognize woman as his peer, not only at the fireside, but in councils of the nation. Then, and not until then, will there be the perfect comradeship, the ideal union between the sexes that shall result in the highest development of the race. -Susan B. Anthony
“I’m a big proponent of education and want to help women know what to expect during the menopausal transition so she can get the help she needs, and so she can feel empowered during this time to make necessary lifestyle changes.” - Dr. Williams
Step 1: Setting up the Raspberry Pi and MATRIX Creator:
You will need a Raspberry Pi, a microSD card, and a MATRIX Creator. For installation, just follow the official tutorial.
Installation image can be downloaded from :- https://www.raspberrypi.org/downloads/raspbian/
Note that Raspbian Buster is the version that works; I tried it with Raspbian Stretch and it didn't work.
Once done, ensure that you have enabled SSH on your Raspberry Pi.
Step 2: Install PyQt5 on the Raspberry Pi. The build took almost three hours, as I am using the Raspberry Pi 3B+ 1 GB variant.
sudo apt-get update
sudo apt-get install qt5-default
sudo apt-get install sip-dev
cd /usr/src
sudo wget https://www.riverbankcomputing.com/static/Downloads/sip/sip-4.19.14.tar.gz
sudo tar xzf sip-4.19.14.tar.gz
cd sip-4.19.14
sudo python3.6 configure.py --sip-module PyQt5.sip
sudo make
sudo make install
cd /usr/src
sudo wget https://www.riverbankcomputing.com/static/Downloads/PyQt5/PyQt5_gpl-5.12.tar.gz
sudo tar xzf PyQt5_gpl-5.12.tar.gz
cd PyQt5_gpl-5.12
sudo python3.6 configure.py
sudo make
sudo make install
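Once the build finishes, a quick sanity check (run with the same python3.6 interpreter used above) confirms that PyQt5 imports and reports its Qt version:
python3.6 -c "from PyQt5 import QtCore; print(QtCore.QT_VERSION_STR)"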
Step 3: Setting up Rhasspy:
- We need to install Docker and add the pi user to the Docker user group.
curl -sSL https://get.docker.com | sh
sudo usermod -a -G docker $USER
- In order to use the microphones of your MATRIX device, we need to install the kernel modules to enable an ALSA interface for the mics.
curl https://apt.matrix.one/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.matrix.one/raspbian $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/matrixlabs.list
sudo apt-get update
sudo apt-get upgrade
- Then reboot the Pi.
sudo reboot
- Now we need to install the MATRIX kernel modules, and another reboot will be needed.
sudo apt install matrixio-kernel-modules
sudo reboot
Before moving ahead, we should check that the microphones are working. Connect an audio output device to the Raspberry Pi; the following commands will record and then play back a 5-second audio file on your Raspberry Pi.
arecord recording.wav -f S16_LE -r 16000 -d 5
aplay recording.wav
- Now we will download and run the Rhasspy image. This creates a configuration folder for your assistant in ~/.config/rhasspy. It will take some time, so be patient.
docker run -d -p 12101:12101 \
--restart unless-stopped \
-v "$HOME/.config/rhasspy/profiles:/profiles" \
--device /dev/snd:/dev/snd \
synesthesiam/rhasspy-server:latest \
--user-profiles /profiles \
--profile en
Now that our assistant is running, we can configure it by going to http://YOUR_PI_IP:12101
And then we need to change a couple of settings:
set the Wake Word engine to porcupine, and change the microphone setting to
hw:CARD=MATRIXIOSOUND, DEV=0: Direct hardware device without any conversions
Save the settings and wait for Rhasspy to restart.
- Now let's check whether everything is working or not.
Just speak the wake word and try to say any of the pre-trained sentences.
Hey porcupine, what time is it.
Now we come to the core of the project.
To program your MATRIX device, you'll need to install MATRIX Lite. SSH into your Raspberry Pi and install the following:
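The install command itself is not shown here; assuming the Python flavor of MATRIX Lite from PyPI (which provides the matrix_lite module imported later), it would look something like this, with any extra MATRIX packages installed per the official MATRIX Lite guide:
sudo python3 -m pip install matrix-lite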
For intent catching, we'll be using Rhasspy's WebSocket API. There are also options to interface with Node-RED and Home Assistant.
To keep things simple, the program to listen for Intents will be on the same Pi with the Rhasspy assistant. SSH into your Pi and run the following commands:
Install git.
sudo apt-get install git
Download the example repository.
git clone https://github.com/matrix-io/rhasspy-examples
Move into the web socket example and install the project dependencies.
cd rhasspy-examples/websocket/matrix_example/python
sudo python3 -m pip install -r ./requirements.txt
Run the program.
python3 app.py
Training sentences. We need to create intents; the training rules are clearly stated in the official documentation.
# One can even add some more sentences as per your need
[GetSleep]
its time to (sleep | bed | meditate){med}
[am] not able to (sleep | meditate | concentrate){med}
cant (sleep | meditate | concentrate){med}
cannot (sleep | meditate | concentrate){med}
help [me] to (sleep | meditate | concentrate){med}
[GetAbout]
what you can do [for me]
what you are [upto]
tell [me] your (feature | features)
why (are | were) you made
[GetEmotion]
i feel like being (happy | sad){emotion}
am (happy | sad){emotion}
i cannot do this
am (not confident | confident)
i feel like (demotivated | motivated){state}
[GetAction]
laugh [for me]
(help me | make me) [to be] happy
(play | show) some (music | visuals | patterns){task}
motivate me
(whats | what is) my status
show my status
[GetJob]
[please] help me to manage (stress | incontinence){prob}
how to manage (stress | incontinence){prob}
Click on Save Sentences and Rhasspy will train the sentences automatically, or we can train them by simply clicking the Train button.
Now we can check whether everything is working.
Speak any one of the trained sentences.
Hey Porcupine, what you can do for me.
Now let's see inside our app.py.
# Intents are passed through here
def on_message(ws, message):
    data = json.loads(message)
    print("**Captured New Intent**")
    print(data)
    if data["intent"]["name"] == "Intent_Name":
        # we need to replace Intent_Name with our intent's name
        # we can do the desired tasks here
        pass
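For context, here is a minimal sketch of how a handler like on_message can be attached to Rhasspy's WebSocket intent stream using the websocket-client package; the endpoint path /api/events/intent is an assumption based on Rhasspy's WebSocket API, and the example repository's app.py may wire this up slightly differently:
import websocket  # pip install websocket-client

# attach the on_message handler above to Rhasspy's intent event stream
ws = websocket.WebSocketApp(
    "ws://localhost:12101/api/events/intent",  # assumed intent event endpoint
    on_message=on_message,
)
ws.run_forever()  # block and keep listening for recognized intents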
Let's create the features we want to include in our application.
We need to install the giphy python client so that we can show GIFs according to the data fed to it.
pip install giphy_client
We can make a separate Python file for the Giphy client and call its function from the main file. Let's check whether we get the desired image or not.
Importing necessary packages.
import time
import giphy_client
from giphy_client.rest import ApiException
#import webbrowser #we can use webbrowser to open the gif in the default browser
import json
import random
#import subprocess #we can use to open the chromium-browser with the link
We need to create an instance of the API class.
api_instance = giphy_client.DefaultApi() # instance of the API class created
api_key = 'dc6zaTOxFJmzC' # str | Giphy API Key.
q = emotion # str | Search query term or phrase.
limit = 70 # int | The maximum number of records to return. (optional) (default to 25)
offset = 0 # int | An optional results offset. (optional) (default to 0)
rating = 'g' # str | Filters results by specified rating. (optional)
lang = 'en' # str | Specify default country for regional content.
fmt = 'json' # str to indicate the expected response format. Default is Json.
category = 'motivation' # change according to your need
Now we will call the search endpoints. We can add or remove endpoints as per our need. I have used six different search endpoints so that I can access a variety of GIFs for the desired word.
api_response = api_instance.gifs_search_get(api_key, q, limit=limit, offset=offset, rating=rating, lang=lang, fmt=fmt) # first search endpoint
api_response = api_instance.gifs_translate_get(api_key, q)# second search endpoint
api_response = api_instance.gifs_categories_category_tag_get(api_key, category, tag=emotion, limit=limit, offset=offset)# third search endpoint
api_response = api_instance.gifs_random_get(api_key, tag=emotion, rating=rating, fmt=fmt)# fourth search endpoint
api_response = api_instance.gifs_translate_get(api_key, emotion)#fifth search endpoint
api_response = api_instance.gifs_trending_get(api_key, limit=limit, rating=rating, fmt=fmt)# sixth search endpoint
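To hand a link to the viewer below, we need to pull a URL out of one of these responses. A rough sketch of such a helper, relying on the variables defined above (get_gif_url is a hypothetical name, and the images.original.url attribute path is an assumption about the giphy_client response model):
def get_gif_url(emotion):
    # search for GIFs matching the emotion and pick one at random
    api_response = api_instance.gifs_search_get(
        api_key, emotion, limit=limit, offset=offset, rating=rating, lang=lang, fmt=fmt)
    gif = random.choice(api_response.data)
    return gif.images.original.url  # direct link for the chosen GIF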
To help you understand it quickly, I have attached a glimpse of my code below.
Let's test this out.
Now we will create a pyqt application to show the GIF.
Importing necessary packages.
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.QtWebKit import *
from PyQt5.QtWebKitWidgets import *
from PyQt5.QtWidgets import QApplication, QWidget, QMainWindow
Program begins.
app = QApplication(sys.argv)
web = QWebView()
web.setUrl(QUrl('http://www.giphy.com'))
#web.load(QUrl('http://www.giphy.com'))
web.show()
web.resize(1240,640)
app.exec()
We will replace "http://www.giphy.com" with the link obtained from the python-giphy-client.
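Putting the two pieces together, a small wrapper (show_gif is a hypothetical name) can take the URL returned by the Giphy helper and display it, mirroring the snippet above:
def show_gif(url):
    # open the GIF link in an embedded web view
    app = QApplication(sys.argv)
    web = QWebView()
    web.setUrl(QUrl(url))
    web.resize(1240, 640)
    web.show()
    app.exec()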
Now we will make a PyQt application to play video, so that we can show motivational videos as well as exercise videos (like the Kegel video mentioned later).
Importing necessary packages.
import platform
#import os
import sys
from PyQt5 import QtWidgets, QtGui, QtCore
import vlc
from time import sleep
process = []
The Player class:
class Player(QtWidgets.QMainWindow):
    def __init__(self, data, master=None):
        QtWidgets.QMainWindow.__init__(self, master)
        # Create a basic vlc instance
        self.instance = vlc.Instance()
        self.media = None
        # Create an empty vlc media player
        self.mediaplayer = self.instance.media_player_new()
        self.widget = QtWidgets.QWidget(self)
        self.setCentralWidget(self.widget)
        # In this widget, the video will be drawn
        if platform.system() == "Darwin":  # for MacOS
            self.videoframe = QtWidgets.QMacCocoaViewContainer(0)
        else:
            self.videoframe = QtWidgets.QFrame()
        self.palette = self.videoframe.palette()
        self.palette.setColor(QtGui.QPalette.Window, QtGui.QColor(150, 60, 25))
        self.videoframe.setPalette(self.palette)
        self.videoframe.setAutoFillBackground(True)
        self.vboxlayout = QtWidgets.QVBoxLayout()
        self.vboxlayout.addWidget(self.videoframe)
        self.widget.setLayout(self.vboxlayout)
        filename = '/home/pi/Desktop/Hacking Menopause/' + data, 'All Files (*)'  # location of the video file
        self.media = self.instance.media_new(filename[0])
        self.mediaplayer.set_media(self.media)
        self.media.parse()
        # Set the title of the track as window title
        self.setWindowTitle('For You')
        # The media player has to be 'connected' to the QFrame (otherwise the
        # video would be displayed in its own window). This is platform
        # specific, so we must give the ID of the QFrame (or similar object) to
        # vlc. Different platforms have different functions for this
        if platform.system() == "Linux":  # for Linux using the X Server
            self.mediaplayer.set_xwindow(int(self.videoframe.winId()))
        elif platform.system() == "Windows":  # for Windows
            self.mediaplayer.set_hwnd(int(self.videoframe.winId()))
        elif platform.system() == "Darwin":  # for MacOS
            self.mediaplayer.set_nsobject(int(self.videoframe.winId()))
        #sleep(1.5)
        self.mediaplayer.play()
        sleep(1.5)  # startup time
        duration = self.mediaplayer.get_length() / 1000
        print(duration)
        process.append(duration)
Function to create and run the player:
def late(da):
    app = QtWidgets.QApplication(sys.argv)
    player = Player(da)
    player.showMaximized()
    app.exec_()
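A minimal usage example, assuming a video file named motivation.mp4 (a hypothetical name) sits in the project folder:
late('motivation.mp4')  # opens the video maximized and starts playback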
Let's test it out.
Now we will make a servo program to spray perfume on command, so that we can gently distract the user and have a calming effect.
from matrix_lite import gpio # matrix gpio packages to connect peripherals.
import time
Assigning pins to use.
# Configure pins 3 and 4 as PWM outputs to drive the two servos
gpio.setFunction(3, 'PWM')
gpio.setFunction(4, 'PWM')
gpio.setMode(3, 'output')
gpio.setMode(4, 'output')
Controlling the servos.
# set initial position
def initial_position():
    gpio.setServoAngle({
        "pin": 3,
        "angle": 0,
        # min_pulse_ms (minimum pulse width for a PWM wave in milliseconds)
        "min_pulse_ms": 0.8,
    })
    gpio.setServoAngle({
        "pin": 4,
        "angle": 0,
        # min_pulse_ms (minimum pulse width for a PWM wave in milliseconds)
        "min_pulse_ms": 0.8,
    })

# change the position to press the sprayer
def change_position():
    initial_position()
    gpio.setServoAngle({
        "pin": 3,
        "angle": 150,
        # min_pulse_ms (minimum pulse width for a PWM wave in milliseconds)
        "min_pulse_ms": 0.8,
    })
    gpio.setServoAngle({
        "pin": 4,
        "angle": 150,
        # min_pulse_ms (minimum pulse width for a PWM wave in milliseconds)
        "min_pulse_ms": 0.8,
    })
    time.sleep(2)  # hold the spray position for a moment (time is imported above)
    initial_position()
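From the intent handler, a single call is then enough to press the sprayer once, for example when a stress-related intent is caught:
change_position()  # press and release the perfume sprayer once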
Now we will code a sleep pattern and a classic pattern for light therapy, to relieve stress and help with meditation.
import vlc  # audio playback via python-vlc

def play_music(file):  # function to start the audio player
    p = vlc.MediaPlayer(file)
    p.play()
    return p  # keep a reference so the player isn't garbage-collected
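Usage is just a call with the path to an audio file (the path below is hypothetical):
player = play_music('/home/pi/Music/calm.mp3')  # keep the returned player so playback continues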
Importing necessary packages and some initial setup.
from matrix_lite import gpio
from matrix_lite import led
from time import sleep
from math import pi, sin
import random
everloop = ['black'] * led.length
ledAdjust = 0.0
if len(everloop) == 35:
    ledAdjust = 0.51 # MATRIX Creator
else:
    ledAdjust = 1.01 # MATRIX Voice
Classic pattern.
def classic_pattern():
    frequency1 = 0.9947961070906801
    print(frequency1)
    #frequency1 = 0.0375
    counter = 0.0
    tick = len(everloop) - 1
    c = 1
    #inten = random.randint(1,10)
    inten = 10
    while c < 4000:
        # Create pattern
        for i in range(len(everloop)):
            r = round(max(0, (sin(frequency1*counter+(pi/180*240))*155+100)/inten))
            g = round(max(0, (sin(frequency1*counter+(pi/180*120))*155+100)/inten))
            b = round(max(0, (sin(frequency1*counter)*155+100)/inten))
            counter += ledAdjust
            everloop[i] = {'r':r, 'g':g, 'b':b}
        # Slowly show rainbow
        if tick != 0:
            for i in reversed(range(tick)):
                everloop[i] = {}
            tick -= 1
        led.set(everloop)
        c = c + 1
        #print(c)
        sleep(.035)
    led.set('black')
Sleep pattern.
def sleep_pattern():
    #frequency1 = random.uniform(0.0, 1.0)  # produces more frequency combinations
    frequency1 = 0.00375
    print(frequency1)
    counter = 0.0
    tick = len(everloop) - 1
    c = 1
    while c < 4000:
        # Create pattern
        for i in range(len(everloop)):
            r = round(max(0, (sin(frequency1*counter+(pi/180*240))*155+100)/2))
            counter += ledAdjust
            everloop[i] = {'r':r}
        # Slowly show the pattern
        if tick != 0:
            for i in reversed(range(tick)):
                everloop[i] = {}
            tick -= 1
        led.set(everloop)
        c = c + 1
        #print(c)
        sleep(.035)
    led.set('black')
Let's test this out.
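A hedged sketch of how these pieces could be dispatched from the intent handler in app.py; the intent names match the training sentences above, but the exact wiring and the helper names (get_gif_url, show_gif, late, play_music, and so on, from the sketches earlier) are assumptions rather than the final code:
def handle_intent(data):
    name = data["intent"]["name"]
    if name == "GetSleep":
        player = play_music('/home/pi/Music/calm.mp3')  # hypothetical audio file
        sleep_pattern()          # slow red breathing pattern on the LED ring
    elif name == "GetEmotion":
        emotion = data.get("slots", {}).get("emotion", "happy")  # slot key is an assumption
        show_gif(get_gif_url(emotion))
    elif name == "GetAction":
        late('motivation.mp4')   # hypothetical motivational video
        classic_pattern()        # rainbow pattern while the video plays
    elif name == "GetJob":
        change_position()        # one spray of perfume to help the user relax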
Currently working on:
First, we are just showing the Kegel exercise video in the PyQt video player in response to the voice command, so I am trying to add ML to detect whether the exercise posture is correct.
Second, I wanted to use the sensor data with Node-RED to create a dashboard. I have done the coding for it but ran into a package issue, so I have had to push it to my future to-do list.
With the help of this device, users can better manage anxiety, sleep problems, and low mood, and get some mental exercise too.