In recent years there has been growing interest in voice assistants. Large companies such as Amazon, Google, Microsoft and Apple have presented their own devices, which allow us to control objects in our home (lights, TV, etc.) using voice commands. At the same time, these devices give us access to the news, music, our appointment calendar, etc. However, one of their limitations is how hard it is to extend them with new applications. With Alexa it is possible to quickly create applications or skills, but connecting those skills to our own developments is complicated. The advantage of Snips is that, being open-source, it is much easier to connect to our projects and so extend our applications. Assistant robots are one of the best options to accompany elderly people, and it is very important that these robots have image and voice recognition systems.
Introducing a voice recognition system such as Snips (Figure 1) makes the interaction between the human and the robot much easier. In this way we achieve a natural interaction, since Snips recognizes and executes the orders given by the human: the user can ask for the weather or the news, or order the robot to move.
But before starting with the construction of the robot and the incorporation of Snips as the voice interaction system, some previous configuration steps were necessary (https://docs.snips.ai/getting-started/quick-start-raspberry-pi). Some images and videos of this process are shown below.
In order for the robot to perform this kind of action, it needs a series of elements that allow it to move around, recognize objects, identify people, etc. This is why Snips-Robot has two DC motors, a camera and an aluminium structure, as shown in Figure 3.
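As a rough idea of how the two DC motors can be driven from the Raspberry Pi, here is a minimal sketch using RPi.GPIO. The H-bridge driver (e.g. an L298N) and the GPIO pin numbers are assumptions for illustration; they must match the actual wiring of the robot.

```python
# Minimal motor-control sketch. Pin numbers (BCM) are hypothetical --
# adjust them to the real wiring of the H-bridge driver.
import time
import RPi.GPIO as GPIO

LEFT_FWD, LEFT_BWD = 17, 18    # assumed pins for the left motor
RIGHT_FWD, RIGHT_BWD = 22, 23  # assumed pins for the right motor

GPIO.setmode(GPIO.BCM)
for pin in (LEFT_FWD, LEFT_BWD, RIGHT_FWD, RIGHT_BWD):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def stop():
    """Set every motor pin low."""
    for pin in (LEFT_FWD, LEFT_BWD, RIGHT_FWD, RIGHT_BWD):
        GPIO.output(pin, GPIO.LOW)

def forward(seconds):
    """Drive both motors forward for a given time, then stop."""
    GPIO.output(LEFT_FWD, GPIO.HIGH)
    GPIO.output(RIGHT_FWD, GPIO.HIGH)
    time.sleep(seconds)
    stop()

try:
    forward(2)        # move forward for two seconds
finally:
    GPIO.cleanup()    # always release the pins
```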
In order for Snips to control the robot, a Skill was created and divided into three intents (Figure 4):
1. snips_robot_interaction
2. snips_robot_meteo
3. controlling_snips_robot

Within each of these intents a series of slots was created, which allows the user to interact with the robot through Snips. Some of the slots were designed so that the user can interact naturally with the robot; they are described below (Figures 5, 6 and 7). A sketch of how such an intent can be handled in code is shown after the figures.
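As a minimal sketch of an intent handler, the code below uses hermes-python, the action-code library that ships with the Snips platform. The intent name controlling_snips_robot comes from the list above, but the slot name "direction" is a hypothetical example; the real slot names are the ones defined in the Snips console (Figures 5-7).

```python
# Sketch of an intent handler using hermes-python.
# The slot name "direction" is hypothetical -- use the slots
# actually defined in the Snips console.
from hermes_python.hermes import Hermes

MQTT_ADDR = "localhost:1883"  # broker started by the Snips platform

def on_control(hermes, intent_message):
    # Read the first value of the (assumed) "direction" slot, if present.
    direction = None
    if intent_message.slots.direction:
        direction = intent_message.slots.direction.first().value
    # Here the robot would be commanded over MQTT (see the next section).
    reply = "Moving {}".format(direction) if direction else "Sorry, which way?"
    hermes.publish_end_session(intent_message.session_id, reply)

with Hermes(MQTT_ADDR) as h:
    h.subscribe_intent("controlling_snips_robot", on_control).start()
```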
In order for these slots to interact with Snips and the robot's peripherals, a series of MQTT clients was developed. These clients are integrated into the Snips platform, creating a specific "chat room" to control and acquire data from the different sensors (internal or over BLE). A series of topics was created to control the robot ("actuator/motors", "sensors/ultra_sound_sensors", "sensors/encoder", "sensors/adc") and another topic to access the Rapid IoT device ("sensors/environment") (Figure 8).
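A minimal sketch of one of these MQTT clients, written with paho-mqtt, is shown below: it subscribes to the sensor topics named above and publishes a command on "actuator/motors". The payload formats ("forward" as a command string, plain-text sensor readings) are assumptions for illustration.

```python
# Sketch of an MQTT client for the robot's topics, using paho-mqtt.
# Payload formats are assumed for illustration.
import paho.mqtt.client as mqtt

SENSOR_TOPICS = ["sensors/ultra_sound_sensors", "sensors/encoder",
                 "sensors/adc", "sensors/environment"]

def on_connect(client, userdata, flags, rc):
    # Subscribe to every sensor topic once the connection is up.
    for topic in SENSOR_TOPICS:
        client.subscribe(topic)

def on_message(client, userdata, msg):
    print("{}: {}".format(msg.topic, msg.payload.decode()))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)             # the broker used by Snips
client.publish("actuator/motors", "forward")  # assumed command format
client.loop_forever()
```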
Rapid IoT
One of the main ideas behind this robot is the ability to interconnect with other IoT devices. That is why it was decided to incorporate the Rapid IoT system presented by NXP in the previous contest. This device incorporates air quality sensors, CO2 level sensors, accelerometers and light level sensors, among others. Snips-Robot can access the information provided by this device and communicate it to the user, making it possible to create alarms that report whether the air quality is good or whether CO2 levels exceed the permitted ranges. At the same time, the user can ask Snips-Robot to report on these levels.
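As a rough sketch of such an alarm, the code below listens on the "sensors/environment" topic mentioned earlier and warns when the CO2 reading exceeds a threshold. The JSON payload format and the 1000 ppm limit are assumptions for illustration, not the project's actual values.

```python
# Sketch of a CO2 alarm fed by the Rapid IoT readings on "sensors/environment".
# The payload format and the threshold are assumed for illustration.
import json
import paho.mqtt.client as mqtt

CO2_LIMIT_PPM = 1000  # assumed threshold

def on_message(client, userdata, msg):
    data = json.loads(msg.payload.decode())  # e.g. {"co2": 850, "air_quality": 12}
    if data.get("co2", 0) > CO2_LIMIT_PPM:
        print("Warning: CO2 level {} ppm exceeds the limit".format(data["co2"]))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("sensors/environment")
client.loop_forever()
```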
Note: This project incorporates a Rapid IoT system, which was used in the contest: https://www.hackster.io/jarain78/smart-wear-for-application-in-internet-of-medical-thing-iom-a4ec76