The kit consists of a terminal/display with built-in sensors (sound, light), a LoRa transceiver, a couple of Grove sensors and a special camera module.
The LoRa transceiver can be used on three different frequency bands, whichever is appropriate for the country where you are using it.
The camera module has its own processor plus AI software. It does not send pictures (too much data for LoRa) but only the (simple) numeric outcome of an AI process (does this image match the object we are looking for?).
Process:
- The first activity is getting the terminal up and running, experimenting with the built-in and separate sensors and seeing what happens (see the first sketch after this list).
- The second activity is connecting the LoRa transceiver and configuring it for the right frequency (for Indonesia 923 MHz); the second sketch after this list shows one way to do this.
- The third activity is connecting to a LoRaWAN network in order to receive the data sent by the terminal. We connected to a LoRa gateway, which was in turn connected to the SenseCAP network where our data would become available.
- The fourth activity is installing the SenseCAP app and exploring the SenseCAP website to see our data arriving and changing.
- The fifth activity is using the camera module loaded with AI to recognize human faces, experimenting with it and seeing the results via LoRaWAN (see the third sketch after this list).
- The sixth activity is downloading the data model for another object ('apple'), loading it onto the terminal, experimenting with recognising apples, etc.
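For the first activity, a minimal Arduino-style sketch like the one below can be used to watch the built-in sensors change. This is a sketch under assumptions: the terminal is Seeed's Wio Terminal, whose light sensor and microphone are exposed as the analog pins WIO_LIGHT and WIO_MIC in its Arduino board package, and the values are raw 10-bit readings rather than calibrated units.

```cpp
// Minimal sketch: print the built-in light and sound readings every half second.
// Assumes a Seeed Wio Terminal, where WIO_LIGHT and WIO_MIC are predefined pins.
void setup() {
  Serial.begin(115200);        // USB serial monitor
}

void loop() {
  int light = analogRead(WIO_LIGHT);  // raw light level, 0-1023
  int sound = analogRead(WIO_MIC);    // raw microphone level, 0-1023
  Serial.print("light=");
  Serial.print(light);
  Serial.print(" sound=");
  Serial.println(sound);
  delay(500);
}
```

Covering or uncovering the light sensor and clapping near the microphone should make the two numbers move, which is enough to confirm the terminal is alive before adding LoRa.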
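For the second and third activities, the sketch below gives a hedged idea of the LoRa configuration and join sequence. It assumes the transceiver is the Grove Wio-E5 running its stock AT-command firmware and wired to the terminal's Serial1 UART (your wiring and serial port may differ), and the AppKey shown is a placeholder, not a real key.

```cpp
// Configure a Grove Wio-E5 over AT commands and join a LoRaWAN network (OTAA).
// Assumptions: module on Serial1, stock AT firmware, placeholder AppKey.
void sendAT(const char *cmd) {
  Serial1.println(cmd);          // command to the Wio-E5
  delay(300);
  while (Serial1.available()) {  // echo the module's reply to the serial monitor
    Serial.write(Serial1.read());
  }
}

void setup() {
  Serial.begin(115200);
  Serial1.begin(9600);             // Wio-E5 AT firmware defaults to 9600 baud
  delay(2000);

  sendAT("AT");                    // sanity check: expect "+AT: OK"
  sendAT("AT+MODE=LWOTAA");        // join via over-the-air activation
  sendAT("AT+DR=AS923");           // AS923 band plan (923 MHz, used in Indonesia)
  sendAT("AT+KEY=APPKEY,\"00000000000000000000000000000000\""); // placeholder key
  sendAT("AT+JOIN");               // join through the nearby LoRa gateway
}

void loop() {
  // Once joined, small uplinks can be sent as hex, e.g. one sensor byte.
  sendAT("AT+CMSGHEX=\"0A\"");     // confirmed uplink with one example byte
  delay(60000);                    // LoRaWAN duty cycles favour slow reporting
}
```

The band setting is the key point for activity two: swapping "AS923" for another supported scheme retargets the same module to a different region's frequencies.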
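For the fifth activity, and to illustrate the earlier point that only a small numeric result (never the picture) travels over LoRa, a fragment like this could pack the face count into a single uplink byte. readFaceCount() is a hypothetical stand-in for whatever the camera module's library actually returns, and sendAT() is the helper from the previous sketch.

```cpp
// Illustrative only: report the camera's AI outcome as one byte over LoRaWAN.
// readFaceCount() is hypothetical; the real call depends on the camera library.
extern void sendAT(const char *cmd);   // defined in the previous sketch
int readFaceCount();                   // hypothetical: faces seen in the last frame

void reportFaces() {
  int faces = readFaceCount();
  char cmd[32];
  // Pack the count into one hex byte; the image itself never leaves the device.
  snprintf(cmd, sizeof(cmd), "AT+CMSGHEX=\"%02X\"", faces & 0xFF);
  sendAT(cmd);
}
```

The sixth activity works the same way: after loading the 'apple' data model, the byte that goes up simply describes apples instead of faces.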
Comments