Introduction:
The combination of machine learning and connected ARM-based IoT smart hardware devices will bring about a game-changing innovation that could revolutionise how innovators, gadget designers and hardware hackers use "Nearby IoT Communication", "Mesh Protocol" and "MQTT", coupled with "Deep Learning" for "SLAM" and "Point-Cloud" processing, to provide worldwide device-to-device (M2M) data visualisation and monitoring for connected robots, drones, submarines, and agricultural devices and products. The technology could also be used for indoor location-based services in a large retail store or shopping mall, for example providing a virtual assistant for products, or in industry where robots do the mundane jobs including heavy lifting of equipment. Other uses include remote monitoring of items on each store floor level (e.g. 1st floor, 2nd floor, etc.) and directing people to safe locations during an earthquake or volcanic hazard using land drones or UAVs, or via their mobile phones or a virtual screen. Other possible areas are: home monitoring, elderly assistance and care, asset tracking, in-car infotainment, amusement park guides and rides, etc.
- Develop solutions that significantly improve preparedness for natural disasters and deliver relief before they hit.
- Enable collaboration between rescuers and victims in affected zones.
- A mission to reduce human suffering.
What we will be doing with the Donkey Car kit:
1.) Give it a larger brain (modify, extend, enhance)
Deeplinking and Mapping API?
The Deeplinking API enables the use of HTTP/URL routines that allow locations and routes to be shared between the users and devices being mapped. It makes it possible to track, follow and monitor connected devices in both real-time and non-real-time situations. It allows RESTful resources to share and be shared among connected, located devices through middleware relay server(s) to various subscribers (devices: robots, drones, etc.).
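As a rough illustration of the publish side of such a relay, here is a minimal MQTT sketch in Python using paho-mqtt; the broker address, topic layout and device ID are placeholders of our own, not the project's final design:

```python
# Minimal sketch: a device publishing its location to a middleware relay
# (MQTT broker) so that subscribers (other robots, drones, the portal) can
# follow it. "relay.example.com" and the topic layout are hypothetical.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "relay.example.com"              # placeholder relay server
TOPIC = "devices/donkeycar-01/location"   # placeholder topic layout

client = mqtt.Client(client_id="donkeycar-01")
client.connect(BROKER, 1883)
client.loop_start()

while True:
    # Placeholder fix; a real robot would read this from its positioning stack.
    fix = {"lat": 51.5074, "lon": -0.1278, "ts": time.time()}
    client.publish(TOPIC, json.dumps(fix), qos=1)
    time.sleep(5)
```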
In this project, the Donkey Car and the other robots that we will mod are going to be driven by a Raspberry Pi 3, BeagleBone Black, Orange Pi and Udoo Quad running our customised ROS (Robot Operating System), JeVois and PX4, with cloud-based machine-learning software (TensorFlow, Caffe) acting as the brain. It will be deployed as a location-based-services solution for a Mobile Positioning System (MPS) which supports complementary positioning methods for 2G, 3G and 4G/LTE networks. This system will be mounted on the robots and a drone built for this competition.
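To illustrate how the ROS side might hand position fixes on to the cloud ML brain, here is a minimal rospy (ROS 1) sketch; the node name and topic name are placeholders we chose for the example:

```python
# Minimal ROS 1 node sketch publishing position fixes for the cloud ML side
# to consume. Node and topic names are our own placeholders.
import rospy
from sensor_msgs.msg import NavSatFix

rospy.init_node("mps_position_reporter")
pub = rospy.Publisher("/robot/fix", NavSatFix, queue_size=10)
rate = rospy.Rate(1)  # publish at 1 Hz

while not rospy.is_shutdown():
    msg = NavSatFix()
    msg.header.stamp = rospy.Time.now()
    msg.latitude = 51.5074    # placeholder; real values come from the MPS/GNSS
    msg.longitude = -0.1278
    pub.publish(msg)
    rate.sleep()
```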
Quick Glance:
A quick mock-up of the project for the quadcopter, wired with the ARM-based Nova modem on a Raspberry Pi 3 with a driver shield, is shown below:
The Hologram Nova USB modem acts as the cellular link, providing the location service for our deep-earth water-seeking robots that will roam famine-affected areas of the world looking for water tables and mapping them to and from the cloud via the gateway server, in order to provide near real-time machine learning and data visualisation. We will be able to drive, monitor, control and exchange data from robot- and drone-mounted sensors (planned sensors: ground-penetrating radar, low-cost lidar, range sensors, camera/vision tracker, pressure/altitude, temperature and infrared).
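A minimal sketch of pushing one sensor reading over the Nova, assuming the hologram-python SDK; the topic name and payload shape are placeholders, not our final telemetry format:

```python
# Sketch: send one sensor reading to the Hologram cloud over the Nova modem.
# Assumes the hologram-python SDK; topic and payload are placeholders.
import json

from Hologram.HologramCloud import HologramCloud

hologram = HologramCloud(dict(), network="cellular")
hologram.network.connect()

reading = {"sensor": "temperature", "value": 21.4}  # placeholder reading
result = hologram.sendMessage(json.dumps(reading), topics=["water-seeker"])
print(hologram.getResultString(result))

hologram.network.disconnect()
```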
Connected devices can subscribe and also publish their presence, transmit data, and link with other peers to collaborate on completing tasks in real time or on a pre-scheduled period, irrespective of where they are located. Please see below for a typical scenario: multiple robots, each with its own sensors, and a machine-learning portal acting as the control centre communicating with the various ground- and air-roaming robots. A mobile app will be developed to enhance the accessibility of this project.
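On the control-centre side, subscribing to every device's presence and location messages could look like this complementary sketch (same placeholder broker and topic layout as the publisher sketch above):

```python
# Sketch: control-centre process listening for any device's location and
# presence messages via the relay broker. Broker/topics are placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # '+' matches any single device ID under devices/
    client.subscribe("devices/+/location")
    client.subscribe("devices/+/presence")

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("relay.example.com", 1883)
client.loop_forever()
```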
Areas of Application:-
Wide application areas: automation (of any kind); collaborative factory robots, home automation, rescue missions, agriculture and farming, plantations, submarines, small autonomous devices (e.g. RC racing cars), mining and tunnelling works, remote repair, education and teaching, and many more...
Three of the robots will be fitted with cameras to showcase computer vision and machine learning. One of the cameras will be a FLIR-based camera. The robot arm will be enhanced with a NUCLEO-F429ZI offering a web page and remote access. The VEX robots will also have a voice-control feature with a customised virtual-assistant bot.
I would have loved to install a low-cost lidar to experiment with SLAM and point clouds.
Proposed Framework:-
Note:-
- A contents list section will be added to the project.
- A GitHub repository and a YouTube channel will be created for the project. (see initial: https://github.com/sanyaade-projects/DonkeyCar-project )
- Full project documentation will be produced.
- Possibly a 30-page booklet targeting STEM (ages 6-15)? [may include a link to VEX EDR, e.g. the VEX 5 Controller]
I have started porting routines and preparing Scratch v3 in readiness for the Donkey Car, so that once it arrives I can start testing within the community workshops. I can see the educational and teaching value of this robot, mostly in schools and colleges.
Our hybrid machine-learning and natural-language-understanding development environment will bridge with the above in order to make it accessible from age 6+.
Hardware Build:
When we received the Donkey Car kit in the post, we showed it to a group of kids aged 6-16 at our autumn robotics workshop organised for that age group. They were amazed by the kit; their interest was instantly rekindled like glowing splints. It was difficult to take them all through the kit since we only had one, so we assembled it together with them, rotating the tasks randomly among them as the kit was being built.
Hardware Build Process:
We currently have over 12 young learners who have signed up for our next Donkey Car workshop in November, even though we are still deliberating on how to approach the challenges of costing and funding. Most of our funders were very reluctant to fund the kit, so we may have to have most parts built via a 3D-printing process.
For sure we will be going beyond the competition and have our work published on hackster.io as we progress.
Our Plan:
1.) To bring the Donkey Car to a STEM/STEAM platform
2.) To target embedded systems as our main platform
3.) Create a curriculum around it, just as Udacity did for older engineering learners, but for schools and colleges (ages 6-16+)
4.) Create learning notes and workbooks
5.) Create an IDE platform for teaching and learning
6.) Deliver community workshops on self-driving, accessible machine learning and data science.
The story so far:
- mBlock v4 & v5 is working with Raspberry Pi 3
- Micropython is being work upon and ported
- Orange3 machine learning platform is also being ported
- Drakon Flow Programming is now added to the Platform
- uTensor and ARM CNNs using the Arm Compute Library are on our radar, as we want to get them into the MicroPython environment. We have started work on replacing the Donkey Car's TensorFlow with the ARM CNN
- I have worked with SOD (https://sod.pixlab.io/) during the Donkey Car competition project while researching tools and resources. SOD is an interesting alternative graph tool to TensorFlow. Being a single-file library also made me curious about what could be achieved. SOD also runs flawlessly on small ARM devices. We have already added it to our framework as a plug-in. See: https://github.com/symisc/sod/
- Snips AI is working as a virtual assistant ( https://github.com/snipsco )
- RasaHQ's rasa_nlu is also working (https://github.com/RasaHQ/rasa_nlu); see the sketch after this list
- Mycroft is working as well ( https://github.com/MycroftAI )
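As referenced in the list above, here is a minimal sketch of intent parsing with the pre-1.0 rasa_nlu API from the repository linked there; the model path and example utterance are placeholders:

```python
# Sketch: parse an utterance into intent + entities with a trained rasa_nlu
# model. The model directory path is a placeholder.
from rasa_nlu.model import Interpreter

interpreter = Interpreter.load("./models/current/nlu")  # placeholder path
result = interpreter.parse(u"drive the donkey car to the charging station")
print(result["intent"], result["entities"])
```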
Note: Our write-up is nowhere near finished. We have to submit now to abide by the competition deadline and rules, but we will continue to add to this page. Many thanks!