The inspiration for this project comes from the Build2gether Inclusive Innovation Challenge, which calls for participants to build "innovative solutions to help individuals with disabilities overcome their daily struggles".
One of the three solution themes outlined in the challenge is travel for people with mobility impairments, which aims to help this community navigate the world around them.
Problem identification

What were the needs or pain points that you identified when solving problems faced by the Contest Masters?
I have had the chance to meet more than 2,000 students in different countries. Some schools in rural areas lack the infrastructure for students to learn scientific subjects. In addition, some students have never been to a museum or science lab in a bigger city because of the distance from where they live. The problem is even worse for children with disabilities in rural areas.
Developing the solution: Haptic-enabled mid-air interface

The project supports students in rural areas and children with disabilities in studying scientific subjects and visiting virtual museums and science labs intuitively, interactively, and immersively. Virtual artifacts are displayed as mid-air objects with the Z-frame, and users can touch the floating objects in space through the haptic interface. In addition, we are developing a 3D reconstruction algorithm for monocular images that runs on low-end devices, allowing users to scan real objects. The scanned 3D objects can then be shared with children and students in the classroom or at home, and Makers can share their 3D-scanned artifacts with viewers in other territories.
The goal is to create a 3D inventory system that allows Makers to scan and share 3D models of artifacts at museums, and viewers to see and touch the 3D virtual artifacts.
On the VIEWER side, the Chai3D SDK is used to build both the haptic rendering and the graphics rendering. The haptic interface is a Novint Falcon with its built-in driver on Windows. Graphics rendering normally runs at an update rate of 30 FPS, whereas haptic rendering must run at about 1000 Hz (at least 500 Hz). Chai3D supports integrating the haptic rendering thread with the graphics rendering thread, so we only need to build a Windows executable to run the program. The 3D models are transferred to the program from the 3D inventory system in the backend; a REST API is used to download the GLTF resources on the fly.
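As a rough sketch of the on-the-fly download, the client only needs to issue a GET request for the GLTF resource and save it to disk before handing it to the renderer. The base URL and the `/models/{id}.gltf` route below are hypothetical examples, not the actual backend API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Minimal sketch of fetching a GLTF model from the inventory backend.
// The base URL and route are hypothetical placeholders.
public class ModelDownloader {
    static final String BASE_URL = "http://example.com/api"; // hypothetical

    // Build the REST URL for a model id, e.g. "vase-01" -> .../models/vase-01.gltf
    public static String modelUrl(String modelId) {
        return BASE_URL + "/models/" + modelId + ".gltf";
    }

    // Download the GLTF file to the working directory and return its path.
    public static Path download(String modelId) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(modelUrl(modelId)))
                .GET()
                .build();
        Path out = Path.of(modelId + ".gltf");
        client.send(request, HttpResponse.BodyHandlers.ofFile(out));
        return out; // hand this path to the Chai3D scene loader
    }
}
```

The downloaded file can then be loaded into the Chai3D scene graph like any local mesh asset.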
For displaying mid-air virtual objects, the Z-frame is laser-cut from MICA acrylic or wood. All the design files are attached to this project.
For the 3D scanning algorithm on the MAKER side, the main goal is to let Makers scan a 3D model of a real artifact instantly. It should NOT require sophisticated devices with LiDAR or complex algorithms running on a backend server. Therefore, a variational method is implemented to construct voxel-based 3D models of real artifacts. The details of the algorithm are described in the section below.
Unsupervised 3D reconstruction algorithm

I put together a slide about my 3D reconstruction algorithm for a monocular camera. Given a sequence of images and camera poses from the ARCore or ARKit SDK, the algorithm reconstructs a 3D model of the object on the mobile device in voxel format (which you can convert to a point cloud or a signed distance function if you want). The algorithm runs seamlessly on low-end devices (Android/iOS phones). The voxel resolution is 50×50×50, and you can change it based on your phone's computing resources.
Let's talk a bit about the fun math. This is a volumetric approach, where each voxel is assigned two probability values for being inside or outside the 3D object, P(obj) and P(background). Let's start.
where:
- i: the frame index, i = 1…n
- R: a random region variable, either foreground or background, R ∈ {Rf, Rb}
- c: the color of the voxel after projection into the i-th image, c ∈ R^3
- v: the voxel coordinate in volume V, v ∈ R^3
- u: the image point u = u(v) that voxel v projects to
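Since the original equations live in images, here is a rough LaTeX sketch of the per-voxel posterior being maximized, using the symbols defined above. This is my own notation for the standard form of such a posterior, not necessarily the exact expression on the slide:

```latex
% Posterior over the region label R (foreground R_f or background R_b)
% given the colors c_i observed at the projections u = u(v) in frames i = 1..n,
% assuming the frames are conditionally independent given R:
P(R \mid c_{1:n}, u) \;\propto\; P(R) \prod_{i=1}^{n} P(c_i \mid R)
% The voxel is kept as part of the object when
% P(R_f \mid c_{1:n}, u) > P(R_b \mid c_{1:n}, u).
```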
The problem can be understood as maximizing the probability that a voxel is foreground or background, given the pixel color and the position of the voxel in 3D space. The graphical model below shows the assumption for the color intensity of the image.
The problem simplifies to maximizing the posterior, as shown in the image. I am sorry that Hackster.io does NOT allow typing math equations, so I used github.io to typeset them. If you want to play with the 3D reconstruction, the source code is released as Java code for Android devices on GitHub.
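To make the idea concrete, here is a heavily simplified Java sketch of the per-voxel decision (an illustration, not the released Android code): accumulate the log-ratio of foreground versus background color likelihoods over the frames a voxel projects into, then keep the voxel if the foreground evidence wins. The real app would compute the projection u(v) from the ARCore/ARKit pose and evaluate learned color models; here the per-frame likelihoods are passed in directly:

```java
// Simplified illustration of the per-voxel foreground/background decision.
// pFg[i] = P(c_i | R_f) and pBg[i] = P(c_i | R_b) for frames i = 1..n.
public class VoxelLabeler {

    // Accumulate log P(c_i|R_f) - log P(c_i|R_b) over all n frames.
    public static double logOdds(double[] pFg, double[] pBg) {
        double sum = 0.0;
        for (int i = 0; i < pFg.length; i++) {
            sum += Math.log(pFg[i]) - Math.log(pBg[i]);
        }
        return sum;
    }

    // With a uniform prior P(R_f) = P(R_b), the voxel belongs to the
    // object exactly when the accumulated log-odds is positive.
    public static boolean isForeground(double[] pFg, double[] pBg) {
        return logOdds(pFg, pBg) > 0.0;
    }
}
```

Running this decision over the 50×50×50 grid yields the set of occupied voxels that forms the reconstructed model.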
User study

I had the chance to demonstrate the project to students at schools in Vietnam and Korea. The feedback was quite simple: students need more GAMES :).
Several future upgrades to the project may include:
- Demonstrating more at schools: Giving more students at schools the chance to play with the project, and collecting their feedback to improve the 3D models in different subjects such as biology, physics, and history, or with real artifacts.
- 3D scanning communities: Connecting students in cities with students in rural areas through the 3D reconstruction app.
- Developing the 3D inventory system: GLTF is chosen as the format for the 3D models. I am working on the 3D inventory back end with Node.js and testing its performance with schools in rural areas.