Nahian hossain
Published © CC BY-NC-ND

VR-Powered Cultural Exploration App with AMD GPUs

Transport yourself to any cultural landmark with our VR app. Explore history, art, and culture up close. Powered by AMD for stunning visuals.

Intermediate • Over 5 days • 200

Things used in this project

Hardware components

AMD Radeon Pro W7900 GPU
×1
AMD Ryzen 5 2400G Processor
×1
Cooler Master G800 Gold Entry Level 80 Plus Gold ATX
×1

Software apps and online services

ContextCapture - Bentley Developer Network
Autodesk Maya
SimLab

Story


Custom parts and enclosures

Final VR ready file

This file contains VR-ready SimLab files that can be used with any VR headset via the SimLab VR Viewer.

Schematics

Power supply (PSU): Cooler Master G800 Gold Entry Level 80 Plus Gold ATX

GPU: AMD Radeon Pro W7900

Processor: AMD Ryzen 5 2400G

Motherboard: B450M S2H

Code

Coding steps involved in creating the app pipeline

Python
Below is an outline of the coding steps involved in creating the app pipeline. This example assumes the use of Python for automation:
import os
import time

import cv2
import numpy as np
import tensorflow as tf

from realitycapture import RealityCaptureAPI
from autodesk import MayaAPI
from simlab import SimLabAPI

# Step 1: Video Upload
def upload_videos(drone_video_path, ground_video_path):
    uploaded_videos = {
        "drone": drone_video_path,
        "ground": ground_video_path
    }
    return uploaded_videos

# Step 2: Generate 3D Model with RealityCapture
def generate_3d_model(videos, api_key):
    rc_api = RealityCaptureAPI(api_key=api_key)
    try:
        model_path = rc_api.create_3d_model(videos["drone"], videos["ground"])
        return model_path
    except Exception as e:
        print(f"Error generating 3D model: {e}")
        return None

# Step 3: Polish Model in Autodesk Maya
def polish_model(model_path, api_key):
    maya_api = MayaAPI(api_key=api_key)
    try:
        polished_model_path = maya_api.remesh_model(model_path)
        polished_model_path = maya_api.add_textures(polished_model_path)
        polished_model_path = maya_api.add_details(polished_model_path)
        return polished_model_path
    except Exception as e:
        print(f"Error polishing model: {e}")
        return None

# Step 4: Integrate VR Features in SimLab Composer
def integrate_vr_features(model_path, api_key):
    simlab_api = SimLabAPI(api_key=api_key)
    try:
        simlab_project_path = simlab_api.create_project(model_path)
        simlab_api.add_vr_interactions(simlab_project_path)
        simlab_api.add_gemini_pro_vision(simlab_project_path)
        return simlab_project_path
    except Exception as e:
        print(f"Error integrating VR features: {e}")
        return None

# Step 5: Generate Final VR Experience
def generate_vr_experience(simlab_project_path, api_key):
    simlab_api = SimLabAPI(api_key=api_key)
    try:
        vr_viewer_link = simlab_api.publish_to_vr_viewer(simlab_project_path)
        return vr_viewer_link
    except Exception as e:
        print(f"Error generating VR experience: {e}")
        return None

# Main function to run the pipeline
def run_pipeline(drone_video_path, ground_video_path, api_keys):
    api_key_rc = api_keys["realitycapture"]
    api_key_maya = api_keys["maya"]
    api_key_simlab = api_keys["simlab"]

    uploaded_videos = upload_videos(drone_video_path, ground_video_path)
    model_path = generate_3d_model(uploaded_videos, api_key_rc)
    if not model_path:
        return None

    polished_model_path = polish_model(model_path, api_key_maya)
    if not polished_model_path:
        return None

    simlab_project_path = integrate_vr_features(polished_model_path, api_key_simlab)
    if not simlab_project_path:
        return None

    vr_viewer_link = generate_vr_experience(simlab_project_path, api_key_simlab)
    return vr_viewer_link

# Example usage
api_keys = {
    "realitycapture": "YOUR_REALITYCAPTURE_API_KEY",
    "maya": "YOUR_MAYA_API_KEY",
    "simlab": "YOUR_SIMLAB_API_KEY"
}
drone_video = "path/to/drone/video.mp4"
ground_video = "path/to/ground/video.mp4"

final_vr_link = run_pipeline(drone_video, ground_video, api_keys)
if final_vr_link:
    print(f"Final VR experience available at: {final_vr_link}")
else:
    print("Pipeline failed")

# CNN image detection integration
def capture_screenshot():
    # SimLab-specific code to capture a screenshot (replace with actual implementation)
    screenshot = cv2.imread('screenshot.png')  # Placeholder
    return screenshot

def preprocess_image(image):
    # Resize image to match the model input shape (assuming 224x224)
    image = cv2.resize(image, (224, 224))
    # Normalize pixel values (adjust as needed)
    image = image / 255.0
    # Add batch dimension
    image = np.expand_dims(image, axis=0)
    return image

def predict_image(image):
    prediction = model.predict(image)
    # Convert prediction to text (replace with your logic)
    predicted_text = "Predicted text based on prediction"
    return predicted_text

def display_text(text):
    # SimLab-specific code to display text on screen (replace with actual implementation)
    print(text)  # Placeholder

# Load your CNN model (replace with your actual model path)
model_path = 'path/to/your/model'
model = tf.keras.models.load_model(model_path)

while True:
    controller_input = 'X'  # Placeholder: replace with SimLab's input handling
    if controller_input == 'X':
        screenshot = capture_screenshot()
        preprocessed_image = preprocess_image(screenshot)
        predicted_text = predict_image(preprocessed_image)
        display_text(predicted_text)
        time.sleep(1)
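
The loop above assumes a trained classifier already exists at model_path. For reference, here is a minimal sketch of how such a landmark classifier could be built and saved using Keras transfer learning; the class count, dataset folder, and epoch count are illustrative assumptions, not part of the original project:

import tensorflow as tf

# Illustrative assumptions: 10 landmark classes, images in "dataset/<class_name>/"
NUM_CLASSES = 10

# MobileNetV2 backbone pre-trained on ImageNet, frozen so only the new head trains
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    # Mirrors the /255.0 normalization that preprocess_image applies at inference time
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Expects one subfolder per landmark class, e.g. dataset/some_landmark/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)

model.save("path/to/your/model")  # same path the pipeline loads

Keeping the Rescaling layer inside the model means training and prediction see the same 0-1 input range, which matches the normalization step in the pipeline's preprocess_image.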

Coding steps involved to integrate a CNN AI image-detection model in SimLab Composer

Python
import cv2
import numpy as np
import tensorflow as tf

# Import your VR SDK
# Assuming vr_controller is an object from your VR SDK

def capture_screenshot(vr_controller):
    # Logic to capture a screenshot using VR controller input
    # Replace with your VR SDK specific code
    screenshot = vr_controller.capture_image()  # Placeholder function
    return screenshot

def preprocess_image(image):
    # Preprocess image for CNN input
    processed_image = cv2.resize(image, (224, 224))  # Example resizing
    processed_image = processed_image / 255.0  # Normalize pixel values
    processed_image = np.expand_dims(processed_image, axis=0)  # Add batch dimension
    return processed_image
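
The snippet above ends at preprocessing. A minimal sketch of the remaining steps might look like the following; note that button_pressed() and display_text() are hypothetical stand-ins for whatever input and overlay calls your VR SDK actually provides, and the class labels are placeholders:

def predict_landmark(model, processed_image, class_names):
    # Run the CNN and map the highest-probability class to a human-readable label
    probabilities = model.predict(processed_image)[0]
    best = int(np.argmax(probabilities))
    return class_names[best], float(probabilities[best])

# Placeholder labels: one entry per class the model was trained on
class_names = ['landmark_a', 'landmark_b']
model = tf.keras.models.load_model('path/to/your/model')

while True:
    if vr_controller.button_pressed('X'):  # Hypothetical input check from your VR SDK
        image = capture_screenshot(vr_controller)
        processed = preprocess_image(image)
        label, confidence = predict_landmark(model, processed, class_names)
        vr_controller.display_text(f"{label} ({confidence:.0%})")  # Hypothetical overlay call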

Credits

Nahian hossain
2 projects • 1 follower
I am an AI enthusiast. I make AI visual art using Stable Diffusion and other AI tools.
