Overview
Have you ever wanted to make some ghostly figures that amaze your neighbors and their kids on Halloween? Perhaps you have a Halloween party every year and you want to greet your guests with a ghostly surprise. Well, I am going to show you how to put together a really fun project that will be talked about by everyone who comes in contact with it.
This project leverages a well-known theatrical illusion called Pepper's ghost. We will use a Raspberry Pi 2 Model B running Windows 10 IoT Core to project an animated pumpkin onto a reflective surface. You can use an actual projector to display your animation, but I have a number of old LED monitors that will work just as well and won't cost me as much to put together. All of my LED monitors have DVI inputs, but the Raspberry Pi has an HDMI port for video output. No worries, as you can buy a simple HDMI to DVI adapter that will let you hook up your Raspberry Pi to most LED monitors. I purchased my HDMI to DVI adapter from Amazon. Using this type of adapter will not allow you to send audio out the HDMI port, since the DVI connector and LED monitor only support video signals. But you can still use the audio jack on the Raspberry Pi to blast horrifying sounds through some amplified speakers.
The application will monitor a motion sensor connected to the Raspberry Pi, and when motion is detected an animation will be projected onto the viewing area. The animation is made up of facial feature movements paired with sounds coming out of a pair of speakers. Once the animation is done, the application waits for the next motion detection and repeats the sequence.
The intent of this project is to get the foundation in place so that you can build your own animations. Simple animations can be made using WPF's image transformations and binding these transformation properties to a view model. You simply change the property values on the view model and the image moves or transforms in 2D space. If an image transformation does not achieve the correct 2D effect, you can also swap out the image for one that is shaped a little differently. This is not how you would build a complex graphical game, but animating a cartoon-like figure works very well using this technique.
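To make that concrete, here is a minimal sketch of the technique. The class and property names are illustrative, not the actual ones from my project: a view model exposes a plain property, and the XAML binds it to a transform on the image.

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class EyeViewModel : INotifyPropertyChanged
{
    private double _pupilAngle;

    // In WPF-style XAML this property can drive a transform, e.g.:
    // <Image Source="pupil.png" RenderTransformOrigin="0.5,0.5">
    //   <Image.RenderTransform>
    //     <RotateTransform Angle="{Binding PupilAngle}" />
    //   </Image.RenderTransform>
    // </Image>
    public double PupilAngle
    {
        get { return _pupilAngle; }
        set { _pupilAngle = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string name = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}
```

Setting PupilAngle from your animation code is all it takes to make an eye appear to look around.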
Defining facial movements in a way that does not dictate the shape of the face being animated was another goal of this project. I didn't want to animate a pumpkin using x/y coordinates, because that wouldn't translate well if you had a differently shaped pumpkin, or perhaps a skeleton head you wanted to animate. In other words, I wanted to be able to create a timeline-based animation that could be applied to any face. This meant I needed a domain-specific language that describes a facial expression. As luck would have it, this domain language already exists, and it is called the Facial Action Coding System.
Diagram of the high-level design
Build the display
Here I will lay out how to build the display that will hold the monitor as well as a scary scene that will be a backdrop for the ghost pumpkin.
You have a number of options for how to build the physical display that holds the animation scene. This is entirely up to you, but I want to give you a couple of key pointers to make sure you are successful. There are three primary considerations in the design:
- Ambient light level around the display
- Height of the projected image
- Size of your LED monitor
You want to ensure the lighting is not too bright where the display will be. External light might reflect off the acrylic or distract from the ghostly figure you want everyone to focus on.
The height is pretty important as well, because you don't want visitors to be able to see the actual LED monitor. You want them to see the reflection off the acrylic plastic instead of the monitor itself. Keep the height at about eye level for the majority of the people who will be viewing the scene.
Depending on the size of the monitor, you want to make the container big enough to hold any other Halloween props that sit beside or behind the projected ghost. I elected to make the container about 6 inches bigger on each side and 12 inches deeper than the actual monitor.
You can adjust the placement of the acrylic plastic so that it sits farther away from the monitor, making it harder for people to see the monitor itself. But the farther you place the acrylic from the monitor, the dimmer the ghost will appear to the viewer. This will require even less ambient light around the scene so that visitors can see the ghost clearly.
Step 1 - Hook up the Raspberry Pi
This project uses a sonar sensor that outputs an analog voltage in direct relation to the distance of the object it detects. Since the Raspberry Pi does not have a hardware analog-to-digital converter, we need to add an external one. The MCP3008 chip is an 8-channel, 10-bit ADC with an SPI interface, which means it can read 8 analog sources with 10 bits of precision and communicate with a microprocessor over the Serial Peripheral Interface bus. We are only going to use one of those channels for our sonar sensor.
The pinout for the 16-pin chip is as follows:
The MaxBotix sonar sensor comes mounted on its own PCB with 7 feed-through holes; you will need to solder wires to them, or solder on a header, so that you can mount the sensor on a breadboard or use female jumper wires. I decided to use a header strip and bend the pins 90 degrees so that I could plug the sensor into a female header strip or a breadboard.
Don't make fun of my poor soldering job as I am really out of practice.
Here is the pinout of the Raspberry Pi for your reference.
The following Fritzing breadboard diagram will guide you through wiring the sensor and the ADC to the Raspberry Pi.
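Once everything is wired up, reading the MCP3008 from Windows 10 IoT Core only takes a little code. Here is a minimal sketch using the Windows.Devices.Spi API; it assumes the ADC is on the SPI0 bus with chip select 0 (CE0), so adjust it to match your wiring.

```csharp
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Devices.Spi;

public static class Mcp3008Reader
{
    // Opens the MCP3008 on SPI0, chip select 0 (adjust to your wiring).
    public static async Task<SpiDevice> OpenAsync()
    {
        var settings = new SpiConnectionSettings(0)
        {
            ClockFrequency = 500000, // conservative clock for the MCP3008
            Mode = SpiMode.Mode0
        };
        string aqs = SpiDevice.GetDeviceSelector("SPI0");
        var devices = await DeviceInformation.FindAllAsync(aqs);
        return await SpiDevice.FromIdAsync(devices[0].Id, settings);
    }

    // Returns the raw 10-bit reading (0-1023) for the given channel (0-7).
    public static int ReadChannel(SpiDevice adc, int channel)
    {
        // MCP3008 command: start bit, then single-ended flag plus channel number
        byte[] write = { 0x01, (byte)(0x80 | (channel << 4)), 0x00 };
        byte[] read = new byte[3];
        adc.TransferFullDuplex(write, read);
        return ((read[1] & 0x03) << 8) | read[2]; // 10-bit result spans the last two bytes
    }
}
```

The three-byte transfer is the MCP3008's standard command format: a start bit, a configuration byte selecting single-ended mode and the channel, and a padding byte while the conversion clocks back out.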
Step 2 - Running the program
This application runs on the Windows 10 IoT Core platform. If you have not set up your Raspberry Pi 2 to run Windows 10, make sure you follow the detailed instructions for first-time setup.
You can fork my code or simply download the zip from my Hackster.Io.Pumpkin GitHub repository. If you choose to download the zip, make sure you unblock the downloaded file before you unzip it.
Even though this is a Universal Windows Platform application, I only set up the code to run on the ARM platform. The main thing that prevents the application from running on x86 is the drivers that use the SPI interface. In a future release of the software I will make it detect whether it is running on x86 and ignore the SPI hardware.
This application also uses MP3 and WAV files for the scary sound effects. However, the sound files are not part of the application; they must be copied to the Raspberry Pi's music folder before you run the application.
- Make sure the Raspberry Pi is powered up and connected to your network
- Make sure the HDMI cable is connected to the DVI port on the LED monitor and that the DVI port is the active input on the monitor
- Make sure your powered speakers are plugged into the audio jack of the Raspberry Pi.
- Using Windows Explorer, access the Raspberry Pi via a UNC path like \\192.168.0.40\c$, substituting your Raspberry Pi's IP address for the example 192.168.0.40.
- When prompted for a user name and password, make sure you prefix the user name with the Raspberry Pi's IP address (for example, 192.168.0.40\Administrator).
- If your login was correct, you should be browsing the C: drive of the Raspberry Pi.
- Select Users -> DefaultAccount -> Music. If the Music folder does not exist, create it.
- Copy all of the MP3 and WAV files from the sounds folder of the cloned or unzipped GitHub source.
- Paste the copied files into the Users -> DefaultAccount -> Music folder
Once you start making your own animations you can add your own sounds to this same folder.
Now you are ready to get the application running on the Raspberry Pi.
- Open up the Pumpkin solution.
- Make sure the Sonar Sensor is pointing into a wide open area, with no objects within at least 12 feet of the sensor.
- Set the target platform to be ARM
- Compile the entire solution
- Set the scare.pumpkin.ui project as the startup project
- Open up the scare.pumpkin.ui project properties
- Select the Debug tab, change the Remote machine to the Raspberry Pi's IP address, and make sure Use authentication is unchecked.
- Press F5 to compile, deploy and run the application.
- If all goes well, the application should deploy and start running. If your wiring is correct, the Sonar Sensor will be polled every 250 milliseconds. If no objects are detected, the monitor will remain black.
- If an object is detected 80 inches or closer to the sensor, a pumpkin animation will be displayed on the monitor and several of the sounds will begin to play (a sketch of this detection loop follows this list).
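For reference, here is a rough sketch of what that detection loop could look like, reusing the Mcp3008Reader sketch from Step 1. The inches conversion assumes the LV-MaxSonar's analog scaling of Vcc/512 per inch, which works out to roughly 2 ADC counts per inch when the sensor and ADC share the same supply; the class and event names are illustrative rather than the repository's actual ones.

```csharp
using System;
using Windows.Devices.Spi;
using Windows.UI.Xaml;

public sealed class SonarMonitor
{
    private const double TriggerInches = 80; // trigger distance used by the application
    private readonly DispatcherTimer _timer = new DispatcherTimer();

    public event EventHandler MotionDetected;

    public SonarMonitor(SpiDevice adc)
    {
        _timer.Interval = TimeSpan.FromMilliseconds(250); // poll four times a second
        _timer.Tick += (s, e) =>
        {
            int counts = Mcp3008Reader.ReadChannel(adc, 0); // sonar sits on channel 0
            double inches = counts / 2.0;                   // LV-MaxSonar scaling assumption
            if (inches <= TriggerInches)
            {
                MotionDetected?.Invoke(this, EventArgs.Empty);
            }
        };
        _timer.Start();
    }
}
```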
If the pumpkin animation seems to play over and over again, then your sensor is picking up an object that is closer than 80 inches. The sonar beam is fairly wide, and it is designed to return the distance to the closest object. It will sometimes get reflections off a flat surface such as the floor or a table top if the sensor is not positioned away from these surfaces. There is a hidden element on the screen that you can use to troubleshoot the distances your sensor is detecting. If you connect a mouse to the Raspberry Pi and move the cursor to the top left corner, a button's border will become visible. Simply click that button, and the sensor value in inches will begin to display. You can then point the sensor in different directions or use your hand to test that it is returning accurate distances. If you click the button again, the sensor value will be hidden again.
I have also experienced sensor distance readings of 30 to 40 inches during the animation playback. Once the animation completes, the readings return to something greater than 80 inches. I have no idea why that happens, but it forced me to put a delay at the end of my animation so that the sensor readings have time to recover to normal before another animation fires.
Step 3 - Make your own Animation
You can orchestrate your own animation by modifying scare.pumpkin.ui -> Services -> MotionActivatedSimpleAnimation.cs. Animations are time based and end up being a collection of actions. Only 3 types of actions are supported: Facial Coding, Sound, and Timer Stop. In addition to making your own animations, you can modify the images used for the background pumpkin head or any of the facial components such as the eyes, eyebrows, nose, or mouth.
Facial Coding
The Facial Action Coding System is very complex and was designed to describe facial expressions without worrying about the details of a specific face. You can string multiple ActionUnits together to create any facial expression. The WPF view models that convert an Action Unit into an actual 2D graphical representation do not react to every Action Unit; I only put in a basic subset, and you might have to add additional behaviors to achieve the desired result in your animation.
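If you need to extend that subset, the standard FACS Action Unit numbers are the place to start. Here is an illustrative sketch of what such an enum might look like; the AU numbers are part of the FACS standard, but the enum itself is not necessarily what the repository uses.

```csharp
// A small subset of FACS Action Units, using their standard AU numbers.
public enum ActionUnit
{
    InnerBrowRaiser = 1,   // AU1
    OuterBrowRaiser = 2,   // AU2
    BrowLowerer     = 4,   // AU4
    UpperLidRaiser  = 5,   // AU5 (wide, startled eyes)
    LipCornerPuller = 12,  // AU12 (smile)
    JawDrop         = 26   // AU26 (mouth open)
}
```

Because Action Units describe muscle movements rather than coordinates, the same sequence can drive a pumpkin, a skeleton head, or any other face you build view models for.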
Sounds
The sound service allows multiple sounds to be played at once, giving you a sort of crude mixing board that lets you combine sounds to achieve the desired effect. The system supports up to 6 channels for playing sounds. Don't forget to copy your sound files to the Raspberry Pi's music folder before you use any new sounds in your animation.
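Here is a minimal sketch of the multi-channel idea using XAML MediaElement controls; the actual sound service in the repository may be structured differently, and the class name here is illustrative.

```csharp
using System.Threading.Tasks;
using Windows.Storage;
using Windows.UI.Xaml.Controls;

public sealed class SoundService
{
    private readonly MediaElement[] _channels = new MediaElement[6]; // six mixing channels

    // The MediaElements are added to the page's visual tree (e.g. a hidden
    // panel) so that playback works reliably.
    public SoundService(Panel host)
    {
        for (int i = 0; i < _channels.Length; i++)
        {
            _channels[i] = new MediaElement { AutoPlay = false };
            host.Children.Add(_channels[i]);
        }
    }

    // Plays a file from the device's Music library on the given channel,
    // replacing whatever that channel was playing. Requires the Music Library
    // capability in Package.appxmanifest.
    public async Task PlayAsync(int channel, string fileName)
    {
        StorageFile file = await KnownFolders.MusicLibrary.GetFileAsync(fileName);
        var stream = await file.OpenAsync(FileAccessMode.Read);
        _channels[channel].SetSource(stream, file.ContentType);
        _channels[channel].Play();
    }
}
```

Playing different files on different channels overlaps them, which is all a crude mixing board needs.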
Timer Stop
This action is only here to provide the ability to stop the animation timer, which would otherwise run forever. Having a Timer Stop command also lets me delay the end of the animation so the sensor can settle down before detecting the next victim.
Conclusion
This project ended up being a little more challenging than I had originally intended. I wanted to use a simpler sensor to detect people than the sonar sensor I ended up with. The MaxSonar sensor is not difficult to use at all; in fact, it is one of the most reliable and easiest sonar sensors I have seen. The difficult part was that I had to add an extra ADC chip to read the analog voltage. There are other ways I could have read the sensor, but I had the MCP3008 ADC chip on hand, so I used it. I originally wanted to use a PIR sensor, but the one I had was discontinued and seemed to give false triggers. I was also concerned about whether a PIR sensor could reliably detect people outside in colder environments.
I hope someone gets some ideas from this project and builds their own cool animation. There are a lot of ways this can be enhanced:
- Support multiple sensors that can trigger the same or different animations
- Add a way to randomize what animation gets executed
- Make animations load from a file or a database
- Have animations that contain animations (a way to build larger animations from smaller re-usable ones)
- Network multiple Raspberry Pis that can all work together in one big animation
- Make it easier to build animations by using a joystick
- Have other items, such as a fog machine, strobe lights, or animatronic monsters, that can also participate in an animation.