There is no arguing that our electronics are getting smarter. Smart enough that some of our battery-powered devices can even talk and listen! What enables this capability is a technical area called signal processing. We will show how you can get started with Digital Signal Processing (DSP) using the TI LaunchPad™ ecosystem. Get a hands-on deep dive into the TI LaunchPad™ development kit and the SimpleLink™ family of products, and learn where to find additional resources to grow your knowledge of signal processing and how it relates to small-scale machine learning. Machine Learning (ML) and Artificial Intelligence (AI) applications introduce many complexities to embedded systems, so you will learn how signal processing helps engineers train device models to make smarter products.
Benefits of attending the workshop:
• Understand how ML and AI apply to embedded systems and the unique use cases
• Visualize how to apply signal processing techniques in a wide variety of complex system scenarios
• Gain exposure to the Arm architecture and its flexibility, from simple projects to highly sophisticated systems
• Know how to get better results out of design projects using new methods
Credit to Dr. Patrick Schaumont at Worcester Polytechnic Institute for developing this example. If you'd like to learn more from Dr. Schaumont, he offers online materials for his course.
You can find links to those on his website: https://schaumont.dyn.wpi.edu/schaum/teaching/4703/
This workshop also has pointers to the TI SimpleLink Voice Detection Plug-in which is a library that can be used with the MSP432 LaunchPad and the Audio BoosterPack:
https://www.ti.com/tool/SIMPLELINK-SDK-VOICE-DETECTION-PLUGIN
Everything required for the workshop experience is provided on this page. This page is accessible from www.tinyurl.com/tidspworkshop2021 for your convenience. Scroll down to the Schematics section for the slides and the Code section for downloads.
Pre-work
This experience is split into two distinct tracks.
Track A is for setting up for running on a local copy of Code Composer Studio IDE. Track A focuses on getting the computer set up for building the project from source on the MSP432 LaunchPad.
Track B uses UniFlash, a much more lightweight and quicker procedure that runs in the web browser. For speed, Track B is recommended for experiencing the DSP demo, but participants who would like to modify the code should follow Track A so they can fully develop with the TI example code.
We will be using the DSP library developed at WPI
https://github.com/wpi-ece4703-b20/msp432_boostxl_lib
We will also be using TI's Code Composer Studio IDE, which is a free-to-use, industry-grade, Eclipse-based IDE. CCS has many built-in features to help with debugging and software development. Version 7 and above is recommended; version 9 works best with the MSP432. You can find the installer downloads at the bottom of the page under Custom Packages.
http://www.ti.com/tool/ccstudio
You will also need the 20.2 compiler.
https://www.ti.com/tool/download/ARM-CGT-19/
Lastly, we will be referring to the SimpleLink SDK Voice Detection plug-in, which is accessible from Resource Explorer inside Code Composer or online.
Signal processing is prevalent in a wide variety of application use cases. A signal is data collected over time. When we think of analog data, we usually picture it as a continuous line on a time graph: we can see the waveform change shape depending on the time scale. Discrete data, by contrast, is collected through sampling. The number of samples affects the quality of the data, and dealing with sample rates is a big part of signal processing.
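To make the sample-rate point concrete, here is a small sketch in plain Python (illustrative only, not MSP432 code): the same continuous tone sampled at two different rates, showing how an inadequate sample rate destroys the signal.

```python
import math

def sample_sine(freq_hz, fs_hz, n_samples):
    """Sample a continuous sine wave of freq_hz at sample rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

# A 1 kHz tone sampled at 8 kHz -- comfortably above the 2 kHz minimum
# (Nyquist) rate -- preserves the waveform.
good = sample_sine(1000, 8000, 8)

# The same tone sampled at only 1 kHz: every sample lands at the same
# phase of the wave, so the tone aliases down to a flat line and all
# information is lost.
aliased = sample_sine(1000, 1000, 8)
print(all(abs(s) < 1e-9 for s in aliased))  # True: the signal aliased away
```

This is why the sample rate is such a central design parameter: it sets a hard ceiling on the frequencies the system can represent.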
A DSP is a special type of processor because it is designed to excel at the highly parallel arithmetic that signal processing demands. Such a processor needs three things: fast compute, fast data access, and fast execution control.
The DSP signal chain is fairly basic. Essentially, we convert from analog to digital to feed samples into the DSP, and then convert back from digital to analog on the way out. On both ends there are filtering techniques that help us better condition the data. Typically these are band-pass filters, and they simplify the work the DSP needs to do to get the desired application results.
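The digital-filter stage in that chain is usually a chain of multiply-accumulate operations, which is exactly what DSP hardware is optimized for. Here is a minimal Python sketch of a direct-form FIR filter (the coefficients and signal are made up for illustration; a real design would compute band-pass coefficients from a specification):

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR filter: each output sample is a dot product
    (multiply-accumulate loop) of the most recent inputs with the
    filter coefficients."""
    out = []
    history = [0.0] * len(coeffs)
    for x in samples:
        history = [x] + history[:-1]          # shift in the newest sample
        out.append(sum(c * h for c, h in zip(coeffs, history)))
    return out

# A 4-tap moving average (a crude low-pass) smooths sample-to-sample noise.
noisy = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.1, 0.9]
smoothed = fir_filter(noisy, [0.25, 0.25, 0.25, 0.25])
```

On a DSP, that inner dot product maps onto dedicated MAC instructions, which is why the "fast compute" requirement above matters so much.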
A common way to explain this in a real-world application is audio processing. A DSP is very handy in audio playback for improving the output a speaker or headphones produce from an audio source, but also for processing incoming audio to perform keyword recognition, natural language processing, transcription, and many other useful tasks on digital data. We probably deal with this every day through our voice-activated smart assistants.
Another common example of DSP used in this manner is image or video processing. An image can be sent to a DSP to look for particular regions, such as a face for face recognition, or to identify objects in an image like a flower or an animal. This has many useful applications: when we look at an image with our eyes, there may be something in particular we are searching for, and our brains help discern it. Video is the same, except we are getting image frames that need to be processed. These frames can arrive at a very high frame rate and resolution, but you can still run image processing on each one.
Did you know there is also a wide variety of uses for signal processing in the medical field? Our bodies generate many biosignals, which can be measured with techniques like ECG (EKG) and EEG. These tell us interesting things about our health and can perhaps give us insights into what is going wrong, or may go wrong in the future.
For a more advanced example, we can look at autonomous vehicles. If you are designing an autonomous moving system from the ground up, you not only need to design the sensors and signals that feed the system for navigation, but you also need to contextualize the data and create boundaries that comply with the real-world rules of driving. As humans, we are trained on the rules of the road and the operation of the vehicle, but an autonomous system has to be adapted to these rules and know how to take in the appropriate sensor data from the environment to drive on the same level as, if not better than, a human operator. This requires prediction, anticipating what other vehicles and objects on the road are doing, so decisions can be made.
RF and wireless communication is another realm where DSP is very helpful. Multipath propagation is a common issue that distorts signals in both time and frequency.
Here are examples using Wi-Fi and 5G.
Here is a funny older video about TI DSPs for your enjoyment.
Introducing Machine Learning
[Part 2 workshop is linked here]
One goal or aspiration of engineering is the development of Artificial Intelligence. How do we better understand what AI is, so we can build systems that get closer to this goal? If we work backwards, we know intelligence as what we experience as human beings. Our brains absorb a flood of signals and information coming from the sensors in our bodies (the senses: sight, touch, smell, hearing, taste, temperature) and make decisions both voluntary and involuntary. When we want to build AI, it is similar: a system we design that can make decisions on its own, based on data that is processed and interpreted to inform behavior. This work is mostly done in a digital context, but in areas such as robotics or transportation, where we can attach actuators and motor drive to a system, it can also have a behavioral impact on the physical world.
Because AI is such a hot topic and has been around for many decades, it can be confusing how the term is used to describe systems and intertwined with machine learning or deep learning techniques. Many times it is conflated with plain software engineering and automation. Anything that appears to make decisions independently can be, and is, marketed as AI, even though from a technical point of view the decision tree baked into the system design is limited. Think of algorithms that trade stocks, robot vacuum cleaners that traverse a room, or any number of software automations that help navigate choices and timing. Yes, some of these applications can get extremely complex, but to say they are equivalent to a more broadly defined intelligence may be a stretch.
When we talk about Machine Learning, we are looking at ways that we can train computers to identify certain pieces or aspects of data. This is a subset of the AI field.
Machine Learning can be done on high-performance computers, and that is how many people approach it. Think of famous machines like IBM's Deep Blue, which beat a grandmaster at chess (though largely through brute-force search rather than learned models).
There is a whole segment of machine learning that can focus on low power compute modules such as microcontrollers. This is called embedded machine learning or TinyML.
Deep learning is a branch of ML that uses multi-layer neural networks to train a model. Once a model is trained, the machine can consistently produce correct results, even when new, previously unseen data is injected and processed.
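To show what "training a model" means at the smallest possible scale, here is a toy sketch in plain Python (a single sigmoid neuron trained by gradient descent on made-up data; real deep learning stacks many such units and uses frameworks like TensorFlow Lite):

```python
import math

def train_neuron(data, epochs=500, lr=0.5):
    """Train one sigmoid neuron (weight w, bias b) with gradient
    descent on (input, label) pairs, labels being 0 or 1."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))   # model's prediction
            w -= lr * (p - y) * x                  # gradient step on w
            b -= lr * (p - y)                      # gradient step on b
    return w, b

# Learn a threshold: inputs below ~0.5 are class 0, above are class 1.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
w, b = train_neuron(data)

# The trained model generalizes to an input it never saw during training:
p_new = 1 / (1 + math.exp(-(w * 0.95 + b)))
print(p_new > 0.5)  # True
```

The last two lines are the key point of the paragraph above: after training, the model gives correct answers on new data, not just on the examples it was trained with.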
Here is a video about TinyML
Here is another video about TinyML.
Lab 1A - Setting up Code Composer Studio
CCS is straightforward to set up, but the installation takes some time. Go ahead and download it from TI and run through the installer. There are multiple versions of Code Composer; version 9 runs best. We are using the MSP432 LaunchPad for our hardware, so install the relevant MSP432 packages.
Note: CCS is a large program so there are sometimes things that we may need to troubleshoot. If you have anything that isn't going smoothly, check out the CCS documentation and FAQs provided by TI.
Setup Hardware
Now we can set up our hardware. Go ahead and put the Audio BoosterPack on top of the MSP432 LaunchPad with the speaker facing the top side. Connect your LaunchPad to the PC with the included USB cable.
At this point, all the drivers we need should have been installed as part of the CCS installation. You should see the LaunchPad's COM ports populated in your device manager.
If that all went smoothly you should be good.
Load example code
First thing we will do is load up some sample code projects. Grab the msp432_voicerecorder_ccs.zip from the bottom of this page and unzip it to get the CCS project. You can import this by going to Project > Import CCS Project.
To run the example project, click the hammer icon at the top to build the project, then click the bug icon to enter debug mode, and finally click the green play button to run it on the hardware.
If you are successful then you are all set up.
Lab 1B - TI Cloud Tools and UniFlash
Hardware Required
- TI LaunchPad
To start off the workshop we will load up the practice GUI to get our computer set up to communicate with the LaunchPad. The TI cloud tools can run directly from your browser (Chrome recommended, Firefox also supported). In the demo code we will control our LED on the MSP432 LaunchPad and test the LCD. CCS Cloud and CCS Desktop are good options for more serious development and are integrated with many resources and documentation from TI for both the hardware and software. The objective of this first lab is to introduce you to the resources available on TI cloud tools and also help you install the MSP432 LaunchPad drivers via TI Cloud Agent. You can also get the drivers when installing the desktop version of Code Composer Studio.
1. We will start with the out of box GUI for the MSP-EXP432P4111
https://dev.ti.com/gallery/view/3491167/MSP-EXP432P4111_OOB/ver/1.0.0/
Get set up in the GUI. For the first time it will ask you to install two components, the browser extension, and the TI Cloud Agent. Install both of those to proceed.
Interact with the GUI. You can program the out of box firmware by going to File > Program Device... If it isn't working, make sure the correct Serial COM port is selected for the LaunchPad by going to Options > Serial Port... and select from the list. Verify in the device manager that the XDS110 UART and XDS110 Data ports are available on the LaunchPad and assigned a COM port in the ports section of device manager.
Note: most of the elements of the GUI won't work with the default firmware, but you should be able to adjust the Blink Rate and see the change in real time on the hardware.
2. Go to dev.ti.com. Click the UniFlash box listed under the Cloud tools section.
3. If you don't have a myTI account already, you can register for one and then sign in. If you do have one, go ahead and sign in and UniFlash will load.
4. Download the MSP432P4111_LCD_1.out at the bottom of this page in the Code section. Download the MSP432P4111_LCD_2.out at the bottom of this page in the Code section.
5. If your LaunchPad is plugged in, it should be detected by UniFlash. Navigate to the .out file you downloaded and click program. Now the program should be running on your LaunchPad. You should see the LCD changing from the out of box example.
6. Now you can add your BOOSTXL-AUDIO to the LaunchPad. Stack it on top with the speaker over the USB-connector end of the LaunchPad. Make sure the pins are aligned and fully inserted for a solid connection.
If you are having any difficulties loading the firmware or updating the debugger firmware, try the following.
1) Download the XDS support utility. This should resolve your issue. You may need to restart the browser and unplug and re-plug your LaunchPad to fully reset it.
Lab 2B - DSP example using UniFlash tool
Hardware Required
- TI LaunchPad
- Audio BoosterPack
1. Use UniFlash at dev.ti.com
2. Download the MSP432_quantize and MSP432_voicerecorder.out at the bottom of this page in the Code section.
3. If your LaunchPad is plugged in, it should be detected by UniFlash. Navigate to the .out file you downloaded and click program. Now the program should be running on your LaunchPad.
4. Try the basic demo: recording and playback on the MSP432 with the BOOSTXL-AUDIO.
When you press the left button, the board records one second through the microphone while turning on the left red LED.
When you press the right button, the board plays back the sound while turning on the right LED in blue, green, or red.
If it’s green, the sound plays at normal speed.
If it’s blue, the sound plays at double speed.
If it’s red, the sound plays backward.
If you keep the right button depressed, the sound plays repeatedly.
Every time you release the right button, the playback mode shifts from normal speed, to double speed, to backward.
5. Here is the video demo of the FFT (skip to 1:15 to see the demo)
6. The next example will use the Voice Detection SDK Plug-in. Since we don't have the exact hardware to replicate the demo, we will view the demo video to give an idea what is possible on this type of microcontroller hardware.
Here is an example of speech recognition on MSP432
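The double-speed and backward playback modes in the demo above come down to simple operations on the sample buffer. Here is a sketch of the two ideas in Python (illustrative only; the actual demo firmware is the C project from the WPI library):

```python
def double_speed(samples):
    """Play at 2x by keeping every other sample: halves the duration
    and shifts the pitch up an octave."""
    return samples[::2]

def backward(samples):
    """Play in reverse by emitting the recorded buffer back-to-front."""
    return samples[::-1]

# A stand-in for one second of recorded audio samples:
recording = [0, 1, 2, 3, 4, 5, 6, 7]
print(double_speed(recording))  # [0, 2, 4, 6]
print(backward(recording))      # [7, 6, 5, 4, 3, 2, 1, 0]
```

On the MSP432, the same buffer manipulations are applied to the one-second recording before the samples are streamed back out through the DAC.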
DSP Concepts and Training
TI provides SimpleLink Academy to give you on-demand training around embedded systems and other critical topics. You can access SimpleLink Academy from the Resource Explorer inside CCS, or you can use the online Resource Explorer at dev.ti.com
First we will review DSP Concepts with the DSP course lectures:
https://schaumont.dyn.wpi.edu/ece4703b20/introduction.html
https://www.ti.com/lit/an/slaa707a/slaa707a.pdf
On your own you can also check out the TI DSP and SimpleLink workshops from training.ti.com:
- https://software-dl.ti.com/trainingTTO/trainingTTO_public_sw/c28x2808/C28x%20Workshop.pdf
- http://software-dl.ti.com/trainingTTO/trainingTTO_public_sw/c24x2407/DSP24%20Workshop.pdf
- https://training.ti.com/introduction-simplelink-sdk
- https://www.ti.com/microcontrollers-mcus-processors/processors/digital-signal-processors/overview.html
If you prefer a traditional class-style experience, try out the Coursera or edX courses.
https://www.coursera.org/specializations/digital-signal-processing
https://www.edx.org/course/discrete-time-signal-processing-4
Hope you enjoyed this little demo of DSP development on the TI LaunchPad and Audio BoosterPack! Please share if you thought this was a cool project and be sure to check out Dr. Schaumont's online materials if you want to dive deeper into ARM Cortex-M based embedded systems.
TinyML Concepts and Training
Machine Learning is still an emerging area when it comes to embedded systems. The leading way to engage in TinyML is through TensorFlow Lite. There are other emerging entry points such as Edge Impulse.
Check out these Edge Impulse webinars:
Webinar 1 with MSP432: https://event.on24.com/wcc/r/3320201/68B622FA750DF9D5D1EA20D4F0E3DCCB
Webinar 2 with CC1352: https://register.gotowebinar.com/register/2450348940747485198
Apply to TI
Interested in applying for TI internships and full-time opportunities? Check out what's next at careers.ti.com. Be prepared for interview season at the beginning of each semester. Review your fundamentals from circuits and other classes. Apply online to as many opportunities as you can, and visit TI at the career fairs at your school.
https://careers.ti.com/working-at-texas-instruments/
https://careers.ti.com/hiring-interview-process-2/