In this project we will create the equivalent of a USB ‘webcam’ with embedded image processing able to detect and distinguish between left-to-right and right-to-left motion.
Project Introduction and Description
This project marries the inexpensive OV7670 camera module with the FRDM-K82F development board from Freescale/NXP. The purpose of the integration is to show how intelligence can readily be interfaced with an inexpensive imaging sensor and embedded into a USB web-cam.
The application of this particular proof-of-concept is to detect a person entering and leaving a room, or passing the base of a staircase in either direction. It does this by using basic image processing to distinguish between left-to-right and right-to-left motion.
Required Hardware Components
The FRDM-K82F is NXP’s development platform for their K82, K81, and K80 MCUs. Included on the board are a six-axis digital accelerometer and magnetometer, a tri-color LED, push buttons, external flash, touch pads, headers for Bluetooth and 2.4 GHz radios, and (most importantly) their “FlexIO” camera header.
The FlexIO module itself emulates a variety of serial and parallel communication protocols, includes very flexible timers, programmable logic blocks, and a programmable state machine. By using the FlexIO board header and example code included in their SDK, it is reasonably easy to connect the OV7670 camera module and demonstrate image capture and processing functionality.
The OV7670 camera module uses a low-voltage color CMOS image sensor and can provide 8-bit images in a wide range of formats. Many basic image processing functions are built into the module, allowing configurable control of exposure, gamma, white balance, saturation, etc. For this project, the image format used is RGB565. The camera module is available on Amazon for less than $9.
Lastly, to assist with debugging, I also leveraged an FTDI TTL-level-serial to USB converter (TTL-232R-3V3-PCB). This slick little device can be jumpered directly to the microcontroller’s serial port pins to catch debug output and transmit it to a PC’s COM port.
To assemble the hardware, the camera module can be soldered directly onto the development board. As shown in the accompanying pictures, the camera is oriented on the backside so as to not interfere with the Arduino headers. Populate the camera module starting at header pins 1 and 2; pins 17 and 18 of the board’s header are not used.
To access a serial port on the board, I targeted the Arduino headers and used pin 4 of J1 as the microcontroller serial TX (connected to the RX of the serial converter) and pin 14 of J2 as the ground. Fortunately, the SDK example software also uses the same serial port for “printf”, so no code changes were required. When accessing the COM port created by the serial converter, you will want to use a configuration of 115200-8-N-1.
Setting up the Development Environment
To develop for the K82F, you will want to download and install two large components from NXP. The first is the “K82F SDK”, and the second is the “Kinetis Design Studio” (a development IDE built on and leveraging Eclipse and the ARM GCC toolchain). To get these packages, walk through NXP’s introductory getting started tutorial available here. Within the walkthrough, the download links will be made obvious. Lastly, you will want to download and install the Segger J-Link software tools from Segger directly.
Follow the getting started tutorial closely. There is a sub-link labeled “Use Kinetis Design Studio” that is of great importance: a series of updates need to be downloaded, and some “new software” will need to be installed into Eclipse from the SDK package (located in the …./tools/eclipse_update directory).
Debugging on the Board
The K82F board has a 10-pin debugging header suitable for easy connection to a J-Link programmer. This is my preferred method of development, as I have found it the fastest and most reliable. Though the development board does include an on-board debugger, I often seem to run into problems with it. For one thing, an off-board debugger lets you power cycle the board without power cycling the debugger, and secondly, if you do transition to custom hardware, you will already be adept with the off-board debugger and debugging techniques. The same reasoning applies to the debug serial port made available through the on-board debugger.
And in all honesty, the board I received did not appear to be in a ‘factory new state’; the onboard debugger was stuck in its bootloader state and restarted every 45 seconds. When attempting to update the bootloader, a restart occurred while updating and pretty much bricked the bootloader. Thus, I quickly transitioned to a standard off-board J-Link debugger and proceeded happily.
Compiling the Example Code in the SDK
This project builds right on top of their example code used to demonstrate the OV7670 camera module. To get the example project built, a series of modules need to be imported into the Eclipse environment. Specifically, go to the menu <File><Import…><General><Existing Projects Into Workspace> and navigate to the downloaded and expanded SDK. Once there, you will want to import the following projects (note that the ‘KDS’ suffix in the pathname indicates the Kinetis Design Studio variant of the project; of the two identically named projects, that is the one you want):
mqx_frdmk82f
mqx_stdlib_frdmk82f
usbd_sdk_mqx_lib_MK82F25615
ksdk_mqx_lib_K82F25615
ksdk_platform_lib_K82F25615
dev_video_flexio_ov7670_mqx_frdmk82f
Since the above modules have some dependencies on each other, you will want to build them in roughly the order listed above.
The “dev_video” project is where we are going to be doing our hacking. The “usbd” is the USB driver used to support the USB web-cam aspect of the project, and “MQX” is the real-time operating system provided by Freescale.
Getting into the Example Code
The first thing that I want to point out is that much of the NXP SDK code and special-purpose drivers are “example code”. Emphasis on “example”. A couple of years ago I wrestled a considerable amount with their provided USB drivers; much of the code was incomplete and buggy. There was enough functionality to set up the hardware and demonstrate happy-path functionality, but you quickly ran into problems because it was not production code. I think things have improved since then, and I too am simply in the mode of hacking together some functionality. The point is simply that for any example code you choose to use, you will typically have to invest the time to thoroughly understand it.
Other aspects of the SDK are very solid. For instance, the MQX operating system is very easy to use, consistent, matches the documentation, and works reliably. My theory is that the patchwork quality of the code is a direct outcome of how they either invest in and acquire functionality or simply outsource to expedite and “save money”.
So with expectations set appropriately, the code I created for this project is also “example code”. I achieved an accuracy of over 95% in my pristine and very narrow operating conditions, but I wouldn’t represent it as robust for all conditions or environments. It also suffers from some “quick, get it done” hacks. Nevertheless, it is a good place to get started.
The “camera.c” file is essentially the root of the application, roughly equivalent to what would typically be described as “main”. After the MQX OS is initialized, control is transferred to Main_Task, the board is initialized, and a sub-task is created where custom tasks can be placed. Plopping right down on top of their framework, I placed this application’s infinite processing loop into the stubbed-out “APP_task” function.
Also in the “camera.c” file are the custom application callback functions used to integrate with the USB stack. Specifically, they respond to USB events, grab the latest image data available from the camera, and send it over to the PC.
Image capture is double buffered, and (as originally written) images are only captured IF the PC is asking for image data. Thus, if the USB port is plugged into the PC, the PC will enumerate the camera but the board will not capture and transfer data until an application is launched to view the webcam.
A second important file is the “flexio_ov7670.c” file. Here are the camera integration functions and where we will go to hack in notifications that an image capture has completed.
“usb_descriptor.c” will be modified to advertise a higher default frame rate to the USB host.
“flexio_ov7670.h” will be modified to tweak the camera module’s initialization.
Project Code and Customization of the SDK Example Code
usb_descriptor.c
I increased the default frame rate advertised by the USB descriptor from 5 to 10 fps, by changing line 342 from:
0x80,0x84,0x1E,0x00, /* Default frame interval is 5fps */
to
0x40,0x42,0x0F,0x00, /* Default frame interval is 10fps */
The configuration values were lifted from this freescale page.
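For reference, those four bytes are the UVC frame interval expressed in 100 ns units, stored least-significant byte first (10 fps → 1,000,000 → 0x000F4240). Below is a small standalone sketch that derives the bytes for any frame rate; frame_interval_bytes is my own hypothetical helper, not SDK code:

#include <stdint.h>
#include <stdio.h>

/* UVC frame intervals are 32-bit values in 100 ns units,
 * serialized least-significant byte first in the descriptor. */
static void frame_interval_bytes(uint32_t fps, uint8_t out[4])
{
    uint32_t interval = 10000000UL / fps;  /* 100 ns units per frame */
    out[0] = (uint8_t)(interval & 0xFF);
    out[1] = (uint8_t)((interval >> 8) & 0xFF);
    out[2] = (uint8_t)((interval >> 16) & 0xFF);
    out[3] = (uint8_t)((interval >> 24) & 0xFF);
}

int main(void)
{
    uint8_t b[4];
    frame_interval_bytes(10, b);  /* 10 fps -> 1,000,000 -> 0x40,0x42,0x0F,0x00 */
    printf("0x%02X,0x%02X,0x%02X,0x%02X\n", b[0], b[1], b[2], b[3]);
    return 0;
}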
flexio_ov7670.h
To dumb down the camera a bit so that it does not react so aggressively to changes in light levels, I modified the project’s default camera configuration by removing the “advanced_config” structure and setting the pointer to NULL. At approximately line 138, change:
.advanced_config = &ov7670_advanced_config,
To:
.advanced_config = NULL,
flexio_ov7670.c
Ascertaining (guessing?) that the portb_callback function is called when the camera driver completes the acquisition of an image, I hacked in a (volatile) global variable mechanism to signal to the application task that an image is available. At approximately line 281:
static void portb_callback(uint32_t pin_status)
{
    extern volatile int buf_indicator;
    static int prior_capture = 0;

    if (pin_status == (1 << BOARD_USB_DEVICE_VIDEO_FLEXIO_OV7670_GPIO_VSYNC_PIN_INDEX)) /* vsync pin */
    {
        FLEXIO_Camera_HAL_SetRxBufferDmaCmd(&gFlexioCameraDevStruct, 0x03, false);
        /*
         * The assumption here is that this callback indicates the end of an image capture.
         * Communicate the availability of the data to the application using a global variable.
         * '1' indicates buffer-zero has data, '2' indicates buffer-one has data.
         */
        if (prior_capture > 0)
        {
            buf_indicator = prior_capture;
            prior_capture = 0;
        }
        if (start_capture > 0)
        {
            edma17_transfer_config.destAddr = (uint32_t)&u8CameraFrameBuffer[start_capture-1][32];
            eDma_Reset(kEDMAChannel17, &edma17_transfer_config); /* flexio_camera to buffer. */
            /* Clear all the flags. */
            FLEXIO_Camera_HAL_ClearRxBufferFullFlag(&gFlexioCameraDevStruct);
            FLEXIO_HAL_ClearShifterErrorFlags(gFlexioCameraDevStruct.flexioBase, 0xFF);
            FLEXIO_HAL_ClearTimerStatusFlags(gFlexioCameraDevStruct.flexioBase, 0xFF);
            FLEXIO_Camera_HAL_SetRxBufferDmaCmd(&gFlexioCameraDevStruct, 0x03, true);
            prior_capture = start_capture;
            start_capture = 0;
        }
    }
}
camera.c
What’s left is the main processing loop and the image processing routines.
This main processing loop simply polls the volatile global variable set by the image capture callback, waiting for an image to arrive. Once an image is detected, control is passed off to the image processing routine.
As noted previously, the example SDK application only captures images if the USB stack is requesting data. Because we need to detect motion whether or not a PC is looking at the images, this loop will step in and request image captures whenever the USB stack isn’t. Letting the USB module request its own image captures is significant because it allows capture to stay synchronized with the USB transfers; it becomes pretty obvious that image capture slows down when the control loop switches to USB-motivated capture.
This “APP_task” routine will be shown later with all the rest of the motion detection routines in “camera.c”.
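In the meantime, the overall shape of the loop is roughly the sketch below. Here process_image and usb_is_streaming are hypothetical stand-ins for the motion-detection entry point and the USB-state check, and start_capture is assumed to be the same flag consumed by the vsync callback shown above:

volatile int buf_indicator = 0;     /* written by portb_callback: 1 or 2 */
extern volatile int start_capture;  /* assumed defined alongside the capture driver */

static void APP_task(uint32_t param)
{
    (void)param;
    for (;;)
    {
        if (buf_indicator != 0)
        {
            int buf = buf_indicator - 1;   /* '1' -> buffer zero, '2' -> buffer one */
            buf_indicator = 0;
            process_image(buf);            /* hypothetical motion-detection entry point */
        }
        /* If the USB stack is not driving capture itself, request one here. */
        if (!usb_is_streaming() && (start_capture == 0))
        {
            start_capture = 1;             /* ask the vsync callback to arm buffer zero */
        }
        _time_delay(1);                    /* MQX: yield briefly between polls */
    }
}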
Image Processing to Detect Motion and Direction
The SDK application pulls bytes from the camera in 8-bit chunks, though the camera is configured to produce an RGB565 image format consisting of 16-bit pixels. To simplify pixel access, the header is skipped past, and the image buffer is type-cast and sent to the image processing routine as an array of 16-bit values. The specific format of the 16-bit pixel is as follows:
RGB565 packs red into bits 15–11, green into bits 10–5, and blue into bits 4–0 of each 16-bit pixel.
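To make that layout concrete, here is a minimal sketch of how one pixel can be unpacked in C; pixel_brightness is my own illustrative helper, and the project’s averaging simply uses the raw red+green+blue sum:

#include <stdint.h>

/* RGB565: red in bits 15-11, green in bits 10-5, blue in bits 4-0. */
static inline uint32_t pixel_brightness(uint16_t px)
{
    uint32_t r = (px >> 11) & 0x1F;  /* 5-bit red   */
    uint32_t g = (px >> 5)  & 0x3F;  /* 6-bit green */
    uint32_t b =  px        & 0x1F;  /* 5-bit blue  */
    return r + g + b;                /* the raw sum fed into the slice averages */
}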
The image processing routines ASSUME that the image is 160 pixels across (horizontal) and 120 pixels down (vertical). In full acknowledgement of the hackery, you will find these magic number constants littered in the code.
The image processing routine is very basic. It slices the image up into 4 equally sized vertical ‘slices’ (SLC). For each slice, it creates an average value over all the pixels (the value averaged is actually red+green+blue). The routine remembers the average of each vertical slice for up to 6 (DPTH) consecutive images. Every time a new image comes along, the averages of the oldest image get thrown out.
Ultimately, the algorithm is looking for disturbances in the average values over time, the window being the last 6 captured images. For each slice, it determines where in time the largest disturbance occurred. If the peak disturbance shows a trend from the leftmost slice to the rightmost, left-to-right motion is flagged; vice versa for right-to-left.
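Condensed into C, the decision logic looks something like the sketch below. This is an illustration of the idea under the SLC=4, DPTH=6 assumptions; hist, peak_time, and detect_direction are my own illustrative names, not the exact project code:

#include <stdint.h>

#define SLC  4   /* number of vertical slices across the image */
#define DPTH 6   /* history depth: number of recent frames kept */

/* Rolling history of per-slice average brightness; index 0 is the oldest frame. */
static uint32_t hist[DPTH][SLC];

/* Return the frame index at which this slice deviated most from its own
 * mean over the history window, i.e. when its disturbance peaked. */
static int peak_time(int slice)
{
    uint32_t mean = 0, best_diff = 0;
    int t, best_t = 0;
    for (t = 0; t < DPTH; t++)
        mean += hist[t][slice];
    mean /= DPTH;
    for (t = 0; t < DPTH; t++)
    {
        uint32_t diff = (hist[t][slice] > mean) ? (hist[t][slice] - mean)
                                                : (mean - hist[t][slice]);
        if (diff > best_diff)
        {
            best_diff = diff;
            best_t = t;
        }
    }
    return best_t;
}

/* Returns +1 for left-to-right motion, -1 for right-to-left, 0 for no clear trend. */
static int detect_direction(void)
{
    int s, ltr = 1, rtl = 1;
    for (s = 1; s < SLC; s++)
    {
        if (peak_time(s) <= peak_time(s - 1))
            ltr = 0;    /* peaks do not get strictly later moving rightward */
        if (peak_time(s) >= peak_time(s - 1))
            rtl = 0;    /* peaks do not get strictly earlier moving rightward */
    }
    return ltr ? 1 : (rtl ? -1 : 0);
}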
Upon detection of motion, the detected direction is printed to the serial port. Simultaneously, a white arrow indicating the direction is drawn on top of the image, which is useful if a PC is monitoring the output of the USB web cam. After detection, this arrow is left on the image for 10 (HOLD_CNT) frames.
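Overlaying the indicator is just a matter of writing white pixels (0xFFFF in RGB565) into the frame buffer before the USB stack picks it up. A simplified sketch, assuming the 160x120 geometry; draw_arrow is illustrative, not the project’s actual drawing code:

#include <stdint.h>

#define IMG_W 160
#define IMG_H 120

/* Draw a crude white arrow across the middle of an RGB565 frame.
 * dir > 0 points right (left-to-right motion), dir < 0 points left. */
static void draw_arrow(uint16_t *img, int dir)
{
    int x, d;
    int y = IMG_H / 2;
    int tip = (dir > 0) ? 119 : 40;     /* pointed end of the shaft */
    for (x = 40; x < 120; x++)
        img[y * IMG_W + x] = 0xFFFF;    /* horizontal shaft */
    /* A few diagonal pixels suggest the arrowhead at the pointed end. */
    for (d = 1; d <= 6; d++)
    {
        x = tip + ((dir > 0) ? -d : d);
        img[(y - d) * IMG_W + x] = 0xFFFF;
        img[(y + d) * IMG_W + x] = 0xFFFF;
    }
}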
The USB web cam demo out of the box did not work flawlessly for me. Most of the programs that accessed this camera would get an error from the camera and not try again. Eventually, using an Ubuntu virtual machine, I found a Linux application called “Camorama” that worked some of the time. It usually took a handful of application restarts to see a ‘live image’.
Ultimately, I was only interested in the web-cam image so that I could verify I was decoding the image properly and further debug and validate the image processing. The actual purpose of this project is not to be a real web cam, but simply a headless embedded device that detects directional motion. As a result, I did not bother troubleshooting the frailty.
With that, here is an example of the camera sitting idle and then displaying detected motion.
The majority of the source code modification is contained in the file "camera.c". I have uploaded that entire file as part of this project. To use, simply overwrite the SDK's existing version of it.