When we think of embedded vision systems, we tend to think of solutions working in the visible portion of the electromagnetic (EM) spectrum. However, the human-visible band covers only a narrow range of the EM spectrum, and working outside it can provide significant additional data. An example of this is working in the infrared part of the spectrum to detect hot spots in industrial systems, or to see in low-light / night conditions.
Depending upon where we work in the EM spectrum, we may need to consider a different semiconductor technology for the sensor itself:
- Charge Coupled Device (CCD) – X-ray to visible, stretching to near infrared
- CMOS Image Sensor – X-ray to visible, stretching to near infrared
- Uncooled IR – Microbolometer – Typically operates in the mid-IR to long-IR range
- Cooled IR – HgCdTe- or InSb-based solutions which require cooling
As the wavelength increases, it becomes more difficult to excite electrons, as the photon energy is no longer sufficient to bridge the band gap in silicon. As such, to work in the IR domain we often require more exotic semiconductors than silicon, e.g. HgCdTe or InSb.
In this project we are going to create an image processing system which can "see" in both the visible and IR domains. We will use a low-cost FPGA board to achieve this, showing that such a solution does not have to be expensive.
Hardware Design
This design is going to use a FLIR Lepton 2, an HDMI camera and an HDMI display to output an image which represents both the visible and IR views of the same scene.
To do this in Vivado, we are going to use the following IP:
- Digilent DVItoRGB - Converts HDMI to RGB
- Digilent RGBtoDVI - Converts RGB to HDMI
- Video Timing Controller - Used for detecting and generating video timing
- VDMA - Moves the received HDMI image into DDR memory, and reads the combined HDMI and thermal image out of DDR into the output video pipeline
- Quad SPI - Configured as a single (standard) SPI to receive the VoSPI data from the Lepton
- GPIO - Provides HDMI hot plug detect reception and generation
The reading of the IR imager is actually performed by software, with the result output over the VDMA. This is in contrast to the visible channel, which is input via HDMI first and then stored in DDR before the two images are merged by software.
The Lepton 2 sensor we are using outputs sixty packets of 164 bytes each to make up the 80-pixel by 60-line image. The first four bytes of each packet contain a header, which is needed to ensure we can synchronize with the image output.
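As a rough illustration, a single VoSPI packet could be represented in C as below; the structure and field names here are mine, used purely to visualize the layout described above.

#include "xil_types.h"

/* Sketch of a single Lepton 2 VoSPI packet: 4 header bytes followed by
 * 160 bytes of payload (80 pixels x 2 bytes), 164 bytes in total. */
#define VOSPI_PACKET_SIZE        164u
#define VOSPI_PAYLOAD_SIZE       160u
#define VOSPI_PACKETS_PER_FRAME  60u

typedef struct {
    u8 id[2];                        /* packet ID, used for synchronization */
    u8 crc[2];                       /* CRC over the packet */
    u8 payload[VOSPI_PAYLOAD_SIZE];  /* 80 pixels, 14-bit data in 16-bit words */
} vospi_packet_t;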
Control and configuration of the Lepton is provided over an I2C communications bus, which can be easily interfaced with. Using this interface we can set flat field correction, automatic gain control and a range of other configurations.
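As a minimal sketch, assuming the Zynq PS I2C controller and the XIicPs driver used later in this article (the device ID and clock rate are my assumptions), initializing this interface might look as follows.

#include "xiicps.h"
#include "xparameters.h"

#define IIC_SLAVE_ADDR 0x2A /* Lepton I2C address */

XIicPs Iic;

int lepton_i2c_init(void)
{
    XIicPs_Config *Config;
    /* look up and initialize the PS I2C controller (device ID assumed) */
    Config = XIicPs_LookupConfig(XPAR_XIICPS_0_DEVICE_ID);
    if (Config == NULL) {
        return XST_FAILURE;
    }
    if (XIicPs_CfgInitialize(&Iic, Config, Config->BaseAddress) != XST_SUCCESS) {
        return XST_FAILURE;
    }
    /* 100 kHz is a conservative choice of SCL frequency */
    XIicPs_SetSClk(&Iic, 100000);
    return XST_SUCCESS;
}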
The image output from the Lepton is transferred over SPI using a protocol called VoSPI (Video over SPI), which uses only the SCLK, CS and MISO lines.
To get the image out of the Lepton we need to maintain synchronization; if we lose synchronization, we need to de-assert the chip select for at least 200 ms before trying again.
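A simple way to perform this resynchronization, sketched here assuming the AXI Quad SPI XSpi driver with manual slave select as used later in the article, is to deselect the Lepton and wait before reading packets again.

#include "xspi.h"
#include "sleep.h"

/* re-establish VoSPI synchronization: deassert the chip select
 * for at least 200 ms before starting to read packets again */
void lepton_resync(XSpi *SpiPtr)
{
    XSpi_SetSlaveSelect(SpiPtr, 0x00); /* deselect the Lepton */
    usleep(200000);                    /* wait 200 ms */
    XSpi_SetSlaveSelect(SpiPtr, 0x01); /* reselect, ready for the next transfer */
}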
The frame rate from the Lepton is reduced to comply with export regulations, so it outputs at approximately 9 frames per second (fps). Valid packets are indicated by the content of the header field: if the first byte is 0xFF, the packet is invalid and should be discarded.
Unlike in the movies, most thermal cameras actually use greyscale as opposed to a false-color scheme. The pixels output from the Lepton are 14-bit greyscale values; however, if we do want to use a color scheme, we can use the RGB888 mode, which provides 244 bytes per packet. In this case we have three bytes per pixel and need to enable the Automatic Gain Control (AGC).
To correctly interface with the Lepton we need to set the following hardware configuration (a configuration sketch follows the list):
- I2C address 0x2A
- SPI Master Operation
- SPI mode with CPOL = 1, CPHA = 1
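A minimal sketch of configuring the AXI Quad SPI to match these settings with the XSpi driver is shown below; the device ID macro is an assumption and will depend on the Vivado design.

#include "xspi.h"
#include "xparameters.h"

XSpi SpiInstancePtr;

int lepton_spi_init(void)
{
    XSpi_Config *Config = XSpi_LookupConfig(XPAR_SPI_0_DEVICE_ID); /* device ID assumed */
    if (Config == NULL) {
        return XST_FAILURE;
    }
    if (XSpi_CfgInitialize(&SpiInstancePtr, Config, Config->BaseAddress) != XST_SUCCESS) {
        return XST_FAILURE;
    }
    /* master mode, CPOL = 1 (clock idles high), CPHA = 1, manual slave select */
    XSpi_SetOptions(&SpiInstancePtr, XSP_MASTER_OPTION |
                                     XSP_CLK_ACTIVE_LOW_OPTION |
                                     XSP_CLK_PHASE_1_OPTION |
                                     XSP_MANUAL_SSELECT_OPTION);
    XSpi_Start(&SpiInstancePtr);
    /* polled operation: disable the driver's interrupt output */
    XSpi_IntrGlobalDisable(&SpiInstancePtr);
    return XST_SUCCESS;
}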
On the hardware itself I connected the FLIR Lepton to Jumper 2 of the Arty board. Using this approach we can ensure the I2C SDA and SCL lines on the Arty align with those on the Lepton.
The visible image can be received using either an HDMI input or a camera / sensor connected to one of the Pmod interfaces, for example the TDNext. For this example I used a small HDMI action camera, as it allowed maximum flexibility in aligning the two cameras with each other.
One thing that we must remember when interfacing with HDMI cameras is that we need to assert the hot plug detect signal to ensure video is output by the camera. Failure to do this will result in no video being generated, and can mean you spend a lot of time debugging the Vivado design and getting nowhere.
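As a rough sketch, assuming the hot plug detect line is wired to bit 0 of channel 1 of an AXI GPIO (the device ID and bit assignment are my assumptions), asserting it could look like this:

#include "xgpio.h"
#include "xparameters.h"

XGpio HpdGpio;

void assert_hpd(void)
{
    /* device ID and bit assignment are assumptions for this sketch */
    XGpio_Initialize(&HpdGpio, XPAR_GPIO_0_DEVICE_ID);
    XGpio_SetDataDirection(&HpdGpio, 1, 0x0); /* channel 1 as output */
    XGpio_DiscreteWrite(&HpdGpio, 1, 0x1);    /* drive HPD high so the camera outputs video */
}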
To receive the video we will be using the Video In to AXI4-Stream block. This will then enable us to move the received video into DDR memory. Once in DDR memory, our chosen processor can create the final image.
As the FLIR Lepton video is received by the processor and stored in DDR, to create the merged image we also want to store the visible image in the same DDR memory.
We can then use the software application to create the output frame. This is different from the situation where we have two (or more) video streams in the PL and want to merge them; in that case we could use the Video Mixer IP. Instead, the software creates a larger frame and then fits the visible and the IR images into it.
Software Development
The software implements the following functionality:
- Image processing chain configuration (VDMA)
- Configuration of the Lepton using I2C
- Read out Lepton Image using VoSPI
- Switch between IR and Visible as commanded over the Serial Link
To write or read commands from the Lepton we use the following approach:
- Write the data for the command into the data registers, starting at address 0x0008 and working upwards, if required
- Write the number of data words to the data length register at 0x0006
- Write the command to the command register at 0x0004; the command register takes the structure illustrated in the sketch below
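For illustration, the command word can be thought of as a module ID in the upper byte and a command base plus type (GET, SET or RUN) in the lower byte. A sketch of how the AGC enable SET command used in the example below is assembled (the macro names are mine, not FLIR's):

/* sketch of how a Lepton command word is assembled (names are mine) */
#define LEP_AGC_MODULE     0x0100  /* AGC module ID in the upper byte */
#define LEP_CMD_TYPE_GET   0x00
#define LEP_CMD_TYPE_SET   0x01
#define LEP_CMD_TYPE_RUN   0x02

/* AGC enable command base 0x00, SET type -> 0x0101, as written to 0x0004 below */
#define LEP_AGC_ENABLE_SET (LEP_AGC_MODULE | 0x00 | LEP_CMD_TYPE_SET)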
An example of this approach, used to enable the AGC, can be seen below:
/* write the AGC enable value (0x0001) to data register 0 at 0x0008 */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x08;
SendBuffer[2] = 0x00;
SendBuffer[3] = 0x01;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
/* write the upper word of the enable value (0x0000) to data register 1 at 0x000A */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x0A;
SendBuffer[2] = 0x00;
SendBuffer[3] = 0x00;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
/* write the number of data words (2) to the data length register at 0x0006 */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x06;
SendBuffer[2] = 0x00;
SendBuffer[3] = 0x02;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
/* write the AGC enable SET command (0x0101) to the command register at 0x0004 */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x04;
SendBuffer[2] = 0x01;
SendBuffer[3] = 0x01;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
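Reading a setting back follows a similar pattern: write the number of words to read, write the GET variant of the command, then read the data registers back. The hedged sketch below reads back the AGC enable state (0x0100 is assumed to be the AGC enable GET command, and RecvBuffer is assumed to be declared in the same way as SendBuffer).

/* write the number of 16-bit words to read (2) to the data length register 0x0006 */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x06;
SendBuffer[2] = 0x00;
SendBuffer[3] = 0x02;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
/* write the AGC enable GET command (0x0100) to the command register 0x0004 */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x04;
SendBuffer[2] = 0x01;
SendBuffer[3] = 0x00;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 4, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
/* point at data register 0 (0x0008) and read the result back */
SendBuffer[0] = 0x00;
SendBuffer[1] = 0x08;
XIicPs_MasterSendPolled(&Iic, SendBuffer, 2, IIC_SLAVE_ADDR);
while (XIicPs_BusIsBusy(&Iic)) {
    /* NOP */
}
XIicPs_MasterRecvPolled(&Iic, RecvBuffer, 4, IIC_SLAVE_ADDR);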
Receiving the VoSPI data is shown below:
for (segment = 0; segment < loop; segment++) {
    /* select the Lepton and clock out one VoSPI packet */
    XSpi_SetSlaveSelect(&SpiInstancePtr, 0x01);
    XSpi_Transfer(&SpiInstancePtr, Buffer, Buffer, data);
    if (Buffer[0] != 0xFF) {
        /* valid packet: combine the byte pairs and scale the result for 8-bit output */
        for (i = 0; i < data / 2; i++) {
            Image[segment][i] = ((Buffer[2 * i] << 8 | Buffer[2 * i + 1]) >> 8);
        }
    }
    else {
        /* discard packet: restart the frame */
        segment = 0;
    }
}
The FLIR Lepton has a limited resolution; to ensure we can easily see what is occurring, the software scales the image up to 640 pixels by 480 lines. This makes the scaling quite easy, as each pixel can be replicated by a factor of 8 in each direction to achieve the desired resolution.
The software implements the scaling when it writes the image out to the frame buffer:
int scalex, scaley;
scalex = 8;
scaley = 8;
/* scale the 80x60 thermal image up to 640x480 by pixel replication */
for (x = 0; x < 640; x++) {
    iPixelAddr = x;
    for (y = 0; y < 480; y++) {
        /* replicate the grey value into the R, G and B channels of the output pixel */
        frame[iPixelAddr] = (u32) (((Image[(y / scaley)][(x / scalex) + 4]) << 16) |
                                   ((Image[(y / scaley)][(x / scalex) + 4]) << 8)  |
                                    (Image[(y / scaley)][(x / scalex) + 4]));
        iPixelAddr += 640;
    }
}
The HDMI video for the visible channel is simpler to receive using the VDMA:
/* configure the VDMA write (S2MM) channel */
XAxiVdma_DmaConfig(dispPtr->vdma, XAXIVDMA_WRITE, &(dispPtr->vdmaConfig));
/* point the write channel at the frame buffers in DDR */
XAxiVdma_DmaSetBufferAddr(dispPtr->vdma, XAXIVDMA_WRITE, dispPtr->vdmaConfig.FrameStoreStartAddr);
/* start the transfer and park on the current frame */
XAxiVdma_DmaStart(dispPtr->vdma, XAXIVDMA_WRITE);
XAxiVdma_StartParking(dispPtr->vdma, dispPtr->curFrame, XAXIVDMA_WRITE);
When it comes to merging the images, both the visible and the IR imager write into the same frame buffer. The frame buffer is simply larger than both images, which allows me to place the visible and IR images side by side in the same frame.
To do this I leave the visible image to start at pixel 0, line 0, where it occupies the first 480 lines. The IR image occupies the 640 pixels from pixel 640 to pixel 1280, also starting at line 0.
The overall frame output is 1280 by 720, which provides plenty of space for both images to be output.
Of course, the VDMA transfer correctly positions the visible image, while the software is designed to write to the correct position in the frame buffer for the IR image to appear.
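For illustration, writing the scaled IR pixels into the right-hand half of a 1280-pixel-wide frame buffer could look like the sketch below. The names follow the scaling loop above, while the stride and offset are assumptions based on the layout just described rather than the exact values used in the design.

/* sketch: place the scaled 640x480 IR image in the right-hand half of a 1280-wide frame */
#define FRAME_WIDTH  1280
#define IR_X_OFFSET   640

for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
        u32 grey = Image[y / 8][(x / 8) + 4];  /* replicated Lepton pixel, as in the loop above */
        /* write the grey value into R, G and B at the offset position */
        frame[(y * FRAME_WIDTH) + IR_X_OFFSET + x] = (grey << 16) | (grey << 8) | grey;
    }
}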
This software can be implemented using either an Arm Cortex-A9 in an Arty Z7 or a MicroBlaze in an FPGA-based Arty.
Testing
Once the FLIR Lepton software was implemented I wanted to check it was working correctly. To do this, I output only the IR image on the display.
Outputting an image was pretty straightforward and showed the Lepton was producing a valid image, with no corruption from discarded packets. However, I also wanted to ensure its settings were correct, including the automatic gain control.
To test this I inserted a cold object into the scene and observed the scale adapt to take into account the new temperature range.
Finally, to show just how sensitive the thermal imager is, I recorded the video below, where you can see the imager detecting my fingerprints long after my fingers have been removed from the cold bottle.
The final stage of the testing was to enable the visible video channel and scale the output video correctly.
When I put this together we can see the visible image and the thermal image side by side. This allows us to see more information than we could if we used only one or the other.
This has been a pretty simple introduction to imaging in both domains; potential next steps could include:
- Using the PL to scale up video on the thermal imager
- Careful alignment of the two imagers enabling overlapping of the images - alpha blending
- Automatic switch over between visible and IR depending upon the ambient light
We will look at scaling in the programmable logic soon in a MicroZed Chronicles article.
See previous projects here.
Additional information on Xilinx FPGA / SoC development can be found weekly on MicroZed Chronicles.