I have designed this story to be more educational than simply providing the basic recipe to make an IP camera. If you are in a hurry to see the best GStreamer commands to use, just jump ahead to "IP Camera using Compressed Video over UDP"
Hardware
- 8MPLUSLPD4-EVK i.MX 8M Plus EVK (evaluation kit)
- daA3840-30mc-IMX8MP-EVK (Basler MIPI 8Mpixel/4K camera kit) (NXP Link)
- Workstation running a modern GNU/Linux distribution such as Ubuntu 20.04
- Micro SD card - Should be at least 16G Bytes and as fast as possible
- Ethernet cable for connecting the MX8MP EVK to the network
NOTE: The instructions in this project should work for other types of cameras, not just the Basler camera. If using a USB camera instead of a MIPI camera then the name of the video source may change but everything else should remain the same.
Software
- We will use the NXP MX8MP BSP version 5.4.70_2.3.0 for this training, but newer versions of the BSP should also work.
- i.MX 8 GStreamer User Guide, version 2.0, 2020-09-23
- i.MX Graphics User’s Guide, IMXGRAPHICUG, version 8, 2021-06
- GStreamer Tutorials
- V4L2 (Video for Linux 2)
Default EVK login credentials:
Username: root
Password: none (as in blank, no password)
SERVER MX8MP
- This is the 8MPLUSLPD4-EVK "i.MX 8M Plus EVK" with the daA3840-30mc-IMX8MP-EVK Basler camera
- In GStreamer lingo this is the video source, or SERVER
- I may also refer to this kit as the MX8MP
- Commands on the SERVER MX8MP will start with
root@imx8mpevk:~#
CLIENT WORKSTATION
- This is a GNU/Linux workstation
- In GStreamer lingo this is the video sink, or CLIENT
- I may refer to this device as the WORKSTATION
- Commands on the CLIENT WORKSTATION start with
[flint@ZBook] $
Instructions for installing GStreamer are available on the GStreamer website.
To install on Ubuntu, the command is simply the following
[flint@ZBook] $ sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
Install Wireshark on the Workstation
NOTE: This step is optional. The GStreamer portion of this project will work perfectly well without Wireshark, but by installing Wireshark on your workstation you will be able to measure the bandwidth used to transmit the video data. I will not be providing any training on how to use Wireshark, so please take a few minutes to experiment with monitoring your Ethernet port.
There are a few methods for installing Wireshark. Below I will cover two of them: installing from the Ubuntu repository and building from source.
Instructions for installing Wireshark on Ubuntu 20.04 from repository
First check to see if the workstation has the proper repository added for Wireshark
[flint@ZBook] $ sudo apt show wireshark
If no info is provided on Wireshark then you may need to add the repository in this next step
[flint@ZBook] $ sudo add-apt-repository universe
Install Wireshark
[flint@ZBook] $ sudo apt install wireshark
Run Wireshark with this command (sudo is preferred so that it can capture on the Ethernet interface)
[flint@ZBook] $ sudo wireshark&
Instructions for building Wireshark from source on Ubuntu 20.04
Note: I ran across issues with cmake while building from source. Fortunately there is a script on GitHub that will build Wireshark and take care of a lot of the issues.
[flint@ZBook] $ wget -O - https://gist.githubusercontent.com/syneart/2d30c075c140624b1e150c8ea318a978/raw/build_wireshark.sh | sh
I am using the Linux BSP 5.4.70_2.3.0 for this project, but you should be able to use newer releases although I have not tested GStreamer with them yet.
This is a list of BSPs for the MX8MP as of the time I wrote this project. NXP typically publishes a new release every quarter.
Download the BSP
Download the L5.4.70_2.3.0_MX8MP BSP to your workstation by clicking on this link
When the download is complete, you should have a file that is about 4G Bytes in size named L5.4.70-2.3.0_images_MX8MPEVK.zip
Prepare the BSP so that it is ready to be programmed on the SD card
Move the zip archive to a location on your Workstation SSD that is convenient for you. For me this location is /work/imx8mp/bsp/
[flint@ZBook] $ mv /home/flint/Downloads/L5.4.70-2.3.0_images_MX8MPEVK.zip /work/imx8mp/bsp/
Unzip the archive to a directory by the same name (without the .zip)
[flint@ZBook] $ unzip L5.4.70-2.3.0_images_MX8MPEVK.zip -d L5.4.70-2.3.0_images_MX8MPEVK
When the archive has been fully decompressed, move to the new directory
[flint@ZBook] $ cd L5.4.70-2.3.0_images_MX8MPEVK
View the directory
[flint@ZBook] $ ls -al
We want to use the .wic file, so to make it easier to find, type this
[flint@ZBook] $ ls | grep .wic
There are two results
imx-image-full-imx8mpevk.wic
imx-image-multimedia-imx8mpevk.wic
The full image is more capable than the multimedia image. Both should work for this application but I prefer to use the full image.
Prepare the SD card to be programmed
Insert a micro SD card into the USB adapter and plug that into the PC USB port.
Look at the kernel message buffer (most recent 25 lines)
[flint@ZBook] $ dmesg | tail -25
Hopefully you can see the device name the Kernel has given to your SD card.
This is an example of a useful line; it tells us that the SD card has been assigned the name sdb, which means the device is /dev/sdb
[17073.394368] sd 4:0:0:0: [sdb] 62521344 512-byte logical blocks: (32.0 GB/29.8 GiB)
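If you want to script this lookup instead of reading dmesg by eye, the device name can be pulled out of that line with sed. This is a minimal sketch, using a hypothetical copy of the line above in place of live `dmesg` output; the sed pattern is my own, not part of any tool.

```shell
# Hypothetical sample dmesg line; on your system, pipe `dmesg | tail -25` instead.
LINE='[17073.394368] sd 4:0:0:0: [sdb] 62521344 512-byte logical blocks: (32.0 GB/29.8 GiB)'

# Grab the sdX token between the square brackets.
DEV=$(printf '%s\n' "$LINE" | sed -n 's/.*: \[\(sd[a-z]\)\].*/\1/p')
echo "/dev/$DEV"
```

Double-check the result against the dmesg output before using it in any destructive command like dd.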
Check if there are any existing partitions on this SD card by using fdisk
[flint@ZBook] $ sudo fdisk -l | grep sdb
This is an example of what I saw. You might see only one partition if you have a new SD card.
Disk /dev/sdb: 29.83 GiB, 32010928128 bytes, 62521344 sectors
/dev/sdb1 * 20480 544767 524288 256M c W95 FAT32 (LBA)
/dev/sdb2 544768 14178303 13633536 6.5G 83 Linux
It is a good idea to unmount the partitions
[flint@ZBook] $ sudo umount /dev/sdb1
[flint@ZBook] $ sudo umount /dev/sdb2
Program the SD card with the BSP
ProTip: BE EXTREMELY CAREFUL THAT YOU GET THE DEVICE NAME RIGHT OR ELSE YOU RISK OVERWRITING ANOTHER DISK (I have done this a few times). It is best to write your command out in a note app and then copy it into the terminal instead of typing it manually.
This is the command that I will use; you should replace /dev/sdb with the device you identified earlier
[flint@ZBook] $ sudo dd if=imx-image-full-imx8mpevk.wic of=/dev/sdb bs=4K conv=fsync status=progress
This will take anywhere from 30 seconds to a few minutes depending on the speed of your SD card.
When the process is over you should see something like this
6508888064 bytes (6.5 GB, 6.1 GiB) copied, 106 s, 61.4 MB/s
1591735+1 records in
1591735+1 records out
6519747584 bytes (6.5 GB, 6.1 GiB) copied, 189.932 s, 34.3 MB/s
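The "1591735+1 records in" line means dd copied 1,591,735 full 4 KiB blocks plus one partial block. A quick sanity check of that arithmetic, using the byte count from my run (yours will differ if the image size changes):

```shell
BYTES=6519747584   # total bytes dd reported copied in the run above
BS=4096            # block size passed to dd as bs=4K

FULL=$(( BYTES / BS ))      # number of complete records
PARTIAL=$(( BYTES % BS ))   # leftover bytes, reported by dd as the "+1" record
echo "full records: $FULL"
echo "bytes in the final partial record: $PARTIAL"
```

The full-record count matches dd's "1591735+1" report, which is a handy way to confirm nothing was truncated.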
Inspect the partitions that were created on the SD card
[flint@ZBook] $ sudo fdisk -l | grep sdb
This is what it looks like on my workstation
Disk /dev/sdb: 29.83 GiB, 32010928128 bytes, 62521344 sectors
/dev/sdb1 * 16384 186775 170392 83.2M c W95 FAT32 (LBA)
/dev/sdb2 196608 12733881 12537274 6G 83 Linux
It is a good idea to synchronize the filesystems so that the SD card can be safely removed
[flint@ZBook] $ sync
View the files that were written to the SD card
If your workstation automatically mounted the partitions on the SD card, great. If not, then pull the SD card out of the workstation and insert again. That usually prompts the OS to mount the SD card.
View all of the mounted devices
[flint@ZBook] $ df
I can see that for my workstation, the two SD card partitions have been mounted
/dev/sdb2 5.8G 4.0G 1.5G 74% /media/flint/root
/dev/sdb1 84M 30M 54M 37% /media/flint/boot
Change directory to the boot partition and view the files
[flint@ZBook] $ cd /media/flint/boot
[flint@ZBook] $ ls -al
Make note of all of the files in this directory, especially the .dtb files.
I like to add an empty file named after the BSP version so that I can more easily identify the SD card in the future
[flint@ZBook] $ touch L5.4.70-2.3.0_images_MX8MPEVK
If you view the directory again you will see the empty file you just created
[flint@ZBook] $ ls -al
The result for me is
.rw-r--r-- 0 flint 23 Jun 14:22 L5.4.70-2.3.0_images_MX8MPEVK
Move the micro SD card to the EVK
Get out of the mounted directory
[flint@ZBook] $ cd
Unmount both partitions
[flint@ZBook] $ sudo umount /dev/sdb1
[flint@ZBook] $ sudo umount /dev/sdb2
Confirm that there are no mounted SD card partitions
[flint@ZBook] $ df
Remove the SD card from your workstation and set aside
Assemble the Basler Camera
If you haven't done so already, assemble the Basler camera per the instructions that came with the kit.
The finished assembly should look like these next two images.
Set up the 8MPLUSLPD4-EVK as follows
- Insert the SD card into the slot under the "RESET" switch (near the HDMI connector)
- Ethernet cable attached between the EVK "ENET1" (next to the USB port) and your home network switch
- Power cable connected to the "power" USB type-C connector (next to the debug connector)
- Debug cable connected between the EVK "debug" USB micro-A and your Workstation USB type-A
- Connect the Basler camera to the EVK "CSI1 MIPI" connector (next to the DSI MIPI connector)
- Make sure that the "BOOT MODE" (SW4) DIP switches are set to boot from microSD as printed on the PCB (0011)
Keep the power switch in the off mode for now
The finished assembly should look like this image
If not already performed in the previous section, connect the console "debug" cable to your workstation
Find out which device has been assigned. Most likely it will be a ttyUSB device.
[flint@ZBook] $ dmesg | grep ttyUSB
You should see four ttyUSB instances from the single USB cable
[18352.903983] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB0
[18352.904226] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB1
[18352.904468] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB2
[18352.904691] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB3
I know from the MX8MP EVK documentation that the third one, /dev/ttyUSB2, is the console adapter
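Rather than counting instances by eye, the third port can be selected programmatically. This is a sketch against a hypothetical copy of the four dmesg lines above; on a real system you would pipe `dmesg | grep ttyUSB` instead.

```shell
# Hypothetical sample of the four dmesg lines shown above.
DMESG='[18352.903983] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB0
[18352.904226] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB1
[18352.904468] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB2
[18352.904691] usb 1-4: FTDI USB Serial Device converter now attached to ttyUSB3'

# Per the EVK documentation the console is the third instance, so take match 3.
PORT=$(printf '%s\n' "$DMESG" | grep -o 'ttyUSB[0-9]*' | sed -n '3p')
echo "/dev/$PORT"
```

The resulting name can then be handed straight to tio in the commands below.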
My preferred console terminal is tio, but please feel free to use whichever one you prefer
There are two ways that I like to use tio to attach to the console
1 Without saving the session to a log file
[flint@ZBook] $ sudo tio /dev/ttyUSB2
2 With the session being saved to the log file
[flint@ZBook] $ sudo tio /dev/ttyUSB2 -l console_log.txt
Pick whichever one you like
Configure U-Boot
Make sure that you have your console ready to go, because once we enable power on the EVK we will only have a few seconds to interrupt the booting process
Turn on the EVK power switch
As soon as you see the following text, press any key (enter, space, etc) to interrupt autoboot
Hit any key to stop autoboot:
You should see the U-Boot prompt u-boot=>
NOTE: If you missed your chance to stop autoboot, let the Kernel complete the booting process, then log in and type in reboot
Go ahead and view all the U-Boot environment variables
u-boot=> printenv
You should see over 20 lines of environment variables, but we are only interested in one right now.
The variable fdt_file points to the flattened device tree file that U-Boot should load when it loads the Kernel. The .dtb files in the boot partition are Device Tree Blob (or Binary) files. We can only load one of these at a time, and each has a special purpose for this EVK.
View the fdt_file default setting
u-boot=> printenv fdt_file
The default value should look like this
fdt_file=imx8mp-evk.dtb
Now go ahead and edit this variable to use the dtb file that enables the Basler camera
u-boot=> editenv fdt_file
You will need to use <backspace> to delete the original value and replace it with imx8mp-evk-basler-ov5640.dtb
edit: imx8mp-evk-basler-ov5640.dtb
Press <enter> when the text matches what I have above
Print the newly edited fdt_file to confirm that the value is proper
u-boot=> printenv fdt_file
You should see this value. If not, edit it again.
fdt_file=imx8mp-evk-basler-ov5640.dtb
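If you would rather skip the interactive editenv, the same change can be made in one shot with setenv. These are standard U-Boot commands, but this is only a sketch of the equivalent sequence; double-check the filename against the .dtb files you saw in the boot partition.

```text
u-boot=> setenv fdt_file imx8mp-evk-basler-ov5640.dtb
u-boot=> printenv fdt_file
fdt_file=imx8mp-evk-basler-ov5640.dtb
```

Either way, the value is not persistent until you run saveenv as shown below.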
Save the environment variables and boot the Kernel
u-boot=> saveenv
u-boot=> boot
Wait for the Kernel to finish loading
You should see this prompt
NXP i.MX Release Distro 5.4-zeus imx8mpevk ttymxc1
imx8mpevk login:
The login is simply root with no password
If you see the occasional kernel message on the console, this is normal
[ 11.886011] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Find out the IP address assigned to the MX8MP EVK
It is a good idea to check the IP address for this MX8MP EVK every time it boots
root@imx8mpevk:~# ifconfig
Assuming you plugged the Ethernet cable in properly, you should see an IP address after "inet addr" for eth0.
For my setup this value is 192.168.1.137
Make note of the IP address on your MX8MP EVK
eth0 Link encap:Ethernet HWaddr 00:04:9f:07:07:35
inet addr:192.168.1.137 Bcast:192.168.1.255 Mask:255.255.255.0
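If you end up scripting any of the later commands, the address can be extracted from the ifconfig output rather than copied by hand. This is a sketch, assuming the BusyBox-style "inet addr:" format shown above; the sample text below is a hypothetical copy, and on the EVK you would pipe `ifconfig eth0` instead.

```shell
# Hypothetical copy of the ifconfig output shown above.
IFCONFIG='eth0      Link encap:Ethernet  HWaddr 00:04:9f:07:07:35
          inet addr:192.168.1.137  Bcast:192.168.1.255  Mask:255.255.255.0'

# Pull out the dotted quad that follows "inet addr:".
IP=$(printf '%s\n' "$IFCONFIG" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
echo "$IP"
```

Note that newer distributions print "inet " instead of "inet addr:", so the pattern would need adjusting there.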
SSH to the MX8MP EVK from your workstation
Use the IP address found in the previous section to SSH from your workstation to the MX8MP EVK
[flint@ZBook] $ ssh root@192.168.1.137
From here on out you have the flexibility of running commands in the console or in the SSH terminal. Both will work for you. The main difference is that the console will provide the booting and shutdown processes and will also provide Kernel messages as they come in.
Install htop on the MX8MP EVK
NOTE: This step is optional. The GStreamer portion of this project will work perfectly well without htop, but by installing htop you will better understand CPU utilization.
From either the console or SSH, run the top command
root@imx8mpevk:~# top
Notice that the top of the screen shows a lot of useful information about the system, including the processes using the most CPU. This is a nice tool, but it has a major drawback: it displays CPU utilization as a percentage of a single CPU, so a process running on multiple cores can show utilization in excess of 100%. I prefer the htop utility over top for its per-CPU granularity.
Because we are running a BSP generated by the Yocto Project and intended for embedded applications, we do not have a handy application repository for installing new applications, as we would when running Ubuntu on our workstation. This means that we have to compile the application manually. Below are the steps required.
root@imx8mpevk:~# pwd
/home/root
root@imx8mpevk:~# mkdir tmp
root@imx8mpevk:~# cd tmp
root@imx8mpevk:~/tmp# pwd
/home/root/tmp
root@imx8mpevk:~/tmp# wget https://github.com/htop-dev/htop/archive/refs/heads/master.zip
root@imx8mpevk:~/tmp# unzip master.zip
root@imx8mpevk:~/tmp# cd htop-master
root@imx8mpevk:~/tmp/htop-master# ./autogen.sh
root@imx8mpevk:~/tmp/htop-master# ./configure
root@imx8mpevk:~/tmp/htop-master# make install
root@imx8mpevk:~/tmp/htop-master# which htop
/usr/local/bin/htop
Run htop
root@imx8mpevk:~/tmp/htop-master# htop
Notice that now you can see the utilization for each individual CPU
Type <CTRL> C to exit
Change back to the home directory and verify
root@imx8mpevk:~# cd
root@imx8mpevk:~# pwd
/home/root
Basic Information
In this section we are just going to look at some very basic tools to determine capabilities
List all video devices detected by the kernel
root@imx8mpevk:~# ls -ltrh /dev/video*
My system reports back two video devices, matching the two MIPI video inputs.
crw-rw---- 1 root video 81, 4 Jul 8 22:05 /dev/video1
crw-rw---- 1 root video 81, 0 Jul 8 22:05 /dev/video0
We can also use v4l2 to list the devices
root@imx8mpevk:~# v4l2-ctl --list-devices
What we care about are these two devices, matching the previous command
VIV (platform:viv0):
/dev/video0
VIV (platform:viv1):
/dev/video1
Next we are going to run the GStreamer device monitor, which lists all of the audio/video devices that can be monitored. This is useful for future debugging.
root@imx8mpevk:~# gst-device-monitor-1.0
The really important thing to notice here is the three raw video formats supported, as well as the range of resolutions and framerates.
gst-launch-1.0 v4l2src ! ...
caps : video/x-raw, format=(string)YUY2, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;
video/x-raw, format=(string)NV16, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;
video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;
gst-launch-1.0 v4l2src device=/dev/video1 ! ...
caps : video/x-raw, format=(string)YUY2, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)[ 48, 3840 ], height=(int)[ 32, 2160 ];
video/x-raw, format=(string)NV16, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)[ 48, 3840 ], height=(int)[ 32, 2160 ];
video/x-raw, format=(string)NV12, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)[ 48, 3840 ], height=(int)[ 32, 2160 ];
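To pull just the format names out of a caps dump like the one above, a little sed is enough. This is a sketch against a hypothetical excerpt of the gst-device-monitor-1.0 output; the extraction pattern is mine, not part of GStreamer.

```shell
# Hypothetical excerpt of the caps lines printed by gst-device-monitor-1.0.
CAPS='video/x-raw, format=(string)YUY2, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;
video/x-raw, format=(string)NV16, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;
video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, framerate=(fraction)30/1;'

# Extract the value after each format=(string) token, one per line.
FORMATS=$(printf '%s\n' "$CAPS" | sed -n 's/.*format=(string)\([A-Z0-9]*\).*/\1/p')
echo "$FORMATS"
```

These three names (YUY2, NV16, NV12) are exactly the set that v4l2-ctl reports below, minus the Bayer format.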
Let's use v4l2-ctl again to list all the capabilities of /dev/video0
root@imx8mpevk:~# v4l2-ctl --device=/dev/video0 --all
We can also ask it to give us a list of supported formats
root@imx8mpevk:~# v4l2-ctl --list-formats
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture
[0]: 'YUYV' (YUYV 4:2:2)
[1]: 'NV12' (Y/CbCr 4:2:0)
[2]: 'NV16' (Y/CbCr 4:2:2)
[3]: 'BA12' (12-bit Bayer GRGR/BGBG)
Using the gst-inspect tool is essential for understanding the supported source and sink pads for any plugin used in the GStreamer pipeline. Go ahead and try some of these out.
root@imx8mpevk:~# gst-inspect-1.0 imxvideoconvert_g2d
root@imx8mpevk:~# gst-inspect-1.0 waylandsink
root@imx8mpevk:~# gst-inspect-1.0 videotestsrc
root@imx8mpevk:~# gst-inspect-1.0 autovideosrc
root@imx8mpevk:~# gst-inspect-1.0 vpuenc_h264
Basic GStreamer Testing
In this section we are going to test some very basic functionality of GStreamer
NOTE: Pressing <CTRL> C will exit GStreamer
I recommend that you have at least two terminals open to the SERVER MX8MP: the console plus at least one SSH session.
Also make sure that the MX8MP EVK is connected to an HDMI display
In the SSH session, run the htop command so that you can view the per-core CPU utilization of the MX8MP. At times you may also want to run the top command to see CPU utilization for all cores as one value.
You can use the console to run the GStreamer commands, or you can open up a second SSH session.
With the next two commands we will create a test pattern and send it to the display. autovideosink is the default video sink used on PCs, and it will work on the MX8MP, but waylandsink is preferred. Try both.
root@imx8mpevk:~# gst-launch-1.0 -v videotestsrc ! autovideosink
root@imx8mpevk:~# gst-launch-1.0 -v videotestsrc ! waylandsink
You should notice that both of the above used very little CPU utilization
Now we are going to specify the type of test pattern
root@imx8mpevk:~# gst-launch-1.0 videotestsrc pattern=ball ! waylandsink
Notice how this is using about 17% CPU utilization. The reason is that the CPU generates the video test patterns, so more animation and higher resolution mean more CPU utilization.
Now try these next three commands, and notice how the CPU utilization increases with the increase in resolution. videotestsrc runs on a single thread, and thus a single CPU core, so eventually the one CPU core will reach 100% utilization and the frame rate will drop. You may also see some warning messages from GStreamer that buffers are being dropped.
root@imx8mpevk:~# gst-launch-1.0 videotestsrc pattern=ball ! video/x-raw,width=640,height=480 ! waylandsink
root@imx8mpevk:~# gst-launch-1.0 videotestsrc pattern=ball ! video/x-raw,width=1280,height=720 ! waylandsink
root@imx8mpevk:~# gst-launch-1.0 videotestsrc pattern=ball ! video/x-raw,width=1920,height=1080 ! waylandsink
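The CPU scaling across those three commands follows directly from the pixel throughput videotestsrc must generate. A quick sketch of the arithmetic at 30 fps (the frame rate the source defaults to here):

```shell
# Pixels per second videotestsrc must generate at 30 fps for each test size.
for RES in "640 480" "1280 720" "1920 1080"; do
  set -- $RES                       # split "width height" into $1 and $2
  echo "${1}x${2}: $(( $1 * $2 * 30 )) pixels per second"
done
```

1920 x 1080 requires roughly 6.75 times the pixel rate of 640 x 480, which is why the single core running videotestsrc eventually saturates.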
Put the Camera Video on the Display
NOTE: We know that our camera will show up on /dev/video0 because it is plugged into the first MIPI-CSI port.
Tell GStreamer to put the camera video feed on the display
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! waylandsink
If all goes as planned, you should see the camera video on the HDMI display. You may need to adjust the camera focus ring to improve the picture image.
GStreamer should be providing some FPS (Frames Per Second) commentary that looks something like this
[ 2358.190829] ###### 30.49 fps ######
[ 2363.248737] ###### 30.50 fps ######
[ 2368.307805] ###### 30.50 fps ######
[ 2373.365616] ###### 30.51 fps ######
You should also check your htop/top terminal to see what the CPU utilization is. It should be very low, like around 8%
The other important thing to notice is the GStreamer output for the source caps
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)YUY2, width=(int)3840, height=(int)2160, framerate=(fraction)30/1, colorimetry=(string)2:6:5:1, interlace-mode=(string)progressive
We can now write the same command as above, but force the caps into the pipeline. Notice that we are copying all the text after the "caps ="
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! "video/x-raw, format=(string)YUY2, width=(int)3840, height=(int)2160, framerate=(fraction)30/1, colorimetry=(string)2:6:5:1, interlace-mode=(string)progressive" ! waylandsink
This should work exactly the same as the previous command
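Copying that long caps string by hand is error-prone, so one option is to keep it in a shell variable and assemble the launch line from it. This is a sketch: the command is echoed rather than executed, since running it needs the EVK hardware, and the caps string is the one from my run (yours may differ, e.g. in the colorimetry field).

```shell
# Caps string reported by -v on my run; substitute the one your pipeline reports.
CAPS="video/x-raw, format=(string)YUY2, width=(int)3840, height=(int)2160, framerate=(fraction)30/1, colorimetry=(string)2:6:5:1, interlace-mode=(string)progressive"

# Assemble the launch line once so it can be reused without copy-paste slips.
CMD="gst-launch-1.0 -v v4l2src device=/dev/video0 ! \"$CAPS\" ! waylandsink"
echo "$CMD"
```

On the EVK you would then run the echoed command (or `eval "$CMD"`) instead of retyping the caps.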
IP Camera Basics
Now we are going to create an IP camera by sending test video from the MX8MP SERVER to the CLIENT WORKSTATION.
Make sure to take note of the IP address for both, in my case they are:
SERVER (MX8MP) 192.168.1.137
CLIENT (Workstation) 192.168.1.61
First let's set up the server
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v videotestsrc ! imxvideoconvert_g2d ! "video/x-raw, width=640, height=480" ! rtpvrawpay pt=96 timestamp-offset=0 ! queue max-size-buffers=0 ! udpsink host=192.168.1.61
Search through the GStreamer output until you find the source caps of the rtpvrawpay plugin
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)640, height=(string)480, colorimetry=(string)SMPTE240M, payload=(int)96, timestamp-offset=(uint)0, ssrc=(uint)472767626, seqnum-offset=(uint)20656, a-framerate=(string)30
Now copy the text after the "caps =" and paste into the next command that will be used on the client workstation.
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 -v udpsrc caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)640, height=(string)480, colorimetry=(string)SMPTE240M, payload=(int)96, timestamp-offset=(uint)0, ssrc=(uint)472767626, seqnum-offset=(uint)20656, a-framerate=(string)30" ! rtpjitterbuffer latency=1 ! queue max-size-buffers=0 ! rtpvrawdepay ! videoconvert ! autovideosink sync=false
If all goes well, you should see a small 640 x 480 window pop up on your workstation, displaying the test pattern.
The application htop on the client will be showing about 80% CPU utilization, but this is split among the four CPUs fairly well.
On the client workstation, I have told Wireshark to monitor my Ethernet port. In the filter area I have entered "ip.addr==192.168.1.137" to limit the monitoring to traffic coming from the SERVER MX8MP. I then select the menu "Statistics" and "Protocol Hierarchy". After about a minute of packets coming in, the protocol hierarchy statistics screen tells me that the UDP data payload is about 299M bits/s. This is a substantial amount of bandwidth for a simple 640 x 480 test pattern with barely any movement.
Let's calculate how much bandwidth this should take. 640 pixels x 480 pixels x 30 frames per second x 32 bits per pixel gives us about 295M bits per second, so the Wireshark measurement is extremely close to our calculation.
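The same calculation can be sketched in shell arithmetic; the 32 bits per pixel comes from the 8-bit RGBA sampling shown in the caps.

```shell
# width x height x frames-per-second x bits-per-pixel (RGBA = 4 x 8 bits)
BITS_PER_SEC=$(( 640 * 480 * 30 * 32 ))
echo "$BITS_PER_SEC bits per second"   # prints "294912000 bits per second"
```

That is about 295 Mbit/s of payload, before RTP, UDP, IP, and Ethernet framing overhead are added on top.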
IP Camera using Raw Video over UDP
Here we are going to reproduce the work from the previous section, but substitute the camera video feed for the videotestsrc.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=640, height=480" ! rtpvrawpay pt=96 timestamp-offset=0 ! queue max-size-buffers=0 ! udpsink host=192.168.1.61
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 -v udpsrc caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)640, height=(string)480, colorimetry=(string)SMPTE240M, payload=(int)96, timestamp-offset=(uint)0, ssrc=(uint)3525922485, seqnum-offset=(uint)12886, a-framerate=(string)30" ! rtpjitterbuffer latency=1 ! queue max-size-buffers=0 ! rtpvrawdepay ! videoconvert ! autovideosink sync=false
What you should see on your workstation is a 640 x 480 window with the live video feed. There should not be any noticeable lag. The server CPU utilization should be reported as 100% but that is spread across 4 cores, so no one core is actually at 100%. You should also see GStreamer reporting about 30.30 fps, which is excellent.
Now let's try to break things a bit by pushing the resolution up to 1024 x 768, and as a result, the bandwidth will also increase.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=1024, height=768" ! rtpvrawpay pt=96 timestamp-offset=0 ! queue max-size-buffers=0 ! udpsink host=192.168.1.61
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 -v udpsrc caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)1024, height=(string)768, colorimetry=(string)SMPTE240M, payload=(int)96, timestamp-offset=(uint)0, ssrc=(uint)3525922485, seqnum-offset=(uint)12886, a-framerate=(string)30" ! rtpjitterbuffer latency=1 ! queue max-size-buffers=0 ! rtpvrawdepay ! videoconvert ! autovideosink sync=false
What you should see on your workstation is a 1024 x 768 window with the live video feed. There may be a slight amount of lag on the video. The server CPU utilization should be reported as over 140%, with one core approaching 88% utilization. GStreamer is also reporting that it can only supply 17.34 fps.
And finally we are going to push the system too far by trying to transmit full HD 1920 x 1080.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=1920, height=1080" ! rtpvrawpay pt=96 timestamp-offset=0 ! queue max-size-buffers=0 ! udpsink host=192.168.1.61
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 -v udpsrc caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)SMPTE240M, payload=(int)96, timestamp-offset=(uint)0, ssrc=(uint)3525922485, seqnum-offset=(uint)12886, a-framerate=(string)30" ! rtpjitterbuffer latency=1 ! queue max-size-buffers=0 ! rtpvrawdepay ! videoconvert ! autovideosink sync=false
What you should see on your workstation is a 1920 x 1080 window with the live video feed. There should be substantial lag on the video. The server CPU utilization should be reported as over 150%, with one core exceeding 90% utilization. GStreamer is also reporting that it can only supply 7.34 fps.
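It is worth seeing why full HD raw video pushes the system too far: the same bandwidth arithmetic as before shows that the required rate exceeds what a gigabit link can carry at all. A sketch, using the nominal gigabit Ethernet line rate:

```shell
RAW_1080P=$(( 1920 * 1080 * 30 * 32 ))   # raw RGBA 1080p30, in bits per second
GIGE=1000000000                          # nominal gigabit Ethernet line rate
echo "need $RAW_1080P bits/s, link carries at most $GIGE bits/s"
if [ "$RAW_1080P" -gt "$GIGE" ]; then
  echo "raw 1080p30 RGBA cannot fit on gigabit Ethernet"
fi
```

So even with unlimited CPU, uncompressed 1080p30 RGBA over this network was never going to keep up; compression, covered below, is the way out.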
IP Camera using Raw Video over TCP
We are going to reproduce the first experiment from the UDP section above, but we will use TCP to carry our video packets instead. Make sure you know your server MX8MP Ethernet IP address and fill it into the commands below.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=640, height=480" ! rtpvrawpay pt=96 timestamp-offset=0 ! queue max-size-buffers=0 ! gdppay ! tcpserversink host=192.168.1.137 port=5000 blocksize=512000 sync=false
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 tcpclientsrc host=192.168.1.137 port=5000 ! gdpdepay ! queue max-size-buffers=0 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGBA, depth=(string)8, width=(string)640, height=(string)480, colorimetry=(string)SMPTE240M, payload=(int)96, framerate=30/1" ! rtpjitterbuffer latency=100 ! rtpvrawdepay ! decodebin ! videoconvert ! autovideosink sync=false
What you should see on your workstation is a 640 x 480 window with the live video feed, and you should notice some lag. The server CPU utilization should be reported as over 180%, but that is spread across 4 cores, so no one core is actually at 100%; each core is averaging between 30% and 55%. You should also see GStreamer reporting about 14 fps, so even this low-resolution video feed is pushing the limits, thanks to using TCP.
IP Camera using Compressed Video over UDP
Now this is where things get much more interesting. We are going to take advantage of the h.264 encoding offload engines of the MX8MP, so no encoding work will need to be performed by the CPU.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=1920, height=1080, framerate=30/1" ! vpuenc_h264 ! rtph264pay pt=96 ! rtpstreampay ! udpsink host=192.168.1.61
CLIENT WORKSTATION - OKAY BUT NOT GREAT
[flint@ZBook] $ gst-launch-1.0 udpsrc caps = "application/x-rtp-stream, encoding-name=H264" ! rtpstreamdepay ! rtph264depay ! decodebin ! videoconvert ! autovideosink
You should see a 1920 x 1080 window open up on your workstation with a live video feed. There may be some lag and the colors may be a bit off, so cancel the workstation command with <CTRL> C and try a different pipeline that substitutes the more specific avdec_h264 for decodebin. I included the decodebin version so you can see the issues it causes; many online trainings use this plugin because it can decode on many platforms, but it is not as efficient as a plugin more specific to the application, such as avdec_h264.
CLIENT WORKSTATION - BETTER
[flint@ZBook] $ gst-launch-1.0 udpsrc caps = "application/x-rtp-stream, encoding-name=H264" ! rtpstreamdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
This video feed should look excellent. Notice how there is barely any lag at all, and the images look very good. But there are even more impressive things to notice. The CPU utilization of the server MX8MP is around 10% spread across all four cores, which is to say pretty much no load at all. Wireshark is telling me that the UDP data payload bandwidth is around 10M bits per second. And of course GStreamer is proudly telling us that it is maintaining 30.45 fps.
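That 10M bits per second figure deserves a moment of appreciation. A rough compression-ratio estimate, comparing it against the raw 1080p30 RGBA rate computed earlier (both numbers are approximate, so treat the ratio as an order of magnitude):

```shell
RAW=$(( 1920 * 1080 * 30 * 32 ))   # raw RGBA 1080p30, ~1.99G bits/s
H264=10000000                      # ~10M bits/s measured with Wireshark
echo "h.264 is carrying roughly 1/$(( RAW / H264 )) of the raw bandwidth"
```

Roughly a 200:1 reduction, delivered by the VPU with almost no CPU cost, is why this is the pipeline to use for a practical IP camera.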
The following image is a screenshot showing the GStreamer 1920 x 1080 autovideosink (on my workstation) overlapping the MX8MP console on the left side of the screen. The right side of the screen shows htop running in an SSH session with the MX8MP
IP Camera using Compressed Video over TCP
Although TCP is not as efficient as UDP, it does have its uses for certain applications.
SERVER MX8MP
root@imx8mpevk:~# gst-launch-1.0 -v v4l2src device=/dev/video0 ! imxvideoconvert_g2d ! "video/x-raw, width=1920, height=1080, framerate=30/1" ! vpuenc_h264 ! rtph264pay pt=96 ! rtpstreampay ! tcpserversink host=192.168.1.137 port=5000 blocksize=512000 sync=false
CLIENT WORKSTATION
[flint@ZBook] $ gst-launch-1.0 tcpclientsrc host=192.168.1.137 port=5000 ! "application/x-rtp-stream, encoding-name=H264" ! rtpstreamdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
Just like in the UDP section above, this video feed should look excellent. But you may notice how there is a slight lag between video capture and the display. The CPU utilization of the server MX8MP is around 16% spread across all four cores, which is not too bad.
Now let's make things a little more interesting by opening a second terminal on the Client workstation and entering the same GStreamer command as above. You should now see two video windows on the workstation, though they may not be synchronized. The server MX8MP CPU utilization has increased to about 19%. Now let's do this two more times: open two more terminals and start a GStreamer client in each. The server MX8MP CPU utilization has increased to about 25%. You can also open GStreamer clients on other workstations on your network.
Conclusion
Hopefully you found this project useful. If you have any questions, or if this didn't work on your system, please let me know.