In recent years, display resolutions have reached new levels, such as 1920x1080 for HDTV and 3840x2160 for ultra-high-definition 4K TVs. On such advanced displays, however, older movies, TV programs, and games often do not look good, which greatly degrades the viewing experience. In the past, high-definition remasters of classic games and movies were produced largely by reworking the original multimedia assets, work that could only be done by professional designers and consumed considerable time and resources.
At the same time, low resolution remains a pain point for real-time audio and video transmission on mobile devices, owing to limited bandwidth and strict latency requirements. Low-resolution video cannot present image detail effectively, which limits the user experience. Real-time video super-resolution is therefore of great practical significance.
In this challenge, I intend to use the powerful AI Engine (AIE) inference capability of the VCK5000 to reproduce recent neural-network-based super-resolution work. The most representative models are ESPCN and BasicVSR, which have simple structures built on convolutional neural networks and can therefore be accelerated effectively with the AIE.
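To illustrate why ESPCN is a good fit for a convolution-oriented accelerator, here is a minimal PyTorch-style sketch of the network. It follows the general structure described in the ESPCN paper (a few small convolutions followed by sub-pixel/pixel-shuffle upsampling); the layer widths, activations, and the 4x scale factor here are illustrative and not taken from this project's final implementation.

```python
import torch
import torch.nn as nn

class ESPCN(nn.Module):
    """Minimal ESPCN-style network: a few convolutions plus sub-pixel upsampling."""
    def __init__(self, scale: int = 4, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2),
            nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.Tanh(),
            # Produce scale^2 * channels feature maps, which PixelShuffle
            # rearranges into a (scale x scale) higher-resolution output.
            nn.Conv2d(32, channels * scale * scale, kernel_size=3, padding=1),
        )
        self.upsample = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.body(x))

# Example: upscale a 270x480 low-resolution luminance frame by 4x to 1080x1920.
lr = torch.randn(1, 1, 270, 480)
sr = ESPCN(scale=4)(lr)
print(sr.shape)  # torch.Size([1, 1, 1080, 1920])
```

Because the whole model is just standard convolutions with a cheap channel-to-space rearrangement at the end, nearly all of its compute can be offloaded to the AIE array.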