Video Compression

Broadcast, machine vision, surveillance cameras

H.264 was long the dominant standard for encoding HDTV signals. With the higher resolutions of 4K and 8K displays, new standards such as H.265 (also known as HEVC) were introduced. These essentially provide stronger encoding and compression, allowing roughly double the image size to be transmitted in the data signal at the same bandwidth. Because this standard has only recently been adopted, standardised components are still rare on both the encoder and the decoder end.

For real-time processing of these algorithms, FPGAs and DSPs are often the first choice. For example, cameras used for live broadcasts must be capable of encoding the signal so that it can be transmitted directly to television sets. The field of studio technology will remain a domain for FPGAs in the future due to their ultra-low-latency capabilities. GPUs and FPGAs are also used in some consumer display screens for additional features. Other standards such as JPEG, JPEG2000, and MPEG can likewise be implemented in FPGAs and DSPs; they are particularly well suited when special features such as low latency or non-mainstream resolutions are required.

Major industry trends in digital broadcasting include HD content creation, content scaling, and improvements in compression quality and bit-rate reduction.
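The bandwidth argument above can be illustrated with a back-of-envelope calculation. The figures below are assumptions for illustration only: HEVC is commonly cited as needing roughly half the bitrate of H.264 for comparable visual quality, and the compression ratios (100:1 for H.264, 200:1 for HEVC) are stand-in values, not guarantees of any real encoder's performance.

```python
def raw_bitrate_mbps(width, height, fps=60, bits_per_pixel=12):
    """Uncompressed bitrate in Mbit/s (4:2:0 chroma sampling -> 12 bits/pixel)."""
    return width * height * fps * bits_per_pixel / 1e6

def encoded_bitrate_mbps(width, height, fps=60, ratio=100):
    """Encoded bitrate in Mbit/s given an assumed overall compression ratio."""
    return raw_bitrate_mbps(width, height, fps) / ratio

# Illustrative, assumed compression ratios: H.264 ~100:1, HEVC ~200:1.
h264_1080p = encoded_bitrate_mbps(1920, 1080, ratio=100)
hevc_4k = encoded_bitrate_mbps(3840, 2160, ratio=200)

print(f"1080p60 over H.264: ~{h264_1080p:.1f} Mbit/s")
print(f"2160p60 over HEVC:  ~{hevc_4k:.1f} Mbit/s")
```

Under these assumptions, 4K carries four times the pixels of 1080p, but HEVC's roughly twofold efficiency gain means the required bandwidth grows only by a factor of two rather than four, which is the sense in which the newer standard lets double the image data fit into a comparable data signal.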