Education

Let's get technical with our media publishing glossary.

4K Streaming

4K streaming is high-resolution video, also known as Ultra HD (UHD), with roughly four times the pixels of Full HD (1080p) video. 4K video has more than 8 million pixels in the display and a horizontal pixel count of approximately 4,000. 4K content is becoming more widely available online, including on Apple TV, YouTube, Netflix, Hulu, and Amazon.

 

Common 4K video resolutions include:

  • – 4096 × 2160 (full-frame, 256∶135 or ≈1.90∶1 aspect ratio)
  • – 3840 × 2160 or 2160p (an aspect ratio of 1.77∶1 (16∶9) or wider)

Adaptive Streaming

Adaptive streaming (adaptive bitrate streaming or ABR) is a method of video streaming over HTTP where the source content is encoded at multiple bit rates. Each bit rate stream is segmented into multi-second parts, typically between two and ten seconds long. The adaptive bitrate (ABR) algorithm in the client decides which bit rate segments to download based on the current state of the user's network, resulting in smoother playback and less buffering.
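
As a rough illustration of the selection step such an algorithm performs, the sketch below picks the highest rendition that fits within a measured throughput budget. The rendition list, bitrates and safety factor are hypothetical; production players such as hls.js or dash.js use far more sophisticated heuristics.

// Hypothetical renditions, ordered from lowest to highest bitrate (bits per second).
const renditions = [
  { bitrate: 800_000, resolution: "640x360" },
  { bitrate: 2_400_000, resolution: "1280x720" },
  { bitrate: 5_000_000, resolution: "1920x1080" },
];
// Pick the highest rendition whose bitrate fits within a safety margin of the
// measured throughput, falling back to the lowest rendition otherwise.
function selectRendition(measuredThroughputBps: number, safetyFactor = 0.8) {
  const budget = measuredThroughputBps * safetyFactor;
  const affordable = renditions.filter(r => r.bitrate <= budget);
  return affordable.length > 0 ? affordable[affordable.length - 1] : renditions[0];
}
console.log(selectRendition(3_000_000)); // a 3 Mbps connection selects the 720p rendition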

 

Implementations of adaptive streaming include:

  • – MPEG-DASH
  • – Adobe HTTP Dynamic Streaming
  • – Apple HTTP Live Streaming
  • – Microsoft Smooth Streaming (MSS)
  • – QuavStreams Adaptive Streaming over HTTP
  • – Self-learning clients

Application Programming Interface (API)

An API is an application programming interface that facilitates the request and response of data between applications. APIs provide a software-to-software service for applications to add, edit and delete data thus keeping two or more systems in sync.

 

Typically, APIs provide frontend systems such as websites and native apps with an interface to request content or data to display and to make updates to backend systems such as database records.

 

GlueMPS APIs can be used to build simple or complex frontend or backend applications. The base APIs allow you to locate, view, filter, sort and paginate content, user and asset data stored in the system.

 

The Glue APIs are organized around REST. Our RESTful API has resource-oriented URLs and uses HTTP response codes in combination with API status codes to indicate API errors. We use built-in HTTP features. JSON is returned in all API responses, including errors.
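
As a hedged illustration only, a frontend request for paginated content might look like the sketch below; the base URL, path, query parameters and token handling are hypothetical and not documented Glue API names.

// Hypothetical paginated content request; URL, parameters and fields are illustrative only.
async function listContent(page = 1, perPage = 20): Promise<unknown> {
  const response = await fetch(
    `https://api.example.com/v1/content?page=${page}&per_page=${perPage}`,
    { headers: { Accept: "application/json", Authorization: "Bearer <token>" } }
  );
  if (!response.ok) {
    // Errors are returned as JSON alongside the HTTP status code.
    throw new Error(`API error: ${response.status}`);
  }
  return response.json();
}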

Article Schema

Article Schema is a web form in the GlueMPS administration console used to enter metadata, images, assets and publishing rules that are assigned to an asset such as an audio or video file.

 

Article schemas have a taxonomy or metadata structure that provides context and relationships between the data. Taxonomies of metadata (“categories”, “channels”, “movies”) allow the GlueMPS content administrator to add, edit and publish content metadata to the audio and video application, optimised for easy information discovery by the user.

 

Article schema content is available via the Glue RESTful content API. The content API is valuable because it contains not only tags that tell an application where the title, main content and subheads are, but also information that gives meaning and context to the data in the content, so a user can quickly find a channel, show or movie by a favourite actor, for example.

Aspect Ratio

An aspect ratio is the proportional relationship between the width and height of a video or image which is expressed as a ratio.

 

Common broadcast aspect ratios are:

  • 4:3 for SD video. Sometimes expressed as 1.33:1 (4:3 or “full screen”, which fits older TV sets).
  • 16:9 for HD widescreen formats. Sometimes expressed as 1.78:1 (16:9 or widescreen).
  • 1.85:1 or 2.35:1 for film (CinemaScope, TohoScope and other cinematic formats).
  • 2.39:1, known as the anamorphic widescreen format, often used for shooting scenic landscapes.
  • 2.76:1 (Ultra Panavision 70), a 70mm format developed in the 1950s and used in the Best Picture-winning film Ben-Hur.
  • 1.37:1 (Academy ratio), only slightly wider than the 4:3 ratio used throughout the silent film era.
  • 2.59:1 to 2.65:1 (Cinerama), a super widescreen format involving three standard 35mm film cameras that simultaneously project a film onto a curved screen.
  • 2.35:1 to 2.66:1 (CinemaScope), developed by the head of research at 20th Century Fox; it required only one projector, which made it much less complex than Cinerama.
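
As a small worked example of the arithmetic, a ratio can be derived from a frame's pixel dimensions by dividing both numbers by their greatest common divisor:

// Reduce pixel dimensions to an aspect ratio using the greatest common divisor.
function gcd(a: number, b: number): number {
  return b === 0 ? a : gcd(b, a % b);
}
function aspectRatio(width: number, height: number): string {
  const divisor = gcd(width, height);
  return `${width / divisor}:${height / divisor}`;
}
console.log(aspectRatio(1920, 1080)); // "16:9"
console.log(aspectRatio(640, 480));   // "4:3"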

Bandwidth

Bandwidth is the maximum amount of data that can be transferred from the video source location to the receiver or viewer in a period of time. Bandwidth is a measure of internet connection speed or the amount of data consumed and is typically measured in kilobits or megabits per second, or in gigabytes. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. A viewer's bandwidth may change due to network conditions. Modern adaptive streaming protocols were developed to adjust the data bit rate based on a user's fluctuating bandwidth to ensure streaming stability and less buffering.

Bitrate

Bitrate is the number of bits per second that can be transmitted along a digital network. The data is measured in bits, not to be confused with bytes. Since data is measured in bits, bitrate measurements are presented in bits, kilobits (Kbps, kilobits per second), megabits (Mbps, megabits per second), gigabits, or terabits per second. For example, the bitrate of a 720p video stream may be approximately 2 Mbps, and a standard audio bitrate is 96-128 Kbps.
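
Bitrate also determines how much data a stream consumes over time (data ≈ bitrate × duration). The sketch below shows that arithmetic for the 2 Mbps example above:

// Approximate data consumed by a stream: bitrate (bits/s) x duration (s),
// converted from bits to megabytes (8 bits per byte, 1,000,000 bytes per MB).
function dataUsedMB(bitrateBps: number, durationSeconds: number): number {
  return (bitrateBps * durationSeconds) / 8 / 1_000_000;
}
console.log(dataUsedMB(2_000_000, 3600)); // 900 MB for one hour of a 2 Mbps stream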

Buffering

Buffering is the process of loading data into memory (a buffer). In live and on-demand streaming audio or video over the Internet, buffering refers to pre-downloading chunks or segments of data before the player starts and then throughout playback. The preloading of the data buffer can assist with smooth playback. Visible buffering during real-time playback (playback interruption) is caused by a delay in the data preloading process from low bandwidth, weak Wi-Fi, or issues with the streaming origin.

 

Techniques for smooth, uninterrupted video and audio playback include the use of adaptive streaming. Adaptive streaming adjusts to low or high bandwidth conditions by delivering an optimised bitrate (lower or higher) to the data buffer. GluePlyr supports all leading adaptive streaming technologies including HLS and MPEG-DASH.

Captions

Closed Captions (CC) display text over a video to provide dialogue and a description of background audio such as sound effects and audio-only on-screen events. Closed Captions may be turned on or off by the viewer.

 

Open Captions differ from Closed Captions as they are always in view and cannot be turned off.

 

Subtitles differ from Closed Captions as they do not describe non-dialogue audio and assume the user can hear audio. Subtitles are often used for the translation of the dialogue.

Capture Card

Capture cards are hardware devices that convert an analog video signal into a digital format and send it to local storage or an external device. The resulting digital data is referred to as a digital video stream. Capture cards connect to a computer through various interfaces (PCIe, USB, Thunderbolt) and may support various video signals such as SDI, HDMI, Component, etc.

CMAF

CMAF (Common Media Application Format) is a developing standard designed to create a common HTTP streaming protocol. CMAF uses fragmented .mp4 containers that can be referenced by both HLS and MPEG-DASH, eliminating the need to use an MPEG transport stream (.ts) for HLS, and also provides low-latency streaming improvements with chunked-encoded and chunked-transfer CMAF.

 

Benefits of CMAF

 

  1. Lower latency: chunked encoding reduces latency by creating smaller chunks of a set duration, so less preloading of the buffer is required before content plays.
  2. Chunked encoding with a smaller chunk size also means live streams are immediately published upon encoding for near real-time delivery.
  3. For publishers, CMAF has the potential to reduce encoding costs, complexity, storage and latency.
  4. CDN efficiency using a single format/filetype.

Codec

A codec is a compression and decompression technology to encode and decode a signal or data stream. The name codec is a portmanteau word that expresses the combination of coder-decoder or encode-decode or compress-decompress.

 

A codec is a code standard capable of encoding and decoding audio or video for transmission over a data network by reducing the file sizes of video, audio, and other media formats. 

 

The most common video codecs for video streaming are:

  • – H.264/AVC
  • – VP9
  • – H.265/HEVC
  • – AV1
  • – H.266/VVC

Codecs should not be confused with containers. A codec applies lossy compression to reduce the source video file size, discarding unnecessary data, and decompresses the data during playback.

 

Containers, on the other hand, store the video codec, audio codec, and metadata such as subtitles or preview images. So the container holds all the components together and determines which programs can accept the stream.

Container

Containers store the video codec, audio codec, and metadata such as subtitles or preview images. So the container holds all the components together and determines which programs can accept the stream.

Codecs should not be confused with containers. A codec applies lossy compression to reduce the source video file size, discarding unnecessary data, and decompresses the data during playback.

 

Containers are also referred to as the file format. Common video containers are:

  • – 3GP Third Generation Partnership
  • – ADTS Audio Data Transport Stream
  • – FLAC Free Lossless Audio Codec
  • – MPEG / MPEG-2 Moving Picture Experts Group (1 and 2)
  • – MPEG-4 (MP4) Moving Picture Experts Group 4
  • – Ogg
  • – QuickTime (MOV) Apple QuickTime movie
  • – WebM

Content Delivery Network (CDN)

A CDN is a content delivery network: a distributed system of servers that store (cache) and serve content based on a user's location. Caching content in a CDN offloads file requests that would otherwise go directly to the media platform's origin servers, which assists with scalability and improves load speeds by reducing latency. A CDN is typically distributed globally and is designed to increase the speed and reliability of delivering digital content to users by caching content dynamically at edge servers located close to the audience.

 

Live and on-demand video streaming is a resource-intensive process therefore most video streaming platforms utilize a CDN. The process of caching the content (video files, web pages, etc.) offloads the file requests from a single server to a large network of distributed servers. CDNs work in combination with cloud-based hosting infrastructure to provide scalability by dynamically increasing resources to meet traffic demands.

 

Glue Media Publishing System supports all leading CDNs, including:

 

  1. Akamai
  2. Amazon CloudFront
  3. Fastly
  4. Cloudflare
  5. Alibaba Cloud CDN
  6. Limelight Networks
  7. Microsoft Azure CDN
  8. KeyCDN
  9. and more

Content Management System (CMS)

A Content Management System (CMS) provides a software interface to add, store and edit digital content and publish it to frontend websites and apps. A content management system typically consists of a database, an API and a user interface (administration console).

Content is delivered from a CMS to websites and applications via APIs.

 

Content Management Systems differ from Online Video Platforms (OVP) or Media Publishing Systems (MPS) in that they do not feature media publishing workflows tailored to audio and video assets or manage the metadata specific to audio and video.

 

OVPs and MPSs have additional features beyond a CMS, including:

  1. video broadcasting ingest workflow
  2. audio broadcasting ingest workflow
  3. video encoding
  4. audio encoding
  5. video metadata schemas
  6. audio metadata schemas
  7. digital asset management (audio, video, images)
  8. video players
  9. audio players
  10. closed captions
  11. subtitles
  12. video entitlements
  13. audio entitlements
  14. digital rights management (DRM)
  15. geo-targeting video & audio
  16. geo-filtering video & audio
  17. Video paywalls (SVOD, TVOD)
  18. Audio paywalls (SVOD, TVOD)
  19. Media monetisation pricing
  20. Media monetisation transaction logging
  21. Media monetisation revenue reporting
  22. Media metadata management
  23. API for media asset delivery
  24. API for media tracking and reporting
  25. Audience user management
  26. Audience content personalisation
  27. Audience user-generated content
  28. Audience subscription and payment history
  29. Web and native applications for OTT platforms
  30. Web and native applications for Video streaming platforms
  31. Web and native applications for Music streaming platforms
  32. Web and native applications for Podcast streaming platforms
  33. Web and native applications for Radio streaming platforms
  34. Web and native applications for Sports streaming platforms
  35. Web and native applications for Online Education platforms
  36. QoS video monitoring system
  37. QoS audio monitoring system
  38. Dynamically scalable infrastructure for responding to large video streaming events
  39. CDN infrastructure for responding to large video streaming events

Deinterlace

Deinterlacing combines the two alternating fields found in interlaced videos to form a clean shot in a progressive video. Interlacing was developed to reduce the data transmission size for traditional terrestrial television systems such as cable networks. Without deinterlacing, the interlaced content will often display motion with a line-like appearance.

Embedding

Embedding a video or video player is the process of coding a video player into a website or application. Normally, this is done by copying an “embed code” for a live stream or video-on-demand file from an OVP and inserting it into the HTML code of your website. Although the video player can be seen on the website, the video is hosted on the OVP or another website via the HTML embed code.

 

An iframe embed of a video player is the most common method of embedding a video player.
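
A minimal sketch of inserting an iframe embed from script follows; the player URL and container element ID are hypothetical placeholders for the values an OVP would supply in its embed code.

// Hypothetical iframe embed of an externally hosted video player.
const playerFrame = document.createElement("iframe");
playerFrame.src = "https://player.example.com/embed/VIDEO_ID"; // supplied by the OVP
playerFrame.width = "640";
playerFrame.height = "360";
playerFrame.allow = "autoplay; fullscreen; picture-in-picture";
playerFrame.setAttribute("frameborder", "0");
document.getElementById("video-container")?.appendChild(playerFrame);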

Encoding & Transcoding

Video encoding is the process of compressing raw, uncompressed video. Video transcoding is the process of re-encoding compressed files and therefore involves an additional step of decoding the incoming video before encoding it.

 

Video and audio transcoders are more commonly referred to as encoders; however, they typically perform transcoding processes in addition to encoding, such as:

  • – decoding different container formats, including MP4 and TS
  • – decoding bitstreams using different video codecs, including H.264/AVC, HEVC, AV1 and VP9
  • – changing the resolution of the video to produce outputs at different resolutions (critical to ABR stream production)

 

In addition to encoding and transcoding, other terms are used for the processes that prepare audio and video files for streaming:

  • – Transrating involves changing the bitrate of the video
  • – Transmuxing involves changing the container format

 

Video encoders can be software or hardware and process live or on-demand video sources by compressing (encoding) and converting (transcoding) the video format. Encoding by definition takes an analog source and digitizes that content whereas transcoding takes an existing digital format decodes it, compresses and encodes it to a different digital format.

 

An example is a .mov file being transcoded to an H.264/AAC HLS stream to be played in a live stream player or on a mobile device.

 

The transcoding process is a vital function to ensure playback compatibility with target devices and browsers that do not support the current format of your media. Encoding and transcoding also improve streaming performance through compression techniques that reduce file size and by creating adaptive streaming formats that adjust to the user's available bandwidth.

 

During the encoding process, additional security measures may be applied such as DRM packaging of license keys to prevent unauthorised video playback.

Frame Rate

Frame rate or frame frequency is the frequency at which consecutive images are captured or displayed per second. The number of images displayed per second is measured in frames per second (fps).

Essentially, video is made up of a series of still pictures or frames that are displayed one after the other. The most common digital video frame rate is 30 fps; however, sports and online gaming video streams with fast action often use 60 fps. VR has even higher frame rates, targeting 90 fps.

 

At 30 fps, 30 distinct images would appear in succession within one second. If the fps is too low, movement will appear jagged and jerky. 24 fps is considered the lowest to achieve smooth video motion.

 

It is common for the source frame rate to be matched during the video encoding process to achieve efficient encoding and optimised playback; frame rates therefore vary slightly due to the different standards adopted around the world.

Geo-blocking, Geo-filtering & Geo-targeting

Geo-blocking restricts access to video content based on a user's geographical location.

 

Geo-filtering restricts access to specific video content based on a user's geographical location.

 

Geo-targeting tailors the video content delivered based on a user's geographical location.

HTTP Live Streaming (HLS)

HTTP Live Streaming, also known as HLS, is an HTTP-based adaptive bitrate streaming protocol developed by Apple. The HLS protocol enables responsive live streaming with the delivery of an adaptive bitrate set that adjusts to a user's bandwidth to achieve smoother playback. HLS creates small video chunks and delivers them to the player via a manifest file that describes the available bitrates and the order of video chunks. The quality can differ from chunk to chunk based on the user's internet connection at the time the data is sent.
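
As a hedged sketch, a master manifest (an .m3u8 playlist) listing three renditions might look like the example below; the bandwidths, resolutions and file names are illustrative only, and each entry points to a rendition playlist that in turn lists the media segments.

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8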

 

Despite its name, HLS sends both live and on-demand audio and video to iPhone, iPad, Mac, Apple Watch, Apple TV, and PC with HTTP Live Streaming (HLS) technology.

 

By using the same HTTP protocol that powers the web, HLS lets you stream video and audio content using ordinary web servers and content delivery networks.

 

Support for the protocol is widespread in media players, web browsers, mobile devices, and streaming media servers.

HDMI (High Definition Multimedia Interface)

HDMI (High Definition Multimedia Interface) is an audio and video interface and a common standard for transmitting uncompressed HD video and audio from an HDMI-compliant source device, such as a display controller, to a compatible digital device such as a television, monitor or audio equipment.

HD (High Definition Video)

HD (High Definition Video) refers to image or video content with a resolution of at least 720p, and more commonly 1080p, in height. Each of these has a different resolution as follows:

 

  • – 720p is 1280×720 (921,600 pixels)
  • – 1080p is 1920×1080 (2,073,600 pixels) p = progressive
  • – 1080i is 1920×1080 (1,036,800 pixels per field) i = interlaced

 

A pixel is the smallest visible element on a display. With HD video, there are more pixels than Standard Definition (SD) video producing an image with more detail.

 

Anything greater than 1080p, such as 4K is Ultra HD.

HDS (HTTP Dynamic Streaming)

HDS (HTTP Dynamic Streaming) is an adaptive bitrate streaming protocol developed by Adobe. HDS delivers MP4 video content over HTTP connections and can be used for on-demand or live streaming; however, since HDS was developed for use with Adobe Flash Player and Adobe AIR, it has been superseded by HTTP adaptive streaming protocols such as HLS and MPEG-DASH.

 

Adaptive bitrate streaming works by detecting a user's bandwidth and adjusting the quality of the video stream between multiple bitrates and/or resolutions in real time.

H.264 (Advanced Video Coding)

H.264 (Advanced Video Coding/AVC) is the industry-standard video codec used for encoding (compression) in video streaming. A codec is a code standard capable of encoding and decoding audio or video for transmission over a data network by reducing the file sizes of video, audio, and other media formats. H.264, also known as Advanced Video Coding (AVC) or MPEG-4 Part 10, was developed to deliver good quality video at lower bitrates than other codecs. Today H.264/AVC has the highest level of compatibility with devices.

 

Container formats compatible with H.264/AVC include:

  • – MP4
  • – MOV
  • – F4V
  • – 3GP
  • – TS

H.265 (High-Efficiency Video Coding/HEVC)

H.265 (High-Efficiency Video Coding/HEVC) is the successor to the H.264 codec and includes major performance and quality improvements over H.264/AVC. Specifically, compared to H.264, H.265 achieves 25% to 50% better data compression at the same level of video quality.

 

H.265/HEVC delivers the same video quality at a lower bitrate and file size, therefore reducing bandwidth requirements. H.265/HEVC also supports resolutions up to 8192×4320, which includes 8K UHD.

 

H.265 has improved motion compensation and spatial prediction compared with H.264, requiring less bandwidth to deliver a stream of comparable quality, although decoding it is more computationally demanding.

 

H.264 breaks an image into squares of pixels known as macroblocks (16×16 pixels) whereas H.265 replaces macroblocks with coding tree units (CTUs). CTUs can use block structures up to 64×64 pixels increasing the coding efficiency.

 

Container formats compatible with H.265/HEVC include:

  • – MP4
  • – MOV
  • – F4V
  • – 3GP
  • – TS

HTML5 Video

HTML5 Video is an in-browser HTML5 standard for embedding video natively without a browser plugin such as Flash Player, QuickTime or RealPlayer. HTML5 is a markup language that is standard for the web as a whole. Embedding video is done through the HTML5 <video> element. HTML5 video functions as an in-browser player format that replaces browser player plugins for live and on-demand streamed video. HTML5 requires less bandwidth, making it ideal for streaming video. The HTML5 Video standard improved cross-platform compatibility by allowing native embedding of streaming media.

 

HTML5 is not a streaming protocol. Websites built with HTML5 can use several different streaming protocols to play video, including HTTP live streaming (HLS) and MPEG-DASH. This is configured on the server side, not in the HTML markup code.

 

The HTML5 video tag is <video>, and the tag is closed, similar to other HTML elements, with </video>.

 

Between those tags, a <source> tag indicates the location of the video file. Within this tag, both the video source (src) and the type of file (type) are indicated like this:

 

<video>
<source src="example.mp4" type="video/mp4">
</video>

 

The following important attributes also can go inside the <video> tag:

 

Width: This specifies the video's width in pixels. The number of pixels goes between quotation marks: width="370"
Height: This specifies the video's height. It works similarly to the width attribute.
Play, pause, volume: These controls enable the user to control video playback. Add the controls attribute to the <video> tag to include them.
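
The same attributes can also be set from script. A minimal sketch follows; the file name matches the markup example above, while the height value is illustrative only.

// Create a <video> element and set the attributes described above.
const video = document.createElement("video");
video.width = 370;      // width in pixels
video.height = 208;     // illustrative height in pixels
video.controls = true;  // show play, pause and volume controls
const source = document.createElement("source");
source.src = "example.mp4";
source.type = "video/mp4";
video.appendChild(source);
document.body.appendChild(video);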

 

Other HTML5 features include support for SVG graphics, tags for defining the navigation on a website, and header and footer tags.

Interlaced Video

Interlaced video, or interlacing, is a technology used in television video formats such as NTSC and PAL to reduce the data transmission size for traditional terrestrial television systems such as cable networks. Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. In the 1940s, television bandwidth was not sufficient to transmit 60 full frames per second, and it was decided that interlacing with 60 half frames was visually better than 30 non-interlaced full frames.

 

Interlacing uses half frames per second (fields per second) rather than full frames per second. In an interlaced signal, each full frame of video actually consists of alternating lines taken from two separate fields captured at slightly different times. The two fields are then interlaced, or interleaved, into the alternating odd and even lines of the full video frame. The alternating fields are displayed in sequence, depending on the field dominance of the source material, by displaying all odd lines in the frame first and then all even lines.

 

Modern DVDs and monitors convert the 60 fields in interlaced NTSC content to 60 frames by interpolating the missing lines.

 

Deinterlacing for digital content such as web streaming combines the two alternating fields found in interlaced videos to form a clean shot in a progressive video. Without deinterlacing, the interlaced content will often display motion with a line-like appearance.

 

IPTV

IPTV (Internet Protocol TV) delivers video and audio content to dedicated devices (STB) attached to a network based on the Internet protocol (IP).

 

IPTV delivers content through a dedicated, managed network, as opposed to OTT, which delivers content over the open internet.

 

IPTV delivers the content via a dedicated set-top box making the telecoms industry the primary supplier of IPTV services.

 

IPTV typically uses a multicast protocol as opposed to OTT which uses Unicast HTTP and simulated UDP/TCP.

Keyframe Interval

Keyframe interval (key frame interval), also called the i-frame interval, is an encoding setting that determines how often (at what interval) the whole picture is transmitted. Keyframes are points in the video where the entire video frame is sent instead of just the differences from the previous frame. A keyframe interval of 2 means that a full frame is sent every 2 seconds. The keyframe provides a reference point within a video for detecting changes and typically helps players recover from error events.

 

When a stream is encoded, the compression method results in only some frames showing the complete image. The initial (key) frame includes a complete image, while subsequent (delta) frames only depict changes from that image. This helps reduce redundant data and lower the bandwidth.

 

Selecting the optimum keyframe interval depends on how much action or movement is in the content. For video streams depicting static scenes, such as a news desk or talk show, a keyframe interval of two seconds is optimum. Sporting events and action scenes require a shorter keyframe interval of around one second.

 

Shorter keyframe intervals for high-action scenes are more data-intensive and generate larger video files in order to maintain quality.

 

Higher keyframe intervals, used for more static scenes, allow more compression without a noticeable reduction in quality.
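
In encoder settings the keyframe interval is often expressed as a GOP (group of pictures) size in frames rather than seconds; the conversion is simply interval × frame rate, as the sketch below shows.

// Keyframe (GOP) size in frames = keyframe interval in seconds x frame rate.
function keyframeIntervalFrames(intervalSeconds: number, fps: number): number {
  return Math.round(intervalSeconds * fps);
}
console.log(keyframeIntervalFrames(2, 30)); // 60 frames for a 2-second interval at 30 fps
console.log(keyframeIntervalFrames(1, 60)); // 60 frames for a 1-second interval at 60 fps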

Video Latency

Latency is the time delay between a data request and the response being received by the requester, usually expressed as the time it takes for a request to travel from the sender to the receiver and for the receiver to process it: in other words, the round-trip time between the browser and the server. The lower the latency, the less time it takes for data to reach the end user after being released.

 

Video latency is typically the amount of time the round trip of video and video metadata takes between the viewer and the video server. Latency is synonymous with 'lag' and 'delay'; however, delay in video playback can also be affected by the minimum buffer size (chunk/segment) that is required to load before a player start event.

 

Lower latency is obviously ideal for live streaming. Live streams commonly have an initial playback delay of approximately 20-30 seconds, made up of end-to-end latency plus the buffer segments that preload to facilitate DVR features.

 

There are 4 causes of network latency as follows:

 

  1. Physical Transmission components such as WAN or fibre cables have limitations.
  2. Propagation is the amount of time it takes for a packet to travel from one source to another (at the speed of light).
  3. Routers take time to analyze the header information of a packet as well as, in some cases, add additional information. Each hop a packet takes from router to router increases the latency time.
  4. Storage delays can occur when a packet is stored or accessed resulting in a delay caused by intermediate devices like switches and bridges.

 

Latency is measured using the following methods:

  1. Round trip time (RTT)
  2. Time to first byte (TTFB)

 

Latency can be reduced by using a CDN whose servers cache video content and serve it from the location closest to the viewer.
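
A rough client-side sketch of the time-to-first-byte measurement is shown below; the segment URL is a hypothetical placeholder, and the figure it produces is only an approximation of true TTFB because it includes connection setup and client processing time.

// Approximate time to first byte: elapsed time until the first response chunk arrives.
async function timeToFirstByteMs(url: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(url);
  await response.body?.getReader().read(); // wait for the first chunk of the body
  return performance.now() - start;
}
timeToFirstByteMs("https://cdn.example.com/video/segment1.ts")
  .then(ms => console.log(`TTFB ~ ${ms.toFixed(0)} ms`));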

Live Streaming

Live streaming is the process of broadcasting video or audio over the internet in real-time. Live streaming media is broadcast in real-time without a requirement for a completed file by delivering the video file in segments or chunks.

 

Some live streaming architectures also allow for simultaneous recording to support audience interaction, including DVR-style scrubbing of the timeline and a live-to-on-demand publishing workflow.

 

Live streaming requires the following items:

 

  • – a camera source to produce a content feed
  • – hardware or software encoders that apply codecs and containers to the video content and package it into adaptive streaming formats such as HLS and MPEG-DASH, delivering real-time video to an internet streaming destination
  • – an internet connection from the encoder
  • – a streaming destination such as a media publishing system, video platform website, or social platform like YouTube or Facebook

Lossless Video Compression

Lossless video compression uses encoding codecs that code and decode video and audio at the original full quality with a smaller file size. Lossless video codecs encode to a compressed file that can be decoded back to full quality without loss of information.

 

Consequently, lossless compression does not degrade sound or video quality, meaning the original data can be completely reconstructed from the compressed data.

 

A 60-minute uncompressed standard definition video is at least 70 GB in size, and a 120-minute 4K UHD movie is at least 3,400 GB. These files are so large they can be difficult to watch or to work with. Lossless video compression encodes the full-quality source video to make video files smaller without losing any data. This sounds fantastic; however, lossless compression generally does not make video files small enough for video streaming, so it is only appropriate for media production and editing workflows.

 

Lossless Video Encoding (Uncompressed):

  • – v210 (Can be contained in Quicktime MOV)
  • – v410 (Can be contained in Quicktime MOV)
  • – Uncompressed UYVY (Can be contained in Quicktime MOV or AVI)
  • – Uncompressed YUY2 (Can be contained in Quicktime MOV or AVI)

 

Lossless Video Encoding (Compressed):

  • – FFV1 (Fun fact, FFV1 is created by the developers of FFmpeg)
  • – HuffYUV
  • – Lagarith
  • – H.264 Lossless
  • – Apple Animation
  • – OpenEXR

For online video streaming, publishers use lossy video encoding to achieve small enough file sizes for internet bandwidth limitations.

Lossy Video Compression

Lossy video compression uses encoding codecs that create a lower-quality approximation of a video in order to reduce the file size for editing and the bandwidth required for streaming.

 

The quality reduction may not be perceivable by viewers however the video cannot be restored back to full quality once a lossy video codec is applied.

Lossy encoding compression schemes try to eliminate information in ways that are not perceptible, so that sound or video quality is not seriously degraded, by identifying redundant patterns in the video data and replacing duplicates with references to earlier instances, such as static or unchanging elements in a video frame.

 

Common lossy codecs include:

  • – H.265/HEVC
  • – H.264/AVC
  • – VP9
  • – ProRes
  • – DNxHD and DNxHR
  • – AV1

Lower Thirds

Lower thirds are text titles or graphic overlays positioned in the lower area of the video screen, used to display information such as sports scores, news tickers, event speakers and places.

 

In television broadcast production, lower thirds are placed in the “title-safe area,” the part of the screen in which you can safely place graphics without them getting cropped.

TS Files (.ts files)

TS files or MPEG-TS (.ts) are video files that contain the segment chunks for adaptive video streaming using the HTTP Live Streaming (HLS) protocol.

 

HLS is an HTTP-based adaptive streaming protocol developed by Apple that works by segmenting video files into multiple chunks in MPEG-2 TS format, with an index file system of a master manifest and rendition manifests that list all the chunks.

 

MPEG-TS is a format designed for transmitting MPEG video and other streaming formats that may also include separate streams for video, audio, and closed captions.

 

The primary difference between TS and MP4 files is that TS files are flat while MP4 files have an index at the beginning of the file. Otherwise, the video bits inside the files are the same, and therefore the video quality of TS, M2TS and MP4 files is the same.

 

Other adaptive streaming formats such as MPEG-DASH use a different format for the video segments: .m4s.

MPEG-DASH

MPEG-DASH (MPEG Dynamic Adaptive Streaming over HTTP) is an adaptive bitrate format that contains encoded audio and video. The Moving Picture Experts Group (MPEG) developed the technology as an alternative to Apple's HLS adaptive streaming protocol.

 

MPEG-DASH is a leading streaming protocol alongside two other major protocols: Adobe's Real-Time Messaging Protocol (RTMP) and Apple's HTTP Live Streaming (HLS).

 

How does MPEG-DASH Work?

 

MPEG-DASH separates the video stream into a series of sub-ten-second segments, allowing the player to dynamically adapt playback by rendering the best-quality segment for the viewer's bandwidth. This process involves breaking the video stream down into small HTTP file sequences. These files allow the content to be switched from low- to high-quality renditions in response to available bandwidth.

 

Unlike HLS, which is a proprietary standard developed by Apple, DASH is an open standard defined by MPEG.

 

MPEG-DASH

  • – Audio Codecs: Codec-agnostic
  • – Video Codecs: Codec-agnostic
  • – Container format: MP4, or .mp4
  • – Playback Compatibility: all Android devices (HTML5 players); most post-2012 Samsung, Philips, Panasonic, and Sony TVs; Chrome, Safari, and Firefox browsers.
  • – Playback Non-Compatibility: iOS and Apple TV do not support DASH
  • – Latency: 6-30 seconds (lower latency only possible with CMAF)
  • – Variant Formats: MPEG-DASH CENC (Common Encryption)

 

MPEG-DASH causes an initial delay in playback as the individual segments are pre-loaded. A method for addressing this is to decrease the segment size. Another method involves the Common Media Application Format (CMAF) which we outline in the CMAF guide.

MP4

MP4 (MPEG-4) is a common multimedia container format used to store compressed video, audio and metadata including subtitles and still images. MP4 container format supports live and on-demand streaming via HLS or MPEG-DASH.

 

MP4 files are “containers”: rather than storing the code for the file, they store the data. As such, MP4 files do not have a native way of handling the coding of the file; to determine how the coding and compression will be handled, they rely on specific codecs.

 

The most widely-supported codecs are:

 

  1. Video—MPEG-4 Part 10 (H.264) and MPEG-4 Part 2
  2. Audio—AAC, ALS, SLS, TTSI, MP3, and ALAC
  3. Subtitles—MPEG-4 Timed Text

 

MP4 files may also contain video, images, and text. Various file extensions indicate the type of data within the container.

 

The most common MP4 extensions are as follows:

 

  1. MP4—The only official extension
  2. M4A—Non-protected audio
  3. M4P—Audio encrypted by FairPlay Digital Rights Management
  4. M4B—Audiobooks and podcasts
  5. M4V—MPEG-4 Visual bitstreams

OTT

OTT (Over The Top) refers to any service that streams video content over the open internet rather than terrestrial, cable, satellite or dedicated network (IPTV).

 

OTT uses unicast (HTTP), a one-to-one communication method, with simulated multicast (UDP/TCP) routing, as opposed to IPTV, which uses a multicast protocol.

 

OTT streaming services include Netflix, Amazon Prime Video, Hulu and Apple TV.

 

OTT video streaming applications include web applications and native applications for mobile, tablet, smart tv and set-top box (STB).

 

OTT system architecture includes on-premise and cloud-based infrastructure, high bandwidth internet connection, content delivery network, media publishing software, digital rights management system, paywall and advertising system.

PiP

PiP (Picture-in-Picture) is a feature of some device video players that lets viewers see multiple video sources simultaneously. Specifically, PiP is where one video is displayed full screen at the same time as one or more other videos are displayed in inset windows. Sound is usually taken from the main program only.

Progressive Video

Progressive Video uses a progressive scan also referred to as non-interlaced scanning, which is a process of displaying, storing or transmitting videos where all the lines of every frame are given in sequence.

 

Progressive scan technology was developed to produce a more precise way of displaying images on a computer monitor. By transmitting the full frame at once, it reduces flicker and artifacts. The video appears smoother, more realistic and higher quality, particularly on lower frame-rate devices or monitors.

 

All streaming files are progressive, as opposed to interlaced video, which is more common in terrestrial and cable broadcasting. Interlaced video transmits even and odd scan lines as two individual fields to reduce the required bandwidth and increase the frame rate.

 

Progressive video can be paused at any time, and would still display the paused frame as a complete image. 

 

Video sources with the letter p are called progressive scan signals. Examples of this would be 480p, 720p, 1080p resolution video.

Smart TV

Smart TV, or Connected TV, is a television with an integrated Internet connection that allows viewers to stream music and videos, browse the internet, and view photos.

 

Smart TV Operating Systems:

 

Tizen

  1. Samsung

webOS

  1. LG
  2. Eko
  3. Polaroid
  4. Bauhn

Android TV

  1. Sony
  2. Hisense
  3. TCL
  4. Hitachi
  5. Philips
  6. Eko
  7. Blaupunkt
  8. Chromecast

tvOS

  1. Apple TV

FireTV

  1. Amazon Fire

BrightScript

  1. Roku

Mobile

  1. iOS
  2. Android

 

Smart TV Programming Languages by OS:

 

HTML/CSS/JavaScript, XML, JSON

  1. Tizen
  2. WebOS

React Native (JavaScript)

  1. Tizen
  2. Android TV
  3. Amazon Fire
  4. tvOS
  5. iOS
  6. webOS

BrightScript

  1. Roku

Swift/Xcode

  1. Apple tvOS

Java

  1. Android TV
  2. Amazon Fire

STB

STB (Set-top Box) is an appliance that decodes a television signal or supports IP based video streaming protocols and application interfaces.

 

In IPTV networks, the set-top box is a small computer providing two-way communication on an IP network and decoding the video streaming media.

 

Examples of modern STB include Roku and Apple TV.

 

 

SSO

SSO (Single Sign On) is an authentication service allowing shared sessions (login) between multiple applications using the same credentials (username/ID & password).

 

True single sign-on allows the user to log in once and access services without re-entering authentication factors

 

Identity management services based around SSO include Okta, OneLogin, Google Apps, Open Graph and more.

 

Glue supports SSO for frontend applications and administration systems to create a secure, internal video solution for enterprises.

 

Streaming Video

Streaming Video is delivered over the internet. Historically it described content that was consumed in a continuous manner from a source, with little or no intermediate storage in network elements, however, it is also used to describe any live or on-demand video play online without download.

 

Video streaming involves sending chunks of video data to an end user over an internet connection. The video player pre-loads these chunks into a small buffer to enable smooth playback. Buffering allows the viewer to keep watching from the buffer in the event a video chunk is delayed or lost.

 

If the user’s connection (bandwidth) is low the buffer may empty before the next chunk is received causing a delay in playback which most viewers describe as buffering.

 

Modern adaptive streaming protocols such as HLS (CMAF) and MPEG-DASH prevent buffering with a combination of low, medium and high-quality renditions and smaller chunk sizes that the player can switch between in the event of bandwidth fluctuations.

 

UDP

UDP (User Datagram Protocol) is a core Internet protocol. With UDP, computer applications can send messages, in this case, referred to as datagrams, to other hosts on an Internet Protocol network. UDP is one of the most universal ways to transmit or receive video and audio via a network card or modem. In terms of real-time protocol, RTMP (Real Time Messaging Protocol) is based on TCP (Transmission Control Protocol), which led to the creation of RTMFP (Real Time Media Flow Protocol) which is based on UDP.

 

VOD

VOD (Video On-demand) typically refers to pre-recorded video content that a viewer can play at any time. Examples of VOD content include catch-up television streaming services and on-demand movies. All Netflix content is on-demand.

 

VOD is normally differentiated from live streaming content, which is linear and cannot be played at any time, including pre-recorded television content that is broadcast as live linear rather than on-demand.

 

WebRTC

WebRTC (Web Real-Time Communication) is a free and open-source project providing web browsers and mobile applications with real-time communication via JavaScript API typically utilised for real-time video conferencing.

 

The WebRTC standard covers, on a high level, two different technologies: media capture devices and peer-to-peer connectivity.

 

WebRTC is an HTML5 specification that you can use to add real-time media communications directly between browsers and devices. WebRTC is supported in all modern browsers, including Google Chrome, Mozilla Firefox, Apple Safari and Microsoft Edge.

 

For native clients, like Android and iOS applications, a library is available that provides the same functionality. The WebRTC project is open source and supported by Apple, Google, Microsoft and Mozilla, amongst others.

 

Media capture devices include video cameras and microphones, but also screen-capturing “devices”. For cameras and microphones, navigator.mediaDevices.getUserMedia() is used to capture MediaStreams. For screen recording, navigator.mediaDevices.getDisplayMedia() is used instead.
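
A minimal capture sketch follows, assuming the page contains a <video> element to preview the stream in; error handling beyond a console message is omitted.

// Capture the camera and microphone, then preview the stream in a <video> element.
async function startPreview(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const video = document.querySelector("video");
  if (video) {
    video.srcObject = stream;
    await video.play();
  }
  return stream;
}
startPreview().catch(err => console.error("Capture failed:", err));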

 

The peer-to-peer connectivity is handled by the RTCPeerConnection interface. This is the central point for establishing and controlling the connection between two peers in WebRTC.

 

For most WebRTC applications to function, a server is required for relaying the traffic between peers, since a direct socket connection is often not possible between the clients (unless they reside on the same local network). The common way to solve this is by using a TURN server. The term stands for Traversal Using Relays around NAT, and it is a protocol for relaying network traffic.

 

WebRTC applications:

  • – Video conferencing
  • – Support centre communications
  • – Watch parties/Sync
  • – Education
  • – Live event broadcast
  • – Auctions
  • – Telehealth, online education, legal proceedings, remote travel, fitness, dancing, tutoring, coaching
  • – Webinars
  • – Low latency broadcasting
  • – Cloud gaming
  • – Machine remoting
  • – Virtual spaces and the metaverse

Video Streaming

Glue Media Publishing System is tailored for video streaming production workflows. GlueMPS integrates with broadcast production systems to ingest video content and metadata, processing your media assets for cross-device playback.


Akamai

Akamai is the largest content delivery network (CDN) platform in the world, with more than 240,000 servers that cache video content to reduce latency, resulting in an improved video viewing experience. Akamai also benefits media publishing systems by offsetting load from origin servers, resulting in better performance and availability, and provides cyber security services such as DDoS attack mitigation.

 

GlueMPS integrates with all leading CDNs including Akamai CDN.

AJA Video Systems

AJA Video Systems are manufacturers of remote and broadcast production hardware and software.

 

AJA video streaming hardware includes:

  1. Mini-converters
  2. Digital recorders
  3. Mobile IO
  4. Streaming Encoders
  5. Routers
  6. Recording Media
  7. Production Software
  8. Monitoring
  9. Cameras

 

GlueMPS integrates with all AJA streaming encoders

Alibaba Cloud CDN

Alibaba Cloud CDN is a content delivery network (CDN). Alibaba Cloud CDN provides more than 2,800 globally distributed edge nodes to ensure resource scalability and service availability.

 

GlueMPS integrates with all leading CDNs including Alibaba Cloud CDN.

Amazon Cloudfront

Amazon CloudFront is a content delivery network operated by Amazon Web Services. Content delivery networks provide a globally-distributed network of proxy servers that cache content, such as web videos or other bulky media, more locally to consumers, thus improving access speed for downloading the content.

 

GlueMPS integrates with all leading CDNs including Amazon Cloudfront

AWS Elemental

AWS Elemental, formerly known as Elemental Technologies, is a software company owned by Amazon Web Services that specializes in media streaming services.

AWS Elemental Cloud Solutions Include:

  1. MediaConnect
  2. MediaConvert
  3. MediaLive
  4. MediaPackage
  5. MediaStore
  6. MediaTailor

AWS Elemental On-premise Solutions Include:

  1. Live
  2. Server
  3. Conductor
  4. Link

 

Boom Labs is an Amazon Technology Partner. GlueMPS integrates with all AWS Elemental solutions.

Blackmagic Design

Blackmagic is a manufacturer of video production equipment ranging from professional live production cameras, recording and monitoring equipment, editing, colour correction and post-production systems, and visual effects and graphics studios, to live production devices including switchers and capture and playback encoders.

 

GlueMPS integrates with Blackmagic Design playback encoders.

Cloudflare

Cloudflare is a content delivery network (CDN). It also provides security services such as DDoS mitigation and one of the world's highest-performance Domain Name System (DNS) networks.

 

GlueMPS integrates with all leading CDNs including Cloudflare.

Fastly

Fastly is a content delivery network (CDN). It offers an edge cloud platform, an edge software development kit (SDK), content delivery and image optimization, video and streaming, cloud security, load balancing, and managed CDN services.

 

GlueMPS integrates with all leading CDNs including Fastly.

Harmonic

Harmonic are manufacturers of live video production equipment and software.

 

Harmonic solutions include:

  1. Live video encoding
  2. Production studio
  3. Media servers
  4. Stream processors

GlueMPS integrates with Harmonic video encoders

KeyCDN

KeyCDN is a content delivery network (CDN). KeyCDN provides content caching of website and video content to reduce latency and improve performance with 25+ points of presence. KeyCDN also provides security and DNS services.

 

GlueMPS integrates with all leading CDNs including KeyCDN

LiveU

LiveU owns the patent for cellular bonding for remote news gathering in the US, Europe, China and other countries. All LiveU products are based on this fourth-generation patented technology.

 

LiveU manufactures live video transmission hardware, including IP broadcast-quality field units and live encoders that are popular with live event production services.

 

GlueMPS integrates with LiveU encoders.

Limelight Networks

Limelight Networks is a content delivery network (CDN). It offers edge compute, cloud security, load balancing and managed CDN. Limelight has 95+ Tbps egress and ingress capacity and over 136 points of presence with direct connections to 1,000 + ISPs.

 

GlueMPS integrates with all leading CDNs including Limelight.

Matrox

Matrox Video is a leading manufacturer of video products and components for the broadcast and media, live entertainment, and AV/IT markets. Its mission is to create innovative products and provide superior support, enabling customers to harness the power of video to entertain, communicate and make critical decisions.

 

Matrox hardware and software products are deployed across a diverse range of industries, including broadcast and media, education, enterprise, government, houses of worship, medical, military and defence, process control and utilities, security, and transportation. Its product offerings span encoders and decoders, KVM extenders, video wall controllers, and broadcast developer products. Matrox is also a trusted supplier to some of the world's leading OEMs, offering a broad product and intellectual property (IP) portfolio to help solution providers accelerate product development, customization, and time to market.

 

GlueMPS integrates with Matrox video encoders.

Microsoft Azure CDN

Microsoft Azure Content Delivery Network is a CDN. Azure CDN helps to reduce load times, save bandwidth and speed responsiveness for websites, mobile apps, or encoding and distributing streaming media, gaming software, firmware updates or IoT endpoints.

 

GlueMPS integrates with all leading CDNs including Azure Content Delivery Network (CDN)

NewTek

NewTek are manufacturers of live video production hardware and software.

 

NewTek solutions include:

  1. Tricaster
  2. Cameras
  3. Video Replay Systems
  4. Control Panels
  5. Connect Pro
  6. Spark Plus
  7. Broadcast Graphics
  8. Animation & VFX

 

GlueMPS integrates with NewTek video encoders.

Niagara Video

Niagara Video manufactures 20 different encoding hardware options, in addition to software encoding solutions.

 

Niagara solutions include:

  1. GoStream Mini 150
  2.  Niagara 9300 Series
  3. GoStream Digital and Analog
  4. GoStream B264 encoder
  5. GoStream Mini 200

 

GlueMPS integrates with Niagara Video encoders.

Open Broadcaster Software (OBS Studio)

Open Broadcaster Software (OBS Studio) is free and open-source software for video recording and production of live streaming sources.

 

OBS Studio is a popular choice for sending live stream sources to user-generated video platforms such as YouTube, Twitch and Facebook.

 

There are versions of OBS Studio available for Microsoft Windows, macOS, Linux distributions, and BSD.

 

OBS Studio provides real-time video/audio capture and mixing. Users can create scenes made up of multiple sources including window captures, images, text, browser windows, webcams, capture cards and more.

 

An audio mixer with per-source filters such as noise gate, noise suppression, and gain is included, with full control available through VST plugin support.

 

Studio Mode lets you preview your scenes and sources before pushing them live. Adjust your scenes and sources or create new ones and ensure they’re perfect before your viewers ever see them.

 

GlueMPS integrates with Open Broadcaster Software (OBS Studio)

Telestream

Telestream is a provider of media streaming software and hardware products.

 

Telestream products span the entire digital media lifecycle, including video capture and ingest; live and on-demand encoding and transcoding; captioning; playback and inspection, delivery, live streaming, workflow automation and orchestration, and monitoring and management of quality service and experience over networks. The company also partners closely with digital media companies across the entire digital media lifecycle, from consumer to enterprise.

 

Telestream’s Wirecast is a high quality software encoder. Wirecast recently launched a new streaming encoder that features several enhancements and fixes, including Facebook Live polling, re-written WebStream plugin, and Virtual Camera improvements

 

GlueMPS integrates with Telestream encoders.

Teradek

Teradek designs and manufactures high-performance video solutions for broadcast, cinema, and general imaging applications.

 

Teradek live streaming encoders are a popular choice for live event production service providers requiring mobile form factor encoding devices.

From wireless monitoring, color correction, and lens control, to live streaming, SaaS solutions, and IP video distribution, Teradek technology is used around the world by professionals and amateurs alike to capture and share compelling content.

 

GlueMPS integrates with Teradek encoders.

VidBlasterX

VidBlasterX is live production streaming encoder software.

 

VidBlasterX has three live streaming encoder software types as follows:

  1. VidBlasterX Home
  2. VidBlasterX Studio
  3. VidBlasterX Broadcast

 

GlueMPS integrates with VidBlasterX encoders.

vMix

vMix is live video production and streaming software for desktop broadcasting.

 

vMix live video streaming solutions include:

  1. vMix24
  2. vMix reference systems
  3. vMix Replayer

 

GlueMPS integrates with vMix encoders.

Video Platforms

Glue Media Publishing System is a Video Platform as a Service featuring end-to-end digital broadcasting hardware and software for OTT and Music Streaming Platforms. Learn how GlueMPS can help broadcasters and publishers ingest, manage, distribute and monetise audio and video content.


AVOD

AVOD is an acronym for Advertising-based Video on Demand. AVOD video platforms generate video revenue by displaying advertising before, during or after the video. AVOD ads are not restricted to video advertising and may also be display advertising such as banners or sponsored content.

 

AVOD is a popular business model for video platforms with user-generated videos, like YouTube, or streaming services where subscription (SVOD) and pay-per-view (TVOD) models are not appropriate, such as catch-up TV.

 

The AVOD business model is already familiar to terrestrial television audiences including cable and satellite. AVOD provides an excellent monetisation option for freemium content that audiences may not value enough to pay for but are happy to watch a few ads in return for free access to content.

 

Many OTT platforms combine AVOD, SVOD and TVOD tailoring content monetisation to specific content types. It is not uncommon for OTT platforms to offer hybrid free and paid access to video content consisting of the following

 

  1. AVOD (Freemium) free content access that has advertising
  2. SVOD (Subscription) content access that has no advertising
  3. TVOD (Pay Per View) content access for new releases or special event broadcasts

 

The advantages of AVOD monetisation are as follows:

 

  1. High Audience Acquisition — the free viewing model has a very large and broad potential audience compared to SVOD or TVOD
  2. Higher Revenue Potential for catchup content — audiences may not be willing to pay for catchup television content but are quite willing to watch advertising to access catchup content
  3. Flexible Advertising Methods — AVOD publishers can choose from a wide variety of video and display ad types to suit their platform
  4. Personalized Advertising — personalised or targeted advertising increases ad revenues and is more engaging to audiences making AVOD less likely to deter audience access
  5. No barrier to entry — AVOD requires no effort on the part of the audience so is less likely to deter audiences from accessing content

 

AVOD video streaming services are the fastest growing business model in the video sector. AVOD requirements for broadcasters and publishers are as follows:

 

  1. Select an AVOD platform that meets your requirements.
  2. Ensure the Advertising platform and video CMS have privacy features (password protection, ad fraud prevention)
  3. Ensure the Advertising platform supports VAST & VPAID ads and ad targeting
  4. Select an intuitive and user-friendly video CMS or media publishing system (like GlueMPS)
  5. Ensure your video CMS supports ad targeting
  6. Ensure your applications and video player support VAST & VPAID ads
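
To illustrate the last two requirements, the sketch below shows one way a web player might request a VAST ad tag and pull out the first media file URL before playback. It is a minimal example only: the tag URL and placement parameter are placeholders, not a specific ad server's API.

  // Minimal sketch: fetch a VAST ad tag and extract the first <MediaFile> URL.
  // The tag URL and placement parameter are placeholders for illustration only.
  const VAST_TAG_URL = "https://ads.example.com/vast?placement=preroll";

  async function getPrerollMediaFileUrl(): Promise<string | null> {
    const response = await fetch(VAST_TAG_URL);        // request the VAST XML document
    const xmlText = await response.text();
    const vast = new DOMParser().parseFromString(xmlText, "application/xml");
    const mediaFile = vast.querySelector("MediaFile"); // first creative in the ad response
    return mediaFile?.textContent?.trim() ?? null;
  }

  // The returned URL is then handed to the video player (or an IMA-style SDK)
  // to play before the content video starts.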

 

Glue Media Publishing System supports AVOD, SVOD and TVOD monetisation, including all major video advertising platforms such as Google and OpenX, and payment gateway providers such as Stripe, PayPal/Braintree and many more.

SVOD

SVOD is an acronym for Subscription Video on Demand. SVOD video streaming platforms generate revenue by charging their audience a recurring subscription fee to access the content.

 

Audiences are very familiar with subscription-based content access through leading OTT platforms including Netflix, Amazon Prime, Hulu, HBO Max, Disney+ and Apple TV, to name a few.

 

Some OTT platforms combine SVOD and TVOD, tailoring content monetisation to specific content types. It is not uncommon for OTT platforms to offer hybrid free trials and paid access to video content consisting of the following:

 

  1. AVOD (Freemium) free content access that has advertising
  2. SVOD (Subscription) content access that has no advertising
  3. TVOD (Pay Per View) content access for new releases or special event broadcasts

 

SVOD monetisation is appropriate for a wide range of content including the following:

 

  1. Premium content not available on AVOD catch-up services
  2. Tailored or personalised content
  3. Large content catalogues with frequent regular releases
  4. Fitness & training videos

Advantages of SVOD include the following:

 

  1. Flexible subscription options (subscription tiers and bundles)
  2. Creates an ongoing revenue stream
  3. Audiences willing to provide credit card or direct debit details are generally more brand loyal than TVOD or AVOD customers
  4. Generates revenue to create and publish exclusive content
  5. Attracts premium content owners with revenue share licensing

Glue Media Publishing System has a range of SVOD features, including the following (an entitlement-check sketch follows the list):

 

  1. Content entitlements
  2. Geo-location pricing
  3. Subscription pricing tiers
  4. User authentication
  5. Payment history
  6. Customer support dashboard
  7. Transaction logging
  8. Payment gateway (Stripe, PayPal, Braintree and many more)
  9. Digital Rights Management
  10. Financial reporting
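
As a rough illustration of how content entitlements and user authentication fit together, the sketch below checks whether a signed-in user is entitled to play an asset before the player is given the stream. The endpoint path and field names are hypothetical, not the documented GlueMPS API.

  // Hypothetical entitlement check before playback; the endpoint and fields
  // are illustrative assumptions, not a documented GlueMPS API.
  interface Entitlement {
    assetId: string;
    tier: "standard" | "premium";
    expiresAt: string; // ISO 8601 timestamp
  }

  async function canPlay(assetId: string, authToken: string): Promise<boolean> {
    const res = await fetch(`/api/entitlements?asset=${encodeURIComponent(assetId)}`, {
      headers: { Authorization: `Bearer ${authToken}` }, // authenticated subscriber
    });
    if (!res.ok) return false;                           // no entitlement on record
    const entitlement: Entitlement = await res.json();
    return new Date(entitlement.expiresAt) > new Date(); // subscription still active
  }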

TVOD

TVOD is an acronym for Transactional Video on Demand. TVOD video streaming platforms generate revenue by charging their audience a one-off payment also known as Pay-Per-View to access content.

 

Audiences are very familiar with pay-per-view content access through leading OTT platforms including Google Play, YouTube Red, Amazon Prime, Disney+ and Apple TV, to name a few.

 

TVOD has become increasingly popular in recent times for new-release movie content when cinema releases were not possible. The trend of TVOD monetisation is increasing, with movie studios such as Disney, Paramount and HBO offering direct pay-per-view access to recently released movies.

 

Some OTT platforms combine SVOD and TVOD, tailoring content monetisation to specific content types. It is not uncommon for OTT platforms to offer subscription and pay-per-view access to limited-release video content consisting of the following:

 

  1. AVOD (Freemium) free content access that has advertising
  2. SVOD (Subscription) content access that has no advertising
  3. TVOD (Pay Per View) content access for new releases or special event broadcasts

 

TVOD monetisation is appropriate for a wide range of content including the following:

 

  1. New release movies
  2. New release series
  3. Limited releases
  4. Live streaming events
  5. Sporting events
  6. Music events
  7. Conferences
  8. Education & Training

 

Advantages of TVOD include the following:

 

  1. Flexible pay-per-view options (rental access period)
  2. Exclusive content access
  3. Special event content
  4. One-time content licenses
  5. Limited releases

 

Glue Media Publishing System has a range of TVOD features including:

 

  1. Content entitlements
  2. Geo-location pricing
  3. Pay-per-view access periods
  4. User authentication
  5. Payment history
  6. Customer support dashboard
  7. Transaction logging
  8. Payment gateway (Stripe, PayPal, Braintree and many more)
  9. Digital Rights Management
  10. Financial reporting

Ad Exchange

An ad exchange is a technology platform that serves as an open marketplace for publishers, advertisers, and other parties to list, sell, and buy ads on an impression-by-impression basis through automated auctions.

 

To advance digital advertising further, automation was required to alleviate the burden of manually connecting advertisers to their desired ad sources.

 

Ad exchanges focus on providing publishers and advertisers with a high level of control and transparency into the process of selling and buying ad impressions reliably through automation.

 

Differences between an ad exchange and an ad network:

 

  1. Ad exchanges are technology platforms. They facilitate their services automatically.
  2. Ad exchanges sell ads automatically on an impression-by-impression basis, not in bulk.
  3. Both publishers and advertisers use ad exchanges to either sell or buy ads.
  4. Ad exchanges are fully automated technology platforms.
  5. Ad exchanges are considered “non-intermediary” open marketplaces.

Ad Network

An ad network is an intermediary that facilitates the transaction of media (ad) buying and selling between publishers and advertisers. Ad networks act in a similar way to stockbrokers – assisting advertisers with their purchasing process by grouping ad inventory together into categorized, bulk-impression packages.

 

Ad networks typically focus on categorizing and selling large volumes of ad impressions to advertisers through manually negotiated sales packages.

 

Differences between an ad network and an ad exchange:

 

  1. Ad networks are companies with manual human business operations.
  2. Publishers “join” ad networks.
  3. Advertisers negotiate deals manually with ad networks.
  4. Ad networks are considered “intermediaries” in the ad ecosystem.
  5. Ad exchanges are fully automated technology platforms.
  6. Ad exchanges are considered “non-intermediary” open marketplaces.

Google AdSense

Google AdSense is Google’s entry-level advertising solution, with a qualification threshold of 300,000 monthly page views, which is substantially lower than Google AdX. AdSense has fewer publisher controls than Google AdX; however, on the positive side, AdSense manages ad placement, ad targeting, and payment for publishers.

 

Google AdX also differs from AdSense in that it has its own network of publishers, advertisers, and third-party ad networks. Unlike AdSense, publishers need to manage their inventory (such as setting the floor price for auctions and carrying out preferred deals by choosing advertisers).

 

AdSense optimises publisher revenue by displaying only the highest-bidding ads. Ads are screened to ensure quality and relevance to the publisher’s content.

 

Auto ads scan your entire site and place ads where they’re likely to perform well and potentially generate more revenue.

 

Smarter ad sizing automatically adapts ads to the size of your user’s screen, meaning more ads will be eligible to fill your ad units, potentially leading to an increase in revenue.

 

Integration of Google AdSense is quite simple with the addition of a few lines of code.
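
As a rough sketch of what that integration looks like, the snippet below creates a responsive AdSense ad unit from script; the publisher and slot IDs are placeholders, and in practice most sites simply paste the equivalent HTML snippet generated in the AdSense console.

  // Sketch of a client-side AdSense ad unit; publisher and slot IDs are placeholders.
  function insertAdUnit(container: HTMLElement): void {
    // Load the AdSense library once per page.
    const script = document.createElement("script");
    script.async = true;
    script.src =
      "https://pagead2.googleapis.com/pagead/js/adsbygoogle.js?client=ca-pub-0000000000000000";
    document.head.appendChild(script);

    // Declare a responsive display ad slot.
    const ins = document.createElement("ins");
    ins.className = "adsbygoogle";
    ins.style.display = "block";
    ins.setAttribute("data-ad-client", "ca-pub-0000000000000000");
    ins.setAttribute("data-ad-slot", "0000000000");   // placeholder slot ID
    ins.setAttribute("data-ad-format", "auto");
    container.appendChild(ins);

    // Ask the library to fill the slot.
    ((window as any).adsbygoogle = (window as any).adsbygoogle || []).push({});
  }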

 

GlueMPS integrates with Google AdSense advertising solutions.

Google AdX

Google AdX (previously known as DoubleClick Ad Exchange) is an ad exchange network. Google AdX is a programmatic advertising platform, offering real-time bidding (RTB) on ad spaces to ad networks, including AdSense, agencies and demand-side platforms such as online video streaming platforms.

 

Google AdX is an exchange (not an ad network) that allows you to sell impressions rather than clicks. You can access Google AdX via Google Ad Manager (Google’s ad server).

 

The minimum requirement to access Google AdX is 10 million page views per month for a publisher.

 

The benefits of Google AdX include access to a large pool of premium media buyers/advertisers interested in long-term advertising campaigns.

 

Google AdX is aimed at top-tier publishers as a step above Google AdSense. With Google AdSense a publisher can only display ads from the Google Display Network (GDN) or Google-certified Ad Networks (GCAN). In contrast, Google AdX provides publishers access to non-Google-certified ad networks (demand-side platforms).

 

Google AdX provides a programmatic platform for publishers, advertisers, and agencies to exchange inventory via RTB, supporting open and private auctions, preferred deals, and pricing models such as CPM and CPC.

 

Buyers from other exchanges, ad networks, SSPs, DSPs, and Google Display Network are allowed.

 

Once the threshold balance is reached, Google sends the money to the publisher’s account by the end of the month.

 

GlueMPS integrates with Google AdX programmatic advertising solutions.

Google AdMob

Google AdMob (Advertising on Mobile) is an ad exchange for mobile application advertising.

 

AdMob optimises publisher revenue on native mobile applications with tailored mobile app advertising formats.

 

As one of the largest global ad networks, with access to third-party networks, AdMob provides high advertiser demand.

 

Publishers create a space for ads in their mobile app, and AdMob works with advertisers who pay to show ads that are relevant to the publisher’s users.

 

Ad formats

By delivering the right ads to the right users at the right time, you can continue to give users a great experience while monetizing your app. You can choose from a wide range of formats, including:

  • Rewarded: Ads that users can choose to engage with in exchange for in-app rewards, like bonus points or an extra “life” in a game.
  • Native: Customized ads that look and feel like a natural part of your app.
  • Banner: Rectangular ads that can be anchored to the top or bottom of the screen.
  • Interstitial: Static or video ads that can appear at natural breaks or transition points, creating engaging brand experiences without disrupting the app experience.

 

GlueMPS integrates with Google AdMob programmatic advertising solutions.

Google Ad Manager

Google Ad Manager (GAM) is an ad server through which publishers can manage, optimize, and serve ads on their website. GAM is a unified ad management platform designed to streamline ad delivery, reporting, and monetization for publishers with high monthly page views. Google Ad Manager combines DoubleClick for Publishers and DoubleClick Ad Exchange (Google AdX).

 

While Google AdSense remains a separate product for small-medium sized publishers, Google Ad Manager combines Google’s popular ad server/SSP (formerly DFP) and their industry-leading ad exchange (AdX).

 

Getting access to AdX is limited by Google to preferred publishers with at least 5 million monthly pageviews.

 

Publishers can sell inventory via CPC, PPC, CPM, CPI, CPA, CPL. GAM also allows publishers to directly reach advertising demand partners of their own choice by inviting them to bid on the impressions.
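
For readers unfamiliar with the pricing acronyms, the two most common models translate into revenue roughly as sketched below (the figures are illustrative only).

  // CPM = price per 1,000 impressions; CPC = price per click.
  function cpmRevenue(impressions: number, cpm: number): number {
    return (impressions / 1000) * cpm;
  }

  function cpcRevenue(clicks: number, cpc: number): number {
    return clicks * cpc;
  }

  // Example: 500,000 impressions at a $4.00 CPM earn $2,000;
  // 1,200 clicks at a $0.50 CPC earn $600.
  console.log(cpmRevenue(500_000, 4.0)); // 2000
  console.log(cpcRevenue(1_200, 0.5));   // 600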

 

Google’s Authorized Buyer program enables millions of websites and apps from all over the world to make their ad inventory available to programmatic buyers. Authorized buyers have access to “Google Partner Inventory”, allowing them to curate both how and who they work with through diverse buying methods.

 

The most common buyers include:

 

  1. Ad networks. Companies that serve as brokers between publishers and advertisers.
  2. Trading desks. Divisions at agency holding companies that execute exchange buys for all the company’s agencies.
  3. Demand-side platforms (DSP). Platforms that make buying across multiple exchanges easier.

 

To qualify for an AdX account:

 

  1. 5 million page views a month
  2. Appropriate brand safety measures
  3. An updated ads.txt file listing authorised inventory buyers (see the example entry after this list)
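
For reference, each ads.txt entry lists the advertising system’s domain, the publisher’s account ID with that system, the relationship (DIRECT or RESELLER) and an optional certification authority ID. A typical Google entry looks like the following (the publisher ID shown is a placeholder):

  google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0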

 

Google AdX Auction Types:

 

  1. Open auction. Publishers are anonymous and the auction is open to all publishers and advertisers.
  2. Private auction. Publishers target a selected group of advertisers directly; publishers are not anonymous.
  3. Preferred deal. Publishers offer a deal directly to an advertiser, giving them exclusive access to selected inventory, generally at premium pricing.

 

GlueMPS integrates with Google Ad Manager programmatic advertising solutions.

IAB

IAB (Interactive Advertising Bureau) is an advertising business organization that develops industry standards and best practice guidelines, conducts research, and provides legal support for the online advertising industry.

 

IAB standard ad sizes are of great significance in the digital advertising industry, as the IAB is responsible for creating standards and guidelines for ad units and sizes. This ensures that the buying and selling of inventory is straightforward for both publishers and advertisers, provided they adhere to IAB standard ad sizes.

 

Founded in 1996, the Interactive Advertising Bureau (IAB) is the leading interactive advertising association and represents companies responsible for selling over 75% of online advertising in the United States, including AOL, CNET, MSN, Overture Services, Walt Disney Internet Group, Yahoo, and over one hundred others. Its activities include evaluating and recommending standards and practices, fielding research to document the effectiveness of the interactive medium, and educating the advertising industry about the use of interactive advertising. Membership includes companies that are actively engaged in, and support, the sale of interactive advertising.

 

The common IAB Standard Sizes are 300×250 medium rectangle, 180×150 rectangle, 160×600 wide skyscraper, and 728×90 leaderboard.
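
A simple sketch of how these standard sizes might be represented in an application, for example to reserve ad slot space and avoid layout shift:

  // The common IAB display sizes listed above, as width x height in pixels.
  const IAB_AD_SIZES: Record<string, [number, number]> = {
    "medium-rectangle": [300, 250],
    "rectangle":        [180, 150],
    "wide-skyscraper":  [160, 600],
    "leaderboard":      [728, 90],
  };

  // e.g. reserve space for a leaderboard slot before the ad loads
  const [adWidth, adHeight] = IAB_AD_SIZES["leaderboard"]; // 728 x 90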

 

What are the benefits of IAB Standards?

 

  • Protecting Consumer Privacy and Enabling Responsible Data Use
  • Improving Brand Safety and Reducing Ad Fraud
  • Programmatic Effectiveness: Driving Efficiency and Transparency
  • Driving Consistency in Ad Measurement and Ad Experiences
IAB Standards

 

Identity, Data, & Consumer Privacy

  1. Ad Product Taxonomy
  2. Audience Taxonomy
  3. Content Taxonomy
  4. IAB CCPA Compliance Framework for Publishers & Technology Companies
  5. Data Label
  6. GDPR Transparency & Consent Framework
  7. Project Rearc

 

Brand Safety & Ad Fraud

  1. Ad Blocking
  2. ads.cert
  3. ads.txt and app-ads.txt for Anti-Fraud
  4. Buyers.json and DemandChain Object
  5. Supply Chain Object
  6. sellers.json

 

Ad Experiences & Measurement

  1. Digital Audio Ad Serving Template (DAAST)
  2. Digital Video In-Stream Ad Format Guidelines
  3. Digital Video Ad Serving Template (VAST)
  4. Guidelines for Identifier for Advertising (IFA) on OTT Platforms
  5. HTML5 for Digital Advertising
  6. IAB New Ad Portfolio: Advertising Creative Guidelines
  7. Mobile Rich Media Ad Interface Definitions (MRAID)
  8. Open Video Viewability (OpenVV)
  9. Open Measurement SDK
  10. Podcast Ad Metrics Guidelines
  11. SafeFrame Implementation Guidelines
  12. SIMID (Secure Interactive Media Interface Definition)
  13. Video Multiple Ad Playlist (VMAP)
  14. Video Player-Ad Interface Definition (VPAID)

 

Programmatic Effectiveness

  1. AdCOM (OpenMedia)
  2. Ad Management API (OpenMedia)
  3. Common Ad Transport Standard (CATS)
  4. Dynamic Content Ad Standards
  5. Header Bidding Guidance
  6. OpenDirect
  7. OpenRTB
  8. OpenRTB Advisory – GDPR
  9. OpenRTB Dynamic Native Ads API

 

Which are the highest-performing display ad sizes?

 

  1. 300×250 – Medium rectangle
  2. 336×280 – Large rectangle
  3. 728×90 – Leaderboard
  4. 320×50 – Mobile leaderboard

 

GlueMPS web and native applications support IAB advertising technical standards, including programmatic display and video advertising solutions.

OpenX

OpenX has the largest independent ad exchange network for publishers with a 10.8% market share. OpenX creates programmatic advertising marketplaces for advertising including display and video.

 

OpenX monetisation solutions include:

  1. Open Audience
  2. Ad Exchange
  3. Mobile
  4. Video
  5. Private Marketplaces

 

GlueMPS integrates with OpenX programmatic advertising solutions.

Paywall

A paywall is a method of restricting access to content with a pay-per-view (TVOD) purchase or a paid subscription (SVOD).

 

Paywalls may be an alternative to advertising (AVOD) revenue or be combined with TVOD and SVOD to offer tiered content access.

 

What content is best suited to a paywall?

 

  1. new release
  2. exclusive
  3. live event
  4. movies
  5. education
  6. sport
  7. special interest
  8. full series
  9. large regularly updated catalogues

 

Types of Paywalls

 

  1. Freemium paywalls with tiered free and paid content access
  2. Soft paywall with limited or metered free content access (a metering sketch follows this list)
  3. Adblock wall preventing access to content if ad blockers are used
  4. Hard paywall with TVOD pay-per-view content access
  5. Hard paywall with SVOD subscription content access
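
A minimal sketch of the metering behind a soft paywall is shown below; the free-view limit and in-memory counter are illustrative assumptions rather than a specific product’s behaviour.

  // Soft (metered) paywall sketch: allow a fixed number of free views per month,
  // then prompt the viewer to subscribe (SVOD) or pay per view (TVOD).
  const FREE_VIEWS_PER_MONTH = 3;

  function canViewForFree(userId: string, viewCounts: Map<string, number>): boolean {
    const used = viewCounts.get(userId) ?? 0;
    if (used >= FREE_VIEWS_PER_MONTH) {
      return false;                     // show the paywall
    }
    viewCounts.set(userId, used + 1);   // record the free view
    return true;                        // allow playback
  }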

 

Many OTT platforms combine AVOD, SVOD and TVOD, tailoring content monetisation to specific content types. It is not uncommon for OTT platforms to offer hybrid free and paid access to video content consisting of the following:

 

  1. AVOD (Freemium) free content access that has advertising
  2. SVOD (Subscription) content access that has no advertising
  3. TVOD (Pay Per View) content access for new releases or special event broadcasts

Payment Gateway

A payment gateway is a service that authorises credit card or direct payment processing over the internet.

 

To process an online payment, the customer fills in a secure form with details such as the credit/debit card number, expiry date, and CVV.

 

The main role of the payment gateway is to facilitate the secure authorisation of the transaction between merchant and customer and to settle the payment of funds. The parties involved in an online payment are as follows:

 

  1. The merchant: this is you, i.e. an online business operating in any vertical (travel, retail, eCommerce, gaming, Forex, etc.), offering a product or service to customers
  2. The customer: the customer, also called a cardholder, who wants to access the products or services that the merchant is selling, and initiates the transaction
  3. The issuing bank: the issuing bank is the customer’s bank that issues the cardholder’s credit or debit card on behalf of the card schemes (Visa, Mastercard)
  4. The acquirer: also known as the acquiring bank, the acquirer is the financial institution that maintains the merchant’s bank account (known as the merchant’s account). The acquiring bank passes the merchant’s transactions to the issuing bank to receive payment

 

Payment Gateway Process

 

  1. Customer enters credit/debit card details.
  2. The card details are encrypted with Secure Socket Layer (SSL) encryption to be sent between the browser and the merchant’s web server for Payment Card Industry Data Security Standard (PCI DSS) compliance.
  3. Merchant forwards transaction details to their payment gateway, with an SSL encrypted connection to the payment server hosted by the payment gateway.
  4. The payment gateway converts the message from XML to ISO 8583 or a variant message format (format understood by EFT Switches) and then forwards the transaction information to the payment processor used by the merchant’s acquiring bank.
  5. The payment processor forwards the transaction information to the card association (i.e. Visa/MasterCard/American Express).
  6. The credit card issuing bank receives the authorization request, verifies the credit or debit available, and then sends a response back to the processor (via the same process as the authorization request) with a response code (i.e., approved or denied). The response code also communicates the reason for a failed transaction, for example, insufficient funds.
  7. The processor then forwards the authorization response to the payment gateway, and the payment gateway receives the response and forwards it on to the interface used to process the payment. This process is termed Authorization or “Auth” (see the sketch after this list).
  8. The merchant then fulfils the order and the above process can be repeated, but this time to “Clear” the authorization by consummating the transaction. Typically, the “Clear” is initiated only after the merchant has fulfilled the transaction (i.e. shipped the order). This results in the issuing bank ‘clearing’ the ‘auth’ (i.e. moving the auth-hold to a debit) and preparing to settle with the merchant’s acquiring bank.
  9. The merchant submits all their approved authorizations, in a “batch” (end of the day), to their acquiring bank for settlement via its processor. This typically reduces or “Clears” the corresponding “Auth” if it has not been explicitly “Cleared.”
  10. The acquiring bank makes the batch settlement request of the credit card issuer.
  11. The credit card issuer makes a settlement payment to the acquiring bank (the next day in most cases).
  12. The acquiring bank subsequently deposits the total of the approved funds into the merchant’s nominated account (the same day or next day). This could be an account with the acquiring bank if the merchant does their banking with the same bank, or an account with another bank.
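
Steps 7 and 8 (the “Auth” and “Clear” phases) can be sketched with a payment gateway SDK. The example below uses Stripe’s Node library as one possibility; the key, amount, currency and payment method are placeholders, and a production integration would also handle errors, webhooks and 3-D Secure.

  // Authorise-then-capture ("Auth" then "Clear") sketch with the Stripe Node library.
  // The secret key, amount and payment method are placeholders.
  import Stripe from "stripe";

  const stripe = new Stripe("sk_test_placeholder");

  async function authoriseThenCapture(paymentMethodId: string): Promise<void> {
    // Authorise only: funds are held on the customer's card but not yet captured.
    const intent = await stripe.paymentIntents.create({
      amount: 1999,              // $19.99 in the smallest currency unit
      currency: "aud",
      payment_method: paymentMethodId,
      capture_method: "manual",  // hold the authorisation
      confirm: true,
    });

    // Once the order is fulfilled, capture ("clear") the authorisation.
    await stripe.paymentIntents.capture(intent.id);
  }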

 

Payment Gateway Providers

  1. Stripe
  2. PayPal
  3. Braintree (a PayPal company)
  4. Square
  5. WePay
  6. Authorize.Net
  7. 2Checkout
  8. Checkout.com
  9. Skrill
  10. WorldPay

 

GlueMPS integrates with the world’s leading payment gateways.

Monetisation

Glue Media Publishing System makes video streaming more profitable for OTT and music streaming platforms. Learn how GlueMPS supports the monetisation of audio and video content with content entitlements, paywalls and video advertising. GlueMPS can kickstart your revenue with SVOD, TVOD and AVOD, and content pricing based on geo-location.

Learn more

Glue Media Publishing System is a Platform as a Service for broadcast streaming.

Turnkey OVP & OMP solutions to get you streaming video, music or podcasts to the largest possible audience.
Video content management systems frequently lack the end-to-end features necessary for managing an online video platform’s entire publishing workflow. GlueMPS is a platform as a service providing infrastructure, software and applications to stream live and on-demand television, radio, music, podcasts, events, education, sport, news and government content.

Glue Media Publishing System cloud-based software features modules to ingest, manage, distribute and monetise content.