The television viewing experience has changed dramatically over the last few years, moving from a TV set in the living room to mobile devices with HD screens that travel wherever you go. Not only has the device that we watch TV on changed, but the dynamic of how we watch has also undergone a fundamental change. TV viewing is no longer a family activity, with multiple family members watching the same show. Rather, it’s now an individual activity, with different members of the family watching different programming at the same time, often in the same room.
For content distributors and programmers, the effects of this shift are exacerbated by the sheer volume of quality video content being produced and made widely available to consumers.
Enhancing content discovery has therefore become a necessity. Broadcasters and OTT providers have to aggressively expand the search and recommendation features of their services to help each viewer find exactly what they want to watch, when they want to watch it. There is such a massive amount of content available to the average viewer that if a provider doesn’t make it easy to find something worth watching on the device of their choice, consumers can and will quickly move to a service that does. Expanding discovery capabilities is good business for content services as well, as multiple studies have shown that successful content discovery leads to satisfied and loyal customers.
But where does a content provider start? Typically, the focus has been on collaborative filtering recommendation engines — you watched X so you must like Y, or because others who watched X also watched Y, we’ll show you Y. There is a lot of value to be gained from collaborative filtering recommendation engines, but after nearly a decade of investment and development, the maximum viewing lift from them is well understood and appears to largely have reached a plateau. The primary reason for this plateau is not the quality of the recommendation algorithms, but the quality and consistency of the metadata that powers them.
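To make the "others who watched X also watched Y" idea concrete, here is a minimal, toy sketch of item-based collaborative filtering using co-watch counts. The viewer IDs and show titles are invented for illustration; a production recommender would use far richer signals and models.

```python
from collections import defaultdict

# Toy watch histories; viewers and titles are invented for illustration.
histories = {
    "viewer_a": {"Show X", "Show Y"},
    "viewer_b": {"Show X", "Show Y", "Show Z"},
    "viewer_c": {"Show X", "Show Z"},
    "viewer_d": {"Show X", "Show Y"},
}

def co_watch_recommendations(histories, watched_title):
    """Rank other titles by how often they co-occur with watched_title."""
    counts = defaultdict(int)
    for titles in histories.values():
        if watched_title in titles:
            for other in titles - {watched_title}:
                counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(co_watch_recommendations(histories, "Show X"))  # ['Show Y', 'Show Z']
```

Even this toy version shows the plateau problem the article describes: the recommendations can only ever be as good as the behavioural data, with no understanding of what the content actually is.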
Many broadcasters and OTT providers still manage their metadata via spreadsheets. All rely on metadata from multiple third parties, which is not only often incomplete and inconsistent, but also usually differs for each distribution platform. For broadcasters and OTT providers, an advanced content metadata system is necessary not only to improve their user experience, but also to drive loyalty, differentiation and increased revenue.
Ultimately, these businesses need better-managed content catalogues; more accurate, consistent content descriptions across all their assets; a better understanding of the similarities between assets; and the ability to create next generation search and discovery experiences that are far more granular, reflect the way users actually think about content, and deliver more useful recommendations at the level of the individual.
One key is keeping metadata consistent and accurate across platforms. One of the common issues with metadata management is that content is often ingested across multiple systems, each with its own criteria for ingesting, managing and presenting metadata. Inconsistencies across these systems can result in viewers not being able to discover content that they want to watch, or becoming confused by seeing the same piece of content with completely different descriptions.
One way to solve the management issue is to merge broadcast and digital metadata workflows to eliminate data duplication and inconsistencies, with the added bonus of reducing operational costs by moving to a single system. A single, master metadata file could, for example, contain different-length synopses for experiences on different devices, and lower- and higher-resolution images for different screen types, ensuring that viewers see the correct, consistent version of the content for their circumstances, whether for linear viewing, catch-up viewing, digital or on-demand experiences.
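As a rough illustration of the single-master-record idea, the sketch below keeps every synopsis length and image resolution in one record and derives a consistent per-platform view from it. The record fields, platform names and file names are all assumptions made for the example, not a real schema.

```python
# A minimal sketch of one master metadata record holding multiple
# synopsis lengths and image resolutions; all values are invented.
master_record = {
    "title": "Example Drama",
    "synopsis": {
        "short": "A detective returns home.",
        "medium": "A detective returns to her home town to solve one last case.",
        "long": ("A retired detective returns to her home town and is drawn "
                 "into one last case that forces her to confront her past."),
    },
    "images": {"sd": "poster_640.jpg", "hd": "poster_1920.jpg"},
}

# Hypothetical platform profiles mapping each outlet to the variant it needs.
PLATFORM_PROFILES = {
    "linear_epg": {"synopsis": "short", "image": "sd"},
    "mobile_app": {"synopsis": "medium", "image": "sd"},
    "smart_tv": {"synopsis": "long", "image": "hd"},
}

def render_for_platform(record, platform):
    """Derive a consistent per-platform view from the single master record."""
    profile = PLATFORM_PROFILES[platform]
    return {
        "title": record["title"],
        "synopsis": record["synopsis"][profile["synopsis"]],
        "image": record["images"][profile["image"]],
    }

print(render_for_platform(master_record, "smart_tv")["image"])  # poster_1920.jpg
```

Because every platform view is generated from the same master record, a correction made once propagates everywhere, which is exactly the duplication-and-inconsistency problem the merged workflow is meant to eliminate.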
Metadata management is only one pillar of the necessary solution, though. Broadcasters and distributors need to break their reliance on multiple third party metadata sources, and instead start to create new, unique metadata to improve content discovery on their services. Scene-level analysis, utilizing visual identification and natural language processing of closed captions, can deliver greater insights about content assets and power next generation discovery experiences by understanding things like which characters are present, what they are doing and feeling, and what they are talking about.
Content can be more finely categorized into “micro genres” based on granular attributes like mood and plot, rather than top-level genre categorizations like drama and comedy. A more granular understanding of the connections and similarities between different pieces of content that is generated from plot analysis and micro genre categorizations can be used to create better individual recommendations and a much more personalized experience.
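One simple way to turn micro-genre tags into similarity scores is set overlap (Jaccard similarity) between each title's tag set. The titles and tags below are invented examples; real catalogues would use many more attributes and weighted scoring.

```python
# Invented micro-genre tag sets for three hypothetical titles.
micro_genres = {
    "Title A": {"slow-burn", "small-town", "redemption", "crime"},
    "Title B": {"slow-burn", "small-town", "family-secrets"},
    "Title C": {"space-opera", "ensemble", "political-intrigue"},
}

def jaccard(a, b):
    """Jaccard similarity: shared tags divided by total distinct tags."""
    return len(a & b) / len(a | b)

def most_similar(title, catalogue):
    """Return the other title whose micro-genre tags overlap the most."""
    others = {t: tags for t, tags in catalogue.items() if t != title}
    return max(others, key=lambda t: jaccard(catalogue[title], others[t]))

print(most_similar("Title A", micro_genres))  # Title B
```

Unlike the co-watch example earlier, this similarity comes from the content itself, so it can surface connections between titles that no viewer has yet watched together.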
Increasingly, artificial intelligence is also being used to streamline and strengthen the use of metadata for discovery. Metadata augmentation, underpinned by machine learning, can for example include analysing video scenes using image, facial and setting recognition in addition to closed-caption analysis. Audio recognition could also be introduced, using background noises to help identify things like car chases, or dialogue to gauge a character’s emotional state. This new intelligence opens up a world of possibilities in the content discovery process.
Right now, consumers are enjoying a golden age of video programming, but many broadcasters and OTT providers are struggling to deliver the next generation discovery experiences that viewers expect. “You are only as good as your metadata” is the refrain that is starting to be heard. In an era characterized by choice for the consumer, the pressure is on to fully utilize the engagement opportunities that the right use of metadata can deliver.
Categories: Video Search
Topics: Piksel