How Consumers Actually Perceive Video Quality and How to Measure It


At the heart of the debate on how best to improve the quality of video streams for viewers is a discussion of how to measure video quality. As Peter Drucker famously said, “If you can’t measure it, how can you improve it?”

Several approaches to measuring quality have been developed, but it is fair to ask whether quality measurement in a controlled data centre environment tells us anything about the video quality viewers actually experience when content is streamed to their devices over the internet.

ITU EVP Measures Actual Viewer Experience

The ITU Recommendation BT.2095-1, “Subjective assessment of video quality using Expert Viewing Protocol”, published in 2017, defines the Expert Viewing Protocol (EVP) for measuring perceived quality. Such an approach is useful for mapping different quality measurement techniques onto a common subjective video quality standard.

Measuring subjective viewer experience directly would require applying an approach like the ITU EVP to each and every user session – clearly impractical. So the ITU EVP itself can tell us little about how to measure the actual viewer experience at scale.

Capture Quality of Video Segments Prior to Delivery

The pragmatic option is to capture the quality of each video segment prior to delivery, track which of those segments were viewed by the user during a streaming session, and then report, for each of those sessions, the quality experience across the session timeline.

Figure: capturing streaming video quality of segments prior to delivery

This requires linking a quality measurement system with a session-based delivery mechanism and a player-based reporting tool.
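As a rough illustration of how those pieces fit together – a minimal sketch, not MediaMelon’s implementation – per-segment quality scores computed before delivery can be joined with the player’s playback log to produce a per-session quality timeline. The segment IDs, field names and data structures below are hypothetical.

```python
# Illustrative sketch only: joining pre-computed per-segment quality scores
# with a player-side playback log to build a per-session quality timeline.
# All names (segment IDs, fields) are hypothetical.

# Quality measured for each segment before delivery (e.g. a MOS-like score).
segment_quality = {
    "seg-001": 4.2,
    "seg-002": 3.1,
    "seg-003": 4.5,
    "seg-004": 2.8,
}

# Reported by the player: which segments were actually watched, and when.
playback_log = [
    {"session": "user-42", "segment": "seg-001", "start_s": 0},
    {"session": "user-42", "segment": "seg-002", "start_s": 6},
    {"session": "user-42", "segment": "seg-004", "start_s": 12},
]

def session_quality_timeline(log, quality):
    """Return (playback time, quality score) pairs for one session."""
    return [(entry["start_s"], quality[entry["segment"]]) for entry in log]

timeline = session_quality_timeline(playback_log, segment_quality)
print(timeline)  # [(0, 4.2), (6, 3.1), (12, 2.8)]
```

The point of the split is that the expensive quality analysis happens once per segment before delivery, while the lightweight playback log is the only thing collected per session.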

iMOS Video Analysis – Quality Measurement Tool

This is exactly what MediaMelon’s iMOS video analysis, SmartSight QBR and SmartSight QoE achieve. Using these tools together provides a unique insight into the video quality experienced by each user in each streaming session.

Variations in Streaming Video Quality Matter More than Absolute Quality

MediaMelon’s iMOS quality measurement tool consistently and reliably reflects the video quality users experience, but absolute quality is not the whole story: how quality varies across a session also matters.

A study by David Hands and Kennedy Cheng entitled “Subjective Responses to Constant and Variable Quality Video” showed that viewers find variations in quality less acceptable than constant quality at the same average level.

Using an approach very similar to the ITU Expert Viewing Protocol, this study showed the audience a video clip played several times at different constant quality levels, and then the same clip played with quality varying during playback. Frame rate was used as the proxy for quality in each session; in the varying-quality session the frame rate was stepped through 15, 5, 20, 1 and 10 fps, giving an average frame rate of 10 fps.
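For reference, if the five frame-rate steps are treated as equal-duration segments (an assumption on our part, not stated in the study), the average comes out at roughly 10 fps while the swing around that average is large:

```python
# Frame-rate steps used in the variable-quality clip (Hands & Cheng study).
# Equal-duration steps are assumed here; the study text does not specify this.
rates = [15, 5, 20, 1, 10]

mean = sum(rates) / len(rates)
variance = sum((r - mean) ** 2 for r in rates) / len(rates)

print(f"mean frame rate: {mean:.1f} fps")              # ~10 fps
print(f"std deviation:   {variance ** 0.5:.1f} fps")   # ~6.8 fps swing around the mean
```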

Figure: iMOS video quality measurement analysis graph

Results of the MOS Score vs. Frame Rate (fps) Study

The results provided in the chart showed that:

  • The frame rate proxy for constant quality produced a predictable improvement in MOS score as the frame rate was increased.
  • Even though the average frame rate of the variable quality clip was 10 fps, the perceived quality was less than that of a constant quality clip at 5 fps.

The lesson we draw from the study is that reducing variability in content quality has a greater impact on user experience than focusing purely on improving average quality.

MediaMelon’s SmartSight QBR solution does exactly that, focusing on reducing the troughs in MOS score throughout a session, typically delivering an 80% reduction in quality fluctuations.
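One simple way to see why trough reduction matters – a hypothetical scoring sketch, not how SmartSight QBR actually works – is to score a session on both its average quality and the depth of its dips below that average:

```python
# Hypothetical session score penalising quality troughs, for illustration only.
# It shows why two sessions with identical average MOS can feel very different.

def session_score(mos_timeline, trough_weight=0.5):
    """Average MOS minus a penalty for dips below the session average."""
    avg = sum(mos_timeline) / len(mos_timeline)
    trough_penalty = sum(max(0.0, avg - m) for m in mos_timeline) / len(mos_timeline)
    return avg - trough_weight * trough_penalty

steady   = [3.5, 3.5, 3.5, 3.5]   # constant quality
variable = [4.5, 2.0, 4.5, 3.0]   # same average (3.5), with deep troughs

print(session_score(steady))    # 3.5  – no penalty
print(session_score(variable))  # 3.25 – lower, despite the identical average
```

Under any metric of this kind, smoothing out the troughs raises the session score even when the average bitrate or MOS is unchanged, which is consistent with the study’s finding.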
