Audio samples are rendered and in increasing presentation time order #205

Open
yanj-github opened this issue Nov 7, 2024 · 4 comments
@yanj-github
Contributor

After further analysis, the Observation Framework accurately reported failures for the starting and ending audio samples in the following observation.

[OF] When examined as a continuous sequence of timestamped audio samples of the audio stream, the 20ms test audio samples shall be a complete rendering of the source audio track and are rendered in increasing presentation time order.

e.g. the failed segments are:
Segment(0.0ms) Segment(20.0ms) - at the beginning
Segment(29920.0ms) Segment(29940.0ms) Segment(29960.0ms) Segment(29980.0ms) - at the end

To assist with debugging, you can use audio analysis tools to measure the duration of the presented audio. You'll notice that it is shorter than the expected duration of 30 seconds.
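
For example, here is a minimal sketch using Python's built-in wave module to check the overall duration of a capture (the file name is hypothetical, and a GUI tool such as Audacity works equally well):

```python
import wave

# Hypothetical path to a capture of the presented audio.
RECORDING = "recorded_audio.wav"

with wave.open(RECORDING, "rb") as wav:
    duration_s = wav.getnframes() / wav.getframerate()

print(f"Measured duration: {duration_s:.3f}s (expected: 30.000s)")
```

Note this measures the whole file; in practice you would trim to the detected start and end of the test audio, as in the measurements below.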

I don’t have the right knowledge in this area. Could someone please look into this further?
Possible causes might include:

  • Issues in the test implementation
  • Device-related problems
  • Change from TS audio to WAVE playback
  • Tolerance at either end being set to 0, which may be too strict
@yanj-github
Contributor Author

yanj-github commented Nov 7, 2024

This is how I debugged this issue; I hope it helps.
Debugging Process:

  • Measure the start time of the audio: Audio started at 9.627s.
  • Measure the end time of the audio: Audio ended at 39.504s.
  • Expected playback duration: 30 seconds.
  • Detected playback duration: 39.504s - 9.627s = 29.877s.
  • Discrepancy: The actual playback duration is 123 milliseconds shorter than expected (29.877s vs. 30s).
  • Audio measurement frequency: Audio is measured every 20 milliseconds.
  • Mismatch analysis: The discrepancy of 123ms corresponds to approximately 6 missing audio segments (123ms ÷ 20ms ≈ 6); see the sketch below.
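
The same arithmetic as a minimal sketch, using the measured values above:

```python
SEGMENT_MS = 20.0   # audio is measured every 20 ms
EXPECTED_S = 30.0   # expected playback duration

start_s = 9.627     # measured start of audio
end_s = 39.504      # measured end of audio

detected_s = end_s - start_s                        # 29.877 s
missing_ms = (EXPECTED_S - detected_s) * 1000.0     # ~123 ms
missing_segments = round(missing_ms / SEGMENT_MS)   # ~6 segments

print(f"Detected: {detected_s:.3f}s, short by {missing_ms:.0f}ms "
      f"~= {missing_segments} x {SEGMENT_MS:.0f}ms segments")
```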


@jpiesing

jpiesing commented Nov 26, 2024

In the 2024-11-26 results shared by @louaybassbouss, there are:

  • 124 instances of "FAIL: Audio segments failed at the following timestamps: 0ms 20ms"
  • 52 instances of "FAIL: Audio segments failed at the following timestamps: 0ms 20ms 40ms"
  • 11 instances of "FAIL: Audio segments failed at the following timestamps: 0ms 20ms 40ms 60ms"
  • 9 instances of "FAIL: Audio segments failed at the following timestamps: 0ms 20ms 40ms 60ms 80ms"
  • 4 instances of "FAIL: Audio segments failed at the following timestamps: 0ms 20ms 40ms 60ms 80ms 100ms"
  • In random access, 6 instances of "FAIL: Audio segments failed at the following timestamps: 11000ms 11020ms"

Additionally there are 20 failures where the entire failure message is "FAIL: Audio segments failed at the following timestamps: 0ms 20ms Found 2 segments are missing. Start segment number tolerance is 0. ", i.e. allowing 2 missing segments would make these a pass.
There are a further 15 failures where the entire message is "FAIL: Audio segments failed at the following timestamps: 0ms 20ms 40ms Found 3 segments are missing. Start segment number tolerance is 0. ".
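
For reference, a sketch of how such counts can be tallied, assuming the results have been exported to a plain-text file (the file name and export format are assumptions, not something the Observation Framework is known to produce in this exact form):

```python
import re
from collections import Counter

RESULTS = "results.txt"  # hypothetical plain-text export of all results

pattern = re.compile(r"FAIL: Audio segments failed at the following timestamps:.*")

counts = Counter()
with open(RESULTS, encoding="utf-8") as f:
    for line in f:
        match = pattern.search(line)
        if match:
            counts[match.group(0).strip()] += 1

for message, n in counts.most_common():
    print(f"{n:4d}  {message}")
```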

Changing the start_segment_num_tolerance in config.ini to 3 (or is it 60?) would convert 35 failures into passes and simplify a further 124 + 6 - 11 - 35 = 84 failures which also fail for other reasons.
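
For illustration, the change might look like this in config.ini (the section name is an assumption; only the start_segment_num_tolerance key is named above, and 3 segments correspond to 60 ms at 20 ms per segment, which may explain the 3-vs-60 question):

```ini
[TOLERANCES]
; assumed section name
; allow up to 3 missing segments (= 60 ms) at the start
start_segment_num_tolerance = 3
```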

There are 15 rows of observations "[OF] When examined as a continuous sequence of timestamped audio samples of the audio stream, the 20ms test audio samples shall be a complete rendering of the source audio track and are rendered in increasing presentation time order." which have in general been run on 4 TVs (NOT RUN on TVs 5 and 6), so approximately 180 failures.
Converting 35 of these into passes and simplifying a further 84 failures is significant.

@wschidol

Regarding the "discrepancy of 123ms": it is not at all surprising to find implementations that don't play all samples. There are certain SoCs where we know that they cut off samples at the end of the stream, and missing ~120 ms sounds plausible as a consequence. (Technically, the issue is that the decoder is often followed by other processing stages such as limiters, resamplers or mixers, all of which have buffers. Unless the implementation explicitly flushes these buffers, the data in them is discarded.)
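
To illustrate the mechanism, here is a toy model (not any real SoC API; the stage latencies are chosen purely for illustration): each post-decoder stage holds some samples back in its buffer, and unless those buffers are explicitly flushed at end-of-stream, the tail of the track never reaches the output.

```python
class Stage:
    """Toy post-decoder stage (limiter, resampler, mixer, ...)
    that always keeps `latency` samples buffered."""

    def __init__(self, latency: int):
        self.latency = latency
        self.buffer: list[float] = []

    def process(self, samples: list[float]) -> list[float]:
        self.buffer.extend(samples)
        out = self.buffer[:-self.latency]
        self.buffer = self.buffer[-self.latency:]
        return out

    def flush(self) -> list[float]:
        out, self.buffer = self.buffer, []
        return out


RATE = 48_000
decoded = [0.0] * (RATE * 30)            # 30 s of decoded samples
pipeline = [Stage(4800), Stage(1104)]    # illustrative latencies

out = decoded
for stage in pipeline:
    out = stage.process(out)             # no flush() at end-of-stream

lost_ms = (len(decoded) - len(out)) * 1000 / RATE
print(f"Lost without flushing: {lost_ms:.0f} ms")  # 123 ms
```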

@jpiesing

When these tests are used in an HbbTV context, the tolerances are explicitly required to be zero :)

When content that has been delivered through MSE is played from the beginning to the end, the first and last frame of video shall be visible and the audio corresponding to the period from the start of the first video frame to the end of the last video frame shall be audible.

Outside an HbbTV context, the definition of the 'end of stream' algorithm makes it clear that the duration is set based on the content in the SourceBuffer with the highest media time. If the end of the last appended video frame is at 30s and the end of the last appended audio frame is at 30s, then the duration is 30s.
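
As a toy illustration of that rule (just the max-over-SourceBuffers logic described above, not the actual algorithm text):

```python
# End time (s) of the last appended coded frame per SourceBuffer.
buffered_end = {"video": 30.000, "audio": 30.000}

# Duration follows the SourceBuffer with the highest media time,
# regardless of how much of the audio tail is actually rendered.
duration = max(buffered_end.values())
print(f"duration = {duration}s")  # 30.0s
```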

