Changeset 291216 in WebKit

- Timestamp: Mar 12, 2022, 11:23:12 PM
- Location: trunk
- Files: 7 added, 8 edited
- LayoutTests/ChangeLog (modified) (1 diff)
- LayoutTests/TestExpectations (modified) (2 diffs)
- LayoutTests/media/content/test-vp8-hiddenframes.png (added)
- LayoutTests/media/content/test-vp8-hiddenframes.webm (added)
- LayoutTests/media/media-source/media-source-vp8-hiddenframes-expected.html (added)
- LayoutTests/media/media-source/media-source-vp8-hiddenframes.html (added)
- LayoutTests/media/media-vp8-hiddenframes.html (added)
- LayoutTests/media/media-vp8-hiddenframes-expected.html (added)
- LayoutTests/media/utilities.js (added)
- Source/ThirdParty/libwebrtc/ChangeLog (modified) (1 diff)
- Source/ThirdParty/libwebrtc/Source/webrtc/sdk/WebKit/WebKitDecoderReceiver.cpp (modified) (1 diff)
- Source/WebCore/ChangeLog (modified) (1 diff)
- Source/WebCore/platform/MediaSample.h (modified) (2 diffs)
- Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.cpp (modified) (7 diffs)
- Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.h (modified) (8 diffs)
trunk/LayoutTests/ChangeLog
(r291198 → r291216)

+ 2022-03-12  Jean-Yves Avenard  <jya@apple.com>
+
+         Safari produces scrambled output for some webm videos with vp8 codec.
+         https://bugs.webkit.org/show_bug.cgi?id=236754
+
+         Reviewed by Eric Carlson.
+
+         VP8 files were generated such that alternative reference frames were used:
+         $ ffmpeg -i dragon.webm -c:v libvpx -vf scale=320:-1 -auto-alt-ref 1 -arnr-maxframes 5 -arnr-strength 3 -pass 1 test-vp8-hiddenframes.webm
+         $ ffmpeg -i dragon.webm -c:v libvpx -vf scale=320:-1 -auto-alt-ref 1 -arnr-maxframes 5 -arnr-strength 3 -pass 2 test-vp8-hiddenframes.webm
+
+         The command used to extract the last frame in png format was:
+         $ ffmpeg -sseof -3 -i test-vp8-hiddenframes.webm -pred mixed -pix_fmt rgb24 -sws_flags +accurate_rnd+full_chroma_int -update 1 -q:v 1 test-vp8-hiddenframes.png
+
+         * TestExpectations:
+         * media/content/test-vp8-hiddenframes.png: Added.
+         * media/content/test-vp8-hiddenframes.webm: Added.
+         * media/media-source/media-source-vp8-hiddenframes-expected.html: Added.
+         * media/media-source/media-source-vp8-hiddenframes.html: Added.
+         * media/media-vp8-hiddenframes-expected.html: Added.
+         * media/media-vp8-hiddenframes.html: Added.
+         * media/utilities.js: Added.
+         (once):
+         (fetchWithXHR):
+         (loadSegment):
+
  2022-03-11  Nikolaos Mouchtaris  <nmouchtaris@apple.com>
trunk/LayoutTests/TestExpectations
(r291036 → r291216)

      fast/text/mac [ Skip ]
      scrollingcoordinator [ Skip ]
+     media/media-vp8-hiddenframes.html [ Skip ] # Requires MediaFormatReader and VP8 decoder
      media/ios [ Skip ]
      media/modern-media-controls/overflow-support [ Skip ]
      …
      webkit.org/b/221973 media/media-webm-no-duration.html [ Skip ]
      webkit.org/b/222493 media/media-source/media-source-webm-vp8-malformed-header.html [ Skip ]
+     media/media-source/media-source-vp8-hiddenframes.html [ Skip ] # Requires VP8 decoder

      # WebXR - Missing modules.
trunk/Source/ThirdParty/libwebrtc/ChangeLog
(r291210 → r291216)

+ 2022-03-12  Jean-Yves Avenard  <jya@apple.com>
+
+         Safari produces scrambled output for some webm videos with vp8 codec.
+         https://bugs.webkit.org/show_bug.cgi?id=236754
+         rdar://80869041
+
+         Reviewed by Eric Carlson.
+
+         * Source/webrtc/sdk/WebKit/WebKitDecoderReceiver.cpp:
+         (webrtc::WebKitDecoderReceiver::decoderFailed): Tell CoreMedia if a frame
+         was silently dropped by the decoder.
+
  2022-03-12  Tim Horton  <timothy_horton@apple.com>
trunk/Source/ThirdParty/libwebrtc/Source/webrtc/sdk/WebKit/WebKitDecoderReceiver.cpp
(r281868 → r291216)

          vtError = kVTVideoDecoderBadDataErr;

-     VTDecoderSessionEmitDecodedFrame(m_session, m_currentFrame, vtError, 0, nullptr);
+     VTDecoderSessionEmitDecodedFrame(m_session, m_currentFrame, vtError, vtError ? 0 : kVTDecodeInfo_FrameDropped, nullptr);
      m_currentFrame = nullptr;
trunk/Source/WebCore/ChangeLog
(r291215 → r291216)

+ 2022-03-12  Jean-Yves Avenard  <jya@apple.com>
+
+         Safari produces scrambled output for some webm videos with vp8 codec.
+         https://bugs.webkit.org/show_bug.cgi?id=236754
+         rdar://80869041
+
+         Reviewed by Eric Carlson.
+
+         The MediaFormatReader plugin and the MSE SourceBufferPrivate use a
+         SampleMap to store all the media samples' timing information: one map
+         sorted in DTS order and the other in PTS order.
+         Those SampleMaps use the sample's presentation time as a unique key.
+         The VP8 codec can define hidden samples that are to be fed to the decoder
+         but will not decode into an actual image. Those samples are typically
+         packed together in the webm container in two or more consecutive blocks
+         with the same presentation time (similar behaviour can also be found in
+         webm files where multiple frames may be stored with the same presentation
+         time).
+         When stored in the SampleMap, only the latest sample added would be kept,
+         overwriting the previous one with the same time.
+         To get around this issue, we pack all samples with the same presentation
+         time into a single MediaSamplesBlock so that they can be stored together
+         in the map without any losses.
+         Upon decoding, all those sub-samples will be retrieved and fed to the
+         decoder.
+
+         The CoreMedia MediaFormatReader backend however has a bug where it will
+         enter an infinite loop if we return successive frames with the same
+         timestamp, which causes memory exhaustion and a crash.
+         To get around this, we make the grouped samples appear as discrete,
+         giving each hidden sample a duration of 1us, followed by the visible
+         frame (whose duration is shortened by 1us).
+
+         Tests: media/media-source/media-source-vp8-hiddenframes.html
+                media/media-vp8-hiddenframes.html
+
+         The new tests had to be disabled due to bug 236755. We currently have no
+         way to guarantee which frame is displayed after either a seek operation
+         or reaching the end of playback.
+
+         * platform/MediaSample.h:
+         (WebCore::MediaSamplesBlock::append):
+         (WebCore::MediaSamplesBlock::clear):
+         * platform/graphics/cocoa/SourceBufferParserWebM.cpp:
+         (WebCore::WebMParser::parse):
+         (WebCore::WebMParser::OnFrame):
+         (WebCore::WebMParser::flushPendingVideoSamples):
+         (WebCore::WebMParser::VideoTrackData::resetCompletedFramesState):
+         (WebCore::WebMParser::VideoTrackData::consumeFrameData):
+         (WebCore::WebMParser::VideoTrackData::processPendingMediaSamples):
+         (WebCore::WebMParser::VideoTrackData::flushPendingSamples):
+         (WebCore::WebMParser::AudioTrackData::consumeFrameData):
+         * platform/graphics/cocoa/SourceBufferParserWebM.h:
+         (WebCore::WebMParser::TrackData::consumeFrameData):
+         (WebCore::WebMParser::TrackData::resetCompletedFramesState):
+         (WebCore::WebMParser::TrackData::drainPendingSamples):
+
  2022-03-12  Alan Bujtas  <zalan@apple.com>
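The 1us re-timing described in the ChangeLog above can be sketched in isolation. The following is a hypothetical, simplified model of the drain loop (integer microseconds instead of WTF::MediaTime, std::deque instead of WTF::Deque, and only the timing fields of a sample), not the shipping WebCore code:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <vector>

// Simplified stand-in for a MediaSampleItem's timing fields; times in microseconds.
struct SampleTiming {
    int64_t presentationTime { 0 };
    int64_t decodeTime { 0 };
    int64_t duration { 0 }; // filled in when the next sample arrives
};

// Drain the queued samples once a sample with a new presentation time arrives.
// Hidden (zero-duration) samples are given a 1us duration, and every sample that
// follows them in the block is shifted by the same offset so that no two emitted
// samples share a timestamp.
std::vector<SampleTiming> processPending(std::deque<SampleTiming>& pending, int64_t presentationTime)
{
    std::vector<SampleTiming> processed;
    if (pending.empty())
        return processed;
    auto& last = pending.back();
    // The end time of a frame is the start time of the next one.
    last.duration = presentationTime - last.presentationTime;
    if (presentationTime == last.presentationTime)
        return processed; // Same timestamp: keep accumulating hidden frames.

    int64_t timeOffset = 0;
    int64_t durationOffset = 0;
    while (!pending.empty()) {
        auto sample = pending.front();
        pending.pop_front();
        if (timeOffset) {
            sample.presentationTime += timeOffset;
            sample.decodeTime += timeOffset;
            auto usable = std::min(durationOffset, sample.duration);
            sample.duration -= usable;
            durationOffset -= usable;
        }
        if (!sample.duration) {
            sample.duration = 1; // 1us stand-in duration for a hidden frame.
            timeOffset += 1;
            durationOffset += 1;
        }
        processed.push_back(sample);
    }
    return processed;
}
```

With three frames queued at the same PTS, the drain emits them at 0, 1 and 2 microseconds, and the visible frame absorbs the 2us shift from its own duration.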
trunk/Source/WebCore/platform/MediaSample.h
(r291033 → r291216)

          MediaSample::SampleFlags flags;
      };
+     using SamplesVector = Vector<MediaSampleItem>;

      void setInfo(RefPtr<const TrackInfo>&& info) { m_info = WTFMove(info); }
      …
      TrackInfo::TrackType type() const { return m_info ? m_info->type() : TrackInfo::TrackType::Unknown; }
      void append(MediaSampleItem&& item) { m_samples.append(WTFMove(item)); }
-     void append(MediaSamplesBlock&& block) { m_samples.appendVector(std::exchange(block.m_samples, { })); }
+     void append(MediaSamplesBlock&& block) { append(std::exchange(block.m_samples, { })); }
+     void append(SamplesVector&& samples) { m_samples.appendVector(WTFMove(samples)); }
      size_t size() const { return m_samples.size(); };
      bool isEmpty() const { return m_samples.isEmpty(); }
      void clear() { m_samples.clear(); }
-     using SamplesVector = Vector<MediaSampleItem>;
      SamplesVector takeSamples() { return std::exchange(m_samples, { }); }
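The append() reshuffle in this hunk can be illustrated with a hypothetical stand-in (std::vector replacing WTF::Vector, no TrackInfo or sample payloads): the MediaSamplesBlock&& overload now funnels through the new SamplesVector&& overload instead of touching m_samples directly, so callers can also hand over a bare vector of items.

```cpp
#include <cstdint>
#include <iterator>
#include <utility>
#include <vector>

// Hypothetical, reduced model of WebCore::MediaSamplesBlock, keeping only the
// append() overloads touched by this change.
struct MediaSampleItem { int64_t presentationTime { 0 }; };

class MediaSamplesBlock {
public:
    using SamplesVector = std::vector<MediaSampleItem>;

    void append(MediaSampleItem&& item) { m_samples.push_back(std::move(item)); }
    // Moving another block in empties it and delegates to the vector overload.
    void append(MediaSamplesBlock&& block) { append(std::exchange(block.m_samples, { })); }
    void append(SamplesVector&& samples)
    {
        m_samples.insert(m_samples.end(),
            std::make_move_iterator(samples.begin()), std::make_move_iterator(samples.end()));
    }
    size_t size() const { return m_samples.size(); }

private:
    SamplesVector m_samples;
};
```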
trunk/Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.cpp
(r291033 → r291216)

      if (m_status.ok() || m_status.code == Status::kEndOfFile || m_status.code == Status::kWouldBlock) {
          m_reader->reclaimSegments();
+
+         // We always keep one sample queued in order to calculate the video sample's time, return it now.
+         flushPendingVideoSamples();
          return 0;
      }
      …
      }

-     return trackData->consumeFrameData(*reader, metadata, bytesRemaining, MediaTime(block->timecode + m_currentTimecode, m_timescale), block->num_frames);
+     return trackData->consumeFrameData(*reader, metadata, bytesRemaining, MediaTime(block->timecode + m_currentTimecode, m_timescale));
      …
      }

+ void WebMParser::flushPendingVideoSamples()
+ {
+     for (auto& track : m_tracks) {
+         if (track->trackType() == TrackInfo::TrackType::Video)
+             downcast<WebMParser::VideoTrackData>(track.get()).flushPendingSamples();
+     }
+ }
+
  void WebMParser::VideoTrackData::resetCompletedFramesState()
  {
-     m_keyFrames.clear();
+     ASSERT(!m_pendingMediaSamples.size());
      TrackData::resetCompletedFramesState();
  }

- webm::Status WebMParser::VideoTrackData::consumeFrameData(webm::Reader& reader, const FrameMetadata& metadata, uint64_t* bytesRemaining, const MediaTime& presentationTime, int)
+ webm::Status WebMParser::VideoTrackData::consumeFrameData(webm::Reader& reader, const FrameMetadata& metadata, uint64_t* bytesRemaining, const MediaTime& presentationTime)
  {
  #if ENABLE(VP9)
      …
      }

-     if (!m_completeMediaSamples.info())
-         m_completeMediaSamples.setInfo(formatDescription());
-     else if (formatDescription() && *formatDescription() != *m_completeMediaSamples.info())
-         drainPendingSamples();
-
-     auto track = this->track();
-
-     uint64_t duration = 0;
-     if (track.default_duration.is_present())
-         duration = track.default_duration.value() * presentationTime.timeScale() / k_us_in_seconds;
-
-     m_completeMediaSamples.append({ presentationTime, presentationTime, MediaTime(duration, presentationTime.timeScale()), WTFMove(m_completeFrameData), isKey ? MediaSample::SampleFlags::IsSync : MediaSample::SampleFlags::None });
-
-     drainPendingSamples();
+     processPendingMediaSamples(presentationTime);
+
+     m_pendingMediaSamples.append({ presentationTime, presentationTime, MediaTime::indefiniteTime(), WTFMove(m_completeFrameData), isKey ? MediaSample::SampleFlags::IsSync : MediaSample::SampleFlags::None });

      ASSERT(!*bytesRemaining);
      …
  }

+ void WebMParser::VideoTrackData::processPendingMediaSamples(const MediaTime& presentationTime)
+ {
+     // The WebM container doesn't contain information about duration; the end time of a frame is the start time of the next.
+     // Some frames however may have a duration of 0, which typically indicates that they should be decoded but not displayed.
+     // We group all the samples with the same presentation timestamp within the same final MediaSamplesBlock.
+
+     if (!m_pendingMediaSamples.size())
+         return;
+     auto& lastSample = m_pendingMediaSamples.last();
+     lastSample.duration = presentationTime - lastSample.presentationTime;
+     if (presentationTime == lastSample.presentationTime)
+         return;
+
+     MediaTime timeOffset;
+     MediaTime durationOffset;
+     while (m_pendingMediaSamples.size()) {
+         auto sample = m_pendingMediaSamples.takeFirst();
+         if (timeOffset) {
+             sample.presentationTime += timeOffset;
+             sample.decodeTime += timeOffset;
+             auto usableOffset = std::min(durationOffset, sample.duration);
+             sample.duration -= usableOffset;
+             durationOffset -= usableOffset;
+         }
+         // The MediaFormatReader is unable to deal with samples having a duration of 0.
+         // We instead set those samples to have a 1us duration and shift the presentation/decode time
+         // of the following samples in the block by the same offset.
+         if (!sample.duration) {
+             sample.duration = MediaTime(1, k_us_in_seconds);
+             timeOffset += sample.duration;
+             durationOffset += sample.duration;
+         }
+         m_processedMediaSamples.append(WTFMove(sample));
+     }
+     m_lastDuration = m_processedMediaSamples.last().duration;
+     m_lastPresentationTime = presentationTime;
+     if (!m_processedMediaSamples.info())
+         m_processedMediaSamples.setInfo(formatDescription());
+     drainPendingSamples();
+ }
+
+ void WebMParser::VideoTrackData::flushPendingSamples()
+ {
+     // We haven't been able to calculate the duration of the last sample as none will follow.
+     // We set its duration to the track's default duration or, if that is not known, the duration of the last sample processed.
+     if (!m_pendingMediaSamples.size())
+         return;
+     ASSERT(m_lastPresentationTime);
+     auto track = this->track();
+
+     MediaTime duration;
+     if (track.default_duration.is_present())
+         duration = MediaTime(track.default_duration.value() * m_lastPresentationTime->timeScale() / k_us_in_seconds, m_lastPresentationTime->timeScale());
+     else if (m_lastDuration)
+         duration = *m_lastDuration;
+     processPendingMediaSamples(*m_lastPresentationTime + duration);
+     m_lastPresentationTime.reset();
+     m_lastDuration.reset();
+ }
+
  void WebMParser::AudioTrackData::resetCompletedFramesState()
  {
      …
  }

- webm::Status WebMParser::AudioTrackData::consumeFrameData(webm::Reader& reader, const FrameMetadata& metadata, uint64_t* bytesRemaining, const MediaTime& presentationTime, int)
+ webm::Status WebMParser::AudioTrackData::consumeFrameData(webm::Reader& reader, const FrameMetadata& metadata, uint64_t* bytesRemaining, const MediaTime& presentationTime)
  {
      auto status = readFrameData(reader, metadata, bytesRemaining);
      …
      }

-     if (!m_completeMediaSamples.info())
-         m_completeMediaSamples.setInfo(formatDescription());
-     else if (formatDescription() && *formatDescription() != *m_completeMediaSamples.info())
+     if (!m_processedMediaSamples.info())
+         m_processedMediaSamples.setInfo(formatDescription());
+     else if (formatDescription() && *formatDescription() != *m_processedMediaSamples.info())
          drainPendingSamples();

-     m_completeMediaSamples.append({ presentationTime, MediaTime::invalidTime(), m_packetDuration, WTFMove(m_completeFrameData), MediaSample::SampleFlags::IsSync });
+     m_processedMediaSamples.append({ presentationTime, MediaTime::invalidTime(), m_packetDuration, WTFMove(m_completeFrameData), MediaSample::SampleFlags::IsSync });

      drainPendingSamples();
trunk/Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.h
(r291033 → r291216)

      #include <webm/status.h>
      #include <webm/vp9_header_parser.h>
+     #include <wtf/Deque.h>
      #include <wtf/MediaTime.h>
      #include <wtf/UniqueRef.h>
      …
      WebMParser& parser() const { return m_parser; }

-     virtual webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&, int)
+     virtual webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&)
      {
          ASSERT_NOT_REACHED();
      …
      {
          m_completeBlockBuffer = nullptr;
-         m_completeMediaSamples = { };
+         m_processedMediaSamples = { };
      }
      …
      void drainPendingSamples()
      {
-         if (!m_completeMediaSamples.size())
+         if (!m_processedMediaSamples.size())
              return;
-         m_parser.provideMediaData(WTFMove(m_completeMediaSamples));
+         m_parser.provideMediaData(WTFMove(m_processedMediaSamples));
          resetCompletedFramesState();
      }
      …
      RefPtr<SharedBuffer> contiguousCompleteBlockBuffer(size_t offset, size_t length) const;
      webm::Status readFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t* bytesRemaining);
-     MediaSamplesBlock m_completeMediaSamples;
+     MediaSamplesBlock m_processedMediaSamples;
      bool m_useByteRange { false };
      MediaSamplesBlock::MediaSampleDataType m_completeFrameData;
      …
      }

+     void flushPendingSamples();
+
  private:
      const char* logClassName() const { return "VideoTrackData"; }
-     webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&, int) final;
+     webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&) final;
      void resetCompletedFramesState() final;
+     void processPendingMediaSamples(const MediaTime&);
+     WTF::Deque<MediaSamplesBlock::MediaSampleItem> m_pendingMediaSamples;
+     std::optional<MediaTime> m_lastDuration;
+     std::optional<MediaTime> m_lastPresentationTime;

  #if ENABLE(VP9)
      vp9_parser::Vp9HeaderParser m_headerParser;
-     Vector<bool> m_keyFrames;
  #endif
  };
      …
  private:
-     webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&, int) final;
+     webm::Status consumeFrameData(webm::Reader&, const webm::FrameMetadata&, uint64_t*, const MediaTime&) final;
      void resetCompletedFramesState() final;
      const char* logClassName() const { return "AudioTrackData"; }
      …
      static bool isSupportedVideoCodec(StringView);
      static bool isSupportedAudioCodec(StringView);
+     void flushPendingVideoSamples();

      // webm::Callback