Changeset 287249 in webkit

- Timestamp: Dec 19, 2021, 4:03:00 PM
- Location: trunk/Source/WebCore
- Files: 3 edited
  - ChangeLog (modified) (1 diff)
  - platform/graphics/cocoa/SourceBufferParserWebM.cpp (modified) (5 diffs)
  - platform/graphics/cocoa/SourceBufferParserWebM.h (modified) (1 diff)
trunk/Source/WebCore/ChangeLog
r287248 → r287249

2021-12-19  Jean-Yves Avenard  <jya@apple.com>

        Don't pack audio samples with discontinuity together
        https://bugs.webkit.org/show_bug.cgi?id=234458
        rdar://86659914

        Reviewed by Eric Carlson.

        Some WebM content may have a data gap between frames. Normally audio
        frames are packed into 2-second blocks. When we pack samples with
        discontinuities together, those discontinuities all accumulate at the
        2-second boundary, which makes them much more audible.
        The CMSampleBufferCreateReady API should allow us to pack samples with
        discontinuities, since we can pass a vector of CMSampleTimingInfo with
        the exact timing information for every packet. However, that data
        appears to be ignored, and the discontinuity is still heard at the
        2-second boundary.
        So we no longer pack samples across a discontinuity; the frame
        timestamps are more accurate and no audible artifacts are heard on
        small gaps.

        Manually tested and verified. This works around an issue in CoreMedia
        that inserts very audible artifacts when there is a gap between
        samples.

        * platform/graphics/cocoa/SourceBufferParserWebM.cpp:
        (WebCore::SourceBufferParserWebM::AudioTrackData::resetCompleted):
        (WebCore::SourceBufferParserWebM::AudioTrackData::consumeFrameData):
        (WebCore::SourceBufferParserWebM::AudioTrackData::createSampleBuffer):
        * platform/graphics/cocoa/SourceBufferParserWebM.h:

2021-12-19  Alan Bujtas  <zalan@apple.com>
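The core of the change is a contiguity check: a packet is discontinuous when the previous packet's end time (presentationTimeStamp + duration) does not equal the incoming presentation time. The following is a minimal standalone sketch of that comparison, using a hypothetical rational `Time` struct in place of CMTime; the names and the shared-timescale simplification are illustrative, not WebKit's actual types.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical rational timestamp standing in for CMTime (value/timescale).
struct Time {
    int64_t value;
    int32_t timescale;
};

// Mirrors the fields of CMSampleTimingInfo that the patch relies on.
struct PacketTiming {
    Time duration;
    Time presentationTimeStamp;
};

// Returns true when the incoming packet does not start exactly where the
// previous one ended, i.e. lastPTS + lastDuration != nextPTS.
// The real code uses PAL::CMTimeAdd/PAL::CMTimeCompare, which also handle
// mixed timescales; this sketch assumes one shared timescale.
bool isDiscontinuity(const std::vector<PacketTiming>& timings, Time next)
{
    if (timings.empty())
        return false;
    const auto& last = timings.back();
    int64_t expectedStart = last.presentationTimeStamp.value + last.duration.value;
    return expectedStart != next.value;
}
```

In the patched `consumeFrameData`, a true result triggers `createSampleBuffer()` followed by `reset()`, so the gap becomes a boundary between sample buffers instead of being absorbed into one.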
trunk/Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.cpp
r287232 → r287249

 {
     mNumFramesInCompleteBlock = 0;
-    m_packetDescriptions.clear();
+    m_packetSizes.clear();
+    m_packetTimings.clear();
     m_currentPacketByteOffset = 0;
     TrackData::resetCompleted();
…
 webm::Status SourceBufferParserWebM::AudioTrackData::consumeFrameData(webm::Reader& reader, const FrameMetadata& metadata, uint64_t* bytesRemaining, const CMTime& presentationTime, int sampleCount)
 {
+    if (m_packetTimings.size()) {
+        auto& lastTiming = m_packetTimings.last();
+        if (PAL::CMTimeCompare(PAL::CMTimeAdd(lastTiming.duration, lastTiming.presentationTimeStamp), presentationTime)) {
+            // Discontinuity encountered, emit the previously demuxed samples.
+            createSampleBuffer(metadata.position);
+            reset();
+        }
+    }
+
     auto status = readFrameData(reader, metadata, bytesRemaining);
     if (!status.completed_ok())
…
         mMaxBlockBufferCapacity = mNumFramesInCompleteBlock;

-    if (m_packetDescriptions.isEmpty())
+    if (m_packetSizes.isEmpty())
         m_samplePresentationTime = presentationTime;
…
     }

-    m_packetDescriptions.append({ static_cast<int64_t>(m_currentPacketByteOffset), 0, static_cast<UInt32>(*m_completePacketSize) });
+    m_packetSizes.append(*m_completePacketSize);
+    m_packetTimings.append({ m_packetDuration, presentationTime, PAL::kCMTimeInvalid });
     m_currentPacketByteOffset += *m_completePacketSize;
     m_completePacketSize = std::nullopt;

     auto sampleDuration = PAL::CMTimeGetSeconds(PAL::CMTimeSubtract(presentationTime, m_samplePresentationTime)) + PAL::CMTimeGetSeconds(m_packetDuration) * sampleCount;
+
     if (sampleDuration >= m_minimumSampleDuration) {
         createSampleBuffer(metadata.position);
…
 void SourceBufferParserWebM::AudioTrackData::createSampleBuffer(std::optional<size_t> latestByteRangeOffset)
 {
-    if (m_packetDescriptions.isEmpty())
+    if (m_packetSizes.isEmpty())
         return;

     CMSampleBufferRef rawSampleBuffer = nullptr;
-    auto err = PAL::CMAudioSampleBufferCreateReadyWithPacketDescriptions(kCFAllocatorDefault, m_completeBlockBuffer.get(), formatDescription().get(), m_packetDescriptions.size(), m_samplePresentationTime, m_packetDescriptions.data(), &rawSampleBuffer);
+    auto err = PAL::CMSampleBufferCreateReady(kCFAllocatorDefault, m_completeBlockBuffer.get(), formatDescription().get(), m_packetSizes.size(), m_packetTimings.size(), m_packetTimings.data(), m_packetSizes.size(), m_packetSizes.data(), &rawSampleBuffer);
     if (err) {
         PARSER_LOG_ERROR_IF_POSSIBLE("CMAudioSampleBufferCreateWithPacketDescriptions failed with %d", err);
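Taken together, the patched flow is: flush immediately on a discontinuity, otherwise flush once roughly the minimum duration of contiguous audio has accumulated. That control flow can be sketched with plain containers standing in for the block buffer and the PAL time helpers; `AudioPacker` and every name in it are hypothetical stand-ins (timestamps here are integer ticks at one fixed timescale), not the actual WebKit class.

```cpp
#include <cstddef>
#include <cstdint>
#include <cassert>
#include <vector>

// Per-packet timing, mirroring the duration/PTS pair the patch stores.
struct Timing {
    int64_t duration;
    int64_t pts;
};

// Simplified model of AudioTrackData's packing state.
struct AudioPacker {
    std::vector<size_t> packetSizes;
    std::vector<Timing> packetTimings;
    int64_t minimumSampleDuration { 0 };         // e.g. 2 s worth of ticks
    std::vector<std::vector<Timing>> emitted;    // each entry models one "sample buffer"

    // Models createSampleBuffer() + reset(): emit pending packets, clear state.
    void flush()
    {
        if (packetSizes.empty())
            return;
        emitted.push_back(packetTimings);
        packetSizes.clear();
        packetTimings.clear();
    }

    void consumePacket(size_t size, int64_t pts, int64_t duration)
    {
        // New behavior: a gap between the previous packet's end and this
        // packet's start ends the current sample buffer immediately.
        if (!packetTimings.empty()) {
            const auto& last = packetTimings.back();
            if (last.pts + last.duration != pts)
                flush();
        }
        packetSizes.push_back(size);
        packetTimings.push_back({ duration, pts });
        // The fixed-duration flush still applies to contiguous audio.
        int64_t accumulated = pts + duration - packetTimings.front().pts;
        if (accumulated >= minimumSampleDuration)
            flush();
    }
};
```

With contiguous input this packs exactly as before (one buffer per accumulated block); a single gapped packet now closes the pending buffer early, so the gap sits between buffers rather than piling up at the block boundary.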
trunk/Source/WebCore/platform/graphics/cocoa/SourceBufferParserWebM.h
r287232 → r287249

     uint8_t m_framesPerPacket { 0 };
     Seconds m_frameDuration { 0_s };
-    Vector<AudioStreamPacketDescription> m_packetDescriptions;
+    Vector<size_t> m_packetSizes;
+    Vector<CMSampleTimingInfo> m_packetTimings;
     size_t mNumFramesInCompleteBlock { 0 };
     // FIXME: 0.5 - 1.0 seconds is a better duration per sample buffer, but use 2 seconds so at least the first