Changeset 267787 in webkit
Timestamp: Sep 30, 2020 7:10:05 AM
Location: trunk
Files: 4 added, 17 edited
trunk/ChangeLog
+2020-09-30  Philippe Normand  <pnormand@igalia.com>
+
+        [GStreamer] Internal audio rendering support
+        https://bugs.webkit.org/show_bug.cgi?id=207634
+
+        Reviewed by Xabier Rodriguez-Calvar.
+
+        * Source/cmake/FindWPEBackend_fdo.cmake: Check for the audio extension header initially
+        shipped in the 1.8.0 release.
+        * Source/cmake/GStreamerChecks.cmake: Check and enable external audio rendering support if
+        the WPEBackend-FDO audio extension was found.
+
 2020-09-29  Don Olmstead  <don.olmstead@sony.com>
trunk/Source/WebCore/ChangeLog
+2020-09-30  Philippe Normand  <pnormand@igalia.com>
+
+        [GStreamer] Internal audio rendering support
+        https://bugs.webkit.org/show_bug.cgi?id=207634
+
+        Reviewed by Xabier Rodriguez-Calvar.
+
+        This patch introduces two features regarding audio rendering:
+
+        1. Internal audio mixing, enabled at runtime with the WEBKIT_GST_ENABLE_AUDIO_MIXER=1
+        environment variable. When this is enabled, the WebProcess will have its GStreamer backends
+        render to dedicated WebKit audio sinks. Those will forward buffers to a singleton audio
+        mixer. The resulting audio stream will then be rendered through the default audio sink
+        (PulseAudio in most cases). Using this approach, applications will maintain a single
+        connection to the audio daemon.
+
+        2. For WPE, external audio pass-through. To enable this, the application has to register an
+        audio receiver using the WPEBackend-FDO wpe_audio_register_receiver() API. When this is
+        enabled, the WebKit audio sinks running in the WebProcess will forward audio samples to the
+        UIProcess, using a Wayland protocol defined in the WPEBackend-FDO backend and exposed
+        through its audio extension. This client-side rendering support allows applications to have
+        full control over audio sample rendering.
+
+        The internal mode should be considered a technology preview and can't be enabled by default
+        yet, because audiomixer lacks some features such as reverse playback support. The external
+        audio rendering policy is covered by a new WPE API test.
+
+        * platform/GStreamer.cmake:
+        * platform/audio/gstreamer/AudioDestinationGStreamer.cpp:
+        (WebCore::AudioDestinationGStreamer::AudioDestinationGStreamer): Create the sink depending
+        on the selected audio rendering policy, and probe the platform for a working audio output
+        device only when the WebKit custom audio sink hasn't been selected. This is needed only for
+        the autoaudiosink case.
+        * platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp:
+        (WebCore::AudioSourceProviderGStreamer::configureAudioBin): Instead of creating a new sink,
+        embed the one provided by the player into the audio bin. The resulting bin becomes the
+        player's audio sink, able to render both to the WebAudio provider and the usual sink, as
+        before.
+        * platform/audio/gstreamer/AudioSourceProviderGStreamer.h:
+        * platform/graphics/gstreamer/GStreamerAudioMixer.cpp: Added.
+        (WebCore::GStreamerAudioMixer::isAllowed): The mixer requires a recent GStreamer version
+        and the inter plugin (shipped in gst-plugins-bad until version 1.20 at least).
+        (WebCore::GStreamerAudioMixer::singleton): Entry point for the mixer. This is where the
+        singleton is created.
+        (WebCore::GStreamerAudioMixer::GStreamerAudioMixer): Configure the standalone mixer
+        pipeline.
+        (WebCore::GStreamerAudioMixer::~GStreamerAudioMixer):
+        (WebCore::GStreamerAudioMixer::ensureState): Lazily start/stop the mixer, depending on the
+        number of incoming streams. The pipeline starts when the first incoming stream is connected
+        and stops when the last stream disappears.
+        (WebCore::GStreamerAudioMixer::registerProducer): Client pipelines require an
+        interaudiosink; they will render to that sink, which internally forwards data to a twin
+        interaudiosrc element connected to the audiomixer.
+        (WebCore::GStreamerAudioMixer::unregisterProducer): Get rid of an interaudiosink and its
+        interaudiosrc. This is called by the WebKit audio sink when the element is being disposed.
+        * platform/graphics/gstreamer/GStreamerAudioMixer.h: Added.
+        * platform/graphics/gstreamer/GStreamerCommon.cpp:
+        (WebCore::initializeGStreamerAndRegisterWebKitElements): Register the new audio sink
+        element.
+        (WebCore::createPlatformAudioSink): New utility function to create an audio sink based on
+        the desired and implied runtime rendering policy.
+        * platform/graphics/gstreamer/GStreamerCommon.h:
+        * platform/graphics/gstreamer/GUniquePtrGStreamer.h:
+        * platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.cpp:
+        (WebCore::MediaPlayerPrivateGStreamer::~MediaPlayerPrivateGStreamer):
+        (WebCore::MediaPlayerPrivateGStreamer::seek): Drive-by clean-up: no need to create the seek
+        MediaTime before the early return checking whether this is a live stream.
+        (WebCore::setSyncOnClock): Fix up the code style in this method.
+        (WebCore::MediaPlayerPrivateGStreamer::createAudioSink):
+        (WebCore::MediaPlayerPrivateGStreamer::audioSink const):
+        * platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.h:
+        * platform/graphics/gstreamer/WebKitAudioSinkGStreamer.cpp: Added. This sink can either
+        forward incoming samples to the shared audiomixer running in its own pipeline, or forward
+        samples to the UIProcess using the WPEBackend-FDO audio extension.
+        (AudioPacketHolder::AudioPacketHolder): Wrapper around audio buffers, in charge of creating
+        the corresponding memfd descriptor and of keeping track of the corresponding
+        wpe_audio_packet_export.
+        (AudioPacketHolder::~AudioPacketHolder):
+        (AudioPacketHolder::map): Create the memfd descriptor and return it along with the buffer
+        size.
+        (webKitAudioSinkHandleSample): Forward incoming samples using the WPEBackend-FDO audio
+        extension. The wpe_audio_source start is synchronized with the buffer flow.
+        (webKitAudioSinkConfigure): When internal mixing has been requested, create an
+        interaudiosink to which samples will be sent. Internally the interaudiosink will forward
+        data to its interaudiosrc, which is connected to the audiomixer. Otherwise, if external
+        rendering has been requested, create an appsink in order to relay samples to the UIProcess.
+        (webKitAudioSinkDispose):
+        (getInternalVolumeObject): When internal mixing is enabled, volume and mute states are
+        tracked within the audiomixer sink pads. Otherwise our audio sink manages them using a
+        volume element.
+        (webKitAudioSinkSetProperty): Proxy the volume and mute properties from the internal
+        volume proxy.
+        (webKitAudioSinkGetProperty): Ditto.
+        (webKitAudioSinkChangeState): Keep the WPE audio source state synchronized with the element
+        state, in order to know when the pause/resume notifications should be sent to the
+        UIProcess. This is also where the relationship between the interaudiosink and the
+        audiomixer is maintained, when it's enabled.
+        (webkit_audio_sink_class_init):
+        (webkitAudioSinkNew):
+        * platform/graphics/gstreamer/WebKitAudioSinkGStreamer.h: Added.
+
 2020-09-30  Zalan Bujtas  <zalan@apple.com>
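The lazy lifecycle described above for GStreamerAudioMixer::ensureState() boils down to a refcount on producers. A minimal sketch of that policy, assuming nothing beyond what the ChangeLog states — the class name, method names, and the Stopped/Playing states here are illustrative stand-ins for the real GStreamer pipeline management, not the WebKit API:

```cpp
#include <cassert>

// Sketch of the mixer lifecycle: the mixer pipeline starts when the first
// producer (an interaudiosink) registers and stops when the last one leaves.
// State::Playing/Stopped stand in for the real GStreamer pipeline states.
class AudioMixerSketch {
public:
    enum class State { Stopped, Playing };

    void registerProducer() { ++m_producerCount; ensureState(); }
    void unregisterProducer() { --m_producerCount; ensureState(); }
    State state() const { return m_state; }

private:
    // Models the described ensureState() policy: the pipeline state simply
    // follows the number of connected incoming streams.
    void ensureState() { m_state = m_producerCount ? State::Playing : State::Stopped; }

    unsigned m_producerCount { 0 };
    State m_state { State::Stopped };
};
```

Two media elements sharing the mixer keep it playing until both are gone, which is what preserves the single connection to the audio daemon.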
trunk/Source/WebCore/platform/GStreamer.cmake
     platform/graphics/gstreamer/GLVideoSinkGStreamer.cpp
     platform/graphics/gstreamer/GRefPtrGStreamer.cpp
+    platform/graphics/gstreamer/GStreamerAudioMixer.cpp
     platform/graphics/gstreamer/GStreamerCommon.cpp
     platform/graphics/gstreamer/GstAllocatorFastMalloc.cpp
…
     platform/graphics/gstreamer/VideoSinkGStreamer.cpp
     platform/graphics/gstreamer/VideoTrackPrivateGStreamer.cpp
+    platform/graphics/gstreamer/WebKitAudioSinkGStreamer.cpp
     platform/graphics/gstreamer/WebKitWebSourceGStreamer.cpp
trunk/Source/WebCore/platform/audio/gstreamer/AudioDestinationGStreamer.cpp
 #include "AudioSourceProvider.h"
 #include "AudioUtilities.h"
-#include "GRefPtrGStreamer.h"
+#include "GStreamerCommon.h"
 #include "Logging.h"
+#include "WebKitAudioSinkGStreamer.h"
 #include "WebKitWebAudioSourceGStreamer.h"
 #include <gst/audio/gstaudiobasesink.h>
…
         "frames", AudioUtilities::renderQuantumSize, nullptr));

-    GRefPtr<GstElement> audioSink = gst_element_factory_make("autoaudiosink", nullptr);
+    GRefPtr<GstElement> audioSink = createPlatformAudioSink();
     m_audioSinkAvailable = audioSink;
     if (!audioSink) {
-        LOG_ERROR("Failed to create GStreamer autoaudiosink element");
+        LOG_ERROR("Failed to create GStreamer audio sink element");
         return;
     }

-    g_signal_connect(audioSink.get(), "child-added", G_CALLBACK(autoAudioSinkChildAddedCallback), nullptr);
+    // Probe platform early on for a working audio output device. This is not needed for the WebKit
+    // custom audio sink because it doesn't rely on autoaudiosink.
+    if (!WEBKIT_IS_AUDIO_SINK(audioSink.get())) {
+        g_signal_connect(audioSink.get(), "child-added", G_CALLBACK(autoAudioSinkChildAddedCallback), nullptr);

-    // Autoaudiosink does the real sink detection in the GST_STATE_NULL->READY transition
-    // so it's best to roll it to READY as soon as possible to ensure the underlying platform
-    // audiosink was loaded correctly.
-    GstStateChangeReturn stateChangeReturn = gst_element_set_state(audioSink.get(), GST_STATE_READY);
-    if (stateChangeReturn == GST_STATE_CHANGE_FAILURE) {
-        LOG_ERROR("Failed to change autoaudiosink element state");
-        gst_element_set_state(audioSink.get(), GST_STATE_NULL);
-        m_audioSinkAvailable = false;
-        return;
+        // Autoaudiosink does the real sink detection in the GST_STATE_NULL->READY transition
+        // so it's best to roll it to READY as soon as possible to ensure the underlying platform
+        // audiosink was loaded correctly.
+        GstStateChangeReturn stateChangeReturn = gst_element_set_state(audioSink.get(), GST_STATE_READY);
+        if (stateChangeReturn == GST_STATE_CHANGE_FAILURE) {
+            LOG_ERROR("Failed to change autoaudiosink element state");
+            gst_element_set_state(audioSink.get(), GST_STATE_NULL);
+            m_audioSinkAvailable = false;
+            return;
+        }
     }

     GstElement* audioConvert = gst_element_factory_make("audioconvert", nullptr);
     GstElement* audioResample = gst_element_factory_make("audioresample", nullptr);
-    gst_bin_add_many(GST_BIN(m_pipeline), webkitAudioSrc, audioConvert, audioResample, audioSink.get(), nullptr);
+    gst_bin_add_many(GST_BIN_CAST(m_pipeline), webkitAudioSrc, audioConvert, audioResample, audioSink.get(), nullptr);

     // Link src pads from webkitAudioSrc to audioConvert ! audioResample ! autoaudiosink.
trunk/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp
 }

-void AudioSourceProviderGStreamer::configureAudioBin(GstElement* audioBin, GstElement* teePredecessor)
+void AudioSourceProviderGStreamer::configureAudioBin(GstElement* audioBin, GstElement* audioSink)
 {
     m_audioSinkBin = audioBin;
…
     GstElement* audioResample2 = gst_element_factory_make("audioresample", nullptr);
     GstElement* volumeElement = gst_element_factory_make("volume", "volume");
-    GstElement* audioSink = gst_element_factory_make("autoaudiosink", nullptr);
-
-    gst_bin_add_many(GST_BIN(m_audioSinkBin.get()), audioTee, audioQueue, audioConvert, audioResample, volumeElement, audioConvert2, audioResample2, audioSink, nullptr);
-
-    // In cases where the audio-sink needs elements before tee (such
-    // as scaletempo) they need to be linked to tee which in this case
-    // doesn't need a ghost pad. It is assumed that the teePredecessor
-    // chain already configured a ghost pad.
-    if (teePredecessor)
-        gst_element_link_pads_full(teePredecessor, "src", audioTee, "sink", GST_PAD_LINK_CHECK_NOTHING);
-    else {
-        // Add a ghostpad to the bin so it can proxy to tee.
-        GRefPtr<GstPad> audioTeeSinkPad = adoptGRef(gst_element_get_static_pad(audioTee, "sink"));
-        gst_element_add_pad(m_audioSinkBin.get(), gst_ghost_pad_new("sink", audioTeeSinkPad.get()));
-    }
-
-    // Link a new src pad from tee to queue ! audioconvert !
-    // audioresample ! volume ! audioconvert ! audioresample !
-    // autoaudiosink. The audioresample and audioconvert are needed to
-    // ensure the audio sink receives buffers in the correct format.
+
+    gst_bin_add_many(GST_BIN_CAST(m_audioSinkBin.get()), audioTee, audioQueue, audioConvert, audioResample, volumeElement, audioConvert2, audioResample2, audioSink, nullptr);
+
+    // Add a ghostpad to the bin so it can proxy to tee.
+    auto audioTeeSinkPad = adoptGRef(gst_element_get_static_pad(audioTee, "sink"));
+    gst_element_add_pad(m_audioSinkBin.get(), gst_ghost_pad_new("sink", audioTeeSinkPad.get()));
+
+    // Link a new src pad from tee to queue ! audioconvert ! audioresample ! volume ! audioconvert !
+    // audioresample ! audiosink. The audioresample and audioconvert are needed to ensure the audio
+    // sink receives buffers in the correct format.
     gst_element_link_pads_full(audioTee, "src_%u", audioQueue, "sink", GST_PAD_LINK_CHECK_NOTHING);
     gst_element_link_pads_full(audioQueue, "src", audioConvert, "sink", GST_PAD_LINK_CHECK_NOTHING);
trunk/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.h
     ~AudioSourceProviderGStreamer();

-    void configureAudioBin(GstElement* audioBin, GstElement* teePredecessor);
+    void configureAudioBin(GstElement* audioBin, GstElement* audioSink);

     void provideInput(AudioBus*, size_t framesToProcess) override;
trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerCommon.cpp
 #include "GLVideoSinkGStreamer.h"
+#include "GStreamerAudioMixer.h"
 #include "GstAllocatorFastMalloc.h"
 #include "IntSize.h"
 #include "SharedBuffer.h"
+#include "WebKitAudioSinkGStreamer.h"
 #include <gst/audio/audio-info.h>
 #include <gst/gst.h>
…
 #endif
 #endif
+    // We don't want autoaudiosink to autoplug our sink.
+    gst_element_register(0, "webkitaudiosink", GST_RANK_NONE, WEBKIT_TYPE_AUDIO_SINK);

     // If the FDK-AAC decoder is available, promote it and downrank the
…
 }

+GstElement* createPlatformAudioSink()
+{
+    GstElement* audioSink = webkitAudioSinkNew();
+    if (!audioSink) {
+        // This means the WebKit audio sink configuration failed. It can happen for the following reasons:
+        // - Audio mixing was not requested using the WEBKIT_GST_ENABLE_AUDIO_MIXER environment variable.
+        // - Audio mixing was requested using WEBKIT_GST_ENABLE_AUDIO_MIXER, but the audio mixer
+        //   runtime requirements are not fulfilled.
+        // - The sink was created for the WPE port, audio mixing was not requested, and no
+        //   WPEBackend-FDO audio receiver has been registered at runtime.
+        audioSink = gst_element_factory_make("autoaudiosink", nullptr);
+    }
+    if (!audioSink) {
+        GST_WARNING("GStreamer's autoaudiosink not found. Please check your gst-plugins-good installation");
+        return nullptr;
+    }
+
+    return audioSink;
+}
+
 }
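The fallback chain in createPlatformAudioSink() follows the three failure reasons listed in its comment. A minimal model of that selection policy, where the boolean parameters and the function name are illustrative stand-ins for the real runtime checks (environment variable, mixer requirements, registered WPE receiver), not WebKit code:

```cpp
#include <cassert>
#include <string>

// Models the sink-selection policy: the WebKit sink is only configured when
// internal mixing is requested and usable, or when a WPEBackend-FDO audio
// receiver has been registered; otherwise fall back to autoaudiosink.
std::string selectAudioSink(bool mixerRequested, bool mixerUsable, bool wpeReceiverRegistered, bool autoAudioSinkAvailable)
{
    if (mixerRequested && mixerUsable)
        return "webkitaudiosink (internal mixing)";
    if (wpeReceiverRegistered)
        return "webkitaudiosink (external pass-through)";
    if (autoAudioSinkAvailable)
        return "autoaudiosink";
    return ""; // No usable sink: createPlatformAudioSink() would return nullptr.
}
```

Note that when WEBKIT_GST_ENABLE_AUDIO_MIXER is set but the inter plugin or GStreamer version requirement is missing, the second argument is false and the platform sink wins.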
trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerCommon.h
 bool isGStreamerPluginAvailable(const char* name);

+GstElement* createPlatformAudioSink();

 }
trunk/Source/WebCore/platform/graphics/gstreamer/GUniquePtrGStreamer.h
 #endif

+#if defined(BUILDING_WebCore) && PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
+#include <wpe/extensions/audio.h>
+#endif

 namespace WTF {
…
 #endif

+#if defined(BUILDING_WebCore) && PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
+WTF_DEFINE_GPTR_DELETER(struct wpe_audio_source, wpe_audio_source_destroy)
+#endif
 }
trunk/Source/WebCore/platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.cpp
 #include "GraphicsContext.h"
+#include "GStreamerAudioMixer.h"
 #include "GStreamerCommon.h"
 #include "GStreamerRegistryScanner.h"
…
 #include "TimeRanges.h"
 #include "VideoSinkGStreamer.h"
+#include "WebKitAudioSinkGStreamer.h"
 #include "WebKitWebSourceGStreamer.h"
 #include "AudioTrackPrivateGStreamer.h"
…
     g_signal_handlers_disconnect_by_func(GST_ELEMENT_PARENT(m_source.get()), reinterpret_cast<gpointer>(uriDecodeBinElementAddedCallback), this);

-    if (m_autoAudioSink) {
-        g_signal_handlers_disconnect_by_func(G_OBJECT(m_autoAudioSink.get()),
-            reinterpret_cast<gpointer>(setAudioStreamPropertiesCallback), this);
-    }
+    auto* sink = audioSink();
+    if (sink && !WEBKIT_IS_AUDIO_SINK(sink))
+        g_signal_handlers_disconnect_by_func(G_OBJECT(sink), reinterpret_cast<gpointer>(setAudioStreamPropertiesCallback), this);

     m_readyTimerHandler.stop();
…
     // Avoid useless seeking.
     if (mediaTime == currentMediaTime()) {
-        GST_DEBUG_OBJECT(pipeline(), "[Seek] seek to EOS position unhandled");
+        GST_DEBUG_OBJECT(pipeline(), "[Seek] Already at requested position. Aborting.");
         return;
     }

-    MediaTime time = std::min(mediaTime, durationMediaTime());
-
     if (m_isLiveStream) {
…
     }

+    MediaTime time = std::min(mediaTime, durationMediaTime());
     GST_INFO_OBJECT(pipeline(), "[Seek] seeking to %s", toString(time).utf8().data());
…
     if (!GST_IS_BIN(element)) {
-        g_object_set(element, "sync", sync, NULL);
+        g_object_set(element, "sync", sync, nullptr);
         return;
     }

-    GstIterator* it = gst_bin_iterate_sinks(GST_BIN(element));
-    while (gst_iterator_foreach(it, (GstIteratorForeachFunction)([](const GValue* item, void* syncPtr) {
+    GUniquePtr<GstIterator> iterator(gst_bin_iterate_sinks(GST_BIN_CAST(element)));
+    while (gst_iterator_foreach(iterator.get(), static_cast<GstIteratorForeachFunction>([](const GValue* item, void* syncPtr) {
         bool* sync = static_cast<bool*>(syncPtr);
-        setSyncOnClock(GST_ELEMENT(g_value_get_object(item)), *sync);
+        setSyncOnClock(GST_ELEMENT_CAST(g_value_get_object(item)), *sync);
     }), &sync) == GST_ITERATOR_RESYNC)
-        gst_iterator_resync(it);
-    gst_iterator_free(it);
+        gst_iterator_resync(iterator.get());
…
 GstElement* MediaPlayerPrivateGStreamer::createAudioSink()
 {
-    m_autoAudioSink = gst_element_factory_make("autoaudiosink", nullptr);
-    if (!m_autoAudioSink) {
-        GST_WARNING("GStreamer's autoaudiosink not found. Please check your gst-plugins-good installation");
+    GstElement* audioSink = createPlatformAudioSink();
+    RELEASE_ASSERT(audioSink);
+    if (!audioSink)
         return nullptr;
-    }
-
-    g_signal_connect_swapped(m_autoAudioSink.get(), "child-added", G_CALLBACK(setAudioStreamPropertiesCallback), this);
+
+    if (!WEBKIT_IS_AUDIO_SINK(audioSink))
+        g_signal_connect_swapped(audioSink, "child-added", G_CALLBACK(setAudioStreamPropertiesCallback), this);

 #if ENABLE(WEB_AUDIO)
     GstElement* audioSinkBin = gst_bin_new("audio-sink");
     ensureAudioSourceProvider();
-    m_audioSourceProvider->configureAudioBin(audioSinkBin, nullptr);
+    m_audioSourceProvider->configureAudioBin(audioSinkBin, audioSink);
     return audioSinkBin;
 #else
-    return m_autoAudioSink.get();
+    return audioSink;
 #endif
 }
…
 GstElement* MediaPlayerPrivateGStreamer::audioSink() const
 {
+    if (!m_pipeline)
+        return nullptr;
+
     GstElement* sink;
     g_object_get(m_pipeline.get(), "audio-sink", &sink, nullptr);
trunk/Source/WebCore/platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.h
     std::unique_ptr<AudioSourceProviderGStreamer> m_audioSourceProvider;
 #endif
-    GRefPtr<GstElement> m_autoAudioSink;
     GRefPtr<GstElement> m_downloadBuffer;
     Vector<RefPtr<MediaPlayerRequestInstallMissingPluginsCallback>> m_missingPluginCallbacks;
trunk/Source/cmake/FindWPEBackend_fdo.cmake
 mark_as_advanced(WPEBACKEND_FDO_INCLUDE_DIRS WPEBACKEND_FDO_LIBRARIES)

+find_path(WPEBACKEND_FDO_AUDIO_EXTENSION
+    NAMES wpe/extensions/audio.h
+    HINTS ${PC_WPEBACKEND_FDO_INCLUDEDIR} ${PC_WPEBACKEND_FDO_INCLUDE_DIRS}
+)

 include(FindPackageHandleStandardArgs)
 find_package_handle_standard_args(WPEBackend_fdo
trunk/Source/cmake/GStreamerChecks.cmake
 if (ENABLE_VIDEO OR ENABLE_WEB_AUDIO)
+
+    if (PORT STREQUAL "WPE")
+        find_package(WPEBackend_fdo 1.9.0)
+        if ((NOT WPEBACKEND_FDO_FOUND) OR WPEBACKEND_FDO_AUDIO_EXTENSION STREQUAL "WPEBACKEND_FDO_AUDIO_EXTENSION-NOTFOUND")
+            message(WARNING "WPEBackend-fdo audio extension not found. Disabling external audio rendering support")
+            SET_AND_EXPOSE_TO_BUILD(USE_WPEBACKEND_FDO_AUDIO_EXTENSION FALSE)
+        else ()
+            SET_AND_EXPOSE_TO_BUILD(USE_WPEBACKEND_FDO_AUDIO_EXTENSION TRUE)
+        endif ()
+    endif ()

     SET_AND_EXPOSE_TO_BUILD(USE_GSTREAMER TRUE)
trunk/Tools/ChangeLog
+2020-09-30  Philippe Normand  <pnormand@igalia.com>
+
+        [GStreamer] Internal audio rendering support
+        https://bugs.webkit.org/show_bug.cgi?id=207634
+
+        Reviewed by Xabier Rodriguez-Calvar.
+
+        * Scripts/webkitpy/style/checker.py: White-list the new audio sink from the style checker.
+        * TestWebKitAPI/Tests/WebKit/file-with-video.html: New utility functions to pause and seek
+        in the video.
+        * TestWebKitAPI/Tests/WebKitGLib/TestWebKitWebView.cpp: WPE test for external audio
+        rendering support. A video file is loaded through the webview and the test receives
+        notifications during playback. In order to reduce timeout risks, a seek near the end of the
+        video is performed early on.
+        (AudioRenderingWebViewTest::setup):
+        (AudioRenderingWebViewTest::teardown):
+        (AudioRenderingWebViewTest::AudioRenderingWebViewTest):
+        (AudioRenderingWebViewTest::handleStart):
+        (AudioRenderingWebViewTest::handleStop):
+        (AudioRenderingWebViewTest::handlePause):
+        (AudioRenderingWebViewTest::handleResume):
+        (AudioRenderingWebViewTest::handlePacket):
+        (AudioRenderingWebViewTest::waitUntilPaused):
+        (AudioRenderingWebViewTest::waitUntilEOS):
+        (AudioRenderingWebViewTest::state const):
+        (beforeAll):
+
 2020-09-30  Philippe Normand  <pnormand@igalia.com>
trunk/Tools/Scripts/webkitpy/style/checker.py
     os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'VideoSinkGStreamer.cpp'),
     os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitWebSourceGStreamer.cpp'),
+    os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitAudioSinkGStreamer.cpp'),
+    os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitAudioSinkGStreamer.h'),
     os.path.join('Source', 'WebCore', 'platform', 'audio', 'gstreamer', 'WebKitWebAudioSourceGStreamer.cpp'),
     os.path.join('Source', 'WebCore', 'platform', 'mediastream', 'gstreamer', 'GStreamerMediaStreamSource.h'),
trunk/Tools/TestWebKitAPI/Tests/WebKit/file-with-video.html
     {
         document.getElementById("test-video").play();
     }
+    function pauseVideo()
+    {
+        document.getElementById("test-video").pause();
+    }
+    function seekNearTheEnd()
+    {
+        let video = document.getElementById("test-video");
+        video.currentTime = video.duration - 0.5;
+    }
     </script>
trunk/Tools/TestWebKitAPI/Tests/WebKitGLib/TestWebKitWebView.cpp
 #include <wtf/glib/GRefPtr.h>

+#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
+#include <wpe/extensions/audio.h>
+#endif

 class IsPlayingAudioWebViewTest : public WebViewTest {
 public:
…
 #endif

+#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
+enum class RenderingState {
+    Unknown,
+    Started,
+    Paused,
+    Stopped
+};
+
+class AudioRenderingWebViewTest : public WebViewTest {
+public:
+    MAKE_GLIB_TEST_FIXTURE_WITH_SETUP_TEARDOWN(AudioRenderingWebViewTest, setup, teardown);
+
+    static void setup()
+    {
+    }
+
+    static void teardown()
+    {
+        wpe_audio_register_receiver(nullptr, nullptr);
+    }
+
+    AudioRenderingWebViewTest()
+    {
+        wpe_audio_register_receiver(&m_audioReceiver, this);
+    }
+
+    void handleStart(uint32_t id, int32_t channels, const char* layout, int32_t sampleRate)
+    {
+        g_assert(m_state == RenderingState::Unknown);
+        g_assert_false(m_streamId.hasValue());
+        g_assert_cmpuint(id, ==, 0);
+        m_streamId = id;
+        m_state = RenderingState::Started;
+        g_assert_cmpint(channels, ==, 2);
+        g_assert_cmpstr(layout, ==, "S16LE");
+        g_assert_cmpint(sampleRate, ==, 44100);
+    }
+
+    void handleStop(uint32_t id)
+    {
+        g_assert_cmpuint(*m_streamId, ==, id);
+        g_assert(m_state != RenderingState::Unknown);
+        m_state = RenderingState::Stopped;
+        g_main_loop_quit(m_mainLoop);
+        m_streamId.reset();
+    }
+
+    void handlePause(uint32_t id)
+    {
+        g_assert_cmpuint(*m_streamId, ==, id);
+        g_assert(m_state != RenderingState::Unknown);
+        m_state = RenderingState::Paused;
+    }
+
+    void handleResume(uint32_t id)
+    {
+        g_assert_cmpuint(*m_streamId, ==, id);
+        g_assert(m_state == RenderingState::Paused);
+        m_state = RenderingState::Started;
+    }
+
+    void handlePacket(struct wpe_audio_packet_export* packet_export, uint32_t id, int32_t fd, uint32_t size)
+    {
+        g_assert_cmpuint(*m_streamId, ==, id);
+        g_assert(m_state == RenderingState::Started || m_state == RenderingState::Paused);
+        g_assert_cmpuint(size, >, 0);
+        wpe_audio_packet_export_release(packet_export);
+    }
+
+    void waitUntilPaused()
+    {
+        g_timeout_add(200, [](gpointer userData) -> gboolean {
+            auto* test = static_cast<AudioRenderingWebViewTest*>(userData);
+            if (test->state() == RenderingState::Paused) {
+                test->quitMainLoop();
+                return G_SOURCE_REMOVE;
+            }
+            return G_SOURCE_CONTINUE;
+        }, this);
+        g_main_loop_run(m_mainLoop);
+    }
+
+    void waitUntilEOS()
+    {
+        g_main_loop_run(m_mainLoop);
+    }
+
+    RenderingState state() const { return m_state; }
+
+private:
+    static const struct wpe_audio_receiver m_audioReceiver;
+    RenderingState m_state { RenderingState::Unknown };
+    Optional<uint32_t> m_streamId;
+};
+
+const struct wpe_audio_receiver AudioRenderingWebViewTest::m_audioReceiver = {
+    [](void* data, uint32_t id, int32_t channels, const char* layout, int32_t sampleRate) { static_cast<AudioRenderingWebViewTest*>(data)->handleStart(id, channels, layout, sampleRate); },
+    [](void* data, struct wpe_audio_packet_export* packet_export, uint32_t id, int32_t fd, uint32_t size) { static_cast<AudioRenderingWebViewTest*>(data)->handlePacket(packet_export, id, fd, size); },
+    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handleStop(id); },
+    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handlePause(id); },
+    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handleResume(id); }
+};
+
+static void testWebViewExternalAudioRendering(AudioRenderingWebViewTest* test, gconstpointer)
+{
+    GUniquePtr<char> resourcePath(g_build_filename(Test::getResourcesDir(Test::WebKit2Resources).data(), "file-with-video.html", nullptr));
+    GUniquePtr<char> resourceURL(g_filename_to_uri(resourcePath.get(), nullptr, nullptr));
+    webkit_web_view_load_uri(test->m_webView, resourceURL.get());
+    test->waitUntilLoadFinished();
+
+    test->runJavaScriptAndWaitUntilFinished("playVideo();", nullptr);
+    g_assert(test->state() == RenderingState::Started);
+    test->runJavaScriptAndWaitUntilFinished("pauseVideo();", nullptr);
+    test->waitUntilPaused();
+    g_assert(test->state() == RenderingState::Paused);
+
+    test->runJavaScriptAndWaitUntilFinished("playVideo(); seekNearTheEnd();", nullptr);
+    test->waitUntilEOS();
+    g_assert(test->state() == RenderingState::Stopped);
+}
+#endif
+
 static void serverCallback(SoupServer* server, SoupMessage* message, const char* path, GHashTable*, SoupClientContext*, gpointer)
 {
…
     WebViewTest::add("WebKitWebView", "is-audio-muted", testWebViewIsAudioMuted);
     WebViewTest::add("WebKitWebView", "autoplay-policy", testWebViewAutoplayPolicy);
+#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
+    AudioRenderingWebViewTest::add("WebKitWebView", "external-audio-rendering", testWebViewExternalAudioRendering);
+#endif
 }
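The assertions in the receiver callbacks above define a small state machine: start is only valid from Unknown, resume only from Paused, and pause/stop require a previously started stream. A sketch of those transitions, reusing the test's RenderingState names but with an illustrative AudioReceiverSketch class that is not part of the test itself:

```cpp
#include <cassert>

enum class RenderingState { Unknown, Started, Paused, Stopped };

// Models the transitions the external-audio-rendering test asserts on each
// wpe_audio receiver callback; returns false for a transition the test's
// g_assert checks would reject.
class AudioReceiverSketch {
public:
    bool handleStart()  { if (m_state != RenderingState::Unknown) return false; m_state = RenderingState::Started; return true; }
    bool handlePause()  { if (m_state == RenderingState::Unknown) return false; m_state = RenderingState::Paused; return true; }
    bool handleResume() { if (m_state != RenderingState::Paused) return false; m_state = RenderingState::Started; return true; }
    bool handleStop()   { if (m_state == RenderingState::Unknown) return false; m_state = RenderingState::Stopped; return true; }
    RenderingState state() const { return m_state; }

private:
    RenderingState m_state { RenderingState::Unknown };
};
```

The test scenario (play, pause, resume, seek to EOS) walks exactly this path: Unknown → Started → Paused → Started → Stopped.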