Changeset 267787 in webkit


Timestamp: Sep 30, 2020 7:10:05 AM
Author: Philippe Normand
Message:

[GStreamer] Internal audio rendering support
https://bugs.webkit.org/show_bug.cgi?id=207634

Reviewed by Xabier Rodriguez-Calvar.

.:

  • Source/cmake/FindWPEBackend_fdo.cmake: Check for the audio extension header, initially
    shipped in the 1.8.0 release.
  • Source/cmake/GStreamerChecks.cmake: Check for and enable external audio rendering support
    if the WPEBackend-FDO audio extension was found.

Source/WebCore:

This patch introduces two features regarding audio rendering:

  1. Internal audio mixing, enabled at runtime with the WEBKIT_GST_ENABLE_AUDIO_MIXER=1
     environment variable. When this is enabled, the WebProcess has its GStreamer backends
     render to dedicated WebKit audio sinks, which forward buffers to a singleton audio mixer.
     The resulting audio stream is then rendered through the default audio sink (PulseAudio in
     most cases). With this approach, applications maintain a single connection to the audio
     daemon.

  2. For WPE, external audio pass-through. To enable this, the application has to register an
     audio receiver using the WPEBackend-FDO wpe_audio_register_receiver() API. When this is
     enabled, the WebKit audio sinks running in the WebProcess forward audio samples to the
     UIProcess, using a Wayland protocol defined in the WPEBackend-FDO backend and exposed
     through its audio extension. This client-side rendering support gives applications full
     control over audio sample rendering (a minimal receiver sketch follows the next
     paragraph).

The internal mode should be considered a technology preview and can't be enabled by default
yet, because audiomixer lacks some features, such as reverse playback support. The external
audio rendering policy is covered by a new WPE API test.
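
As an illustration of the pass-through mode, here is a minimal UIProcess-side receiver,
modelled on the handlers exercised by the new API test further down. The handler bodies and
the enableExternalAudioRendering() helper are made up for illustration, and error handling is
omitted:

    #include <cstdio>
    #include <wpe/extensions/audio.h>

    // Handlers invoked in the UIProcess as the WebProcess exports audio.
    static const struct wpe_audio_receiver audioReceiver = {
        // Stream started: the format of upcoming packets is described here.
        [](void*, uint32_t id, int32_t channels, const char* layout, int32_t sampleRate) {
            printf("stream %u: %d channels, layout %s, %d Hz\n", id, channels, layout, sampleRate);
        },
        // One packet of samples, exposed through a memfd file descriptor.
        [](void*, struct wpe_audio_packet_export* packetExport, uint32_t id, int32_t fd, uint32_t size) {
            // A real client would read `size` bytes from `fd` and feed its platform audio API.
            wpe_audio_packet_export_release(packetExport); // Let the WebProcess reclaim the buffer.
        },
        [](void*, uint32_t id) { printf("stream %u stopped\n", id); },
        [](void*, uint32_t id) { printf("stream %u paused\n", id); },
        [](void*, uint32_t id) { printf("stream %u resumed\n", id); }
    };

    // Call this early in the application, before media playback starts.
    void enableExternalAudioRendering()
    {
        wpe_audio_register_receiver(&audioReceiver, nullptr);
    }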

  • platform/GStreamer.cmake:
  • platform/audio/gstreamer/AudioDestinationGStreamer.cpp:

(WebCore::AudioDestinationGStreamer::AudioDestinationGStreamer): Create the sink depending on
the selected audio rendering policy, and probe the platform for a working audio output device
only when the WebKit custom audio sink hasn't been selected; the probing is needed only in
the autoaudiosink case.

  • platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp:

(WebCore::AudioSourceProviderGStreamer::configureAudioBin): Instead of creating a new sink,
embed the one provided by the player into the audio bin. The resulting bin becomes the
player's audio sink, and it is able to render both to the WebAudio provider and to the usual
sink, as before.

  • platform/audio/gstreamer/AudioSourceProviderGStreamer.h:
  • platform/graphics/gstreamer/GStreamerAudioMixer.cpp: Added.

(WebCore::GStreamerAudioMixer::isAllowed): The mixer requires a recent GStreamer version and
the inter plugin (shipped in gst-plugins-bad until at least version 1.20).
(WebCore::GStreamerAudioMixer::singleton): Entry point for the mixer. This is where the
singleton is created.
(WebCore::GStreamerAudioMixer::GStreamerAudioMixer): Configure the standalone mixer
pipeline.
(WebCore::GStreamerAudioMixer::~GStreamerAudioMixer):
(WebCore::GStreamerAudioMixer::ensureState): Lazily start/stop the mixer, depending on the
number of incoming streams. The pipeline starts when the first incoming stream is connected,
and stops when the last stream disappears.
(WebCore::GStreamerAudioMixer::registerProducer): Client pipelines require an interaudiosink;
they render to that sink, which internally forwards data to a twin interaudiosrc element
connected to the audiomixer, as sketched below.
(WebCore::GStreamerAudioMixer::unregisterProducer): Get rid of an interaudiosink and its interaudiosrc.
This is called by the WebKit audio sink when the element is being disposed.
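
For context, here is a standalone sketch of the inter-plugin pattern the mixer builds on: a
producer renders to an interaudiosink, and buffers cross over to the interaudiosrc sharing
the same channel name, which feeds one audiomixer pad. The channel name is made up here, and
the real mixer wires its pads programmatically and manages per-pad volume and mute:

    #include <gst/gst.h>

    int main(int argc, char** argv)
    {
        gst_init(&argc, &argv);

        // Mixer side, comparable to the singleton pipeline: one interaudiosrc per
        // producer, all feeding an audiomixer that renders to the default device.
        GstElement* mixer = gst_parse_launch(
            "interaudiosrc channel=stream-0 ! audiomixer ! autoaudiosink", nullptr);

        // Producer side, comparable to a WebKit audio sink with mixing enabled:
        // it renders to an interaudiosink instead of an actual output device.
        GstElement* producer = gst_parse_launch(
            "audiotestsrc is-live=true ! interaudiosink channel=stream-0", nullptr);

        gst_element_set_state(mixer, GST_STATE_PLAYING);
        gst_element_set_state(producer, GST_STATE_PLAYING);

        g_main_loop_run(g_main_loop_new(nullptr, FALSE));
        return 0;
    }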

  • platform/graphics/gstreamer/GStreamerAudioMixer.h: Added.
  • platform/graphics/gstreamer/GStreamerCommon.cpp:

(WebCore::initializeGStreamerAndRegisterWebKitElements): Register the new audio sink element.
(WebCore::createPlatformAudioSink): New utility function to create an audio sink based on
the desired and implied runtime rendering policy.

  • platform/graphics/gstreamer/GStreamerCommon.h:
  • platform/graphics/gstreamer/GUniquePtrGStreamer.h:
  • platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.cpp:

(WebCore::MediaPlayerPrivateGStreamer::~MediaPlayerPrivateGStreamer):
(WebCore::MediaPlayerPrivateGStreamer::seek): Drive-by clean-up: no need to create the seek
MediaTime before the early return that checks whether this is a live stream.
(WebCore::setSyncOnClock): Fix up code style in this method.
(WebCore::MediaPlayerPrivateGStreamer::createAudioSink):
(WebCore::MediaPlayerPrivateGStreamer::audioSink const):

  • platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.h:
  • platform/graphics/gstreamer/WebKitAudioSinkGStreamer.cpp: Added. This sink can either
    forward incoming samples to the shared audiomixer running in its own pipeline, or forward
    samples to the UIProcess using the WPEBackend-FDO audio extension.
(AudioPacketHolder::AudioPacketHolder): Wrapper around audio buffers, in charge of creating
the memfd descriptor for a buffer and keeping track of the corresponding
wpe_audio_packet_export.
(AudioPacketHolder::~AudioPacketHolder):
(AudioPacketHolder::map): Create the memfd descriptor and return it along with the buffer
size (a memfd sketch follows this file's entry).
(webKitAudioSinkHandleSample): Forward incoming samples using the WPEBackend-FDO audio
extension. The wpe_audio_source start is synchronized with the buffer flow.
(webKitAudioSinkConfigure): When internal mixing has been requested, create an
interaudiosink to which samples will be sent. Internally the interaudiosink will forward
data to its interaudiosrc which is connected to the audiomixer. Otherwise, if external
rendering has been requested, create an appsink in order to relay samples to the UIProcess.
(webKitAudioSinkDispose):
(getInternalVolumeObject): When internal mixing is enabled, volume and mute states are
tracked within the audiomixer sink pads; otherwise our audio sink manages them using a
volume element (see the volume-proxy sketch after this file list).
(webKitAudioSinkSetProperty): Proxy the volume and mute properties through the internal
volume object.
(webKitAudioSinkGetProperty): Ditto.
(webKitAudioSinkChangeState): Keep the WPE audio source state synchronized with the element
state, in order to know when the pause/resume notifications should be sent to the UIProcess.
This is also where we maintain the relationship between the interaudiosink and the
audiomixer, when internal mixing is enabled.
(webkit_audio_sink_class_init):
(webkitAudioSinkNew):
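
To make the packet export concrete, here is a sketch of what AudioPacketHolder::map does,
assuming the mapped buffer bytes are copied into an anonymous memory file; the helper name is
hypothetical and this is Linux-only (memfd_create requires glibc 2.27 or newer):

    #include <gst/gst.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Copy a buffer's samples into an anonymous in-memory file and return its
    // descriptor; the UIProcess reads the samples back from that descriptor.
    static int exportBufferAsMemfd(GstBuffer* buffer, uint32_t* size)
    {
        GstMapInfo info;
        if (!gst_buffer_map(buffer, &info, GST_MAP_READ))
            return -1;

        int fd = memfd_create("wpe-audio-packet", MFD_CLOEXEC);
        if (fd >= 0 && write(fd, info.data, info.size) != static_cast<ssize_t>(info.size)) {
            close(fd);
            fd = -1;
        }

        *size = info.size;
        gst_buffer_unmap(buffer, &info);
        return fd;
    }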

  • platform/graphics/gstreamer/WebKitAudioSinkGStreamer.h: Added.
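
And the internal-mixing volume path, sketched: instead of using a volume element, the
producer's loudness is applied on the audiomixer sink pad feeding its stream. mixerPad stands
for the pad GStreamerAudioMixer hands back when a producer registers, and the helper name is
made up:

    #include <gst/gst.h>

    // audiomixer sink pads expose "volume" and "mute" GObject properties, so the
    // WebKit audio sink can forward its own properties straight to its mixer pad.
    static void applyStreamVolume(GstPad* mixerPad, double volume, gboolean muted)
    {
        g_object_set(mixerPad, "volume", volume, "mute", muted, nullptr);
    }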

Tools:

  • Scripts/webkitpy/style/checker.py: Exempt the new audio sink files from the style checker.
  • TestWebKitAPI/Tests/WebKit/file-with-video.html: New utility functions to pause and seek in the video.
  • TestWebKitAPI/Tests/WebKitGLib/TestWebKitWebView.cpp: WPE test for external audio
    rendering support. A video file is loaded through the webview and the test receives
    notifications during playback. In order to reduce timeout risks, a seek near the end of
    the video is performed early on.
(AudioRenderingWebViewTest::setup):
(AudioRenderingWebViewTest::teardown):
(AudioRenderingWebViewTest::AudioRenderingWebViewTest):
(AudioRenderingWebViewTest::handleStart):
(AudioRenderingWebViewTest::handleStop):
(AudioRenderingWebViewTest::handlePause):
(AudioRenderingWebViewTest::handleResume):
(AudioRenderingWebViewTest::handlePacket):
(AudioRenderingWebViewTest::waitUntilPaused):
(AudioRenderingWebViewTest::waitUntilEOS):
(AudioRenderingWebViewTest::state const):
(beforeAll):

Location: trunk
Files: 4 added, 17 edited

  • trunk/Source/WebCore/platform/GStreamer.cmake (r265492 → r267787)

             platform/graphics/gstreamer/GLVideoSinkGStreamer.cpp
             platform/graphics/gstreamer/GRefPtrGStreamer.cpp
    +        platform/graphics/gstreamer/GStreamerAudioMixer.cpp
             platform/graphics/gstreamer/GStreamerCommon.cpp
             platform/graphics/gstreamer/GstAllocatorFastMalloc.cpp
    …
             platform/graphics/gstreamer/VideoSinkGStreamer.cpp
             platform/graphics/gstreamer/VideoTrackPrivateGStreamer.cpp
    +        platform/graphics/gstreamer/WebKitAudioSinkGStreamer.cpp
             platform/graphics/gstreamer/WebKitWebSourceGStreamer.cpp
  • trunk/Source/WebCore/platform/audio/gstreamer/AudioDestinationGStreamer.cpp (r267541 → r267787)

     #include "AudioSourceProvider.h"
     #include "AudioUtilities.h"
    -#include "GRefPtrGStreamer.h"
    +#include "GStreamerCommon.h"
     #include "Logging.h"
    +#include "WebKitAudioSinkGStreamer.h"
     #include "WebKitWebAudioSourceGStreamer.h"
     #include <gst/audio/gstaudiobasesink.h>
    …
                                                                                 "frames", AudioUtilities::renderQuantumSize, nullptr));

    -    GRefPtr<GstElement> audioSink = gst_element_factory_make("autoaudiosink", nullptr);
    +    GRefPtr<GstElement> audioSink = createPlatformAudioSink();
         m_audioSinkAvailable = audioSink;
         if (!audioSink) {
    -        LOG_ERROR("Failed to create GStreamer autoaudiosink element");
    +        LOG_ERROR("Failed to create GStreamer audio sink element");
             return;
         }

    -    g_signal_connect(audioSink.get(), "child-added", G_CALLBACK(autoAudioSinkChildAddedCallback), nullptr);
    +    // Probe platform early on for a working audio output device. This is not needed for the WebKit
    +    // custom audio sink because it doesn't rely on autoaudiosink.
    +    if (!WEBKIT_IS_AUDIO_SINK(audioSink.get())) {
    +        g_signal_connect(audioSink.get(), "child-added", G_CALLBACK(autoAudioSinkChildAddedCallback), nullptr);

    -    // Autoaudiosink does the real sink detection in the GST_STATE_NULL->READY transition
    -    // so it's best to roll it to READY as soon as possible to ensure the underlying platform
    -    // audiosink was loaded correctly.
    -    GstStateChangeReturn stateChangeReturn = gst_element_set_state(audioSink.get(), GST_STATE_READY);
    -    if (stateChangeReturn == GST_STATE_CHANGE_FAILURE) {
    -        LOG_ERROR("Failed to change autoaudiosink element state");
    -        gst_element_set_state(audioSink.get(), GST_STATE_NULL);
    -        m_audioSinkAvailable = false;
    -        return;
    +        // Autoaudiosink does the real sink detection in the GST_STATE_NULL->READY transition
    +        // so it's best to roll it to READY as soon as possible to ensure the underlying platform
    +        // audiosink was loaded correctly.
    +        GstStateChangeReturn stateChangeReturn = gst_element_set_state(audioSink.get(), GST_STATE_READY);
    +        if (stateChangeReturn == GST_STATE_CHANGE_FAILURE) {
    +            LOG_ERROR("Failed to change autoaudiosink element state");
    +            gst_element_set_state(audioSink.get(), GST_STATE_NULL);
    +            m_audioSinkAvailable = false;
    +            return;
    +        }
         }

         GstElement* audioConvert = gst_element_factory_make("audioconvert", nullptr);
         GstElement* audioResample = gst_element_factory_make("audioresample", nullptr);
    -    gst_bin_add_many(GST_BIN(m_pipeline), webkitAudioSrc, audioConvert, audioResample, audioSink.get(), nullptr);
    +    gst_bin_add_many(GST_BIN_CAST(m_pipeline), webkitAudioSrc, audioConvert, audioResample, audioSink.get(), nullptr);

         // Link src pads from webkitAudioSrc to audioConvert ! audioResample ! autoaudiosink.
  • trunk/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp (r241587 → r267787)

     }

    -void AudioSourceProviderGStreamer::configureAudioBin(GstElement* audioBin, GstElement* teePredecessor)
    +void AudioSourceProviderGStreamer::configureAudioBin(GstElement* audioBin, GstElement* audioSink)
     {
         m_audioSinkBin = audioBin;
    …
         GstElement* audioResample2 = gst_element_factory_make("audioresample", nullptr);
         GstElement* volumeElement = gst_element_factory_make("volume", "volume");
    -    GstElement* audioSink = gst_element_factory_make("autoaudiosink", nullptr);
    -
    -    gst_bin_add_many(GST_BIN(m_audioSinkBin.get()), audioTee, audioQueue, audioConvert, audioResample, volumeElement, audioConvert2, audioResample2, audioSink, nullptr);
    -
    -    // In cases where the audio-sink needs elements before tee (such
    -    // as scaletempo) they need to be linked to tee which in this case
    -    // doesn't need a ghost pad. It is assumed that the teePredecessor
    -    // chain already configured a ghost pad.
    -    if (teePredecessor)
    -        gst_element_link_pads_full(teePredecessor, "src", audioTee, "sink", GST_PAD_LINK_CHECK_NOTHING);
    -    else {
    -        // Add a ghostpad to the bin so it can proxy to tee.
    -        GRefPtr<GstPad> audioTeeSinkPad = adoptGRef(gst_element_get_static_pad(audioTee, "sink"));
    -        gst_element_add_pad(m_audioSinkBin.get(), gst_ghost_pad_new("sink", audioTeeSinkPad.get()));
    -    }
    -
    -    // Link a new src pad from tee to queue ! audioconvert !
    -    // audioresample ! volume ! audioconvert ! audioresample !
    -    // autoaudiosink. The audioresample and audioconvert are needed to
    -    // ensure the audio sink receives buffers in the correct format.
    +
    +    gst_bin_add_many(GST_BIN_CAST(m_audioSinkBin.get()), audioTee, audioQueue, audioConvert, audioResample, volumeElement, audioConvert2, audioResample2, audioSink, nullptr);
    +
    +    // Add a ghostpad to the bin so it can proxy to tee.
    +    auto audioTeeSinkPad = adoptGRef(gst_element_get_static_pad(audioTee, "sink"));
    +    gst_element_add_pad(m_audioSinkBin.get(), gst_ghost_pad_new("sink", audioTeeSinkPad.get()));
    +
    +    // Link a new src pad from tee to queue ! audioconvert ! audioresample ! volume ! audioconvert !
    +    // audioresample ! audiosink. The audioresample and audioconvert are needed to ensure the audio
    +    // sink receives buffers in the correct format.
         gst_element_link_pads_full(audioTee, "src_%u", audioQueue, "sink", GST_PAD_LINK_CHECK_NOTHING);
         gst_element_link_pads_full(audioQueue, "src", audioConvert, "sink", GST_PAD_LINK_CHECK_NOTHING);
  • trunk/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.h (r248851 → r267787)

         ~AudioSourceProviderGStreamer();

    -    void configureAudioBin(GstElement* audioBin, GstElement* teePredecessor);
    +    void configureAudioBin(GstElement* audioBin, GstElement* audioSink);

         void provideInput(AudioBus*, size_t framesToProcess) override;
  • trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerCommon.cpp (r265492 → r267787)

     #include "GLVideoSinkGStreamer.h"
    +#include "GStreamerAudioMixer.h"
     #include "GstAllocatorFastMalloc.h"
     #include "IntSize.h"
     #include "SharedBuffer.h"
    +#include "WebKitAudioSinkGStreamer.h"
     #include <gst/audio/audio-info.h>
     #include <gst/gst.h>
    …
     #endif
     #endif
    +        // We don't want autoaudiosink to autoplug our sink.
    +        gst_element_register(0, "webkitaudiosink", GST_RANK_NONE, WEBKIT_TYPE_AUDIO_SINK);

             // If the FDK-AAC decoder is available, promote it and downrank the
    …
     }

    +GstElement* createPlatformAudioSink()
    +{
    +    GstElement* audioSink = webkitAudioSinkNew();
    +    if (!audioSink) {
    +        // This means the WebKit audio sink configuration failed. It can happen for the following reasons:
    +        // - audio mixing was not requested using the WEBKIT_GST_ENABLE_AUDIO_MIXER environment variable.
    +        // - audio mixing was requested using the WEBKIT_GST_ENABLE_AUDIO_MIXER environment variable,
    +        //   but the audio mixer runtime requirements are not fulfilled.
    +        // - the sink was created for the WPE port, audio mixing was not requested and no
    +        //   WPEBackend-FDO audio receiver has been registered at runtime.
    +        audioSink = gst_element_factory_make("autoaudiosink", nullptr);
    +    }
    +    if (!audioSink) {
    +        GST_WARNING("GStreamer's autoaudiosink not found. Please check your gst-plugins-good installation");
    +        return nullptr;
    +    }
    +
    +    return audioSink;
    +}
    +
     }
  • trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerCommon.h (r264816 → r267787)

     bool isGStreamerPluginAvailable(const char* name);

    +GstElement* createPlatformAudioSink();
    +
     }
  • trunk/Source/WebCore/platform/graphics/gstreamer/GUniquePtrGStreamer.h (r254682 → r267787)

     #endif

    +#if defined(BUILDING_WebCore) && PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
    +#include <wpe/extensions/audio.h>
    +#endif
    +
     namespace WTF {

    …
     #endif

    +#if defined(BUILDING_WebCore) && PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
    +WTF_DEFINE_GPTR_DELETER(struct wpe_audio_source, wpe_audio_source_destroy)
    +#endif
     }
  • trunk/Source/WebCore/platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.cpp (r267138 → r267787)

     #include "GraphicsContext.h"
    +#include "GStreamerAudioMixer.h"
     #include "GStreamerCommon.h"
     #include "GStreamerRegistryScanner.h"
    …
     #include "TimeRanges.h"
     #include "VideoSinkGStreamer.h"
    +#include "WebKitAudioSinkGStreamer.h"
     #include "WebKitWebSourceGStreamer.h"
     #include "AudioTrackPrivateGStreamer.h"
    …
             g_signal_handlers_disconnect_by_func(GST_ELEMENT_PARENT(m_source.get()), reinterpret_cast<gpointer>(uriDecodeBinElementAddedCallback), this);

    -    if (m_autoAudioSink) {
    -        g_signal_handlers_disconnect_by_func(G_OBJECT(m_autoAudioSink.get()),
    -            reinterpret_cast<gpointer>(setAudioStreamPropertiesCallback), this);
    -    }
    +    auto* sink = audioSink();
    +    if (sink && !WEBKIT_IS_AUDIO_SINK(sink))
    +        g_signal_handlers_disconnect_by_func(G_OBJECT(sink), reinterpret_cast<gpointer>(setAudioStreamPropertiesCallback), this);

         m_readyTimerHandler.stop();
    …
         // Avoid useless seeking.
         if (mediaTime == currentMediaTime()) {
    -        GST_DEBUG_OBJECT(pipeline(), "[Seek] seek to EOS position unhandled");
    -        return;
    -    }
    -
    -    MediaTime time = std::min(mediaTime, durationMediaTime());
    +        GST_DEBUG_OBJECT(pipeline(), "[Seek] Already at requested position. Aborting.");
    +        return;
    +    }

         if (m_isLiveStream) {
    …
         }

    +    MediaTime time = std::min(mediaTime, durationMediaTime());
         GST_INFO_OBJECT(pipeline(), "[Seek] seeking to %s", toString(time).utf8().data());
    …
         if (!GST_IS_BIN(element)) {
    -        g_object_set(element, "sync", sync, NULL);
    -        return;
    -    }
    -
    -    GstIterator* it = gst_bin_iterate_sinks(GST_BIN(element));
    -    while (gst_iterator_foreach(it, (GstIteratorForeachFunction)([](const GValue* item, void* syncPtr) {
    +        g_object_set(element, "sync", sync, nullptr);
    +        return;
    +    }
    +
    +    GUniquePtr<GstIterator> iterator(gst_bin_iterate_sinks(GST_BIN_CAST(element)));
    +    while (gst_iterator_foreach(iterator.get(), static_cast<GstIteratorForeachFunction>([](const GValue* item, void* syncPtr) {
             bool* sync = static_cast<bool*>(syncPtr);
    -        setSyncOnClock(GST_ELEMENT(g_value_get_object(item)), *sync);
    +        setSyncOnClock(GST_ELEMENT_CAST(g_value_get_object(item)), *sync);
         }), &sync) == GST_ITERATOR_RESYNC)
    -        gst_iterator_resync(it);
    -    gst_iterator_free(it);
    +        gst_iterator_resync(iterator.get());
     }
    …
     GstElement* MediaPlayerPrivateGStreamer::createAudioSink()
     {
    -    m_autoAudioSink = gst_element_factory_make("autoaudiosink", nullptr);
    -    if (!m_autoAudioSink) {
    -        GST_WARNING("GStreamer's autoaudiosink not found. Please check your gst-plugins-good installation");
    +    GstElement* audioSink = createPlatformAudioSink();
    +    RELEASE_ASSERT(audioSink);
    +    if (!audioSink)
             return nullptr;
    -    }
    -
    -    g_signal_connect_swapped(m_autoAudioSink.get(), "child-added", G_CALLBACK(setAudioStreamPropertiesCallback), this);
    +
    +    if (!WEBKIT_IS_AUDIO_SINK(audioSink))
    +        g_signal_connect_swapped(audioSink, "child-added", G_CALLBACK(setAudioStreamPropertiesCallback), this);

     #if ENABLE(WEB_AUDIO)
         GstElement* audioSinkBin = gst_bin_new("audio-sink");
         ensureAudioSourceProvider();
    -    m_audioSourceProvider->configureAudioBin(audioSinkBin, nullptr);
    +    m_audioSourceProvider->configureAudioBin(audioSinkBin, audioSink);
         return audioSinkBin;
     #else
    -    return m_autoAudioSink.get();
    +    return audioSink;
     #endif
     }
    …
     GstElement* MediaPlayerPrivateGStreamer::audioSink() const
     {
    +    if (!m_pipeline)
    +        return nullptr;
    +
         GstElement* sink;
         g_object_get(m_pipeline.get(), "audio-sink", &sink, nullptr);
  • trunk/Source/WebCore/platform/graphics/gstreamer/MediaPlayerPrivateGStreamer.h (r267138 → r267787)

         std::unique_ptr<AudioSourceProviderGStreamer> m_audioSourceProvider;
     #endif
    -    GRefPtr<GstElement> m_autoAudioSink;
         GRefPtr<GstElement> m_downloadBuffer;
         Vector<RefPtr<MediaPlayerRequestInstallMissingPluginsCallback>> m_missingPluginCallbacks;
  • trunk/Source/cmake/FindWPEBackend_fdo.cmake (r258412 → r267787)

     mark_as_advanced(WPEBACKEND_FDO_INCLUDE_DIRS WPEBACKEND_FDO_LIBRARIES)

    +find_path(WPEBACKEND_FDO_AUDIO_EXTENSION
    +    NAMES wpe/extensions/audio.h
    +    HINTS ${PC_WPEBACKEND_FDO_INCLUDEDIR} ${PC_WPEBACKEND_FDO_INCLUDE_DIRS}
    +)
    +
     include(FindPackageHandleStandardArgs)
     find_package_handle_standard_args(WPEBackend_fdo
  • trunk/Source/cmake/GStreamerChecks.cmake (r265492 → r267787)

     if (ENABLE_VIDEO OR ENABLE_WEB_AUDIO)
    +
    +    if (PORT STREQUAL "WPE")
    +        find_package(WPEBackend_fdo 1.9.0)
    +        if ((NOT WPEBACKEND_FDO_FOUND) OR WPEBACKEND_FDO_AUDIO_EXTENSION STREQUAL "WPEBACKEND_FDO_AUDIO_EXTENSION-NOTFOUND")
    +            message(WARNING "WPEBackend-fdo audio extension not found. Disabling external audio rendering support")
    +            SET_AND_EXPOSE_TO_BUILD(USE_WPEBACKEND_FDO_AUDIO_EXTENSION FALSE)
    +        else ()
    +            SET_AND_EXPOSE_TO_BUILD(USE_WPEBACKEND_FDO_AUDIO_EXTENSION TRUE)
    +        endif ()
    +    endif ()

         SET_AND_EXPOSE_TO_BUILD(USE_GSTREAMER TRUE)
  • trunk/Tools/Scripts/webkitpy/style/checker.py (r260988 → r267787)

           os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'VideoSinkGStreamer.cpp'),
           os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitWebSourceGStreamer.cpp'),
    +      os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitAudioSinkGStreamer.cpp'),
    +      os.path.join('Source', 'WebCore', 'platform', 'graphics', 'gstreamer', 'WebKitAudioSinkGStreamer.h'),
           os.path.join('Source', 'WebCore', 'platform', 'audio', 'gstreamer', 'WebKitWebAudioSourceGStreamer.cpp'),
           os.path.join('Source', 'WebCore', 'platform', 'mediastream', 'gstreamer', 'GStreamerMediaStreamSource.h'),
  • trunk/Tools/TestWebKitAPI/Tests/WebKit/file-with-video.html (r174463 → r267787)

         {
             document.getElementById("test-video").play();
    +    }
    +    function pauseVideo()
    +    {
    +        document.getElementById("test-video").pause();
    +    }
    +    function seekNearTheEnd()
    +    {
    +        let video = document.getElementById("test-video");
    +        video.currentTime = video.duration - 0.5;
         }
       </script>
  • trunk/Tools/TestWebKitAPI/Tests/WebKitGLib/TestWebKitWebView.cpp (r265080 → r267787)

     #include <wtf/glib/GRefPtr.h>

    +#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
    +#include <wpe/extensions/audio.h>
    +#endif
    +
     class IsPlayingAudioWebViewTest : public WebViewTest {
     public:
    …
     #endif

    +#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
    +enum class RenderingState {
    +    Unknown,
    +    Started,
    +    Paused,
    +    Stopped
    +};
    +
    +class AudioRenderingWebViewTest : public WebViewTest {
    +public:
    +    MAKE_GLIB_TEST_FIXTURE_WITH_SETUP_TEARDOWN(AudioRenderingWebViewTest, setup, teardown);
    +
    +    static void setup()
    +    {
    +    }
    +
    +    static void teardown()
    +    {
    +        wpe_audio_register_receiver(nullptr, nullptr);
    +    }
    +
    +    AudioRenderingWebViewTest()
    +    {
    +        wpe_audio_register_receiver(&m_audioReceiver, this);
    +    }
    +
    +    void handleStart(uint32_t id, int32_t channels, const char* layout, int32_t sampleRate)
    +    {
    +        g_assert(m_state == RenderingState::Unknown);
    +        g_assert_false(m_streamId.hasValue());
    +        g_assert_cmpuint(id, ==, 0);
    +        m_streamId = id;
    +        m_state = RenderingState::Started;
    +        g_assert_cmpint(channels, ==, 2);
    +        g_assert_cmpstr(layout, ==, "S16LE");
    +        g_assert_cmpint(sampleRate, ==, 44100);
    +    }
    +
    +    void handleStop(uint32_t id)
    +    {
    +        g_assert_cmpuint(*m_streamId, ==, id);
    +        g_assert(m_state != RenderingState::Unknown);
    +        m_state = RenderingState::Stopped;
    +        g_main_loop_quit(m_mainLoop);
    +        m_streamId.reset();
    +    }
    +
    +    void handlePause(uint32_t id)
    +    {
    +        g_assert_cmpuint(*m_streamId, ==, id);
    +        g_assert(m_state != RenderingState::Unknown);
    +        m_state = RenderingState::Paused;
    +    }
    +
    +    void handleResume(uint32_t id)
    +    {
    +        g_assert_cmpuint(*m_streamId, ==, id);
    +        g_assert(m_state == RenderingState::Paused);
    +        m_state = RenderingState::Started;
    +    }
    +
    +    void handlePacket(struct wpe_audio_packet_export* packet_export, uint32_t id, int32_t fd, uint32_t size)
    +    {
    +        g_assert_cmpuint(*m_streamId, ==, id);
    +        g_assert(m_state == RenderingState::Started || m_state == RenderingState::Paused);
    +        g_assert_cmpuint(size, >, 0);
    +        wpe_audio_packet_export_release(packet_export);
    +    }
    +
    +    void waitUntilPaused()
    +    {
    +        g_timeout_add(200, [](gpointer userData) -> gboolean {
    +            auto* test = static_cast<AudioRenderingWebViewTest*>(userData);
    +            if (test->state() == RenderingState::Paused) {
    +                test->quitMainLoop();
    +                return G_SOURCE_REMOVE;
    +            }
    +            return G_SOURCE_CONTINUE;
    +        }, this);
    +        g_main_loop_run(m_mainLoop);
    +    }
    +
    +    void waitUntilEOS()
    +    {
    +        g_main_loop_run(m_mainLoop);
    +    }
    +
    +    RenderingState state() const { return m_state; }
    +
    +private:
    +    static const struct wpe_audio_receiver m_audioReceiver;
    +    RenderingState m_state { RenderingState::Unknown };
    +    Optional<uint32_t> m_streamId;
    +};
    +
    +const struct wpe_audio_receiver AudioRenderingWebViewTest::m_audioReceiver = {
    +    [](void* data, uint32_t id, int32_t channels, const char* layout, int32_t sampleRate) { static_cast<AudioRenderingWebViewTest*>(data)->handleStart(id, channels, layout, sampleRate); },
    +    [](void* data, struct wpe_audio_packet_export* packet_export, uint32_t id, int32_t fd, uint32_t size) { static_cast<AudioRenderingWebViewTest*>(data)->handlePacket(packet_export, id, fd, size); },
    +    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handleStop(id); },
    +    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handlePause(id); },
    +    [](void* data, uint32_t id) { static_cast<AudioRenderingWebViewTest*>(data)->handleResume(id); }
    +};
    +
    +static void testWebViewExternalAudioRendering(AudioRenderingWebViewTest* test, gconstpointer)
    +{
    +    GUniquePtr<char> resourcePath(g_build_filename(Test::getResourcesDir(Test::WebKit2Resources).data(), "file-with-video.html", nullptr));
    +    GUniquePtr<char> resourceURL(g_filename_to_uri(resourcePath.get(), nullptr, nullptr));
    +    webkit_web_view_load_uri(test->m_webView, resourceURL.get());
    +    test->waitUntilLoadFinished();
    +
    +    test->runJavaScriptAndWaitUntilFinished("playVideo();", nullptr);
    +    g_assert(test->state() == RenderingState::Started);
    +    test->runJavaScriptAndWaitUntilFinished("pauseVideo();", nullptr);
    +    test->waitUntilPaused();
    +    g_assert(test->state() == RenderingState::Paused);
    +
    +    test->runJavaScriptAndWaitUntilFinished("playVideo(); seekNearTheEnd();", nullptr);
    +    test->waitUntilEOS();
    +    g_assert(test->state() == RenderingState::Stopped);
    +}
    +#endif
    +
     static void serverCallback(SoupServer* server, SoupMessage* message, const char* path, GHashTable*, SoupClientContext*, gpointer)
     {
    …
         WebViewTest::add("WebKitWebView", "is-audio-muted", testWebViewIsAudioMuted);
         WebViewTest::add("WebKitWebView", "autoplay-policy", testWebViewAutoplayPolicy);
    +#if PLATFORM(WPE) && USE(WPEBACKEND_FDO_AUDIO_EXTENSION)
    +    AudioRenderingWebViewTest::add("WebKitWebView", "external-audio-rendering", testWebViewExternalAudioRendering);
    +#endif
     }