Changeset 272165 in webkit


Timestamp: Feb 1, 2021 12:30:42 PM
Author: commit-queue@webkit.org
Message:

Use user media permission prompt for speech recognition
https://bugs.webkit.org/show_bug.cgi?id=221082
rdar://problem/73372499

Patch by Sihui Liu <sihui_liu@apple.com> on 2021-02-01
Reviewed by Youenn Fablet.

Source/WebCore:

Add a frame identifier to SpeechRecognitionRequest, since it is needed for checking user media permission.

Updated existing tests for the changed behavior; starting recognition from a removed iframe now throws an UnknownError.
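
As a condensed orientation, the WebCore-side flow after this change looks like the following (adapted from the SpeechRecognition.cpp hunk further down; the enclosing function is WebCore::SpeechRecognition::startRecognition(), shown here without its full signature):

    // Resolve the document's frame and its identifier, then forward the identifier
    // through the speech recognition connection so the UI process can run the
    // user media permission check against that frame.
    auto& document = downcast<Document>(*scriptExecutionContext());
    auto* frame = document.frame();
    if (!frame)
        return Exception { UnknownError, "Recognition is not in a valid frame"_s };

    auto optionalFrameIdentifier = document.frameID();
    auto frameIdentifier = optionalFrameIdentifier ? *optionalFrameIdentifier : FrameIdentifier { };
    m_connection->start(identifier(), m_lang, m_continuous, m_interimResults, m_maxAlternatives,
        ClientOrigin { document.topOrigin().data(), document.securityOrigin().data() }, frameIdentifier);
    m_state = State::Starting;
    return { };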

  • Modules/speech/SpeechRecognition.cpp:

(WebCore::SpeechRecognition::startRecognition):

  • Modules/speech/SpeechRecognitionConnection.h:
  • Modules/speech/SpeechRecognitionRequest.h:

(WebCore::SpeechRecognitionRequest::frameIdentifier const):

  • Modules/speech/SpeechRecognitionRequestInfo.h:

(WebCore::SpeechRecognitionRequestInfo::encode const):
(WebCore::SpeechRecognitionRequestInfo::decode):

  • page/DummySpeechRecognitionProvider.h:

Source/WebKit:

Make SpeechRecognitionPermissionManager ask UserMediaPermissionRequestManagerProxy for user permission to use the microphone, instead of going through the speech-recognition-specific UI client prompt.
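
In outline, the new UI-process permission path is the following (condensed from the SpeechRecognitionPermissionManager.cpp, WebPageProxy.cpp and UserMediaPermissionRequestManagerProxy.cpp hunks below; #if guards and error paths trimmed):

    // SpeechRecognitionPermissionManager::requestUserPermission() now routes through
    // the user media machinery instead of the speech-recognition-specific UI client call:
    m_page.requestUserMediaPermissionForSpeechRecognition(currentRequest->frameIdentifier(), requestingOrigin, topOrigin, WTFMove(decisionHandler));

    // WebPageProxy::requestUserMediaPermissionForSpeechRecognition() picks a capture device
    // and defers to the user media permission request manager:
    auto captureDevice = SpeechRecognitionCaptureSource::findCaptureDevice();
    userMediaPermissionRequestManager().checkUserMediaPermissionForSpeechRecognition(frameIdentifier, requestingOrigin, topOrigin, *captureDevice, WTFMove(completionHandler));

    // UserMediaPermissionRequestManagerProxy::checkUserMediaPermissionForSpeechRecognition()
    // builds a UserMediaPermissionRequestProxy that carries a decision completion handler;
    // already granted or denied requests are answered directly via getRequestAction(),
    // otherwise the regular user media prompt is shown:
    m_page.uiClient().decidePolicyForUserMediaPermissionRequest(m_page, *frame, WTFMove(apiRequestingOrigin), WTFMove(apiTopOrigin), request.get());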

  • UIProcess/SpeechRecognitionPermissionManager.cpp:

(WebKit::SpeechRecognitionPermissionManager::request):
(WebKit::SpeechRecognitionPermissionManager::startProcessingRequest):
(WebKit::SpeechRecognitionPermissionManager::requestUserPermission):

  • UIProcess/SpeechRecognitionPermissionManager.h:
  • UIProcess/SpeechRecognitionPermissionRequest.h:

(WebKit::SpeechRecognitionPermissionRequest::create):
(WebKit::SpeechRecognitionPermissionRequest::frameIdentifier const):
(WebKit::SpeechRecognitionPermissionRequest::SpeechRecognitionPermissionRequest):

  • UIProcess/SpeechRecognitionServer.cpp:

(WebKit::SpeechRecognitionServer::start):
(WebKit::SpeechRecognitionServer::requestPermissionForRequest):

  • UIProcess/SpeechRecognitionServer.h:
  • UIProcess/SpeechRecognitionServer.messages.in:
  • UIProcess/UserMediaPermissionRequestManagerProxy.cpp:

(WebKit::UserMediaPermissionRequestManagerProxy::denyRequest):
(WebKit::UserMediaPermissionRequestManagerProxy::grantRequest):
(WebKit::UserMediaPermissionRequestManagerProxy::checkUserMediaPermissionForSpeechRecognition):

  • UIProcess/UserMediaPermissionRequestManagerProxy.h:
  • UIProcess/UserMediaPermissionRequestProxy.cpp:

(WebKit::UserMediaPermissionRequestProxy::UserMediaPermissionRequestProxy):

  • UIProcess/UserMediaPermissionRequestProxy.h:

(WebKit::UserMediaPermissionRequestProxy::create):
(WebKit::UserMediaPermissionRequestProxy::decisionCompletionHandler):

  • UIProcess/WebPageProxy.cpp:

(WebKit::WebPageProxy::requestSpeechRecognitionPermission):
(WebKit::WebPageProxy::requestUserMediaPermissionForSpeechRecognition):

  • UIProcess/WebPageProxy.h:
  • UIProcess/WebProcessProxy.cpp:

(WebKit::WebProcessProxy::createSpeechRecognitionServer):

  • WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.cpp:

(WebKit::WebSpeechRecognitionConnection::start):

  • WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:

Tools:

  • TestWebKitAPI/Tests/WebKitCocoa/SpeechRecognition.mm:

(-[SpeechRecognitionUIDelegate _webView:requestMediaCaptureAuthorization:decisionHandler:]):

LayoutTests:

  • fast/speechrecognition/permission-error.html:
  • fast/speechrecognition/start-recognition-in-removed-iframe-expected.txt:
  • fast/speechrecognition/start-recognition-in-removed-iframe.html:
Location: trunk
Files: 28 edited

  • trunk/LayoutTests/ChangeLog

    (r272153 → r272165)
    + 2021-02-01  Sihui Liu  <sihui_liu@apple.com>
    +
    +         Use user media permission prompt for speech recognition
    +         https://bugs.webkit.org/show_bug.cgi?id=221082
    +         rdar://problem/73372499
    +
    +         Reviewed by Youenn Fablet.
    +
    +         * fast/speechrecognition/permission-error.html:
    +         * fast/speechrecognition/start-recognition-in-removed-iframe-expected.txt:
    +         * fast/speechrecognition/start-recognition-in-removed-iframe.html:
    +
      2021-02-01  Rini Patel  <rini_patel@apple.com>
  • trunk/LayoutTests/fast/speechrecognition/permission-error.html

    (r271381 → r272165)
      if (window.testRunner) {
          jsTestIsAsync = true;
    -     testRunner.setIsSpeechRecognitionPermissionGranted(false);
    +     testRunner.setUserMediaPermission(false);
      }
  • trunk/LayoutTests/fast/speechrecognition/start-recognition-in-removed-iframe-expected.txt

    (r269810 → r272165)

      PASS iframe.parentNode.removeChild(iframe) did not throw exception.
    - PASS iframe.contentWindow.startRecognition() did not throw exception.
    + PASS iframe.contentWindow.startRecognition() threw exception UnknownError: Recognition is not in a valid frame.
      PASS successfullyParsed is true
  • trunk/LayoutTests/fast/speechrecognition/start-recognition-in-removed-iframe.html

    (r270158 → r272165)
      {
          iframe = document.getElementsByTagName('iframe')[0];
    -     shouldNotThrow("iframe.contentWindow.startRecognition()");
    +     shouldThrow("iframe.contentWindow.startRecognition()");
      }
  • trunk/Source/WebCore/ChangeLog

    (r272164 → r272165)
    + 2021-02-01  Sihui Liu  <sihui_liu@apple.com>
    +
    +         Use user media permission prompt for speech recognition
    +         https://bugs.webkit.org/show_bug.cgi?id=221082
    +         rdar://problem/73372499
    +
    +         Reviewed by Youenn Fablet.
    +
    +         Add frame identifier to SpeechRecognitionRequest as it is needed for checking user media permission.
    +
    +         Updated existing tests for changed behavior.
    +
    +         * Modules/speech/SpeechRecognition.cpp:
    +         (WebCore::SpeechRecognition::startRecognition):
    +         * Modules/speech/SpeechRecognitionConnection.h:
    +         * Modules/speech/SpeechRecognitionRequest.h:
    +         (WebCore::SpeechRecognitionRequest::frameIdentifier const):
    +         * Modules/speech/SpeechRecognitionRequestInfo.h:
    +         (WebCore::SpeechRecognitionRequestInfo::encode const):
    +         (WebCore::SpeechRecognitionRequestInfo::decode):
    +         * page/DummySpeechRecognitionProvider.h:
    +
      2021-02-01  Wenson Hsieh  <wenson_hsieh@apple.com>
  • trunk/Source/WebCore/Modules/speech/SpeechRecognition.cpp

    (r271791 → r272165)

          auto& document = downcast<Document>(*scriptExecutionContext());
    -     m_connection->start(identifier(), m_lang, m_continuous, m_interimResults, m_maxAlternatives, ClientOrigin { document.topOrigin().data(), document.securityOrigin().data() });
    +     auto* frame = document.frame();
    +     if (!frame)
    +         return Exception { UnknownError, "Recognition is not in a valid frame"_s };
    +
    +     auto optionalFrameIdentifier = document.frameID();
    +     auto frameIdentifier = optionalFrameIdentifier ? *optionalFrameIdentifier : FrameIdentifier { };
    +     m_connection->start(identifier(), m_lang, m_continuous, m_interimResults, m_maxAlternatives, ClientOrigin { document.topOrigin().data(), document.securityOrigin().data() }, frameIdentifier);
          m_state = State::Starting;
          return { };
  • trunk/Source/WebCore/Modules/speech/SpeechRecognitionConnection.h

    (r271636 → r272165)
      #pragma once

    + #include "FrameIdentifier.h"
      #include "SpeechRecognitionConnectionClientIdentifier.h"

    …
          virtual void registerClient(SpeechRecognitionConnectionClient&) = 0;
          virtual void unregisterClient(SpeechRecognitionConnectionClient&) = 0;
    -     virtual void start(SpeechRecognitionConnectionClientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, ClientOrigin&&) = 0;
    +     virtual void start(SpeechRecognitionConnectionClientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, ClientOrigin&&, FrameIdentifier) = 0;
          virtual void stop(SpeechRecognitionConnectionClientIdentifier) = 0;
          virtual void abort(SpeechRecognitionConnectionClientIdentifier) = 0;
  • trunk/Source/WebCore/Modules/speech/SpeechRecognitionRequest.h

    (r269810 → r272165)
          uint64_t maxAlternatives() const { return m_info.maxAlternatives; }
          const ClientOrigin clientOrigin() const { return m_info.clientOrigin; }
    +     FrameIdentifier frameIdentifier() const { return m_info.frameIdentifier; }

      private:
  • trunk/Source/WebCore/Modules/speech/SpeechRecognitionRequestInfo.h

    (r269810 → r272165)

      #include "ClientOrigin.h"
    + #include "FrameIdentifier.h"
      #include "SpeechRecognitionConnectionClientIdentifier.h"

    …
          uint64_t maxAlternatives { 1 };
          ClientOrigin clientOrigin;
    +     FrameIdentifier frameIdentifier;

          template<class Encoder> void encode(Encoder&) const;
    …
      void SpeechRecognitionRequestInfo::encode(Encoder& encoder) const
      {
    -     encoder << clientIdentifier << lang << continuous << interimResults << maxAlternatives << clientOrigin;
    +     encoder << clientIdentifier << lang << continuous << interimResults << maxAlternatives << clientOrigin << frameIdentifier;
      }

    …
              return WTF::nullopt;

    +     Optional<FrameIdentifier> frameIdentifier;
    +     decoder >> frameIdentifier;
    +     if (!frameIdentifier)
    +         return WTF::nullopt;
    +
          return {{
              WTFMove(*clientIdentifier),
    …
              WTFMove(*interimResults),
              WTFMove(*maxAlternatives),
    -         WTFMove(*clientOrigin)
    +         WTFMove(*clientOrigin),
    +         WTFMove(*frameIdentifier)
          }};
      }
  • trunk/Source/WebCore/page/DummySpeechRecognitionProvider.h

    (r271636 → r272165)
              void registerClient(SpeechRecognitionConnectionClient&) final { }
              void unregisterClient(SpeechRecognitionConnectionClient&) final { }
    -         void start(SpeechRecognitionConnectionClientIdentifier, const String&, bool, bool, uint64_t, ClientOrigin&&) final { }
    +         void start(SpeechRecognitionConnectionClientIdentifier, const String&, bool, bool, uint64_t, ClientOrigin&&, FrameIdentifier) final { }
              void stop(SpeechRecognitionConnectionClientIdentifier) final { }
              void abort(SpeechRecognitionConnectionClientIdentifier) final { }
  • trunk/Source/WebKit/ChangeLog

    (r272164 → r272165)
    + 2021-02-01  Sihui Liu  <sihui_liu@apple.com>
    +
    +         Use user media permission prompt for speech recognition
    +         https://bugs.webkit.org/show_bug.cgi?id=221082
    +         rdar://problem/73372499
    +
    +         Reviewed by Youenn Fablet.
    +
    +         Make SpeechRecognitionPermissionManager ask UserMediaPermissionRequestManagerProxy for user permission on
    +         microphone.
    +
    +         * UIProcess/SpeechRecognitionPermissionManager.cpp:
    +         (WebKit::SpeechRecognitionPermissionManager::request):
    +         (WebKit::SpeechRecognitionPermissionManager::startProcessingRequest):
    +         (WebKit::SpeechRecognitionPermissionManager::requestUserPermission):
    +         * UIProcess/SpeechRecognitionPermissionManager.h:
    +         * UIProcess/SpeechRecognitionPermissionRequest.h:
    +         (WebKit::SpeechRecognitionPermissionRequest::create):
    +         (WebKit::SpeechRecognitionPermissionRequest::frameIdentifier const):
    +         (WebKit::SpeechRecognitionPermissionRequest::SpeechRecognitionPermissionRequest):
    +         * UIProcess/SpeechRecognitionServer.cpp:
    +         (WebKit::SpeechRecognitionServer::start):
    +         (WebKit::SpeechRecognitionServer::requestPermissionForRequest):
    +         * UIProcess/SpeechRecognitionServer.h:
    +         * UIProcess/SpeechRecognitionServer.messages.in:
    +         * UIProcess/UserMediaPermissionRequestManagerProxy.cpp:
    +         (WebKit::UserMediaPermissionRequestManagerProxy::denyRequest):
    +         (WebKit::UserMediaPermissionRequestManagerProxy::grantRequest):
    +         (WebKit::UserMediaPermissionRequestManagerProxy::checkUserMediaPermissionForSpeechRecognition):
    +         * UIProcess/UserMediaPermissionRequestManagerProxy.h:
    +         * UIProcess/UserMediaPermissionRequestProxy.cpp:
    +         (WebKit::UserMediaPermissionRequestProxy::UserMediaPermissionRequestProxy):
    +         * UIProcess/UserMediaPermissionRequestProxy.h:
    +         (WebKit::UserMediaPermissionRequestProxy::create):
    +         (WebKit::UserMediaPermissionRequestProxy::decisionCompletionHandler):
    +         * UIProcess/WebPageProxy.cpp:
    +         (WebKit::WebPageProxy::requestSpeechRecognitionPermission):
    +         (WebKit::WebPageProxy::requestUserMediaPermissionForSpeechRecognition):
    +         * UIProcess/WebPageProxy.h:
    +         * UIProcess/WebProcessProxy.cpp:
    +         (WebKit::WebProcessProxy::createSpeechRecognitionServer):
    +         * WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.cpp:
    +         (WebKit::WebSpeechRecognitionConnection::start):
    +         * WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:
    +
      2021-02-01  Wenson Hsieh  <wenson_hsieh@apple.com>
  • trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.cpp

    (r271415 → r272165)
      }

    - void SpeechRecognitionPermissionManager::request(const String& lang, const WebCore::ClientOrigin& origin, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&& completiontHandler)
    - {
    -     m_requests.append(SpeechRecognitionPermissionRequest::create(lang, origin, WTFMove(completiontHandler)));
    + void SpeechRecognitionPermissionManager::request(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completiontHandler)
    + {
    +     m_requests.append(SpeechRecognitionPermissionRequest::create(lang, origin, frameIdentifier, WTFMove(completiontHandler)));
          if (m_requests.size() == 1)
              startNextRequest();
    …
          }

    -     // We currently don't allow third-party access.
    -     if (m_userPermissionCheck == CheckResult::Unknown) {
    -         auto clientOrigin = m_requests.first()->origin();
    -         auto requestingOrigin = clientOrigin.clientOrigin.securityOrigin();
    -         auto topOrigin = clientOrigin.topOrigin.securityOrigin();
    -         if (!requestingOrigin->isSameOriginAs(topOrigin))
    -             m_userPermissionCheck = CheckResult::Denied;
    -     }
    -
          if (m_userPermissionCheck == CheckResult::Denied) {
              completeCurrentRequest(WebCore::SpeechRecognitionError { WebCore::SpeechRecognitionErrorType::NotAllowed, "User permission check has failed"_s });
    …
          auto& currentRequest = m_requests.first();
          auto clientOrigin = currentRequest->origin();
    +     auto requestingOrigin = clientOrigin.clientOrigin.securityOrigin();
          auto topOrigin = clientOrigin.topOrigin.securityOrigin();
          auto decisionHandler = [this, weakThis = makeWeakPtr(*this)](bool granted) {
    …
              continueProcessingRequest();
          };
    -     m_page.uiClient().decidePolicyForSpeechRecognitionPermissionRequest(m_page, API::SecurityOrigin::create(topOrigin.get()).get(), WTFMove(decisionHandler));
    +     m_page.requestUserMediaPermissionForSpeechRecognition(currentRequest->frameIdentifier(), requestingOrigin, topOrigin, WTFMove(decisionHandler));
      }

  • trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionManager.h

    (r271381 → r272165)
          explicit SpeechRecognitionPermissionManager(WebPageProxy&);
          ~SpeechRecognitionPermissionManager();
    -     void request(const String& lang, const WebCore::ClientOrigin&, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&&);
    +     void request(const String& lang, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&);

          void decideByDefaultAction(const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);
  • trunk/Source/WebKit/UIProcess/SpeechRecognitionPermissionRequest.h

    (r271381 → r272165)
      #include "APIObject.h"
      #include <WebCore/ClientOrigin.h>
    + #include <WebCore/FrameIdentifier.h>
      #include <WebCore/SpeechRecognitionError.h>
      #include <wtf/CompletionHandler.h>
    …
      namespace WebKit {

    + using SpeechRecognitionPermissionRequestCallback = CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>;
    +
      class SpeechRecognitionPermissionRequest : public RefCounted<SpeechRecognitionPermissionRequest> {
      public:
    -     static Ref<SpeechRecognitionPermissionRequest> create(const String& lang, const WebCore::ClientOrigin& origin, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&& completionHandler)
    +     static Ref<SpeechRecognitionPermissionRequest> create(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completionHandler)
          {
    -         return adoptRef(*new SpeechRecognitionPermissionRequest(lang, origin, WTFMove(completionHandler)));
    +         return adoptRef(*new SpeechRecognitionPermissionRequest(lang, origin, frameIdentifier, WTFMove(completionHandler)));
          }

    …
          const WebCore::ClientOrigin& origin() const { return m_origin; }
          const String& lang() const { return m_lang; }
    +     WebCore::FrameIdentifier frameIdentifier() const { return m_frameIdentifier; }

      private:
    -     SpeechRecognitionPermissionRequest(const String& lang, const WebCore::ClientOrigin& origin, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&& completionHandler)
    +     SpeechRecognitionPermissionRequest(const String& lang, const WebCore::ClientOrigin& origin, WebCore::FrameIdentifier frameIdentifier, SpeechRecognitionPermissionRequestCallback&& completionHandler)
              : m_lang(lang)
              , m_origin(origin)
    +         , m_frameIdentifier(frameIdentifier)
              , m_completionHandler(WTFMove(completionHandler))
          { }
    …
          String m_lang;
          WebCore::ClientOrigin m_origin;
    -     CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)> m_completionHandler;
    +     WebCore::FrameIdentifier m_frameIdentifier;
    +     SpeechRecognitionPermissionRequestCallback m_completionHandler;
      };

  • trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.cpp

    (r271935 → r272165)
      #include "SpeechRecognitionServer.h"

    - #include "SpeechRecognitionPermissionRequest.h"
      #include "UserMediaProcessManager.h"
      #include "WebProcessProxy.h"
    …
      }

    - void SpeechRecognitionServer::start(WebCore::SpeechRecognitionConnectionClientIdentifier clientIdentifier, String&& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&& origin)
    + void SpeechRecognitionServer::start(WebCore::SpeechRecognitionConnectionClientIdentifier clientIdentifier, String&& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&& origin, WebCore::FrameIdentifier frameIdentifier)
      {
          MESSAGE_CHECK(clientIdentifier);
          ASSERT(!m_requests.contains(clientIdentifier));
    -     auto requestInfo = WebCore::SpeechRecognitionRequestInfo { clientIdentifier, WTFMove(lang), continuous, interimResults, maxAlternatives, WTFMove(origin) };
    +     auto requestInfo = WebCore::SpeechRecognitionRequestInfo { clientIdentifier, WTFMove(lang), continuous, interimResults, maxAlternatives, WTFMove(origin), frameIdentifier };
          auto& newRequest = m_requests.add(clientIdentifier, makeUnique<WebCore::SpeechRecognitionRequest>(WTFMove(requestInfo))).iterator->value;

    …
      void SpeechRecognitionServer::requestPermissionForRequest(WebCore::SpeechRecognitionRequest& request)
      {
    -     m_permissionChecker(request.lang(), request.clientOrigin(), [this, weakThis = makeWeakPtr(this), weakRequest = makeWeakPtr(request)](auto error) mutable {
    +     m_permissionChecker(request.lang(), request.clientOrigin(), request.frameIdentifier(), [this, weakThis = makeWeakPtr(this), weakRequest = makeWeakPtr(request)](auto error) mutable {
              if (!weakThis)
                  return;
  • trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.h

    (r271935 → r272165)
      #include "MessageReceiver.h"
      #include "MessageSender.h"
    + #include "SpeechRecognitionPermissionRequest.h"
      #include <WebCore/PageIdentifier.h>
      #include <WebCore/SpeechRecognitionError.h>
    …

      using SpeechRecognitionServerIdentifier = WebCore::PageIdentifier;
    - using SpeechRecognitionPermissionChecker = Function<void(const String&, const WebCore::ClientOrigin&, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&&)>;
    + using SpeechRecognitionPermissionChecker = Function<void(const String&, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&)>;
      using SpeechRecognitionCheckIfMockSpeechRecognitionEnabled = Function<bool()>;

    …
      #endif

    -     void start(WebCore::SpeechRecognitionConnectionClientIdentifier, String&& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&&);
    +     void start(WebCore::SpeechRecognitionConnectionClientIdentifier, String&& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&&, WebCore::FrameIdentifier);
          void stop(WebCore::SpeechRecognitionConnectionClientIdentifier);
          void abort(WebCore::SpeechRecognitionConnectionClientIdentifier);
  • trunk/Source/WebKit/UIProcess/SpeechRecognitionServer.messages.in

    (r269810 → r272165)

      messages -> SpeechRecognitionServer NotRefCounted {
    -     Start(WebCore::SpeechRecognitionConnectionClientIdentifier identifier, String lang, bool continuous, bool interimResults, uint64_t maxAlternatives, struct WebCore::ClientOrigin origin)
    +     Start(WebCore::SpeechRecognitionConnectionClientIdentifier identifier, String lang, bool continuous, bool interimResults, uint64_t maxAlternatives, struct WebCore::ClientOrigin origin, WebCore::FrameIdentifier frameIdentifier)
          Stop(WebCore::SpeechRecognitionConnectionClientIdentifier identifier)
          Abort(WebCore::SpeechRecognitionConnectionClientIdentifier identifier)
  • trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestManagerProxy.cpp

    (r271513 → r272165)
          m_deniedRequests.append(DeniedRequest { request.mainFrameID(), request.userMediaDocumentSecurityOrigin(), request.topLevelDocumentSecurityOrigin(), request.requiresAudioCapture(), request.requiresVideoCapture(), request.requiresDisplayCapture() });

    +     if (auto callback = request.decisionCompletionHandler()) {
    +         callback(false);
    +         return;
    +     }
    +
      #if ENABLE(MEDIA_STREAM)
          if (m_pregrantedRequests.isEmpty() && request.userRequest().audioConstraints.isValid)
    …
      #if ENABLE(MEDIA_STREAM)
          ALWAYS_LOG(LOGIDENTIFIER, request.userMediaID(), ", video: ", request.videoDevice().label(), ", audio: ", request.audioDevice().label());
    +
    +     if (auto callback = request.decisionCompletionHandler()) {
    +         m_grantedRequests.append(makeRef(request));
    +         callback(true);
    +         return;
    +     }

          auto& userMediaDocumentSecurityOrigin = request.userMediaDocumentSecurityOrigin();
    …
          m_page.uiClient().decidePolicyForUserMediaPermissionRequest(m_page, *webFrame, WTFMove(userMediaOrigin), WTFMove(topLevelOrigin), *m_currentUserMediaRequest);
      }
    +
    + void UserMediaPermissionRequestManagerProxy::checkUserMediaPermissionForSpeechRecognition(WebCore::FrameIdentifier frameIdentifier, const WebCore::SecurityOrigin& requestingOrigin, const WebCore::SecurityOrigin& topOrigin, const WebCore::CaptureDevice& device, CompletionHandler<void(bool)>&& completionHandler)
    + {
    +     auto* frame = m_page.process().webFrame(frameIdentifier);
    +     if (!frame || !SecurityOrigin::createFromString(m_page.pageLoadState().activeURL())->isSameSchemeHostPort(topOrigin)) {
    +         completionHandler(false);
    +         return;
    +     }
    +
    +     auto request = UserMediaPermissionRequestProxy::create(*this, 0, frameIdentifier, frameIdentifier, requestingOrigin.isolatedCopy(), topOrigin.isolatedCopy(), Vector<WebCore::CaptureDevice> { device }, { }, { }, WTFMove(completionHandler));
    +
    +     auto action = getRequestAction(request.get());
    +     if (action == RequestAction::Deny) {
    +         completionHandler(false);
    +         return;
    +     }
    +
    +     if (action == RequestAction::Grant) {
    +         completionHandler(true);
    +         return;
    +     }
    +
    +     auto apiRequestingOrigin = API::SecurityOrigin::create(requestingOrigin);
    +     auto apiTopOrigin = API::SecurityOrigin::create(topOrigin);
    +     m_page.uiClient().decidePolicyForUserMediaPermissionRequest(m_page, *frame, WTFMove(apiRequestingOrigin), WTFMove(apiTopOrigin), request.get());
    + }
    +

      #if !PLATFORM(COCOA)
  • trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestManagerProxy.h

    (r271513 → r272165)
          bool hasPendingCapture() const { return m_hasPendingCapture; }

    +     void checkUserMediaPermissionForSpeechRecognition(WebCore::FrameIdentifier, const WebCore::SecurityOrigin&, const WebCore::SecurityOrigin&, const WebCore::CaptureDevice&, CompletionHandler<void(bool)>&&);
    +
      private:
      #if !RELEASE_LOG_DISABLED
  • trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestProxy.cpp

    (r269918 → r272165)
      using namespace WebCore;

    - UserMediaPermissionRequestProxy::UserMediaPermissionRequestProxy(UserMediaPermissionRequestManagerProxy& manager, uint64_t userMediaID, FrameIdentifier mainFrameID, FrameIdentifier frameID, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&& request)
    + UserMediaPermissionRequestProxy::UserMediaPermissionRequestProxy(UserMediaPermissionRequestManagerProxy& manager, uint64_t userMediaID, FrameIdentifier mainFrameID, FrameIdentifier frameID, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&& request, CompletionHandler<void(bool)>&& decisionCompletionHandler)
          : m_manager(&manager)
          , m_userMediaID(userMediaID)
    …
          , m_eligibleAudioDevices(WTFMove(audioDevices))
          , m_request(WTFMove(request))
    +     , m_decisionCompletionHandler(WTFMove(decisionCompletionHandler))
      {
      }
    …
      {
          m_manager = nullptr;
    +     if (m_decisionCompletionHandler)
    +         m_decisionCompletionHandler(false);
      }

  • trunk/Source/WebKit/UIProcess/UserMediaPermissionRequestProxy.h

    (r269918 → r272165)
      #include <WebCore/FrameIdentifier.h>
      #include <WebCore/MediaStreamRequest.h>
    + #include <wtf/CompletionHandler.h>
      #include <wtf/Vector.h>
      #include <wtf/text/WTFString.h>
    …
      class UserMediaPermissionRequestProxy : public API::ObjectImpl<API::Object::Type::UserMediaPermissionRequest> {
      public:
    -     static Ref<UserMediaPermissionRequestProxy> create(UserMediaPermissionRequestManagerProxy& manager, uint64_t userMediaID, WebCore::FrameIdentifier mainFrameID, WebCore::FrameIdentifier frameID, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&& request)
    +     static Ref<UserMediaPermissionRequestProxy> create(UserMediaPermissionRequestManagerProxy& manager, uint64_t userMediaID, WebCore::FrameIdentifier mainFrameID, WebCore::FrameIdentifier frameID, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&& request, CompletionHandler<void(bool)>&& decisionCompletionHandler = { })
          {
    -         return adoptRef(*new UserMediaPermissionRequestProxy(manager, userMediaID, mainFrameID, frameID, WTFMove(userMediaDocumentOrigin), WTFMove(topLevelDocumentOrigin), WTFMove(audioDevices), WTFMove(videoDevices), WTFMove(request)));
    +         return adoptRef(*new UserMediaPermissionRequestProxy(manager, userMediaID, mainFrameID, frameID, WTFMove(userMediaDocumentOrigin), WTFMove(topLevelDocumentOrigin), WTFMove(audioDevices), WTFMove(videoDevices), WTFMove(request), WTFMove(decisionCompletionHandler)));
          }

    …
          void doDefaultAction();

    +     CompletionHandler<void(bool)> decisionCompletionHandler() { return std::exchange(m_decisionCompletionHandler, { }); }
    +
      private:
    -     UserMediaPermissionRequestProxy(UserMediaPermissionRequestManagerProxy&, uint64_t userMediaID, WebCore::FrameIdentifier mainFrameID, WebCore::FrameIdentifier, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&&);
    +     UserMediaPermissionRequestProxy(UserMediaPermissionRequestManagerProxy&, uint64_t userMediaID, WebCore::FrameIdentifier mainFrameID, WebCore::FrameIdentifier, Ref<WebCore::SecurityOrigin>&& userMediaDocumentOrigin, Ref<WebCore::SecurityOrigin>&& topLevelDocumentOrigin, Vector<WebCore::CaptureDevice>&& audioDevices, Vector<WebCore::CaptureDevice>&& videoDevices, WebCore::MediaStreamRequest&&, CompletionHandler<void(bool)>&&);

          UserMediaPermissionRequestManagerProxy* m_manager;
    …
          bool m_hasPersistentAccess { false };
          String m_deviceIdentifierHashSalt;
    +     CompletionHandler<void(bool)> m_decisionCompletionHandler;
      };
  • trunk/Source/WebKit/UIProcess/WebPageProxy.cpp

    (r272069 → r272165)
      }

    - void WebPageProxy::requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin& clientOrigin, CompletionHandler<void(Optional<SpeechRecognitionError>&&)>&& completionHandler)
    + void WebPageProxy::requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin& clientOrigin, WebCore::FrameIdentifier frameIdentifier, CompletionHandler<void(Optional<SpeechRecognitionError>&&)>&& completionHandler)
      {
          if (!m_speechRecognitionPermissionManager)
              m_speechRecognitionPermissionManager = makeUnique<SpeechRecognitionPermissionManager>(*this);

    -     m_speechRecognitionPermissionManager->request(lang, clientOrigin, WTFMove(completionHandler));
    +     m_speechRecognitionPermissionManager->request(lang, clientOrigin, frameIdentifier, WTFMove(completionHandler));
      }

    …

          m_speechRecognitionPermissionManager->decideByDefaultAction(origin, WTFMove(completionHandler));
    + }
    +
    + void WebPageProxy::requestUserMediaPermissionForSpeechRecognition(FrameIdentifier frameIdentifier, const WebCore::SecurityOrigin& requestingOrigin, const WebCore::SecurityOrigin& topOrigin, CompletionHandler<void(bool)>&& completionHandler)
    + {
    + #if ENABLE(MEDIA_STREAM)
    +     auto captureDevice = SpeechRecognitionCaptureSource::findCaptureDevice();
    +     if (!captureDevice)
    +         completionHandler(false);
    +
    +     userMediaPermissionRequestManager().checkUserMediaPermissionForSpeechRecognition(frameIdentifier, requestingOrigin, topOrigin, *captureDevice, WTFMove(completionHandler));
    + #else
    +     completionHandler(false);
    + #endif
      }

  • trunk/Source/WebKit/UIProcess/WebPageProxy.h

    (r272096 → r272165)
      #include "ShareableBitmap.h"
      #include "ShareableResource.h"
    + #include "SpeechRecognitionPermissionRequest.h"
      #include "SuspendedPageProxy.h"
      #include "SyntheticEditingCommandType.h"
    …
          size_t suspendMediaPlaybackCounter() { return m_suspendMediaPlaybackCounter; }

    -     void requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin&, CompletionHandler<void(Optional<WebCore::SpeechRecognitionError>&&)>&&);
    +     void requestSpeechRecognitionPermission(const String& lang, const WebCore::ClientOrigin&, WebCore::FrameIdentifier, SpeechRecognitionPermissionRequestCallback&&);
          void requestSpeechRecognitionPermissionByDefaultAction(const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);
    +     void requestUserMediaPermissionForSpeechRecognition(WebCore::FrameIdentifier, const WebCore::SecurityOrigin&, const WebCore::SecurityOrigin&, CompletionHandler<void(bool)>&&);

          void syncIfMockDevicesEnabledChanged();
  • trunk/Source/WebKit/UIProcess/WebProcessProxy.cpp

    (r271935 → r272165)
          ASSERT(!m_speechRecognitionServerMap.contains(identifier));
          auto& speechRecognitionServer = m_speechRecognitionServerMap.add(identifier, nullptr).iterator->value;
    -     auto permissionChecker = [weakPage = makeWeakPtr(targetPage)](auto& lang, auto& origin, auto&& completionHandler) mutable {
    +     auto permissionChecker = [weakPage = makeWeakPtr(targetPage)](auto& lang, auto& origin, auto frameIdentifier, auto&& completionHandler) mutable {
              if (!weakPage) {
                  completionHandler(WebCore::SpeechRecognitionError { SpeechRecognitionErrorType::NotAllowed, "Page no longer exists"_s });
    …
              }

    -         weakPage->requestSpeechRecognitionPermission(lang, origin, WTFMove(completionHandler));
    +         weakPage->requestSpeechRecognitionPermission(lang, origin, frameIdentifier, WTFMove(completionHandler));
          };
          auto checkIfMockCaptureDevicesEnabled = [weakPage = makeWeakPtr(targetPage)]() {
  • trunk/Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.cpp

    (r271636 → r272165)

      #include "SpeechRecognitionServerMessages.h"
    + #include "WebFrame.h"
      #include "WebProcess.h"
      #include "WebProcessProxyMessages.h"
    …
      }

    - void WebSpeechRecognitionConnection::start(WebCore::SpeechRecognitionConnectionClientIdentifier clientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&& clientOrigin)
    + void WebSpeechRecognitionConnection::start(WebCore::SpeechRecognitionConnectionClientIdentifier clientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&& clientOrigin, WebCore::FrameIdentifier frameIdentifier)
      {
    -     send(Messages::SpeechRecognitionServer::Start(clientIdentifier, lang, continuous, interimResults, maxAlternatives, WTFMove(clientOrigin)));
    +     send(Messages::SpeechRecognitionServer::Start(clientIdentifier, lang, continuous, interimResults, maxAlternatives, WTFMove(clientOrigin), frameIdentifier));
      }

  • trunk/Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h

    (r271636 → r272165)
          static Ref<WebSpeechRecognitionConnection> create(SpeechRecognitionConnectionIdentifier);

    -     void start(WebCore::SpeechRecognitionConnectionClientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&&) final;
    +     void start(WebCore::SpeechRecognitionConnectionClientIdentifier, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives, WebCore::ClientOrigin&&, WebCore::FrameIdentifier) final;
          void stop(WebCore::SpeechRecognitionConnectionClientIdentifier) final;
          void abort(WebCore::SpeechRecognitionConnectionClientIdentifier) final;
  • trunk/Tools/ChangeLog

    (r272144 → r272165)
    + 2021-02-01  Sihui Liu  <sihui_liu@apple.com>
    +
    +         Use user media permission prompt for speech recognition
    +         https://bugs.webkit.org/show_bug.cgi?id=221082
    +         rdar://problem/73372499
    +
    +         Reviewed by Youenn Fablet.
    +
    +         * TestWebKitAPI/Tests/WebKitCocoa/SpeechRecognition.mm:
    +         (-[SpeechRecognitionUIDelegate _webView:requestMediaCaptureAuthorization:decisionHandler:]):
    +
      2021-02-01  Aakash Jain  <aakash_jain@apple.com>
  • trunk/Tools/TestWebKitAPI/Tests/WebKitCocoa/SpeechRecognition.mm

    (r271935 → r272165)
      - (void)_webView:(WKWebView *)webView requestMediaCaptureAuthorization: (_WKCaptureDevices)devices decisionHandler:(void (^)(BOOL))decisionHandler
      {
    -     decisionHandler(YES);
    +     permissionRequested = true;
    +     decisionHandler(shouldGrantPermissionRequest);
      }