Changeset 213645 in webkit

Timestamp: Mar 9, 2017 9:40:10 AM
Author: fpizlo@apple.com
Message:

WTF should make it super easy to do ARM concurrency tricks
https://bugs.webkit.org/show_bug.cgi?id=169300

Reviewed by Mark Lam.

Source/JavaScriptCore:

This changes a bunch of GC hot paths to use new concurrency APIs that lead to optimal
code on both x86 (fully leverage TSO, transactions become CAS loops) and ARM (use
dependency chains for fencing, transactions become LL/SC loops). While inspecting the
machine code, I found other opportunities for improvement, like inlining the "am I
marked" part of the marking functions.

  • heap/Heap.cpp:

(JSC::Heap::setGCDidJIT):

  • heap/HeapInlines.h:

(JSC::Heap::testAndSetMarked):

  • heap/LargeAllocation.h:

(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedConcurrently):
(JSC::LargeAllocation::aboutToMark):
(JSC::LargeAllocation::testAndSetMarked):

  • heap/MarkedBlock.h:

(JSC::MarkedBlock::areMarksStaleWithDependency):
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarkedConcurrently):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::testAndSetMarked):

  • heap/SlotVisitor.cpp:

(JSC::SlotVisitor::appendSlow):
(JSC::SlotVisitor::appendHiddenSlow):
(JSC::SlotVisitor::appendHiddenSlowImpl):
(JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
(JSC::SlotVisitor::appendUnbarriered): Deleted.
(JSC::SlotVisitor::appendHidden): Deleted.

  • heap/SlotVisitor.h:
  • heap/SlotVisitorInlines.h:

(JSC::SlotVisitor::appendUnbarriered):
(JSC::SlotVisitor::appendHidden):
(JSC::SlotVisitor::append):
(JSC::SlotVisitor::appendValues):
(JSC::SlotVisitor::appendValuesHidden):

  • runtime/CustomGetterSetter.cpp:
  • runtime/JSObject.cpp:

(JSC::JSObject::visitButterflyImpl):

  • runtime/JSObject.h:

Source/WTF:

This adds Atomic<>::loadLink and Atomic<>::storeCond, available only when HAVE(LL_SC).

It abstracts loadLink/storeCond behind prepare/attempt. You can write prepare/attempt
loops whenever your loop fits into the least common denominator of LL/SC and CAS.
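
The pattern can be sketched in standalone form. This is an illustration of the prepare/attempt shape on a CAS machine (the non-LL/SC fallback), not WTF's code: `MiniAtomic`, `prepare`, `attempt`, and `atomicSetBits` are hypothetical names here, with a relaxed load standing in for the load-link and a weak CAS standing in for the store-conditional.

```cpp
#include <atomic>

// Hypothetical sketch of the prepare/attempt least common denominator.
// On an LL/SC machine prepare() would be a load-link and attempt() a
// store-conditional; on a CAS machine they are a plain load and a weak CAS.
template<typename T>
struct MiniAtomic {
    std::atomic<T> value;

    T prepare() { return value.load(std::memory_order_relaxed); }

    bool attempt(T oldValue, T newValue)
    {
        // compare_exchange_weak may fail spuriously, exactly like an SC,
        // which is why prepare/attempt callers must loop.
        return value.compare_exchange_weak(oldValue, newValue);
    }
};

// A prepare/attempt loop: retry until the attempt (CAS here, SC on ARM) lands.
template<typename T>
void atomicSetBits(MiniAtomic<T>& word, T bits)
{
    for (;;) {
        T oldValue = word.prepare();
        if (word.attempt(oldValue, static_cast<T>(oldValue | bits)))
            return;
    }
}
```

Because both LL/SC and CAS can fail spuriously, any loop written against this pair works unchanged on either kind of machine.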

This modifies Atomic<>::transaction to use prepare/attempt. So, if you write your loop
using Atomic<>::transaction, then you get LL/SC for free.

Depending on the kind of transaction you are doing, you may not want to perform an LL
until you have a chance to just load the current value. Atomic<>::transaction() assumes
that you do not care to have any ordering guarantees in that case. If you think that
the transaction has a good chance of aborting this way, you want
Atomic<>::transaction() to first do a plain load. But if you don't think that such an
abort is likely, then you want to go straight to the LL. The API supports this concept
via TransactionAbortLikelihood.
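
The shape of such a transaction can be sketched as a free function over std::atomic. This is a hedged illustration, not WTF's Atomic<>::transaction(): `transaction` here is a standalone stand-in, and `prepareIsSlow` is a hypothetical constant modeling the HAVE(LL_SC) case where the load-link is expensive.

```cpp
#include <atomic>

// Sketch: the functor mutates a copy of the value and returns false to abort.
enum class TransactionAbortLikelihood { Unlikely, Likely };

// Stand-in for "prepare is not fast", i.e. an LL/SC machine.
static const bool prepareIsSlow = true;

template<typename T, typename Func>
bool transaction(std::atomic<T>& atom, const Func& func,
    TransactionAbortLikelihood likelihood = TransactionAbortLikelihood::Likely)
{
    // When an abort is likely, peek with a cheap plain load first so a doomed
    // transaction bails before paying for the load-link.
    if (prepareIsSlow && likelihood == TransactionAbortLikelihood::Likely) {
        T oldValue = atom.load(std::memory_order_relaxed);
        if (!func(oldValue))
            return false;
    }
    for (;;) {
        T oldValue = atom.load(std::memory_order_relaxed); // prepare: LL on ARM
        T newValue = oldValue;
        if (!func(newValue))
            return false;
        if (atom.compare_exchange_weak(oldValue, newValue)) // attempt: SC on ARM
            return true;
    }
}
```

A test-and-set mark bit is the motivating use: the functor returns false once the bit is already set, so a second caller aborts without ever entering the LL/SC loop.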

Additionally, this redoes the depend/consume API to be dead simple. Dependency is
unsigned. You get a dependency on a loaded value by just saying
dependency(loadedValue). You consume the dependency by using it as a bonus index to
some pointer dereference. This is made easy with the consume<T*>(ptr, dependency)
helper. In those cases where you want to pass around both a computed value and a
dependency, there's DependencyWith<T>. But you won't need it in most cases. The loaded
value or any value computed from the loaded value is a fine input to dependency()!
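
A portable sketch of the API's shape, for illustration only: WTF's real dependency() uses an inline-asm eor on ARM64 so the compiler cannot fold the dependency away, while on x86 it is simply zero because TSO already orders the loads. Here the value XORs with itself, so the result is always zero, but the consuming load's address formally depends on the earlier load's value.

```cpp
#include <cstdint>

// Sketch of the Dependency/consume shape (portable stand-in, not WTF's code).
typedef unsigned Dependency;

inline Dependency dependency(uint64_t loadedValue)
{
    // Always zero; the real ARM64 version keeps the data dependency alive
    // with "eor %w0, %w1, %w1" in inline asm.
    return static_cast<Dependency>(loadedValue ^ loadedValue);
}

template<typename T>
inline T* consume(T* pointer, Dependency dep)
{
    // Adding the always-zero dependency as a bonus index chains this
    // dereference on the earlier load.
    return reinterpret_cast<T*>(reinterpret_cast<uintptr_t>(pointer) + dep);
}
```

Usage: load a version word, say `dep = dependency(version)`, then read through `consume(ptr, dep)`; the second load is then ordered after the first on ARM without any fence.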

This change updates a bunch of hot paths to use the new APIs. Using transaction() gives
us optimal LL/SC loops for object marking and lock acquisition.

This change also updates a bunch of hot paths to use dependency()/consume().

This is a significant Octane/splay speed-up on ARM.

  • wtf/Atomics.h:

(WTF::hasFence):
(WTF::Atomic::prepare):
(WTF::Atomic::attempt):
(WTF::Atomic::transaction):
(WTF::Atomic::transactionRelaxed):
(WTF::nullDependency):
(WTF::dependency):
(WTF::DependencyWith::DependencyWith):
(WTF::dependencyWith):
(WTF::consume):
(WTF::Atomic::tryTransactionRelaxed): Deleted.
(WTF::Atomic::tryTransaction): Deleted.
(WTF::zeroWithConsumeDependency): Deleted.
(WTF::consumeLoad): Deleted.

  • wtf/Bitmap.h:

(WTF::WordType>::get):
(WTF::WordType>::concurrentTestAndSet):
(WTF::WordType>::concurrentTestAndClear):

  • wtf/LockAlgorithm.h:

(WTF::LockAlgorithm::lockFast):
(WTF::LockAlgorithm::unlockFast):
(WTF::LockAlgorithm::unlockSlow):

  • wtf/Platform.h:

Tools:

This vastly simplifies the consume API. The new API is thoroughly tested by being used
in the GC's guts. I think that unit tests are a pain to maintain, so we shouldn't have
them unless we are legitimately worried about coverage. We're not in this case.

  • TestWebKitAPI/CMakeLists.txt:
  • TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj:
  • TestWebKitAPI/Tests/WTF/Consume.cpp: Removed.
Location:
trunk
Files:
1 deleted
19 edited

  • trunk/Source/JavaScriptCore/ChangeLog (r213631 → r213645)
  • trunk/Source/JavaScriptCore/heap/Heap.cpp (r213175 → r213645)

     {
         m_worldState.transaction(
    -        [&] (unsigned& state) {
    +        [&] (unsigned& state) -> bool {
                 RELEASE_ASSERT(state & stoppedBit);
                 state |= gcDidJITBit;
    +            return true;
             });
     }
  • trunk/Source/JavaScriptCore/heap/HeapInlines.h (r212778 → r213645)

             return cell->largeAllocation().testAndSetMarked();
         MarkedBlock& block = cell->markedBlock();
    -    block.aboutToMark(markingVersion);
    -    return block.testAndSetMarked(cell);
    +    Dependency dependency = block.aboutToMark(markingVersion);
    +    return block.testAndSetMarked(cell, dependency);
     }
  • trunk/Source/JavaScriptCore/heap/LargeAllocation.h (r210844 → r213645)

         bool isNewlyAllocated() const { return m_isNewlyAllocated; }
         ALWAYS_INLINE bool isMarked() { return m_isMarked.load(std::memory_order_relaxed); }
    -    ALWAYS_INLINE bool isMarked(HeapCell*) { return m_isMarked.load(std::memory_order_relaxed); }
    -    ALWAYS_INLINE bool isMarkedConcurrently(HeapVersion, HeapCell*) { return m_isMarked.load(std::memory_order_relaxed); }
    +    ALWAYS_INLINE bool isMarked(HeapCell*) { return isMarked(); }
    +    ALWAYS_INLINE bool isMarked(HeapCell*, Dependency) { return isMarked(); }
    +    ALWAYS_INLINE bool isMarkedConcurrently(HeapVersion, HeapCell*) { return isMarked(); }
         bool isLive() { return isMarked() || isNewlyAllocated(); }
    ...
         const AllocatorAttributes& attributes() const { return m_attributes; }

    -    void aboutToMark(HeapVersion) { }
    +    Dependency aboutToMark(HeapVersion) { return nullDependency(); }

         ALWAYS_INLINE bool testAndSetMarked()
    ...
             return m_isMarked.compareExchangeStrong(false, true);
         }
    -    ALWAYS_INLINE bool testAndSetMarked(HeapCell*) { return testAndSetMarked(); }
    +    ALWAYS_INLINE bool testAndSetMarked(HeapCell*, Dependency, TransactionAbortLikelihood = TransactionAbortLikelihood::Likely) { return testAndSetMarked(); }
         void clearMarked() { m_isMarked.store(false); }
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.h (r210844 → r213645)

         bool isMarked(HeapVersion markingVersion, const void*);
         bool isMarkedConcurrently(HeapVersion markingVersion, const void*);
    -    bool testAndSetMarked(const void*);
    +    bool isMarked(const void*, Dependency);
    +    bool testAndSetMarked(const void*, Dependency, TransactionAbortLikelihood = TransactionAbortLikelihood::Likely);

         bool isAtom(const void*);
    ...
         JS_EXPORT_PRIVATE bool areMarksStale();
         bool areMarksStale(HeapVersion markingVersion);
    -    struct MarksWithDependency {
    -        bool areStale;
    -        ConsumeDependency dependency;
    -    };
    -    MarksWithDependency areMarksStaleWithDependency(HeapVersion markingVersion);
    -
    -    void aboutToMark(HeapVersion markingVersion);
    -
    -    void assertMarksNotStale();
    +    DependencyWith<bool> areMarksStaleWithDependency(HeapVersion markingVersion);
    +
    +    Dependency aboutToMark(HeapVersion markingVersion);
    +
    +#if ASSERT_DISABLED
    +    void assertMarksNotStale() { }
    +#else
    +    JS_EXPORT_PRIVATE void assertMarksNotStale();
    +#endif

         bool needsDestruction() const { return m_needsDestruction; }
    ...
         Atom* atoms();

    -    void aboutToMarkSlow(HeapVersion markingVersion);
    +    JS_EXPORT_PRIVATE void aboutToMarkSlow(HeapVersion markingVersion);
         void clearHasAnyMarked();
    ...
     }

    -ALWAYS_INLINE MarkedBlock::MarksWithDependency MarkedBlock::areMarksStaleWithDependency(HeapVersion markingVersion)
    -{
    -    auto consumed = consumeLoad(&m_markingVersion);
    -    MarksWithDependency ret;
    -    ret.areStale = consumed.value != markingVersion;
    -    ret.dependency = consumed.dependency;
    -    return ret;
    -}
    -
    -inline void MarkedBlock::aboutToMark(HeapVersion markingVersion)
    -{
    -    if (UNLIKELY(areMarksStale(markingVersion)))
    +ALWAYS_INLINE DependencyWith<bool> MarkedBlock::areMarksStaleWithDependency(HeapVersion markingVersion)
    +{
    +    HeapVersion version = m_markingVersion;
    +    return dependencyWith(dependency(version), version != markingVersion);
    +}
    +
    +inline Dependency MarkedBlock::aboutToMark(HeapVersion markingVersion)
    +{
    +    auto result = areMarksStaleWithDependency(markingVersion);
    +    if (UNLIKELY(result.value))
             aboutToMarkSlow(markingVersion);
    -    WTF::loadLoadFence();
    -}
    -
    -#if ASSERT_DISABLED
    -inline void MarkedBlock::assertMarksNotStale()
    -{
    -}
    -#endif // ASSERT_DISABLED
    +    return result.dependency;
    +}

     inline void MarkedBlock::Handle::assertMarksNotStale()
    ...
     inline bool MarkedBlock::isMarkedConcurrently(HeapVersion markingVersion, const void* p)
     {
    -    auto marksWithDependency = areMarksStaleWithDependency(markingVersion);
    -    if (marksWithDependency.areStale)
    +    auto result = areMarksStaleWithDependency(markingVersion);
    +    if (result.value)
             return false;
    -    return m_marks.get(atomNumber(p) + marksWithDependency.dependency);
    -}
    -
    -inline bool MarkedBlock::testAndSetMarked(const void* p)
    +    return m_marks.get(atomNumber(p), result.dependency);
    +}
    +
    +inline bool MarkedBlock::isMarked(const void* p, Dependency dependency)
     {
         assertMarksNotStale();
    -    return m_marks.concurrentTestAndSet(atomNumber(p));
    +    return m_marks.get(atomNumber(p), dependency);
    +}
    +
    +inline bool MarkedBlock::testAndSetMarked(const void* p, Dependency dependency, TransactionAbortLikelihood abortLikelihood)
    +{
    +    assertMarksNotStale();
    +    return m_marks.concurrentTestAndSet(atomNumber(p), dependency, abortLikelihood);
     }
  • trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp (r212778 → r213645)

     }

    -void SlotVisitor::appendUnbarriered(JSValue value)
    -{
    -    if (!value || !value.isCell())
    -        return;
    -
    +void SlotVisitor::appendSlow(JSCell* cell, Dependency dependency)
    +{
         if (UNLIKELY(m_heapSnapshotBuilder))
    -        m_heapSnapshotBuilder->appendEdge(m_currentCell, value.asCell());
    -
    -    setMarkedAndAppendToMarkStack(value.asCell());
    -}
    -
    -void SlotVisitor::appendHidden(JSValue value)
    -{
    -    if (!value || !value.isCell())
    -        return;
    -
    -    setMarkedAndAppendToMarkStack(value.asCell());
    -}
    -
    -void SlotVisitor::setMarkedAndAppendToMarkStack(JSCell* cell)
    -{
    -    SuperSamplerScope superSamplerScope(false);
    -
    +        m_heapSnapshotBuilder->appendEdge(m_currentCell, cell);
    +
    +    appendHiddenSlowImpl(cell, dependency);
    +}
    +
    +void SlotVisitor::appendHiddenSlow(JSCell* cell, Dependency dependency)
    +{
    +    appendHiddenSlowImpl(cell, dependency);
    +}
    +
    +ALWAYS_INLINE void SlotVisitor::appendHiddenSlowImpl(JSCell* cell, Dependency dependency)
    +{
         ASSERT(!m_isCheckingForDefaultMarkViolation);
    -    if (!cell)
    -        return;

     #if ENABLE(GC_VALIDATION)
    ...
         if (cell->isLargeAllocation())
    -        setMarkedAndAppendToMarkStack(cell->largeAllocation(), cell);
    +        setMarkedAndAppendToMarkStack(cell->largeAllocation(), cell, dependency);
         else
    -        setMarkedAndAppendToMarkStack(cell->markedBlock(), cell);
    +        setMarkedAndAppendToMarkStack(cell->markedBlock(), cell, dependency);
     }

     template<typename ContainerType>
    -ALWAYS_INLINE void SlotVisitor::setMarkedAndAppendToMarkStack(ContainerType& container, JSCell* cell)
    -{
    -    container.aboutToMark(m_markingVersion);
    -
    -    if (container.testAndSetMarked(cell))
    +ALWAYS_INLINE void SlotVisitor::setMarkedAndAppendToMarkStack(ContainerType& container, JSCell* cell, Dependency dependency)
    +{
    +    if (container.testAndSetMarked(cell, dependency, TransactionAbortLikelihood::Unlikely))
             return;
  • trunk/Source/JavaScriptCore/heap/SlotVisitor.h (r212778 → r213645)

         void appendJSCellOrAuxiliary(HeapCell*);
         void appendHidden(JSValue);
    -
    -    JS_EXPORT_PRIVATE void setMarkedAndAppendToMarkStack(JSCell*);
    +    void appendHidden(JSCell*);
    +
    +    JS_EXPORT_PRIVATE void appendSlow(JSCell*, Dependency);
    +    JS_EXPORT_PRIVATE void appendHiddenSlow(JSCell*, Dependency);
    +    void appendHiddenSlowImpl(JSCell*, Dependency);

         template<typename ContainerType>
    -    void setMarkedAndAppendToMarkStack(ContainerType&, JSCell*);
    +    void setMarkedAndAppendToMarkStack(ContainerType&, JSCell*, Dependency);

         void appendToMarkStack(JSCell*);
  • trunk/Source/JavaScriptCore/heap/SlotVisitorInlines.h (r211448 → r213645)

     namespace JSC {

    -inline void SlotVisitor::appendUnbarriered(JSValue* slot, size_t count)
    +ALWAYS_INLINE void SlotVisitor::appendUnbarriered(JSValue* slot, size_t count)
     {
         for (size_t i = count; i--;)
    ...
     }

    -inline void SlotVisitor::appendUnbarriered(JSCell* cell)
    +ALWAYS_INLINE void SlotVisitor::appendUnbarriered(JSCell* cell)
     {
    -    appendUnbarriered(JSValue(cell));
    +    // This needs to be written in a very specific way to ensure that it gets inlined
    +    // properly. In particular, it appears that using templates here breaks ALWAYS_INLINE.
    +
    +    if (!cell)
    +        return;
    +
    +    Dependency dependency;
    +    if (UNLIKELY(cell->isLargeAllocation())) {
    +        dependency = nullDependency();
    +        if (LIKELY(cell->largeAllocation().isMarked())) {
    +            if (LIKELY(!m_heapSnapshotBuilder))
    +                return;
    +        }
    +    } else {
    +        MarkedBlock& block = cell->markedBlock();
    +        dependency = block.aboutToMark(m_markingVersion);
    +        if (LIKELY(block.isMarked(cell, dependency))) {
    +            if (LIKELY(!m_heapSnapshotBuilder))
    +                return;
    +        }
    +    }
    +
    +    appendSlow(cell, dependency);
    +}
    +
    +ALWAYS_INLINE void SlotVisitor::appendUnbarriered(JSValue value)
    +{
    +    if (value.isCell())
    +        appendUnbarriered(value.asCell());
    +}
    +
    +ALWAYS_INLINE void SlotVisitor::appendHidden(JSValue value)
    +{
    +    if (value.isCell())
    +        appendHidden(value.asCell());
    +}
    +
    +ALWAYS_INLINE void SlotVisitor::appendHidden(JSCell* cell)
    +{
    +    // This needs to be written in a very specific way to ensure that it gets inlined
    +    // properly. In particular, it appears that using templates here breaks ALWAYS_INLINE.
    +
    +    if (!cell)
    +        return;
    +
    +    Dependency dependency;
    +    if (UNLIKELY(cell->isLargeAllocation())) {
    +        dependency = nullDependency();
    +        if (LIKELY(cell->largeAllocation().isMarked()))
    +            return;
    +    } else {
    +        MarkedBlock& block = cell->markedBlock();
    +        dependency = block.aboutToMark(m_markingVersion);
    +        if (LIKELY(block.isMarked(cell, dependency)))
    +            return;
    +    }
    +
    +    appendHiddenSlow(cell, dependency);
     }

     template<typename T>
    -inline void SlotVisitor::append(const Weak<T>& weak)
    +ALWAYS_INLINE void SlotVisitor::append(const Weak<T>& weak)
     {
         appendUnbarriered(weak.get());
    ...
     template<typename T>
    -inline void SlotVisitor::append(const WriteBarrierBase<T>& slot)
    +ALWAYS_INLINE void SlotVisitor::append(const WriteBarrierBase<T>& slot)
     {
         appendUnbarriered(slot.get());
    ...
     template<typename T>
    -inline void SlotVisitor::appendHidden(const WriteBarrierBase<T>& slot)
    +ALWAYS_INLINE void SlotVisitor::appendHidden(const WriteBarrierBase<T>& slot)
     {
         appendHidden(slot.get());
    ...
     template<typename Iterator>
    -inline void SlotVisitor::append(Iterator begin, Iterator end)
    +ALWAYS_INLINE void SlotVisitor::append(Iterator begin, Iterator end)
     {
         for (auto it = begin; it != end; ++it)
    ...
    -inline void SlotVisitor::appendValues(const WriteBarrierBase<Unknown>* barriers, size_t count)
    +ALWAYS_INLINE void SlotVisitor::appendValues(const WriteBarrierBase<Unknown>* barriers, size_t count)
     {
         for (size_t i = 0; i < count; ++i)
    ...
    -inline void SlotVisitor::appendValuesHidden(const WriteBarrierBase<Unknown>* barriers, size_t count)
    +ALWAYS_INLINE void SlotVisitor::appendValuesHidden(const WriteBarrierBase<Unknown>* barriers, size_t count)
     {
         for (size_t i = 0; i < count; ++i)
  • trunk/Source/JavaScriptCore/runtime/CustomGetterSetter.cpp (r201322 → r213645)

     #include "CustomGetterSetter.h"

    -#include "JSCJSValueInlines.h"
    -#include "JSObject.h"
    -#include "SlotVisitorInlines.h"
    +#include "JSCInlines.h"
     #include <wtf/Assertions.h>
  • trunk/Source/JavaScriptCore/runtime/JSObject.cpp (r212779 → r213645)

         lastOffset = structure->lastOffset();
         IndexingType indexingType = structure->indexingType();
    +    Dependency indexingTypeDependency = dependency(indexingType);
         Locker<JSCell> locker(NoLockingNecessary);
         switch (indexingType) {
    ...
             break;
         }
    -    WTF::loadLoadFence();
    -    butterfly = this->butterfly();
    +    butterfly = consume(this, indexingTypeDependency)->butterfly();
    +    Dependency butterflyDependency = dependency(butterfly);
         if (!butterfly)
             return structure;
    -    WTF::loadLoadFence();
    -    if (this->structureID() != structureID)
    +    if (consume(this, butterflyDependency)->structureID() != structureID)
             return nullptr;
    -    if (structure->lastOffset() != lastOffset)
    +    if (consume(structure, butterflyDependency)->lastOffset() != lastOffset)
             return nullptr;
  • trunk/Source/JavaScriptCore/runtime/JSObject.h (r211247 → r213645)

         const Butterfly* butterfly() const { return m_butterfly.get(); }
         Butterfly* butterfly() { return m_butterfly.get(); }
    -
    +
         ConstPropertyStorage outOfLineStorage() const { return m_butterfly.get()->propertyStorage(); }
         PropertyStorage outOfLineStorage() { return m_butterfly.get()->propertyStorage(); }
  • trunk/Source/WTF/ChangeLog (r213601 → r213645)
  • trunk/Source/WTF/wtf/Atomics.h (r208806 → r213645)

     /*
    - * Copyright (C) 2007-2008, 2010, 2012-2016 Apple Inc. All rights reserved.
    + * Copyright (C) 2007-2017 Apple Inc. All rights reserved.
      * Copyright (C) 2007 Justin Haygood (jhaygood@reaktix.com)
      *
    ...
     namespace WTF {

    +ALWAYS_INLINE bool hasFence(std::memory_order order)
    +{
    +    return order != std::memory_order_relaxed;
    +}
    +
    +enum class TransactionAbortLikelihood {
    +    Unlikely,
    +    Likely
    +};
    +
     // Atomic wraps around std::atomic with the sole purpose of making the compare_exchange
     // operations not alter the expected value. This is more in line with how we typically
    ...
         ALWAYS_INLINE T exchange(T newValue, std::memory_order order = std::memory_order_seq_cst) { return value.exchange(newValue, order); }

    +#if HAVE(LL_SC)
    +    ALWAYS_INLINE T loadLink(std::memory_order order = std::memory_order_seq_cst);
    +    ALWAYS_INLINE bool storeCond(T value, std::memory_order order = std::memory_order_seq_cst);
    +#endif // HAVE(LL_SC)
    +
    +    ALWAYS_INLINE T prepare(std::memory_order order = std::memory_order_seq_cst)
    +    {
    +#if HAVE(LL_SC)
    +        return loadLink(order);
    +#else
    +        UNUSED_PARAM(order);
    +        return load(std::memory_order_relaxed);
    +#endif
    +    }
    +
    +#if HAVE(LL_SC)
    +    static const bool prepareIsFast = false;
    +#else
    +    static const bool prepareIsFast = true;
    +#endif
    +
    +    ALWAYS_INLINE bool attempt(T oldValue, T newValue, std::memory_order order = std::memory_order_seq_cst)
    +    {
    +#if HAVE(LL_SC)
    +        UNUSED_PARAM(oldValue);
    +        return storeCond(newValue, order);
    +#else
    +        return compareExchangeWeak(oldValue, newValue, order);
    +#endif
    +    }
    +
         template<typename Func>
    -    ALWAYS_INLINE bool tryTransactionRelaxed(const Func& func)
    -    {
    -        T oldValue = load(std::memory_order_relaxed);
    -        T newValue = oldValue;
    -        func(newValue);
    -        return compareExchangeWeakRelaxed(oldValue, newValue);
    +    ALWAYS_INLINE bool transaction(const Func& func, std::memory_order order = std::memory_order_seq_cst, TransactionAbortLikelihood abortLikelihood = TransactionAbortLikelihood::Likely)
    +    {
    +        // If preparing is not fast then we want to skip the loop when func would fail.
    +        if (!prepareIsFast && abortLikelihood == TransactionAbortLikelihood::Likely) {
    +            T oldValue = load(std::memory_order_relaxed);
    +            // Note: many funcs will constant-fold to true, which will kill all of this code.
    +            if (!func(oldValue))
    +                return false;
    +        }
    +        for (;;) {
    +            T oldValue = prepare(order);
    +            T newValue = oldValue;
    +            if (!func(newValue))
    +                return false;
    +            if (attempt(oldValue, newValue, order))
    +                return true;
    +        }
         }

         template<typename Func>
    -    ALWAYS_INLINE void transactionRelaxed(const Func& func)
    -    {
    -        while (!tryTransationRelaxed(func)) { }
    -    }
    -
    -    template<typename Func>
    -    ALWAYS_INLINE bool tryTransaction(const Func& func)
    -    {
    -        T oldValue = load(std::memory_order_relaxed);
    -        T newValue = oldValue;
    -        func(newValue);
    -        return compareExchangeWeak(oldValue, newValue);
    -    }
    -
    -    template<typename Func>
    -    ALWAYS_INLINE void transaction(const Func& func)
    -    {
    -        while (!tryTransaction(func)) { }
    +    ALWAYS_INLINE bool transactionRelaxed(const Func& func, TransactionAbortLikelihood abortLikelihood = TransactionAbortLikelihood::Likely)
    +    {
    +        return transaction(func, std::memory_order_relaxed, abortLikelihood);
         }

         std::atomic<T> value;
     };
    +
    +#if CPU(ARM64)
    +#define DEFINE_LL_SC(width, modifier, suffix)   \
    +    template<> \
    +    ALWAYS_INLINE uint ## width ## _t Atomic<uint ## width ## _t>::loadLink(std::memory_order order) \
    +    { \
    +        int ## width ## _t result; \
    +        if (hasFence(order)) { \
    +            asm volatile ( \
    +                "ldaxr" suffix " %" modifier "0, [%1]" \
    +                : "=r"(result) \
    +                : "r"(this) \
    +                : "memory"); \
    +        } else { \
    +            asm ( \
    +                "ldxr" suffix " %" modifier "0, [%1]" \
    +                : "=r"(result) \
    +                : "r"(this) \
    +                : "memory"); \
    +        } \
    +        return result; \
    +    } \
    +    \
    +    template<> \
    +    ALWAYS_INLINE bool Atomic<uint ## width ## _t>::storeCond(uint ## width ## _t value, std::memory_order order) \
    +    { \
    +        bool result; \
    +        if (hasFence(order)) { \
    +            asm volatile ( \
    +                "stlxr" suffix " %w0, %" modifier "1, [%2]" \
    +                : "=&r"(result) \
    +                : "r"(value), "r"(this) \
    +                : "memory"); \
    +        } else { \
    +            asm ( \
    +                "stxr" suffix " %w0, %" modifier "1, [%2]" \
    +                : "=&r"(result) \
    +                : "r"(value), "r"(this) \
    +                : "memory"); \
    +        } \
    +        return !result; \
    +    } \
    +    \
    +    template<> \
    +    ALWAYS_INLINE int ## width ## _t Atomic<int ## width ## _t>::loadLink(std::memory_order order) \
    +    { \
    +        return bitwise_cast<Atomic<uint ## width ## _t>*>(this)->loadLink(order); \
    +    } \
    +    \
    +    template<> \
    +    ALWAYS_INLINE bool Atomic<int ## width ## _t>::storeCond(int ## width ## _t value, std::memory_order order) \
    +    { \
    +        return bitwise_cast<Atomic<uint ## width ## _t>*>(this)->storeCond(value, order); \
    +    }
    +
    +DEFINE_LL_SC(8, "w", "b")
    +DEFINE_LL_SC(16, "w", "h")
    +DEFINE_LL_SC(32, "w", "")
    +DEFINE_LL_SC(64, "", "")
    +DEFINE_LL_SC(ptr, "", "")
    +
    +#undef DEFINE_LL_SC
    +#endif // CPU(ARM64)

     template<typename T>
    ...
     #endif

    -typedef size_t ConsumeDependency;
    +typedef unsigned Dependency;
    +
    +ALWAYS_INLINE Dependency nullDependency()
     414{
     415    return 0;
     416}
    312417
    313418template <typename T, typename std::enable_if<sizeof(T) == 8>::type* = nullptr>
    314 ALWAYS_INLINE ConsumeDependency zeroWithConsumeDependency(T value)
    315 {
    316     uint64_t dependency;
     419ALWAYS_INLINE Dependency dependency(T value)
     420{
     421    unsigned dependency;
    317422    uint64_t copy = bitwise_cast<uint64_t>(value);
    318423#if CPU(ARM64)
     
    326431    // dependent loads in their store order without the reader using a barrier
    327432    // or an acquire load.
    328     asm volatile("eor %x[dependency], %x[in], %x[in]"
    329                  : [dependency] "=r"(dependency)
    330                  : [in] "r"(copy)
    331                  // Lie about touching memory. Not strictly needed, but is
    332                  // likely to avoid unwanted load/store motion.
    333                  : "memory");
     433    asm("eor %w[dependency], %w[in], %w[in]"
     434        : [dependency] "=r"(dependency)
     435        : [in] "r"(copy));
    334436#elif CPU(ARM)
    335     asm volatile("eor %[dependency], %[in], %[in]"
    336                  : [dependency] "=r"(dependency)
    337                  : [in] "r"(copy)
    338                  : "memory");
     437    asm("eor %[dependency], %[in], %[in]"
     438        : [dependency] "=r"(dependency)
     439        : [in] "r"(copy));
    339440#else
    340441    // No dependency is needed for this architecture.
    341442    loadLoadFence();
    342443    dependency = 0;
    343     (void)copy;
    344 #endif
    345     return static_cast<ConsumeDependency>(dependency);
    346 }
    347 
     444    UNUSED_PARAM(copy);
     445#endif
     446    return dependency;
     447}
     448
     449// FIXME: This code is almost identical to the other dependency() overload.
     450// https://bugs.webkit.org/show_bug.cgi?id=169405
    348451template <typename T, typename std::enable_if<sizeof(T) == 4>::type* = nullptr>
    349 ALWAYS_INLINE ConsumeDependency zeroWithConsumeDependency(T value)
    350 {
    351     uint32_t dependency;
     452ALWAYS_INLINE Dependency dependency(T value)
     453{
     454    unsigned dependency;
    352455    uint32_t copy = bitwise_cast<uint32_t>(value);
    353456#if CPU(ARM64)
    354     asm volatile("eor %w[dependency], %w[in], %w[in]"
    355                  : [dependency] "=r"(dependency)
    356                  : [in] "r"(copy)
    357                  : "memory");
     457    asm("eor %w[dependency], %w[in], %w[in]"
     458        : [dependency] "=r"(dependency)
     459        : [in] "r"(copy));
    358460#elif CPU(ARM)
    359     asm volatile("eor %[dependency], %[in], %[in]"
    360                  : [dependency] "=r"(dependency)
    361                  : [in] "r"(copy)
    362                  : "memory");
     461    asm("eor %[dependency], %[in], %[in]"
     462        : [dependency] "=r"(dependency)
     463        : [in] "r"(copy));
    363464#else
    364465    loadLoadFence();
    365466    dependency = 0;
    366     (void)copy;
    367 #endif
    368     return static_cast<ConsumeDependency>(dependency);
     467    UNUSED_PARAM(copy);
     468#endif
     469    return dependency;
    369470}
    370471
    371472template <typename T, typename std::enable_if<sizeof(T) == 2>::type* = nullptr>
    372 ALWAYS_INLINE ConsumeDependency zeroWithConsumeDependency(T value)
    373 {
    374     uint16_t copy = bitwise_cast<uint16_t>(value);
    375     return zeroWithConsumeDependency(static_cast<size_t>(copy));
     473ALWAYS_INLINE Dependency dependency(T value)
     474{
     475    return dependency(static_cast<uint32_t>(value));
    376476}
    377477
    378478template <typename T, typename std::enable_if<sizeof(T) == 1>::type* = nullptr>
    379 ALWAYS_INLINE ConsumeDependency zeroWithConsumeDependency(T value)
    380 {
    381     uint8_t copy = bitwise_cast<uint8_t>(value);
    382     return zeroWithConsumeDependency(static_cast<size_t>(copy));
    383 }
    384 
    385 template <typename T>
    386 struct Consumed {
     479ALWAYS_INLINE Dependency dependency(T value)
     480{
     481    return dependency(static_cast<uint32_t>(value));
     482}
     483
     484template<typename T>
     485struct DependencyWith {
     486public:
     487    DependencyWith()
     488        : dependency(nullDependency())
     489        , value()
     490    {
     491    }
     492   
     493    DependencyWith(Dependency dependency, const T& value)
     494        : dependency(dependency)
     495        , value(value)
     496    {
     497    }
     498   
     499    Dependency dependency;
    387500    T value;
    388     ConsumeDependency dependency;
    389501};
    390 
    391 // Consume load, returning the loaded `value` at `location` and a dependent-zero
    392 // which creates an address dependency from the `location`.
    393 //
    394 // Usage notes:
    395 //
    396 //  * Regarding control dependencies: merely branching based on `value` or
    397 //    `dependency` isn't sufficient to impose a dependency ordering: you must
    398 //    use `dependency` in the address computation of subsequent loads which
    399 //    should observe the store order w.r.t. `location`.
    400 // * Regarding memory ordering: consume load orders the `location` load with
     401 //   subsequent dependent loads *only*. It says nothing about ordering of other
    402 //   loads!
    403 //
    404 // Caveat emptor.
    405 template <typename T>
    406 ALWAYS_INLINE auto consumeLoad(const T* location)
    407 {
    408     typedef typename std::remove_cv<T>::type Returned;
    409     Consumed<Returned> ret { };
    410     // Force the read of `location` to occur exactly once and without fusing or
    411     // forwarding using volatile. This is important because the compiler could
    412     // otherwise rematerialize or find equivalent loads, or simply forward from
    413     // a previous one, and lose the dependency we're trying so hard to
    414     // create. Prevent tearing by using an atomic, but let it move around by
    415     // using relaxed. We have at least a memory fence after this which prevents
    416     // the load from moving too much.
    417     ret.value = reinterpret_cast<const volatile std::atomic<Returned>*>(location)->load(std::memory_order_relaxed);
    418     ret.dependency = zeroWithConsumeDependency(ret.value);
    419     return ret;
     502   
     503template<typename T>
     504inline DependencyWith<T> dependencyWith(Dependency dependency, const T& value)
     505{
     506    return DependencyWith<T>(dependency, value);
     507}
     508
     509template<typename T>
     510inline T* consume(T* pointer, Dependency dependency)
     511{
     512#if CPU(ARM64) || CPU(ARM)
     513    return bitwise_cast<T*>(bitwise_cast<char*>(pointer) + dependency);
     514#else
     515    UNUSED_PARAM(dependency);
     516    return pointer;
     517#endif
    420518}
    421519
     
    423521
    424522using WTF::Atomic;
    425 using WTF::ConsumeDependency;
    426 using WTF::consumeLoad;
     523using WTF::Dependency;
     524using WTF::DependencyWith;
     525using WTF::TransactionAbortLikelihood;
     526using WTF::consume;
     527using WTF::dependency;
     528using WTF::dependencyWith;
     529using WTF::nullDependency;
    427530
    428531#endif // Atomics_h
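The simplified consume API splits the old consumeLoad() into two steps: dependency() computes a zero that is data-dependent on a loaded value, and consume() folds that zero into a later address so the CPU orders the two loads. A portable sketch of the non-ARM fallback path only (on ARM/ARM64 the patch uses the `eor` self-xor shown above with no fence; `makeDependency` is a stand-in name to avoid shadowing):

```cpp
#include <atomic>
#include <cstdint>

typedef unsigned Dependency;

// Non-ARM fallback: there is no cheap address dependency, so pay for a
// load-load fence and return a zero dependency. On ARM this would be
// "eor dep, value, value" and no fence at all.
template<typename T>
Dependency makeDependency(T value)
{
    std::atomic_thread_fence(std::memory_order_acquire); // stands in for loadLoadFence()
    (void)value;
    return 0;
}

// Fold the (always-zero) dependency into the pointer. On ARM this addition is
// what creates the address dependency; elsewhere it compiles to a no-op.
template<typename T>
T* consume(T* pointer, Dependency dependency)
{
    return reinterpret_cast<T*>(reinterpret_cast<char*>(pointer) + dependency);
}
```

The pattern is: load a guard value, derive a dependency from it, then perform every subsequent load through consume() so it cannot be reordered before the guard load.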
  • trunk/Source/WTF/wtf/Bitmap.h

    r208209 r213645  
    4141    }
    4242
    43     bool get(size_t) const;
     43    bool get(size_t, Dependency = nullDependency()) const;
    4444    void set(size_t);
    4545    void set(size_t, bool);
    4646    bool testAndSet(size_t);
    4747    bool testAndClear(size_t);
    48     bool concurrentTestAndSet(size_t);
    49     bool concurrentTestAndClear(size_t);
     48    bool concurrentTestAndSet(size_t, Dependency = nullDependency(), TransactionAbortLikelihood = TransactionAbortLikelihood::Likely);
     49    bool concurrentTestAndClear(size_t, Dependency = nullDependency(), TransactionAbortLikelihood = TransactionAbortLikelihood::Likely);
    5050    size_t nextPossiblyUnset(size_t) const;
    5151    void clear(size_t);
     
    9292
    9393template<size_t bitmapSize, typename WordType>
    94 inline bool Bitmap<bitmapSize, WordType>::get(size_t n) const
    95 {
    96     return !!(bits[n / wordSize] & (one << (n % wordSize)));
     94inline bool Bitmap<bitmapSize, WordType>::get(size_t n, Dependency dependency) const
     95{
     96    return !!(bits[n / wordSize + dependency] & (one << (n % wordSize)));
    9797}
    9898
     
    133133
    134134template<size_t bitmapSize, typename WordType>
    135 inline bool Bitmap<bitmapSize, WordType>::concurrentTestAndSet(size_t n)
     135ALWAYS_INLINE bool Bitmap<bitmapSize, WordType>::concurrentTestAndSet(size_t n, Dependency dependency, TransactionAbortLikelihood abortLikelihood)
    136136{
    137137    WordType mask = one << (n % wordSize);
    138138    size_t index = n / wordSize;
    139     WordType* wordPtr = bits.data() + index;
    140     WordType oldValue;
    141     do {
    142         oldValue = *wordPtr;
    143         if (oldValue & mask)
     139    WordType* data = bits.data() + index + dependency;
     140    return !bitwise_cast<Atomic<WordType>*>(data)->transactionRelaxed(
     141        [&] (WordType& value) -> bool {
     142            if (value & mask)
     143                return false;
     144           
     145            value |= mask;
    144146            return true;
    145     } while (!atomicCompareExchangeWeakRelaxed(wordPtr, oldValue, static_cast<WordType>(oldValue | mask)));
    146     return false;
    147 }
    148 
    149 template<size_t bitmapSize, typename WordType>
    150 inline bool Bitmap<bitmapSize, WordType>::concurrentTestAndClear(size_t n)
     147        },
     148        abortLikelihood);
     149}
     150
     151template<size_t bitmapSize, typename WordType>
     152ALWAYS_INLINE bool Bitmap<bitmapSize, WordType>::concurrentTestAndClear(size_t n, Dependency dependency, TransactionAbortLikelihood abortLikelihood)
    151153{
    152154    WordType mask = one << (n % wordSize);
    153155    size_t index = n / wordSize;
    154     WordType* wordPtr = bits.data() + index;
    155     WordType oldValue;
    156     do {
    157         oldValue = *wordPtr;
    158         if (!(oldValue & mask))
    159             return false;
    160     } while (!atomicCompareExchangeWeakRelaxed(wordPtr, oldValue, static_cast<WordType>(oldValue & ~mask)));
    161     return true;
     156    WordType* data = bits.data() + index + dependency;
     157    return !bitwise_cast<Atomic<WordType>*>(data)->transactionRelaxed(
     158        [&] (WordType& value) -> bool {
     159            if (!(value & mask))
     160                return false;
     161           
     162            value &= ~mask;
     163            return true;
     164        },
     165        abortLikelihood);
    162166}
    163167
  • trunk/Source/WTF/wtf/LockAlgorithm.h

    r209570 r213645  
    5151    static bool lockFast(Atomic<LockType>& lock)
    5252    {
    53         LockType oldValue = lock.load(std::memory_order_relaxed);
    54         if (oldValue & isHeldBit)
    55             return false;
    56         return lock.compareExchangeWeak(oldValue, oldValue | isHeldBit, std::memory_order_acquire);
     53        return lock.transaction(
     54            [&] (LockType& value) -> bool {
     55                if (value & isHeldBit)
     56                    return false;
     57                value |= isHeldBit;
     58                return true;
     59            },
     60            std::memory_order_acquire,
     61            TransactionAbortLikelihood::Unlikely);
    5762    }
    5863   
     
    8186    static bool unlockFast(Atomic<LockType>& lock)
    8287    {
    83         LockType oldValue = lock.load(std::memory_order_relaxed);
    84         if ((oldValue & mask) != isHeldBit)
    85             return false;
    86         return lock.compareExchangeWeak(oldValue, oldValue & ~isHeldBit, std::memory_order_release);
     88        return lock.transaction(
     89            [&] (LockType& value) -> bool {
     90                if ((value & mask) != isHeldBit)
     91                    return false;
     92                value &= ~isHeldBit;
     93                return true;
     94            },
     95            std::memory_order_relaxed,
     96            TransactionAbortLikelihood::Unlikely);
    8797    }
    8898   
     
    207217                   
    208218                    lock.transaction(
    209                         [&] (LockType& value) {
     219                        [&] (LockType& value) -> bool {
    210220                            value &= ~mask;
    211221                            if (result.mayHaveMoreThreads)
    212222                                value |= hasParkedBit;
     223                            return true;
    213224                        });
    214225                    return BargingOpportunity;
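LockAlgorithm's fast paths become one-line transactions: lock aborts if the held bit is set and acquires it otherwise, with acquire ordering on success. A minimal sketch of the same fast-path shape on std::atomic (illustrative only; the real WTF slow path parks the thread rather than failing):

```cpp
#include <atomic>
#include <cstdint>

constexpr uint8_t isHeldBit = 1;

// Uncontended lock fast path: a transaction that aborts if the lock is held.
// Acquire ordering on success orders the critical section after the acquire.
inline bool lockFast(std::atomic<uint8_t>& lock)
{
    uint8_t oldValue = lock.load(std::memory_order_relaxed);
    while (!(oldValue & isHeldBit)) {
        if (lock.compare_exchange_weak(oldValue, oldValue | isHeldBit,
                std::memory_order_acquire, std::memory_order_relaxed))
            return true; // acquired
        // oldValue was reloaded on failure; retry unless the lock is now held.
    }
    return false; // contended; caller would take the slow (parking) path
}

// Matching unlock fast path with release ordering.
inline void unlockFast(std::atomic<uint8_t>& lock)
{
    lock.fetch_and(uint8_t(~isHeldBit), std::memory_order_release);
}
```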
  • trunk/Source/WTF/wtf/Platform.h

    r212967 r213645  
    758758#endif
    759759
     760#if CPU(ARM64)
     761#define HAVE_LL_SC 1
     762#endif // CPU(ARM64)
     763
    760764/* This controls whether B3 is built. B3 is needed for FTL JIT and WebAssembly */
    761765#if ENABLE(FTL_JIT) || ENABLE(WEBASSEMBLY)
  • trunk/Tools/ChangeLog

    r213630 r213645  
     12017-03-08  Filip Pizlo  <fpizlo@apple.com>
     2
     3        WTF should make it super easy to do ARM concurrency tricks
     4        https://bugs.webkit.org/show_bug.cgi?id=169300
     5
     6        Reviewed by Mark Lam.
     7       
     8        This vastly simplifies the consume API. The new API is thoroughly tested by being used
     9        in the GC's guts. I think that unit tests are a pain to maintain, so we shouldn't have
     10        them unless we are legitimately worried about coverage. We're not in this case.
     11
     12        * TestWebKitAPI/CMakeLists.txt:
     13        * TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj:
     14        * TestWebKitAPI/Tests/WTF/Consume.cpp: Removed.
     15
    1162017-03-08  Srinivasan Vijayaraghavan  <svijayaraghavan@apple.com>
    217
  • trunk/Tools/TestWebKitAPI/CMakeLists.txt

    r209991 r213645  
    4343    ${TESTWEBKITAPI_DIR}/Tests/WTF/AtomicString.cpp
    4444    ${TESTWEBKITAPI_DIR}/Tests/WTF/BloomFilter.cpp
    45     ${TESTWEBKITAPI_DIR}/Tests/WTF/Consume.cpp
    4645    ${TESTWEBKITAPI_DIR}/Tests/WTF/CrossThreadTask.cpp
    4746    ${TESTWEBKITAPI_DIR}/Tests/WTF/CString.cpp
  • trunk/Tools/TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj

    r213547 r213645  
    493493                AD57AC221DA7466E00FF1BDE /* many-iframes.html in Copy Resources */ = {isa = PBXBuildFile; fileRef = AD57AC1D1DA7463800FF1BDE /* many-iframes.html */; };
    494494                AD7C434D1DD2A54E0026888B /* Expected.cpp in Sources */ = {isa = PBXBuildFile; fileRef = AD7C434C1DD2A5470026888B /* Expected.cpp */; };
    495                 ADCEBBA61D9AE229002E283A /* Consume.cpp in Sources */ = {isa = PBXBuildFile; fileRef = ADCEBBA51D99D4CF002E283A /* Consume.cpp */; };
    496495                B55AD1D5179F3B3000AC1494 /* PreventImageLoadWithAutoResizing_Bundle.cpp in Sources */ = {isa = PBXBuildFile; fileRef = B55AD1D3179F3ABF00AC1494 /* PreventImageLoadWithAutoResizing_Bundle.cpp */; };
    497496                B55F11B71517D03300915916 /* attributedStringCustomFont.html in Copy Resources */ = {isa = PBXBuildFile; fileRef = B55F11B01517A2C400915916 /* attributedStringCustomFont.html */; };
     
    12481247                AD57AC1F1DA7464D00FF1BDE /* DidRemoveFrameFromHiearchyInPageCache.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DidRemoveFrameFromHiearchyInPageCache.cpp; sourceTree = "<group>"; };
    12491248                AD7C434C1DD2A5470026888B /* Expected.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Expected.cpp; sourceTree = "<group>"; };
    1250                 ADCEBBA51D99D4CF002E283A /* Consume.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Consume.cpp; sourceTree = "<group>"; };
    12511249                B4039F9C15E6D8B3007255D6 /* MathExtras.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MathExtras.cpp; sourceTree = "<group>"; };
    12521250                B55AD1D1179F336600AC1494 /* PreventImageLoadWithAutoResizing.mm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.objcpp; name = PreventImageLoadWithAutoResizing.mm; path = WebKit2ObjC/PreventImageLoadWithAutoResizing.mm; sourceTree = "<group>"; };
     
    20262024                                A7A966DA140ECCC8005EF9B4 /* CheckedArithmeticOperations.cpp */,
    20272025                                0FEAE3671B7D19CB00CE17F2 /* Condition.cpp */,
    2028                                 ADCEBBA51D99D4CF002E283A /* Consume.cpp */,
    20292026                                51714EB91D087416004723C4 /* CrossThreadTask.cpp */,
    20302027                                26A2C72E15E2E73C005B1A14 /* CString.cpp */,
     
    25952592                                531C1D8E1DF8EF72006E979F /* LEBDecoder.cpp in Sources */,
    25962593                                7C83DF011D0A590C00FEBCF3 /* Optional.cpp in Sources */,
    2597                                 ADCEBBA61D9AE229002E283A /* Consume.cpp in Sources */,
    25982594                                AD7C434D1DD2A54E0026888B /* Expected.cpp in Sources */,
    25992595                                7C83DF021D0A590C00FEBCF3 /* OSObjectPtr.cpp in Sources */,