Changeset 252298 in webkit


Timestamp: Nov 8, 2019 5:37:55 PM
Author: ysuzuki@apple.com
Message:

[JSC] Make IsoSubspace scalable
https://bugs.webkit.org/show_bug.cgi?id=201908

Reviewed by Keith Miller.

This patch introduces a lower tier into IsoSubspace so that we can avoid allocating a MarkedBlock
when only a few objects of a given type are ever allocated. This optimization allows us to apply
IsoSubspace more aggressively to various types of objects without introducing a memory regression,
even for types whose objects are allocated only rarely.

We use LargeAllocation for these lower-tier objects. Each IsoSubspace holds up to 8 lower-tier objects
allocated via LargeAllocation, and we use these special LargeAllocations while the number of cells of the
type is still small. Specifically, allocation now proceeds as follows: (1) first, try to allocate in an
existing MarkedBlock (there won't be one to start); (2) then, try to allocate a lower-tier object via
LargeAllocation; and (3) only if no lower-tier object can be allocated, allocate a new MarkedBlock. Once
such a LargeAllocation has been assigned to a certain type, we do not deallocate it until the VM is
destroyed, which preserves IsoSubspace's defining characteristic: once an address is assigned to a certain
type, that address is only ever used for that type.
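
The ordering is easiest to see in the allocation slow path. The following is a condensed sketch of the
LocalAllocator.cpp hunk included below (collection and failure handling elided):

    void* LocalAllocator::allocateSlowCase(/* ... */)
    {
        // (1) Try to reuse a free cell in an existing MarkedBlock.
        void* result = tryAllocateWithoutCollecting();
        if (LIKELY(result != nullptr))
            return result;

        // (2) For an IsoSubspace, try one of its up-to-8 lower-tier
        // LargeAllocation cells before committing a whole MarkedBlock.
        Subspace* subspace = m_directory->m_subspace;
        if (subspace->isIsoSubspace()) {
            if (void* result = static_cast<IsoSubspace*>(subspace)->tryAllocateFromLowerTier())
                return result;
        }

        // (3) Fall back to allocating a fresh MarkedBlock.
        MarkedBlock::Handle* block = m_directory->tryAllocateBlock();
        // ...
    }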

To introduce this optimization, we need to remove the restriction that no callee cell can be a
LargeAllocation. It turns out that the SamplingProfiler's isValueGCObject heavily relies on every callee
being small: isValueGCObject assumes that MarkedSpace::m_largeAllocations is sorted. But that is not true
in general, since this vector is sorted only when a conservative scan happens, and even then only
partially: we sort only the eden part of the vector. So we cannot use this vector to implement
isValueGCObject in the sampling profiler. Instead, we register each HeapCell address in a hash set in
MarkedSpace. Since the sampling profiler does not need to find a cell from a pointer into the middle of a
JSCell, registering just the cell address is enough. And we maintain this hash set only while the sampling
profiler is enabled, to save memory in the common case.
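
Condensed from the MarkedSpace.cpp and HeapUtil.h hunks shown below, the tracking amounts to:

    // Built lazily; only the sampling profiler calls this, so the common
    // case pays nothing for the hash set.
    void MarkedSpace::enableLargeAllocationTracking()
    {
        m_largeAllocationSet = makeUnique<HashSet<HeapCell*>>();
        for (auto* allocation : m_largeAllocations)
            m_largeAllocationSet->add(allocation->cell());
    }

    // The profiler only tests exact cell addresses, so an O(1) membership
    // test replaces the binary search over the (not reliably sorted) vector.
    static bool isPointerGCObjectJSCell(Heap& heap, TinyBloomFilter filter, JSCell* pointer)
    {
        if (pointer->isLargeAllocation()) {
            auto* set = heap.objectSpace().largeAllocationSet();
            ASSERT(set); // Only callers that enabled tracking get here.
            return set->contains(pointer);
        }
        // ... the MarkedBlock path is unchanged ...
    }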

We also fix code that relied on JSString always being allocated in a MarkedBlock, and we fix
PackedCellPtr's assumption that CodeBlock is always allocated in a MarkedBlock.
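
Both fixes come down to the same two constraints on lower-tier cells: a LargeAllocation cell has no
MarkedBlock::Footer after it, so nothing may read past the end of the cell, and its address is only
8-byte aligned, since a LargeAllocation cell has the MarkedBlock::atomSize / 2 bit set. Hence
PackedCellPtr now assumes 8-byte alignment; condensed from the PackedCellPtr.h hunk below:

    // The packed pointer can no longer assume MarkedBlock::atomSize
    // (16-byte) alignment, because a CodeBlock may now be a lower-tier
    // LargeAllocation whose cell address is only 8-byte aligned.
    template<typename T>
    class PackedCellPtr : public PackedAlignedPtr<T, 8> {
    public:
        using Base = PackedAlignedPtr<T, 8>;
        PackedCellPtr(T* pointer)
            : Base(pointer)
        {
        }
    };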

We also shrink sizeof(LargeAllocation), since it is now used for non-large allocations as well.
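
Concretely, LargeAllocation now derives from PackedRawSentinelNode (packed pointers, typically 6 bytes
each on 64-bit platforms), and the 4-byte m_indexInSpace moves ahead of the 8-byte m_cellSize so it fills
what would otherwise be padding. A standalone illustration of the field-ordering effect, using
hypothetical sizes (this is not the patch's code):

    #include <cstdint>

    // Assume a 12-byte base, standing in for two packed 6-byte pointers.
    struct Before { char base[12]; uint64_t cellSize; uint32_t indexInSpace; };
    struct After  { char base[12]; uint32_t indexInSpace; uint64_t cellSize; };

    // cellSize must be 8-byte aligned, so Before pays 4 bytes of padding
    // before it and 4 more after indexInSpace; After pays none.
    static_assert(sizeof(Before) == 32, "padding before and after cellSize");
    static_assert(sizeof(After) == 24, "indexInSpace fills the padding hole");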

JetStream2 and Speedometer2 are neutral. RAMification shows a 0.6% progression on iOS devices.

  • heap/BlockDirectory.cpp:

(JSC::BlockDirectory::BlockDirectory):

  • heap/BlockDirectory.h:
  • heap/BlockDirectoryInlines.h:

(JSC::BlockDirectory::tryAllocateFromLowerTier):

  • heap/CompleteSubspace.cpp:

(JSC::CompleteSubspace::allocatorForSlow):
(JSC::CompleteSubspace::tryAllocateSlow):
(JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):

  • heap/Heap.cpp:

(JSC::Heap::dumpHeapStatisticsAtVMDestruction):
(JSC::Heap::addCoreConstraints):

  • heap/HeapUtil.h:

(JSC::HeapUtil::isPointerGCObjectJSCell):
(JSC::HeapUtil::isValueGCObject):

  • heap/IsoAlignedMemoryAllocator.cpp:

(JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
(JSC::IsoAlignedMemoryAllocator::freeMemory):
(JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):

  • heap/IsoCellSet.cpp:

(JSC::IsoCellSet::~IsoCellSet):

  • heap/IsoCellSet.h:
  • heap/IsoCellSetInlines.h:

(JSC::IsoCellSet::add):
(JSC::IsoCellSet::remove):
(JSC::IsoCellSet::contains const):
(JSC::IsoCellSet::forEachMarkedCell):
(JSC::IsoCellSet::forEachMarkedCellInParallel):
(JSC::IsoCellSet::forEachLiveCell):
(JSC::IsoCellSet::sweepLowerTierCell):

  • heap/IsoSubspace.cpp:

(JSC::IsoSubspace::IsoSubspace):
(JSC::IsoSubspace::tryAllocateFromLowerTier):
(JSC::IsoSubspace::sweepLowerTierCell):

  • heap/IsoSubspace.h:
  • heap/LargeAllocation.cpp:

(JSC::LargeAllocation::tryReallocate):
(JSC::LargeAllocation::createForLowerTier):
(JSC::LargeAllocation::reuseForLowerTier):
(JSC::LargeAllocation::LargeAllocation):

  • heap/LargeAllocation.h:

(JSC::LargeAllocation::lowerTierIndex const):
(JSC::LargeAllocation::isLowerTier const):

  • heap/LocalAllocator.cpp:

(JSC::LocalAllocator::allocateSlowCase):

  • heap/MarkedBlock.cpp:

(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::stopAllocating):

  • heap/MarkedBlock.h:

(JSC::MarkedBlock::Handle::forEachCell):

  • heap/MarkedSpace.cpp:

(JSC::MarkedSpace::freeMemory):
(JSC::MarkedSpace::lastChanceToFinalize):
(JSC::MarkedSpace::sweepLargeAllocations):
(JSC::MarkedSpace::enableLargeAllocationTracking):

  • heap/MarkedSpace.h:

(JSC::MarkedSpace:: const):

  • heap/PackedCellPtr.h:

(JSC::PackedCellPtr::PackedCellPtr):

  • heap/Subspace.h:
  • heap/WeakSet.cpp:

(JSC::WeakSet::~WeakSet):
(JSC::WeakSet::findAllocator):
(JSC::WeakSet::addAllocator):

  • heap/WeakSet.h:

(JSC::WeakSet::WeakSet):
(JSC::WeakSet::resetAllocator):
(JSC::WeakSet::container const): Deleted.
(JSC::WeakSet::setContainer): Deleted.

  • heap/WeakSetInlines.h:

(JSC::WeakSet::allocate):

  • runtime/InternalFunction.cpp:

(JSC::InternalFunction::InternalFunction):

  • runtime/JSCallee.cpp:

(JSC::JSCallee::JSCallee):

  • runtime/JSString.h:
  • runtime/SamplingProfiler.cpp:

(JSC::SamplingProfiler::SamplingProfiler):
(JSC::SamplingProfiler::processUnverifiedStackTraces):
(JSC::SamplingProfiler::releaseStackTraces):
(JSC::SamplingProfiler::stackTracesAsJSON):
(JSC::SamplingProfiler::reportTopFunctions):
(JSC::SamplingProfiler::reportTopBytecodes):

  • runtime/SamplingProfiler.h:
Location: trunk/Source/JavaScriptCore
Files: 28 edited

  • trunk/Source/JavaScriptCore/ChangeLog (r252273 → r252298): adds the ChangeLog entry reproduced in the commit message above.
  • trunk/Source/JavaScriptCore/heap/CompleteSubspace.cpp (r249175 → r252298)

         dataLog("Creating BlockDirectory/LocalAllocator for ", m_name, ", ", attributes(), ", ", sizeClass, ".\n");

-    std::unique_ptr<BlockDirectory> uniqueDirectory =
-        makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
+    std::unique_ptr<BlockDirectory> uniqueDirectory = makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
     BlockDirectory* directory = uniqueDirectory.get();
     m_directories.append(WTFMove(uniqueDirectory));
…
     m_space.m_largeAllocations.append(allocation);
+    if (auto* set = m_space.largeAllocationSet())
+        set->add(allocation->cell());
     ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
     vm.heap.didAllocate(size);
…
     ASSERT(oldIndexInSpace == allocation->indexInSpace());

+    // If reallocation changes the address, we should update HashSet.
+    if (oldAllocation != allocation) {
+        if (auto* set = m_space.largeAllocationSet()) {
+            set->remove(oldAllocation->cell());
+            set->add(allocation->cell());
+        }
+    }
+
     m_space.m_largeAllocations[oldIndexInSpace] = allocation;
     vm.heap.didAllocate(difference);
  • trunk/Source/JavaScriptCore/heap/Heap.cpp (r250285 → r252298)

     m_objectSpace.forEachBlock([&] (MarkedBlock::Handle* block) {
         unsigned live = 0;
-        block->forEachCell([&] (HeapCell* cell, HeapCell::Kind) {
+        block->forEachCell([&] (size_t, HeapCell* cell, HeapCell::Kind) {
             if (cell->isLive())
                 live++;
…
         });
         dataLogLn("[", counter++, "] ", block->cellSize(), ", ", live, " / ", block->cellsPerBlock(), " ", static_cast<double>(live) / block->cellsPerBlock() * 100, "% ", block->attributes(), " ", block->subspace()->name());
-        block->forEachCell([&] (HeapCell* heapCell, HeapCell::Kind kind) {
+        block->forEachCell([&] (size_t, HeapCell* heapCell, HeapCell::Kind kind) {
             if (heapCell->isLive() && kind == HeapCell::Kind::JSCell) {
                 auto* cell = static_cast<JSCell*>(heapCell);
…
 #if ENABLE(SAMPLING_PROFILER)
             if (SamplingProfiler* samplingProfiler = m_vm.samplingProfiler()) {
-                LockHolder locker(samplingProfiler->getLock());
-                samplingProfiler->processUnverifiedStackTraces();
+                auto locker = holdLock(samplingProfiler->getLock());
+                samplingProfiler->processUnverifiedStackTraces(locker);
                 samplingProfiler->visit(slotVisitor);
                 if (Options::logGC() == GCLogging::Verbose)
  • trunk/Source/JavaScriptCore/heap/HeapUtil.h (r250005 → r252298)

     }

-    static bool isPointerGCObjectJSCell(
-        Heap& heap, TinyBloomFilter filter, const void* pointer)
+    static bool isPointerGCObjectJSCell(Heap& heap, TinyBloomFilter filter, JSCell* pointer)
     {
         // It could point to a large allocation.
-        const Vector<LargeAllocation*>& largeAllocations = heap.objectSpace().largeAllocations();
-        if (!largeAllocations.isEmpty()) {
-            if (largeAllocations[0]->aboveLowerBound(pointer)
-                && largeAllocations.last()->belowUpperBound(pointer)) {
-                LargeAllocation*const* result = approximateBinarySearch<LargeAllocation*const>(
-                    largeAllocations.begin(), largeAllocations.size(),
-                    LargeAllocation::fromCell(pointer),
-                    [] (LargeAllocation*const* ptr) -> LargeAllocation* { return *ptr; });
-                if (result) {
-                    if (result > largeAllocations.begin()
-                        && result[-1]->cell() == pointer
-                        && isJSCellKind(result[-1]->attributes().cellKind))
-                        return true;
-                    if (result[0]->cell() == pointer
-                        && isJSCellKind(result[0]->attributes().cellKind))
-                        return true;
-                    if (result + 1 < largeAllocations.end()
-                        && result[1]->cell() == pointer
-                        && isJSCellKind(result[1]->attributes().cellKind))
-                        return true;
-                }
-            }
+        if (pointer->isLargeAllocation()) {
+            auto* set = heap.objectSpace().largeAllocationSet();
+            ASSERT(set);
+            if (set->isEmpty())
+                return false;
+            return set->contains(pointer);
         }

…
     }

+    // This does not find the cell if the pointer is pointing at the middle of a JSCell.
     static bool isValueGCObject(
         Heap& heap, TinyBloomFilter filter, JSValue value)
     {
+        ASSERT(heap.objectSpace().largeAllocationSet());
         if (!value.isCell())
             return false;
-        return isPointerGCObjectJSCell(heap, filter, static_cast<void*>(value.asCell()));
+        return isPointerGCObjectJSCell(heap, filter, value.asCell());
     }
 };
  • trunk/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp (r244019 → r252298)

 }

-void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t)
+void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t size)
 {
-    RELEASE_ASSERT_NOT_REACHED();
+    return FastMalloc::tryMalloc(size);
 }

-void IsoAlignedMemoryAllocator::freeMemory(void*)
+void IsoAlignedMemoryAllocator::freeMemory(void* pointer)
 {
-    RELEASE_ASSERT_NOT_REACHED();
+    FastMalloc::free(pointer);
 }

 void* IsoAlignedMemoryAllocator::tryReallocateMemory(void*, size_t)
 {
+    // In IsoSubspace-managed LargeAllocation, we must not perform realloc.
     RELEASE_ASSERT_NOT_REACHED();
 }
  • trunk/Source/JavaScriptCore/heap/IsoCellSet.cpp (r248846 → r252298)

 {
     if (isOnList())
-        BasicRawSentinelNode<IsoCellSet>::remove();
+        PackedRawSentinelNode<IsoCellSet>::remove();
 }
  • trunk/Source/JavaScriptCore/heap/IsoCellSet.h (r239535 → r252298)

 // removal. Each such set should be thought of as a 0.8% increase in object size for objects in that
 // IsoSubspace (it's like adding 1 bit every 16 bytes, or 1 bit every 128 bits).
-class IsoCellSet : public BasicRawSentinelNode<IsoCellSet> {
+class IsoCellSet : public PackedRawSentinelNode<IsoCellSet> {
 public:
     IsoCellSet(IsoSubspace& subspace);
…
     void didRemoveBlock(size_t blockIndex);
     void sweepToFreeList(MarkedBlock::Handle*);
+    void sweepLowerTierCell(unsigned);

+    Bitmap<MarkedBlock::numberOfLowerTierCells> m_lowerTierBits;
+
     IsoSubspace& m_subspace;
  • trunk/Source/JavaScriptCore/heap/IsoCellSetInlines.h (r239535 → r252298)

 inline bool IsoCellSet::add(HeapCell* cell)
 {
+    if (cell->isLargeAllocation())
+        return !m_lowerTierBits.concurrentTestAndSet(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
…
 inline bool IsoCellSet::remove(HeapCell* cell)
 {
+    if (cell->isLargeAllocation())
+        return !m_lowerTierBits.concurrentTestAndClear(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
…
 inline bool IsoCellSet::contains(HeapCell* cell) const
 {
+    if (cell->isLargeAllocation())
+        return !m_lowerTierBits.get(cell->largeAllocation().lowerTierIndex());
     AtomIndices atomIndices(cell);
     auto* bits = m_bits[atomIndices.blockIndex].get();
…
                     return IterationStatus::Continue;
                 });
+        });
+
+    CellAttributes attributes = m_subspace.attributes();
+    m_subspace.forEachLargeAllocation(
+        [&] (LargeAllocation* allocation) {
+            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
+                func(allocation->cell(), attributes.cellKind);
         });
 }
…
                     });
             }
+
+            {
+                auto locker = holdLock(m_lock);
+                if (!m_needToVisitLargeAllocations)
+                    return;
+                m_needToVisitLargeAllocations = false;
+            }
+
+            CellAttributes attributes = m_set.m_subspace.attributes();
+            m_set.m_subspace.forEachLargeAllocation(
+                [&] (LargeAllocation* allocation) {
+                    if (m_set.m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
+                        m_func(visitor, allocation->cell(), attributes.cellKind);
+                });
         }
…
         Func m_func;
         Lock m_lock;
+        bool m_needToVisitLargeAllocations { true };
     };
…
             MarkedBlock::Handle* block = directory.m_blocks[blockIndex];

-            // FIXME: We could optimize this by checking our bits before querying isLive.
-            // OOPS! (need bug URL)
             auto* bits = m_bits[blockIndex].get();
-            block->forEachLiveCell(
+            block->forEachCell(
                 [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
-                    if (bits->get(atomNumber))
+                    if (bits->get(atomNumber) && block->isLive(cell))
                         func(cell, kind);
                     return IterationStatus::Continue;
                 });
         });
+
+    CellAttributes attributes = m_subspace.attributes();
+    m_subspace.forEachLargeAllocation(
+        [&] (LargeAllocation* allocation) {
+            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isLive())
+                func(allocation->cell(), attributes.cellKind);
+        });
+}
+
+inline void IsoCellSet::sweepLowerTierCell(unsigned index)
+{
+    m_lowerTierBits.concurrentTestAndClear(index);
 }
  • trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp (r248846 → r252298)

 #include "BlockDirectoryInlines.h"
 #include "IsoAlignedMemoryAllocator.h"
+#include "IsoCellSetInlines.h"
 #include "IsoSubspaceInlines.h"
 #include "LocalAllocatorInlines.h"
…
     , m_isoAlignedMemoryAllocator(makeUnique<IsoAlignedMemoryAllocator>())
 {
+    m_isIsoSubspace = true;
     initialize(heapCellType, m_isoAlignedMemoryAllocator.get());
…
 }

+void* IsoSubspace::tryAllocateFromLowerTier()
+{
+    auto revive = [&] (LargeAllocation* allocation) {
+        allocation->setIndexInSpace(m_space.m_largeAllocations.size());
+        allocation->m_hasValidCell = true;
+        m_space.m_largeAllocations.append(allocation);
+        if (auto* set = m_space.largeAllocationSet())
+            set->add(allocation->cell());
+        ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
+        m_largeAllocations.append(allocation);
+        return allocation->cell();
+    };
+
+    if (!m_lowerTierFreeList.isEmpty()) {
+        LargeAllocation* allocation = m_lowerTierFreeList.begin();
+        allocation->remove();
+        return revive(allocation);
+    }
+    if (m_lowerTierCellCount != MarkedBlock::numberOfLowerTierCells) {
+        size_t size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(m_size);
+        LargeAllocation* allocation = LargeAllocation::createForLowerTier(*m_space.heap(), size, this, m_lowerTierCellCount++);
+        return revive(allocation);
+    }
+    return nullptr;
+}
+
+void IsoSubspace::sweepLowerTierCell(LargeAllocation* largeAllocation)
+{
+    unsigned lowerTierIndex = largeAllocation->lowerTierIndex();
+    largeAllocation = largeAllocation->reuseForLowerTier();
+    m_lowerTierFreeList.append(largeAllocation);
+    m_cellSets.forEach(
+        [&] (IsoCellSet* set) {
+            set->sweepLowerTierCell(lowerTierIndex);
+        });
+}
+
+void IsoSubspace::destroyLowerTierFreeList()
+{
+    m_lowerTierFreeList.forEach([&](LargeAllocation* allocation) {
+        allocation->destroy();
+    });
+}
+
 } // namespace JSC
  • trunk/Source/JavaScriptCore/heap/IsoSubspace.h (r247844 → r252298)

     void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode);

+    void sweepLowerTierCell(LargeAllocation*);
+
+    void* tryAllocateFromLowerTier();
+    void destroyLowerTierFreeList();
+
 private:
     friend class IsoCellSet;
…
     LocalAllocator m_localAllocator;
     std::unique_ptr<IsoAlignedMemoryAllocator> m_isoAlignedMemoryAllocator;
-    SentinelLinkedList<IsoCellSet, BasicRawSentinelNode<IsoCellSet>> m_cellSets;
+    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_lowerTierFreeList;
+    SentinelLinkedList<IsoCellSet, PackedRawSentinelNode<IsoCellSet>> m_cellSets;
+    uint8_t m_lowerTierCellCount { 0 };
 };
  • trunk/Source/JavaScriptCore/heap/LargeAllocation.cpp (r249175 → r252298)

 LargeAllocation* LargeAllocation::tryReallocate(size_t size, Subspace* subspace)
 {
+    ASSERT(!isLowerTier());
     size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
     static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
…
 }

+
+LargeAllocation* LargeAllocation::createForLowerTier(Heap& heap, size_t size, Subspace* subspace, uint8_t lowerTierIndex)
+{
+    if (validateDFGDoesGC)
+        RELEASE_ASSERT(heap.expectDoesGC());
+
+    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
+    static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
+
+    void* space = subspace->alignedMemoryAllocator()->tryAllocateMemory(adjustedAlignmentAllocationSize);
+    RELEASE_ASSERT(space);
+
+    bool adjustedAlignment = false;
+    if (!isAlignedForLargeAllocation(space)) {
+        space = bitwise_cast<void*>(bitwise_cast<uintptr_t>(space) + halfAlignment);
+        adjustedAlignment = true;
+        ASSERT(isAlignedForLargeAllocation(space));
+    }
+
+    if (scribbleFreeCells())
+        scribble(space, size);
+    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
+    largeAllocation->m_lowerTierIndex = lowerTierIndex;
+    return largeAllocation;
+}
+
+LargeAllocation* LargeAllocation::reuseForLowerTier()
+{
+    Heap& heap = *this->heap();
+    size_t size = m_cellSize;
+    Subspace* subspace = m_subspace;
+    bool adjustedAlignment = m_adjustedAlignment;
+    uint8_t lowerTierIndex = m_lowerTierIndex;
+
+    void* space = this->basePointer();
+    this->~LargeAllocation();
+
+    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
+    largeAllocation->m_lowerTierIndex = lowerTierIndex;
+    largeAllocation->m_hasValidCell = false;
+    return largeAllocation;
+}
+
 LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace, unsigned indexInSpace, bool adjustedAlignment)
-    : m_cellSize(size)
-    , m_indexInSpace(indexInSpace)
+    : m_indexInSpace(indexInSpace)
+    , m_cellSize(size)
     , m_isNewlyAllocated(true)
     , m_hasValidCell(true)
…
     , m_attributes(subspace->attributes())
     , m_subspace(subspace)
-    , m_weakSet(heap.vm(), *this)
+    , m_weakSet(heap.vm())
 {
     m_isMarked.store(0);
  • trunk/Source/JavaScriptCore/heap/LargeAllocation.h (r251556 → r252298)

 namespace JSC {

+class IsoSubspace;
 class SlotVisitor;
…
 // when a HeapCell* is a LargeAllocation because it will have the MarkedBlock::atomSize / 2 bit set.

-class LargeAllocation : public BasicRawSentinelNode<LargeAllocation> {
+class LargeAllocation : public PackedRawSentinelNode<LargeAllocation> {
 public:
     friend class LLIntOffsetsExtractor;
+    friend class IsoSubspace;

     static LargeAllocation* tryCreate(Heap&, size_t, Subspace*, unsigned indexInSpace);
+
+    static LargeAllocation* createForLowerTier(Heap&, size_t, Subspace*, uint8_t lowerTierIndex);
+    LargeAllocation* reuseForLowerTier();

     LargeAllocation* tryReallocate(size_t, Subspace*);
…
     size_t cellSize() const { return m_cellSize; }
+
+    uint8_t lowerTierIndex() const { return m_lowerTierIndex; }

     bool aboveLowerBound(const void* rawPtr)
…
     void dump(PrintStream&) const;
+
+    bool isLowerTier() const { return m_lowerTierIndex != UINT8_MAX; }

     static constexpr unsigned alignment = MarkedBlock::atomSize;
…
     void* basePointer() const;

+    unsigned m_indexInSpace { 0 };
     size_t m_cellSize;
-    unsigned m_indexInSpace { 0 };
     bool m_isNewlyAllocated : 1;
     bool m_hasValidCell : 1;
…
     Atomic<bool> m_isMarked;
     CellAttributes m_attributes;
+    uint8_t m_lowerTierIndex { UINT8_MAX };
     Subspace* m_subspace;
     WeakSet m_weakSet;
  • trunk/Source/JavaScriptCore/heap/LocalAllocator.cpp (r249175 → r252298)

     void* result = tryAllocateWithoutCollecting();

-    if (LIKELY(result != 0))
+    if (LIKELY(result != nullptr))
         return result;
+
+    Subspace* subspace = m_directory->m_subspace;
+    if (subspace->isIsoSubspace()) {
+        if (void* result = static_cast<IsoSubspace*>(subspace)->tryAllocateFromLowerTier())
+            return result;
+    }

     MarkedBlock::Handle* block = m_directory->tryAllocateBlock();
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp (r251263 → r252298)

 MarkedBlock::Handle::Handle(Heap& heap, AlignedMemoryAllocator* alignedMemoryAllocator, void* blockSpace)
     : m_alignedMemoryAllocator(alignedMemoryAllocator)
-    , m_weakSet(heap.vm(), CellContainer())
+    , m_weakSet(heap.vm())
 {
     m_block = new (NotNull, blockSpace) MarkedBlock(heap.vm(), *this);
-
-    m_weakSet.setContainer(*m_block);

     heap.didAllocateBlock(blockSize);
…
     forEachCell(
-        [&] (HeapCell* cell, HeapCell::Kind) -> IterationStatus {
+        [&] (size_t, HeapCell* cell, HeapCell::Kind) -> IterationStatus {
             block().setNewlyAllocated(cell);
             return IterationStatus::Continue;
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.h (r249175 → r252298)

     static constexpr size_t atomsPerBlock = blockSize / atomSize;
+
+    static constexpr size_t numberOfLowerTierCells = 8;
+    static_assert(numberOfLowerTierCells <= 256);

     static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
…
     static_assert(payloadSize == ((blockSize - sizeof(MarkedBlock::Footer)) & ~(atomSize - 1)), "Payload size computed the alternate way should give the same result");
-    // Some of JSCell types assume that the last JSCell in a MarkedBlock has a subsequent memory region (Footer) that can still safely accessed.
-    // For example, JSRopeString assumes that it can safely access up to 2 bytes beyond the JSRopeString cell.
-    static_assert(sizeof(Footer) >= sizeof(uint16_t));

     static MarkedBlock::Handle* tryCreate(Heap&, AlignedMemoryAllocator*);
…
     for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
-        if (functor(cell, kind) == IterationStatus::Done)
+        if (functor(i, cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
     }
  • trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp (r252032 → r252298)

     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->destroy();
+    forEachSubspace([&](Subspace& subspace) {
+        if (subspace.isIsoSubspace())
+            static_cast<IsoSubspace&>(subspace).destroyLowerTierFreeList();
+        return IterationStatus::Continue;
+    });
 }
…
     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->lastChanceToFinalize();
+    // We do not call lastChanceToFinalize for lower-tier swept cells since we need nothing to do.
 }
…
         allocation->sweep();
         if (allocation->isEmpty()) {
-            m_capacity -= allocation->cellSize();
-            allocation->destroy();
+            if (auto* set = largeAllocationSet())
+                set->remove(allocation->cell());
+            if (allocation->isLowerTier())
+                static_cast<IsoSubspace*>(allocation->subspace())->sweepLowerTierCell(allocation);
+            else {
+                m_capacity -= allocation->cellSize();
+                allocation->destroy();
+            }
             continue;
         }
…
         m_largeAllocationsNurseryOffsetForSweep = 0;
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
+}
+
+void MarkedSpace::enableLargeAllocationTracking()
+{
+    m_largeAllocationSet = makeUnique<HashSet<HeapCell*>>();
+    for (auto* allocation : m_largeAllocations)
+        m_largeAllocationSet->add(allocation->cell());
 }
  • trunk/Source/JavaScriptCore/heap/MarkedSpace.h (r248143 → r252298)

 class CompleteSubspace;
 class Heap;
+class HeapCell;
 class HeapIterationScope;
+class IsoSubspace;
 class LLIntOffsetsExtractor;
 class Subspace;
…
     unsigned largeAllocationsNurseryOffset() const { return m_largeAllocationsNurseryOffset; }
     unsigned largeAllocationsOffsetForThisCollection() const { return m_largeAllocationsOffsetForThisCollection; }
+    HashSet<HeapCell*>* largeAllocationSet() const { return m_largeAllocationSet.get(); }
+
+    void enableLargeAllocationTracking();

     // These are cached pointers and offsets for quickly searching the large allocations that are
…
     friend class WeakSet;
     friend class Subspace;
+    friend class IsoSubspace;

     // Use this version when calling from within the GC where we know that the directories
…
     Vector<Subspace*> m_subspaces;

+    std::unique_ptr<HashSet<HeapCell*>> m_largeAllocationSet;
     Vector<LargeAllocation*> m_largeAllocations;
     unsigned m_largeAllocationsNurseryOffset { 0 };
  • trunk/Source/JavaScriptCore/heap/PackedCellPtr.h (r247843 → r252298)

 template<typename T>
-class PackedCellPtr : public PackedAlignedPtr<T, MarkedBlock::atomSize> {
+class PackedCellPtr : public PackedAlignedPtr<T, 8> {
 public:
-    using Base = PackedAlignedPtr<T, MarkedBlock::atomSize>;
+    using Base = PackedAlignedPtr<T, 8>;
     PackedCellPtr(T* pointer)
         : Base(pointer)
     {
-        static_assert((sizeof(T) <= MarkedSpace::largeCutoff && std::is_final<T>::value) || isAllocatedFromIsoSubspace<T>::value, "LargeAllocation does not have 16byte alignment");
-        ASSERT(!(bitwise_cast<uintptr_t>(pointer) & (16 - 1)));
     }
 };
  • trunk/Source/JavaScriptCore/heap/Subspace.h (r240216 → r252298)

     virtual void didBeginSweepingToFreeList(MarkedBlock::Handle*);

+    bool isIsoSubspace() const { return m_isIsoSubspace; }
+
 protected:
     void initialize(HeapCellType*, AlignedMemoryAllocator*);
…
     BlockDirectory* m_firstDirectory { nullptr };
     BlockDirectory* m_directoryForEmptyAllocation { nullptr }; // Uses the MarkedSpace linked list of blocks.
-    SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
+    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_largeAllocations;
     Subspace* m_nextSubspaceInAlignedMemoryAllocator { nullptr };

     CString m_name;
+
+    bool m_isIsoSubspace { false };
 };
  • trunk/Source/JavaScriptCore/heap/WeakSet.cpp (r209570 → r252298)

     Heap& heap = *this->heap();
-    WeakBlock* next = 0;
+    WeakBlock* next = nullptr;
     for (WeakBlock* block = m_blocks.head(); block; block = next) {
         next = block->next();
…
 }

-WeakBlock::FreeCell* WeakSet::findAllocator()
+WeakBlock::FreeCell* WeakSet::findAllocator(CellContainer container)
 {
     if (WeakBlock::FreeCell* allocator = tryFindAllocator())
         return allocator;

-    return addAllocator();
+    return addAllocator(container);
 }
…
 }

-WeakBlock::FreeCell* WeakSet::addAllocator()
+WeakBlock::FreeCell* WeakSet::addAllocator(CellContainer container)
 {
     if (!isOnList())
         heap()->objectSpace().addActiveWeakSet(this);

-    WeakBlock* block = WeakBlock::create(*heap(), m_container);
+    WeakBlock* block = WeakBlock::create(*heap(), container);
     heap()->didAllocate(WeakBlock::blockSize);
     m_blocks.append(block);
  • trunk/Source/JavaScriptCore/heap/WeakSet.h (r251556 → r252298)

     static void deallocate(WeakImpl*);

-    WeakSet(VM&, CellContainer);
+    WeakSet(VM&);
     ~WeakSet();
     void lastChanceToFinalize();

-    CellContainer container() const { return m_container; }
-    void setContainer(CellContainer container) { m_container = container; }
-
     Heap* heap() const;
     VM& vm() const;
…
 private:
-    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator();
+    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator(CellContainer);
     WeakBlock::FreeCell* tryFindAllocator();
-    WeakBlock::FreeCell* addAllocator();
+    WeakBlock::FreeCell* addAllocator(CellContainer);
     void removeAllocator(WeakBlock*);

-    WeakBlock::FreeCell* m_allocator;
-    WeakBlock* m_nextAllocator;
+    WeakBlock::FreeCell* m_allocator { nullptr };
+    WeakBlock* m_nextAllocator { nullptr };
     DoublyLinkedList<WeakBlock> m_blocks;
     // m_vm must be a pointer (instead of a reference) because the JSCLLIntOffsetsExtractor
     // cannot handle it being a reference.
     VM* m_vm;
-    CellContainer m_container;
 };

-inline WeakSet::WeakSet(VM& vm, CellContainer container)
-    : m_allocator(0)
-    , m_nextAllocator(0)
-    , m_vm(&vm)
-    , m_container(container)
+inline WeakSet::WeakSet(VM& vm)
+    : m_vm(&vm)
 {
 }
…
 inline void WeakSet::resetAllocator()
 {
-    m_allocator = 0;
+    m_allocator = nullptr;
     m_nextAllocator = m_blocks.head();
 }
  • trunk/Source/JavaScriptCore/heap/WeakSetInlines.h (r206525 → r252298)

 inline WeakImpl* WeakSet::allocate(JSValue jsValue, WeakHandleOwner* weakHandleOwner, void* context)
 {
-    WeakSet& weakSet = jsValue.asCell()->cellContainer().weakSet();
+    CellContainer container = jsValue.asCell()->cellContainer();
+    WeakSet& weakSet = container.weakSet();
     WeakBlock::FreeCell* allocator = weakSet.m_allocator;
     if (UNLIKELY(!allocator))
-        allocator = weakSet.findAllocator();
+        allocator = weakSet.findAllocator(container);
     weakSet.m_allocator = allocator->next;
  • trunk/Source/JavaScriptCore/runtime/InternalFunction.cpp (r251425 → r252298)

     , m_globalObject(vm, this, structure->globalObject())
 {
-    // globalObject->vm() wants callees to not be large allocations.
-    RELEASE_ASSERT(!isLargeAllocation());
     ASSERT_WITH_MESSAGE(m_functionForCall, "[[Call]] must be implemented");
     ASSERT(m_functionForConstruct);
  • trunk/Source/JavaScriptCore/runtime/JSCallee.cpp (r217108 → r252298)

     , m_scope(vm, this, globalObject)
 {
-    RELEASE_ASSERT(!isLargeAllocation());
 }
  • trunk/Source/JavaScriptCore/runtime/JSString.h (r251584 → r252298)

         {
 #if CPU(LITTLE_ENDIAN)
-            // This access exceeds the sizeof(JSRopeString). But this is OK because JSRopeString is always allocated in MarkedBlock,
-            // and the last JSRopeString cell in the block has some subsequent bytes which are used for MarkedBlock::Footer.
-            // So the following access does not step over the page boundary in which the latter page does not have read permission.
-            return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber2Lower) & addressMask);
+            return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber1Upper) >> 16);
 #else
             return bitwise_cast<JSString*>(static_cast<uintptr_t>(m_fiber2Lower) | (static_cast<uintptr_t>(m_fiber2Upper) << 16));
  • trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp (r252006 → r252298)

     m_currentFrames.grow(256);
+    vm.heap.objectSpace().enableLargeAllocationTracking();
 }
…
 }

-void SamplingProfiler::processUnverifiedStackTraces()
+void SamplingProfiler::processUnverifiedStackTraces(const AbstractLocker&)
 {
     // This function needs to be called from the JSC execution thread.
…
     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
…
 {
     DeferGC deferGC(m_vm.heap);
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);

     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
…
 void SamplingProfiler::reportTopFunctions(PrintStream& out)
 {
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
     DeferGCForAWhile deferGC(m_vm.heap);

     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
…
 void SamplingProfiler::reportTopBytecodes(PrintStream& out)
 {
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
     DeferGCForAWhile deferGC(m_vm.heap);

     {
         HeapIterationScope heapIterationScope(m_vm.heap);
-        processUnverifiedStackTraces();
+        processUnverifiedStackTraces(locker);
     }
  • trunk/Source/JavaScriptCore/runtime/SamplingProfiler.h (r251468 → r252298)

     JS_EXPORT_PRIVATE void noticeCurrentThreadAsJSCExecutionThread();
     void noticeCurrentThreadAsJSCExecutionThread(const AbstractLocker&);
-    void processUnverifiedStackTraces(); // You should call this only after acquiring the lock.
+    void processUnverifiedStackTraces(const AbstractLocker&);
     void setStopWatch(const AbstractLocker&, Ref<Stopwatch>&& stopwatch) { m_stopwatch = WTFMove(stopwatch); }
     void pause(const AbstractLocker&);
  • trunk/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp (r251886 → r252298)

     WebAssemblyFunction* function = new (NotNull, allocateCell<WebAssemblyFunction>(vm.heap)) WebAssemblyFunction(vm, globalObject, structure, jsEntrypoint, wasmToWasmEntrypointLoadLocation, signatureIndex);
     function->finishCreation(vm, executable, length, name, instance);
-    ASSERT_WITH_MESSAGE(!function->isLargeAllocation(), "WebAssemblyFunction should be allocated not in large allocation since it is JSCallee.");
     return function;
 }