Changeset 252298 in WebKit
- Timestamp: Nov 8, 2019, 5:37:55 PM
- Location: trunk/Source/JavaScriptCore
- Files: 28 edited
trunk/Source/JavaScriptCore/ChangeLog
r252273 → r252298 (the entire entry below is added)

2019-11-08  Yusuke Suzuki  <ysuzuki@apple.com>

        [JSC] Make IsoSubspace scalable
        https://bugs.webkit.org/show_bug.cgi?id=201908

        Reviewed by Keith Miller.

        This patch introduces a lower tier into IsoSubspace so that we can avoid allocating a MarkedBlock
        when only a few objects of a given type exist. This optimization allows us to apply IsoSubspace more
        aggressively to various types of objects without introducing a memory regression, even when objects
        of such a type are rarely allocated.

        We use LargeAllocation for these lower-tier objects. Each IsoSubspace holds up to 8 lower-tier objects
        allocated via LargeAllocation, and uses them when only a small number of cells of the type tend to be
        allocated. Specifically, we (1) first try to allocate in an existing MarkedBlock (there won't be one
        to start), (2) then try to allocate in a LargeAllocation, and, if no lower-tier cell is available,
        (3) finally allocate a new MarkedBlock (see the sketch after the file list below). Once such a
        LargeAllocation is assigned to a certain type, we do not deallocate it until the VM is destroyed,
        preserving IsoSubspace's key characteristic: once an address is assigned to a certain type, that
        address is used only for that type.

        To introduce this optimization, we need to remove the restriction that callee cells cannot be
        LargeAllocations. It also turns out that SamplingProfiler's isValueGCObject heavily relies on all
        callees being small: isValueGCObject assumes that MarkedSpace::m_largeAllocations is sorted, but this
        does not hold, since the vector is sorted only when a conservative scan happens, and even then only
        partially (we sort only its eden part). So we cannot use this vector to implement isValueGCObject in
        the sampling profiler. Instead, we register each HeapCell address in a hash set in MarkedSpace. Since
        the sampling profiler does not need to find pointers into the middle of a JSCell, registering the cell
        address is enough, and we maintain this hash set only while the sampling profiler is enabled, to save
        memory in the common case.

        We also fix code that relied on JSString always being allocated in a MarkedBlock, and fix
        PackedCellPtr's assumption that CodeBlock is always allocated in a MarkedBlock.

        We also make sizeof(LargeAllocation) small, since it is now used for non-large allocations too.

        JetStream2 and Speedometer2 are neutral. RAMification shows a 0.6% progression on iOS devices.
        * heap/BlockDirectory.cpp:
        (JSC::BlockDirectory::BlockDirectory):
        * heap/BlockDirectory.h:
        * heap/BlockDirectoryInlines.h:
        (JSC::BlockDirectory::tryAllocateFromLowerTier):
        * heap/CompleteSubspace.cpp:
        (JSC::CompleteSubspace::allocatorForSlow):
        (JSC::CompleteSubspace::tryAllocateSlow):
        (JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):
        * heap/Heap.cpp:
        (JSC::Heap::dumpHeapStatisticsAtVMDestruction):
        (JSC::Heap::addCoreConstraints):
        * heap/HeapUtil.h:
        (JSC::HeapUtil::isPointerGCObjectJSCell):
        (JSC::HeapUtil::isValueGCObject):
        * heap/IsoAlignedMemoryAllocator.cpp:
        (JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
        (JSC::IsoAlignedMemoryAllocator::freeMemory):
        (JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):
        * heap/IsoCellSet.cpp:
        (JSC::IsoCellSet::~IsoCellSet):
        * heap/IsoCellSet.h:
        * heap/IsoCellSetInlines.h:
        (JSC::IsoCellSet::add):
        (JSC::IsoCellSet::remove):
        (JSC::IsoCellSet::contains const):
        (JSC::IsoCellSet::forEachMarkedCell):
        (JSC::IsoCellSet::forEachMarkedCellInParallel):
        (JSC::IsoCellSet::forEachLiveCell):
        (JSC::IsoCellSet::sweepLowerTierCell):
        * heap/IsoSubspace.cpp:
        (JSC::IsoSubspace::IsoSubspace):
        (JSC::IsoSubspace::tryAllocateFromLowerTier):
        (JSC::IsoSubspace::sweepLowerTierCell):
        * heap/IsoSubspace.h:
        * heap/LargeAllocation.cpp:
        (JSC::LargeAllocation::tryReallocate):
        (JSC::LargeAllocation::createForLowerTier):
        (JSC::LargeAllocation::reuseForLowerTier):
        (JSC::LargeAllocation::LargeAllocation):
        * heap/LargeAllocation.h:
        (JSC::LargeAllocation::lowerTierIndex const):
        (JSC::LargeAllocation::isLowerTier const):
        * heap/LocalAllocator.cpp:
        (JSC::LocalAllocator::allocateSlowCase):
        * heap/MarkedBlock.cpp:
        (JSC::MarkedBlock::Handle::Handle):
        (JSC::MarkedBlock::Handle::stopAllocating):
        * heap/MarkedBlock.h:
        (JSC::MarkedBlock::Handle::forEachCell):
        * heap/MarkedSpace.cpp:
        (JSC::MarkedSpace::freeMemory):
        (JSC::MarkedSpace::lastChanceToFinalize):
        (JSC::MarkedSpace::sweepLargeAllocations):
        (JSC::MarkedSpace::enableLargeAllocationTracking):
        * heap/MarkedSpace.h:
        (JSC::MarkedSpace:: const):
        * heap/PackedCellPtr.h:
        (JSC::PackedCellPtr::PackedCellPtr):
        * heap/Subspace.h:
        * heap/WeakSet.cpp:
        (JSC::WeakSet::~WeakSet):
        (JSC::WeakSet::findAllocator):
        (JSC::WeakSet::addAllocator):
        * heap/WeakSet.h:
        (JSC::WeakSet::WeakSet):
        (JSC::WeakSet::resetAllocator):
        (JSC::WeakSet::container const): Deleted.
        (JSC::WeakSet::setContainer): Deleted.
        * heap/WeakSetInlines.h:
        (JSC::WeakSet::allocate):
        * runtime/InternalFunction.cpp:
        (JSC::InternalFunction::InternalFunction):
        * runtime/JSCallee.cpp:
        (JSC::JSCallee::JSCallee):
        * runtime/JSString.h:
        * runtime/SamplingProfiler.cpp:
        (JSC::SamplingProfiler::SamplingProfiler):
        (JSC::SamplingProfiler::processUnverifiedStackTraces):
        (JSC::SamplingProfiler::releaseStackTraces):
        (JSC::SamplingProfiler::stackTracesAsJSON):
        (JSC::SamplingProfiler::reportTopFunctions):
        (JSC::SamplingProfiler::reportTopBytecodes):
        * runtime/SamplingProfiler.h:

2019-11-08  Matt Lewis  <jlewis3@apple.com>
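The allocation order the entry describes (existing MarkedBlock first, then one of the eight lower-tier LargeAllocation slots, then a fresh MarkedBlock) can be modeled standalone as follows; all types and names are illustrative, not the actual JSC classes:

    #include <array>
    #include <cstddef>
    #include <vector>

    // Standalone model of the lower-tier allocation order described above.
    struct ModelIsoSubspace {
        static constexpr std::size_t lowerTierCapacity = 8; // mirrors MarkedBlock::numberOfLowerTierCells
        std::vector<void*> blockFreeList; // stands in for MarkedBlock free lists
        std::array<void*, lowerTierCapacity> lowerTier {};
        std::size_t lowerTierCount = 0;
        std::size_t cellSize;

        explicit ModelIsoSubspace(std::size_t size) : cellSize(size) { }

        void* allocate()
        {
            // (1) Reuse a cell from an existing block, if any (none to start).
            if (!blockFreeList.empty()) {
                void* cell = blockFreeList.back();
                blockFreeList.pop_back();
                return cell;
            }
            // (2) Take a lower-tier slot while fewer than 8 exist; these stay
            // bound to this subspace's type until VM teardown.
            if (lowerTierCount < lowerTierCapacity) {
                void* cell = ::operator new(cellSize);
                lowerTier[lowerTierCount++] = cell;
                return cell;
            }
            // (3) Only now pay for a whole new block of cells.
            constexpr std::size_t cellsPerBlock = 64; // arbitrary model value
            for (std::size_t i = 1; i < cellsPerBlock; ++i)
                blockFreeList.push_back(::operator new(cellSize));
            return ::operator new(cellSize);
        }
    };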
trunk/Source/JavaScriptCore/heap/CompleteSubspace.cpp
r249175 → r252298

      dataLog("Creating BlockDirectory/LocalAllocator for ", m_name, ", ", attributes(), ", ", sizeClass, ".\n");

 -    std::unique_ptr<BlockDirectory> uniqueDirectory =
 -        makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
 +    std::unique_ptr<BlockDirectory> uniqueDirectory = makeUnique<BlockDirectory>(m_space.heap(), sizeClass);
      BlockDirectory* directory = uniqueDirectory.get();
      m_directories.append(WTFMove(uniqueDirectory));
…
      m_space.m_largeAllocations.append(allocation);
 +    if (auto* set = m_space.largeAllocationSet())
 +        set->add(allocation->cell());
      ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
      vm.heap.didAllocate(size);
…
      ASSERT(oldIndexInSpace == allocation->indexInSpace());

 +    // If reallocation changes the address, we should update HashSet.
 +    if (oldAllocation != allocation) {
 +        if (auto* set = m_space.largeAllocationSet()) {
 +            set->remove(oldAllocation->cell());
 +            set->add(allocation->cell());
 +        }
 +    }
 +
      m_space.m_largeAllocations[oldIndexInSpace] = allocation;
      vm.heap.didAllocate(difference);
trunk/Source/JavaScriptCore/heap/Heap.cpp
r250285 → r252298

      m_objectSpace.forEachBlock([&] (MarkedBlock::Handle* block) {
          unsigned live = 0;
 -        block->forEachCell([&] (HeapCell* cell, HeapCell::Kind) {
 +        block->forEachCell([&] (size_t, HeapCell* cell, HeapCell::Kind) {
              if (cell->isLive())
                  live++;
…
          });
          dataLogLn("[", counter++, "] ", block->cellSize(), ", ", live, " / ", block->cellsPerBlock(), " ", static_cast<double>(live) / block->cellsPerBlock() * 100, "% ", block->attributes(), " ", block->subspace()->name());
 -        block->forEachCell([&] (HeapCell* heapCell, HeapCell::Kind kind) {
 +        block->forEachCell([&] (size_t, HeapCell* heapCell, HeapCell::Kind kind) {
              if (heapCell->isLive() && kind == HeapCell::Kind::JSCell) {
                  auto* cell = static_cast<JSCell*>(heapCell);
…
  #if ENABLE(SAMPLING_PROFILER)
      if (SamplingProfiler* samplingProfiler = m_vm.samplingProfiler()) {
 -        LockHolder locker(samplingProfiler->getLock());
 -        samplingProfiler->processUnverifiedStackTraces();
 +        auto locker = holdLock(samplingProfiler->getLock());
 +        samplingProfiler->processUnverifiedStackTraces(locker);
          samplingProfiler->visit(slotVisitor);
          if (Options::logGC() == GCLogging::Verbose)
trunk/Source/JavaScriptCore/heap/HeapUtil.h
r250005 → r252298

      }

 -    static bool isPointerGCObjectJSCell(
 -        Heap& heap, TinyBloomFilter filter, const void* pointer)
 +    static bool isPointerGCObjectJSCell(Heap& heap, TinyBloomFilter filter, JSCell* pointer)
      {
          // It could point to a large allocation.
 -        const Vector<LargeAllocation*>& largeAllocations = heap.objectSpace().largeAllocations();
 -        if (!largeAllocations.isEmpty()) {
 -            if (largeAllocations[0]->aboveLowerBound(pointer)
 -                && largeAllocations.last()->belowUpperBound(pointer)) {
 -                LargeAllocation*const* result = approximateBinarySearch<LargeAllocation*const>(
 -                    largeAllocations.begin(), largeAllocations.size(),
 -                    LargeAllocation::fromCell(pointer),
 -                    [] (LargeAllocation*const* ptr) -> LargeAllocation* { return *ptr; });
 -                if (result) {
 -                    if (result > largeAllocations.begin()
 -                        && result[-1]->cell() == pointer
 -                        && isJSCellKind(result[-1]->attributes().cellKind))
 -                        return true;
 -                    if (result[0]->cell() == pointer
 -                        && isJSCellKind(result[0]->attributes().cellKind))
 -                        return true;
 -                    if (result + 1 < largeAllocations.end()
 -                        && result[1]->cell() == pointer
 -                        && isJSCellKind(result[1]->attributes().cellKind))
 -                        return true;
 -                }
 -            }
 +        if (pointer->isLargeAllocation()) {
 +            auto* set = heap.objectSpace().largeAllocationSet();
 +            ASSERT(set);
 +            if (set->isEmpty())
 +                return false;
 +            return set->contains(pointer);
          }
…
      }

 +    // This does not find the cell if the pointer is pointing at the middle of a JSCell.
      static bool isValueGCObject(
          Heap& heap, TinyBloomFilter filter, JSValue value)
      {
 +        ASSERT(heap.objectSpace().largeAllocationSet());
          if (!value.isCell())
              return false;
 -        return isPointerGCObjectJSCell(heap, filter, static_cast<void*>(value.asCell()));
 +        return isPointerGCObjectJSCell(heap, filter, value.asCell());
      }
  };
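The query above replaces an approximate binary search over a (now only partially sorted) vector with an exact-membership hash set. A minimal standalone model of the new shape, with std::unordered_set standing in for WTF::HashSet<HeapCell*> (all names here are illustrative):

    #include <unordered_set>

    struct ModelHeapCell { };

    // Minimal model of the new exact-membership large-allocation query.
    struct ModelLargeAllocationSet {
        std::unordered_set<const ModelHeapCell*> cells;

        void add(const ModelHeapCell* cell) { cells.insert(cell); }
        void remove(const ModelHeapCell* cell) { cells.erase(cell); }

        // Correct regardless of insertion order, unlike a binary search over a
        // partially sorted vector. It only answers exact cell addresses:
        // pointers into the middle of a cell are not found, which is
        // acceptable for the sampling profiler.
        bool contains(const ModelHeapCell* cell) const { return cells.count(cell) != 0; }
    };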
trunk/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
r244019 → r252298

  }

 -void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t)
 +void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t size)
  {
 -    RELEASE_ASSERT_NOT_REACHED();
 +    return FastMalloc::tryMalloc(size);
  }

 -void IsoAlignedMemoryAllocator::freeMemory(void*)
 +void IsoAlignedMemoryAllocator::freeMemory(void* pointer)
  {
 -    RELEASE_ASSERT_NOT_REACHED();
 +    FastMalloc::free(pointer);
  }

  void* IsoAlignedMemoryAllocator::tryReallocateMemory(void*, size_t)
  {
 +    // In IsoSubspace-managed LargeAllocation, we must not perform realloc.
      RELEASE_ASSERT_NOT_REACHED();
  }
trunk/Source/JavaScriptCore/heap/IsoCellSet.cpp
r248846 → r252298

  {
      if (isOnList())
 -        BasicRawSentinelNode<IsoCellSet>::remove();
 +        PackedRawSentinelNode<IsoCellSet>::remove();
  }
trunk/Source/JavaScriptCore/heap/IsoCellSet.h
r239535 → r252298

  // removal. Each such set should be thought of as a 0.8% increase in object size for objects in that
  // IsoSubspace (it's like adding 1 bit every 16 bytes, or 1 bit every 128 bits).
 -class IsoCellSet : public BasicRawSentinelNode<IsoCellSet> {
 +class IsoCellSet : public PackedRawSentinelNode<IsoCellSet> {
  public:
      IsoCellSet(IsoSubspace& subspace);
…
      void didRemoveBlock(size_t blockIndex);
      void sweepToFreeList(MarkedBlock::Handle*);
 +    void sweepLowerTierCell(unsigned);

 +    Bitmap<MarkedBlock::numberOfLowerTierCells> m_lowerTierBits;
 +
      IsoSubspace& m_subspace;
trunk/Source/JavaScriptCore/heap/IsoCellSetInlines.h
r239535 → r252298

  inline bool IsoCellSet::add(HeapCell* cell)
  {
 +    if (cell->isLargeAllocation())
 +        return !m_lowerTierBits.concurrentTestAndSet(cell->largeAllocation().lowerTierIndex());
      AtomIndices atomIndices(cell);
      auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
…
  inline bool IsoCellSet::remove(HeapCell* cell)
  {
 +    if (cell->isLargeAllocation())
 +        return !m_lowerTierBits.concurrentTestAndClear(cell->largeAllocation().lowerTierIndex());
      AtomIndices atomIndices(cell);
      auto& bitsPtrRef = m_bits[atomIndices.blockIndex];
…
  inline bool IsoCellSet::contains(HeapCell* cell) const
  {
 +    if (cell->isLargeAllocation())
 +        return !m_lowerTierBits.get(cell->largeAllocation().lowerTierIndex());
      AtomIndices atomIndices(cell);
      auto* bits = m_bits[atomIndices.blockIndex].get();
…
              return IterationStatus::Continue;
          });
 +    });
 +
 +    CellAttributes attributes = m_subspace.attributes();
 +    m_subspace.forEachLargeAllocation(
 +        [&] (LargeAllocation* allocation) {
 +            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
 +                func(allocation->cell(), attributes.cellKind);
      });
  }
…
          });
      }
 +
 +    {
 +        auto locker = holdLock(m_lock);
 +        if (!m_needToVisitLargeAllocations)
 +            return;
 +        m_needToVisitLargeAllocations = false;
 +    }
 +
 +    CellAttributes attributes = m_set.m_subspace.attributes();
 +    m_set.m_subspace.forEachLargeAllocation(
 +        [&] (LargeAllocation* allocation) {
 +            if (m_set.m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isMarked())
 +                m_func(visitor, allocation->cell(), attributes.cellKind);
 +        });
  }
…
      Func m_func;
      Lock m_lock;
 +    bool m_needToVisitLargeAllocations { true };
  };
…
      MarkedBlock::Handle* block = directory.m_blocks[blockIndex];

 -    // FIXME: We could optimize this by checking our bits before querying isLive.
 -    // OOPS! (need bug URL)
      auto* bits = m_bits[blockIndex].get();
 -    block->forEachLiveCell(
 +    block->forEachCell(
          [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
 -            if (bits->get(atomNumber))
 +            if (bits->get(atomNumber) && block->isLive(cell))
                  func(cell, kind);
              return IterationStatus::Continue;
          });
      });
 +
 +    CellAttributes attributes = m_subspace.attributes();
 +    m_subspace.forEachLargeAllocation(
 +        [&] (LargeAllocation* allocation) {
 +            if (m_lowerTierBits.get(allocation->lowerTierIndex()) && allocation->isLive())
 +                func(allocation->cell(), attributes.cellKind);
 +        });
 +}
 +
 +inline void IsoCellSet::sweepLowerTierCell(unsigned index)
 +{
 +    m_lowerTierBits.concurrentTestAndClear(index);
  }
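The IsoCellSet changes above route lower-tier cells through a small side bitmap indexed by lowerTierIndex(), while block cells keep using the per-block atom bitmaps. A simplified, non-atomic model of that split, with std::bitset in place of WTF::Bitmap (types are illustrative):

    #include <bitset>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t numberOfLowerTierCells = 8; // mirrors MarkedBlock::numberOfLowerTierCells

    // Simplified cell identity: either a block cell (atom index) or a
    // lower-tier LargeAllocation (lower-tier index).
    struct ModelCell {
        bool isLargeAllocation;
        std::uint8_t lowerTierIndex; // valid when isLargeAllocation
        std::size_t atomNumber;      // valid otherwise
    };

    struct ModelIsoCellSet {
        std::bitset<numberOfLowerTierCells> lowerTierBits;
        std::bitset<1024> blockBits; // stands in for the per-block atom bitmaps

        // Returns true when the cell was newly added, matching the
        // !concurrentTestAndSet(...) convention (modeled non-atomically here).
        bool add(const ModelCell& cell)
        {
            if (cell.isLargeAllocation) {
                bool wasSet = lowerTierBits.test(cell.lowerTierIndex);
                lowerTierBits.set(cell.lowerTierIndex);
                return !wasSet;
            }
            bool wasSet = blockBits.test(cell.atomNumber);
            blockBits.set(cell.atomNumber);
            return !wasSet;
        }
    };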
trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp
r248846 → r252298

  #include "BlockDirectoryInlines.h"
  #include "IsoAlignedMemoryAllocator.h"
 +#include "IsoCellSetInlines.h"
  #include "IsoSubspaceInlines.h"
  #include "LocalAllocatorInlines.h"
…
      , m_isoAlignedMemoryAllocator(makeUnique<IsoAlignedMemoryAllocator>())
  {
 +    m_isIsoSubspace = true;
      initialize(heapCellType, m_isoAlignedMemoryAllocator.get());
…
  }

 +void* IsoSubspace::tryAllocateFromLowerTier()
 +{
 +    auto revive = [&] (LargeAllocation* allocation) {
 +        allocation->setIndexInSpace(m_space.m_largeAllocations.size());
 +        allocation->m_hasValidCell = true;
 +        m_space.m_largeAllocations.append(allocation);
 +        if (auto* set = m_space.largeAllocationSet())
 +            set->add(allocation->cell());
 +        ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
 +        m_largeAllocations.append(allocation);
 +        return allocation->cell();
 +    };
 +
 +    if (!m_lowerTierFreeList.isEmpty()) {
 +        LargeAllocation* allocation = m_lowerTierFreeList.begin();
 +        allocation->remove();
 +        return revive(allocation);
 +    }
 +    if (m_lowerTierCellCount != MarkedBlock::numberOfLowerTierCells) {
 +        size_t size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(m_size);
 +        LargeAllocation* allocation = LargeAllocation::createForLowerTier(*m_space.heap(), size, this, m_lowerTierCellCount++);
 +        return revive(allocation);
 +    }
 +    return nullptr;
 +}
 +
 +void IsoSubspace::sweepLowerTierCell(LargeAllocation* largeAllocation)
 +{
 +    unsigned lowerTierIndex = largeAllocation->lowerTierIndex();
 +    largeAllocation = largeAllocation->reuseForLowerTier();
 +    m_lowerTierFreeList.append(largeAllocation);
 +    m_cellSets.forEach(
 +        [&] (IsoCellSet* set) {
 +            set->sweepLowerTierCell(lowerTierIndex);
 +        });
 +}
 +
 +void IsoSubspace::destroyLowerTierFreeList()
 +{
 +    m_lowerTierFreeList.forEach([&](LargeAllocation* allocation) {
 +        allocation->destroy();
 +    });
 +}
 +
  } // namespace JSC
trunk/Source/JavaScriptCore/heap/IsoSubspace.h
r247844 → r252298

      void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode);

 +    void sweepLowerTierCell(LargeAllocation*);
 +
 +    void* tryAllocateFromLowerTier();
 +    void destroyLowerTierFreeList();
 +
  private:
      friend class IsoCellSet;
…
      LocalAllocator m_localAllocator;
      std::unique_ptr<IsoAlignedMemoryAllocator> m_isoAlignedMemoryAllocator;
 -    SentinelLinkedList<IsoCellSet, BasicRawSentinelNode<IsoCellSet>> m_cellSets;
 +    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_lowerTierFreeList;
 +    SentinelLinkedList<IsoCellSet, PackedRawSentinelNode<IsoCellSet>> m_cellSets;
 +    uint8_t m_lowerTierCellCount { 0 };
  };
trunk/Source/JavaScriptCore/heap/LargeAllocation.cpp
r249175 → r252298

  LargeAllocation* LargeAllocation::tryReallocate(size_t size, Subspace* subspace)
  {
 +    ASSERT(!isLowerTier());
      size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
      static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
…
  }

 +LargeAllocation* LargeAllocation::createForLowerTier(Heap& heap, size_t size, Subspace* subspace, uint8_t lowerTierIndex)
 +{
 +    if (validateDFGDoesGC)
 +        RELEASE_ASSERT(heap.expectDoesGC());
 +
 +    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
 +    static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
 +
 +    void* space = subspace->alignedMemoryAllocator()->tryAllocateMemory(adjustedAlignmentAllocationSize);
 +    RELEASE_ASSERT(space);
 +
 +    bool adjustedAlignment = false;
 +    if (!isAlignedForLargeAllocation(space)) {
 +        space = bitwise_cast<void*>(bitwise_cast<uintptr_t>(space) + halfAlignment);
 +        adjustedAlignment = true;
 +        ASSERT(isAlignedForLargeAllocation(space));
 +    }
 +
 +    if (scribbleFreeCells())
 +        scribble(space, size);
 +    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
 +    largeAllocation->m_lowerTierIndex = lowerTierIndex;
 +    return largeAllocation;
 +}
 +
 +LargeAllocation* LargeAllocation::reuseForLowerTier()
 +{
 +    Heap& heap = *this->heap();
 +    size_t size = m_cellSize;
 +    Subspace* subspace = m_subspace;
 +    bool adjustedAlignment = m_adjustedAlignment;
 +    uint8_t lowerTierIndex = m_lowerTierIndex;
 +
 +    void* space = this->basePointer();
 +    this->~LargeAllocation();
 +
 +    LargeAllocation* largeAllocation = new (NotNull, space) LargeAllocation(heap, size, subspace, 0, adjustedAlignment);
 +    largeAllocation->m_lowerTierIndex = lowerTierIndex;
 +    largeAllocation->m_hasValidCell = false;
 +    return largeAllocation;
 +}
 +
  LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace, unsigned indexInSpace, bool adjustedAlignment)
 -    : m_cellSize(size)
 -    , m_indexInSpace(indexInSpace)
 +    : m_indexInSpace(indexInSpace)
 +    , m_cellSize(size)
      , m_isNewlyAllocated(true)
      , m_hasValidCell(true)
…
      , m_attributes(subspace->attributes())
      , m_subspace(subspace)
 -    , m_weakSet(heap.vm(), *this)
 +    , m_weakSet(heap.vm())
  {
      m_isMarked.store(0);
trunk/Source/JavaScriptCore/heap/LargeAllocation.h
r251556 → r252298

  namespace JSC {

 +class IsoSubspace;
  class SlotVisitor;
…
  // when a HeapCell* is a LargeAllocation because it will have the MarkedBlock::atomSize / 2 bit set.

 -class LargeAllocation : public BasicRawSentinelNode<LargeAllocation> {
 +class LargeAllocation : public PackedRawSentinelNode<LargeAllocation> {
  public:
      friend class LLIntOffsetsExtractor;
 +    friend class IsoSubspace;

      static LargeAllocation* tryCreate(Heap&, size_t, Subspace*, unsigned indexInSpace);
 +
 +    static LargeAllocation* createForLowerTier(Heap&, size_t, Subspace*, uint8_t lowerTierIndex);
 +    LargeAllocation* reuseForLowerTier();

      LargeAllocation* tryReallocate(size_t, Subspace*);
…
      size_t cellSize() const { return m_cellSize; }
 +
 +    uint8_t lowerTierIndex() const { return m_lowerTierIndex; }

      bool aboveLowerBound(const void* rawPtr)
…
      void dump(PrintStream&) const;
 +
 +    bool isLowerTier() const { return m_lowerTierIndex != UINT8_MAX; }

      static constexpr unsigned alignment = MarkedBlock::atomSize;
…
      void* basePointer() const;

 +    unsigned m_indexInSpace { 0 };
      size_t m_cellSize;
 -    unsigned m_indexInSpace { 0 };
      bool m_isNewlyAllocated : 1;
      bool m_hasValidCell : 1;
…
      Atomic<bool> m_isMarked;
      CellAttributes m_attributes;
 +    uint8_t m_lowerTierIndex { UINT8_MAX };
      Subspace* m_subspace;
      WeakSet m_weakSet;
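One goal noted in the ChangeLog is keeping sizeof(LargeAllocation) small now that the header is paid per lower-tier object; hence the reordering above (the 32-bit m_indexInSpace moved ahead of the 64-bit m_cellSize, and the one-byte m_lowerTierIndex slotted beside other small fields). A quick standalone demonstration of why member order affects padding; the fields are invented, not LargeAllocation's actual layout:

    #include <cstdint>
    #include <cstdio>

    // Grouping small fields avoids padding holes between large ones.
    struct Scattered {
        std::uint64_t cellSize;     // 8 bytes
        std::uint8_t flags;         // 1 byte + 7 bytes padding
        std::uint64_t subspace;     // 8 bytes
        std::uint32_t indexInSpace; // 4 bytes + 4 bytes tail padding
    };                              // typically 32 bytes

    struct Grouped {
        std::uint32_t indexInSpace; // 4 bytes
        std::uint8_t flags;         // 1 byte + 3 bytes padding
        std::uint64_t cellSize;     // 8 bytes
        std::uint64_t subspace;     // 8 bytes
    };                              // typically 24 bytes

    int main()
    {
        std::printf("scattered=%zu grouped=%zu\n", sizeof(Scattered), sizeof(Grouped));
        return 0;
    }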
trunk/Source/JavaScriptCore/heap/LocalAllocator.cpp
r249175 → r252298

      void* result = tryAllocateWithoutCollecting();

 -    if (LIKELY(result != 0))
 +    if (LIKELY(result != nullptr))
          return result;
 +
 +    Subspace* subspace = m_directory->m_subspace;
 +    if (subspace->isIsoSubspace()) {
 +        if (void* result = static_cast<IsoSubspace*>(subspace)->tryAllocateFromLowerTier())
 +            return result;
 +    }

      MarkedBlock::Handle* block = m_directory->tryAllocateBlock();
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
r251263 → r252298

  MarkedBlock::Handle::Handle(Heap& heap, AlignedMemoryAllocator* alignedMemoryAllocator, void* blockSpace)
      : m_alignedMemoryAllocator(alignedMemoryAllocator)
 -    , m_weakSet(heap.vm(), CellContainer())
 +    , m_weakSet(heap.vm())
  {
      m_block = new (NotNull, blockSpace) MarkedBlock(heap.vm(), *this);
 -
 -    m_weakSet.setContainer(*m_block);

      heap.didAllocateBlock(blockSize);
…
      forEachCell(
 -        [&] (HeapCell* cell, HeapCell::Kind) -> IterationStatus {
 +        [&] (size_t, HeapCell* cell, HeapCell::Kind) -> IterationStatus {
              block().setNewlyAllocated(cell);
              return IterationStatus::Continue;
trunk/Source/JavaScriptCore/heap/MarkedBlock.h
r249175 → r252298

      static constexpr size_t atomsPerBlock = blockSize / atomSize;
 +
 +    static constexpr size_t numberOfLowerTierCells = 8;
 +    static_assert(numberOfLowerTierCells <= 256);

      static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
…
      static_assert(payloadSize == ((blockSize - sizeof(MarkedBlock::Footer)) & ~(atomSize - 1)), "Payload size computed the alternate way should give the same result");
 -    // Some of JSCell types assume that the last JSCell in a MarkedBlock has a subsequent memory region (Footer) that can still safely accessed.
 -    // For example, JSRopeString assumes that it can safely access up to 2 bytes beyond the JSRopeString cell.
 -    static_assert(sizeof(Footer) >= sizeof(uint16_t));

      static MarkedBlock::Handle* tryCreate(Heap&, AlignedMemoryAllocator*);
…
      for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
          HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
 -        if (functor(cell, kind) == IterationStatus::Done)
 +        if (functor(i, cell, kind) == IterationStatus::Done)
              return IterationStatus::Done;
      }
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
r252032 → r252298

      for (LargeAllocation* allocation : m_largeAllocations)
          allocation->destroy();
 +    forEachSubspace([&](Subspace& subspace) {
 +        if (subspace.isIsoSubspace())
 +            static_cast<IsoSubspace&>(subspace).destroyLowerTierFreeList();
 +        return IterationStatus::Continue;
 +    });
  }
…
      for (LargeAllocation* allocation : m_largeAllocations)
          allocation->lastChanceToFinalize();
 +    // We do not call lastChanceToFinalize for lower-tier swept cells since we need nothing to do.
  }
…
          allocation->sweep();
          if (allocation->isEmpty()) {
 -            m_capacity -= allocation->cellSize();
 -            allocation->destroy();
 +            if (auto* set = largeAllocationSet())
 +                set->remove(allocation->cell());
 +            if (allocation->isLowerTier())
 +                static_cast<IsoSubspace*>(allocation->subspace())->sweepLowerTierCell(allocation);
 +            else {
 +                m_capacity -= allocation->cellSize();
 +                allocation->destroy();
 +            }
              continue;
          }
…
      m_largeAllocationsNurseryOffsetForSweep = 0;
      m_largeAllocationsNurseryOffset = m_largeAllocations.size();
 +}
 +
 +void MarkedSpace::enableLargeAllocationTracking()
 +{
 +    m_largeAllocationSet = makeUnique<HashSet<HeapCell*>>();
 +    for (auto* allocation : m_largeAllocations)
 +        m_largeAllocationSet->add(allocation->cell());
  }
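The sweep branch above is what pins a lower-tier address to its type: an empty lower-tier allocation goes back to its IsoSubspace's free list instead of being destroyed. A standalone model of that decision (illustrative types; delete stands in for LargeAllocation::destroy()):

    #include <cstddef>
    #include <vector>

    struct ModelAllocation {
        bool lowerTier;
        std::size_t cellSize;
    };

    struct ModelSpace {
        std::size_t capacity = 0;
        std::vector<ModelAllocation*> lowerTierFreeList;

        void sweepEmpty(ModelAllocation* allocation)
        {
            if (allocation->lowerTier) {
                // Keep the memory; it can be revived later for the same type.
                lowerTierFreeList.push_back(allocation);
                return;
            }
            capacity -= allocation->cellSize;
            delete allocation; // models LargeAllocation::destroy()
        }
    };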
trunk/Source/JavaScriptCore/heap/MarkedSpace.h
r248143 → r252298

  class CompleteSubspace;
  class Heap;
 +class HeapCell;
  class HeapIterationScope;
 +class IsoSubspace;
  class LLIntOffsetsExtractor;
  class Subspace;
…
      unsigned largeAllocationsNurseryOffset() const { return m_largeAllocationsNurseryOffset; }
      unsigned largeAllocationsOffsetForThisCollection() const { return m_largeAllocationsOffsetForThisCollection; }
 +    HashSet<HeapCell*>* largeAllocationSet() const { return m_largeAllocationSet.get(); }
 +
 +    void enableLargeAllocationTracking();

      // These are cached pointers and offsets for quickly searching the large allocations that are
…
      friend class WeakSet;
      friend class Subspace;
 +    friend class IsoSubspace;

      // Use this version when calling from within the GC where we know that the directories
…
      Vector<Subspace*> m_subspaces;

 +    std::unique_ptr<HashSet<HeapCell*>> m_largeAllocationSet;
      Vector<LargeAllocation*> m_largeAllocations;
      unsigned m_largeAllocationsNurseryOffset { 0 };
trunk/Source/JavaScriptCore/heap/PackedCellPtr.h
r247843 → r252298

  template<typename T>
 -class PackedCellPtr : public PackedAlignedPtr<T, MarkedBlock::atomSize> {
 +class PackedCellPtr : public PackedAlignedPtr<T, 8> {
  public:
 -    using Base = PackedAlignedPtr<T, MarkedBlock::atomSize>;
 +    using Base = PackedAlignedPtr<T, 8>;
      PackedCellPtr(T* pointer)
          : Base(pointer)
      {
 -        static_assert((sizeof(T) <= MarkedSpace::largeCutoff && std::is_final<T>::value) || isAllocatedFromIsoSubspace<T>::value, "LargeAllocation does not have 16byte alignment");
 -        ASSERT(!(bitwise_cast<uintptr_t>(pointer) & (16 - 1)));
      }
  };
trunk/Source/JavaScriptCore/heap/Subspace.h
r240216 → r252298

      virtual void didBeginSweepingToFreeList(MarkedBlock::Handle*);

 +    bool isIsoSubspace() const { return m_isIsoSubspace; }
 +
  protected:
      void initialize(HeapCellType*, AlignedMemoryAllocator*);
…
      BlockDirectory* m_firstDirectory { nullptr };
      BlockDirectory* m_directoryForEmptyAllocation { nullptr }; // Uses the MarkedSpace linked list of blocks.
 -    SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
 +    SentinelLinkedList<LargeAllocation, PackedRawSentinelNode<LargeAllocation>> m_largeAllocations;
      Subspace* m_nextSubspaceInAlignedMemoryAllocator { nullptr };

      CString m_name;
 +
 +    bool m_isIsoSubspace { false };
  };
trunk/Source/JavaScriptCore/heap/WeakSet.cpp
r209570 → r252298

      Heap& heap = *this->heap();
 -    WeakBlock* next = 0;
 +    WeakBlock* next = nullptr;
      for (WeakBlock* block = m_blocks.head(); block; block = next) {
          next = block->next();
…
  }

 -WeakBlock::FreeCell* WeakSet::findAllocator()
 +WeakBlock::FreeCell* WeakSet::findAllocator(CellContainer container)
  {
      if (WeakBlock::FreeCell* allocator = tryFindAllocator())
          return allocator;

 -    return addAllocator();
 +    return addAllocator(container);
  }
…
  }

 -WeakBlock::FreeCell* WeakSet::addAllocator()
 +WeakBlock::FreeCell* WeakSet::addAllocator(CellContainer container)
  {
      if (!isOnList())
          heap()->objectSpace().addActiveWeakSet(this);

 -    WeakBlock* block = WeakBlock::create(*heap(), m_container);
 +    WeakBlock* block = WeakBlock::create(*heap(), container);
      heap()->didAllocate(WeakBlock::blockSize);
      m_blocks.append(block);
trunk/Source/JavaScriptCore/heap/WeakSet.h
r251556 → r252298

      static void deallocate(WeakImpl*);

 -    WeakSet(VM&, CellContainer);
 +    WeakSet(VM&);
      ~WeakSet();
      void lastChanceToFinalize();

 -    CellContainer container() const { return m_container; }
 -    void setContainer(CellContainer container) { m_container = container; }
 -
      Heap* heap() const;
      VM& vm() const;
…
  private:
 -    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator();
 +    JS_EXPORT_PRIVATE WeakBlock::FreeCell* findAllocator(CellContainer);
      WeakBlock::FreeCell* tryFindAllocator();
 -    WeakBlock::FreeCell* addAllocator();
 +    WeakBlock::FreeCell* addAllocator(CellContainer);
      void removeAllocator(WeakBlock*);

 -    WeakBlock::FreeCell* m_allocator;
 -    WeakBlock* m_nextAllocator;
 +    WeakBlock::FreeCell* m_allocator { nullptr };
 +    WeakBlock* m_nextAllocator { nullptr };
      DoublyLinkedList<WeakBlock> m_blocks;
      // m_vm must be a pointer (instead of a reference) because the JSCLLIntOffsetsExtractor
      // cannot handle it being a reference.
      VM* m_vm;
 -    CellContainer m_container;
  };

 -inline WeakSet::WeakSet(VM& vm, CellContainer container)
 -    : m_allocator(0)
 -    , m_nextAllocator(0)
 -    , m_vm(&vm)
 -    , m_container(container)
 +inline WeakSet::WeakSet(VM& vm)
 +    : m_vm(&vm)
  {
  }
…
  inline void WeakSet::resetAllocator()
  {
 -    m_allocator = 0;
 +    m_allocator = nullptr;
      m_nextAllocator = m_blocks.head();
  }
trunk/Source/JavaScriptCore/heap/WeakSetInlines.h
r206525 → r252298

  inline WeakImpl* WeakSet::allocate(JSValue jsValue, WeakHandleOwner* weakHandleOwner, void* context)
  {
 -    WeakSet& weakSet = jsValue.asCell()->cellContainer().weakSet();
 +    CellContainer container = jsValue.asCell()->cellContainer();
 +    WeakSet& weakSet = container.weakSet();
      WeakBlock::FreeCell* allocator = weakSet.m_allocator;
      if (UNLIKELY(!allocator))
 -        allocator = weakSet.findAllocator();
 +        allocator = weakSet.findAllocator(container);
      weakSet.m_allocator = allocator->next;
trunk/Source/JavaScriptCore/runtime/InternalFunction.cpp
r251425 → r252298

      , m_globalObject(vm, this, structure->globalObject())
  {
 -    // globalObject->vm() wants callees to not be large allocations.
 -    RELEASE_ASSERT(!isLargeAllocation());
      ASSERT_WITH_MESSAGE(m_functionForCall, "[[Call]] must be implemented");
      ASSERT(m_functionForConstruct);
trunk/Source/JavaScriptCore/runtime/JSCallee.cpp
r217108 → r252298

      , m_scope(vm, this, globalObject)
  {
 -    RELEASE_ASSERT(!isLargeAllocation());
  }
trunk/Source/JavaScriptCore/runtime/JSString.h
r251584 → r252298

  {
  #if CPU(LITTLE_ENDIAN)
 -    // This access exceeds the sizeof(JSRopeString). But this is OK because JSRopeString is always allocated in MarkedBlock,
 -    // and the last JSRopeString cell in the block has some subsequent bytes which are used for MarkedBlock::Footer.
 -    // So the following access does not step over the page boundary in which the latter page does not have read permission.
 -    return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber2Lower) & addressMask);
 +    return bitwise_cast<JSString*>(WTF::unalignedLoad<uintptr_t>(&m_fiber1Upper) >> 16);
  #else
      return bitwise_cast<JSString*>(static_cast<uintptr_t>(m_fiber2Lower) | (static_cast<uintptr_t>(m_fiber2Upper) << 16));
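With JSString now allocatable as a LargeAllocation, the rope fiber accessor can no longer rely on readable bytes past the end of the cell, so the little-endian path loads a word that ends inside the cell and shifts. A standalone illustration of that kind of shifted unaligned load, assuming little-endian byte order and 48 significant pointer bits as the surrounding code does:

    #include <cstdint>
    #include <cstring>

    // Extract a 48-bit pointer whose bytes occupy positions 2..7 of the
    // 8 bytes starting at 'base'. std::memcpy models WTF::unalignedLoad;
    // the load stays entirely inside the cell.
    inline void* loadPointerAbove16BitField(const unsigned char* base)
    {
        std::uint64_t word;
        std::memcpy(&word, base, sizeof(word)); // unaligned 8-byte load
        return reinterpret_cast<void*>(static_cast<std::uintptr_t>(word >> 16));
    }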
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
r252006 → r252298

      m_currentFrames.grow(256);
 +    vm.heap.objectSpace().enableLargeAllocationTracking();
  }
…
  }

 -void SamplingProfiler::processUnverifiedStackTraces()
 +void SamplingProfiler::processUnverifiedStackTraces(const AbstractLocker&)
  {
      // This function needs to be called from the JSC execution thread.
…
      {
          HeapIterationScope heapIterationScope(m_vm.heap);
 -        processUnverifiedStackTraces();
 +        processUnverifiedStackTraces(locker);
      }
…
  {
      DeferGC deferGC(m_vm.heap);
 -    LockHolder locker(m_lock);
 +    auto locker = holdLock(m_lock);

      {
          HeapIterationScope heapIterationScope(m_vm.heap);
 -        processUnverifiedStackTraces();
 +        processUnverifiedStackTraces(locker);
      }
…
  void SamplingProfiler::reportTopFunctions(PrintStream& out)
  {
 -    LockHolder locker(m_lock);
 +    auto locker = holdLock(m_lock);
      DeferGCForAWhile deferGC(m_vm.heap);

      {
          HeapIterationScope heapIterationScope(m_vm.heap);
 -        processUnverifiedStackTraces();
 +        processUnverifiedStackTraces(locker);
      }
…
  void SamplingProfiler::reportTopBytecodes(PrintStream& out)
  {
 -    LockHolder locker(m_lock);
 +    auto locker = holdLock(m_lock);
      DeferGCForAWhile deferGC(m_vm.heap);

      {
          HeapIterationScope heapIterationScope(m_vm.heap);
 -        processUnverifiedStackTraces();
 +        processUnverifiedStackTraces(locker);
      }
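The locker parameter threaded through these call sites is the standard lock-witness idiom: processUnverifiedStackTraces(const AbstractLocker&) can now only be reached by a caller that actually holds the lock. A minimal standalone sketch of the idiom, with std::mutex and std::lock_guard standing in for WTF::Lock and holdLock:

    #include <mutex>

    class ModelProfiler {
    public:
        void reportTopFunctions()
        {
            std::lock_guard<std::mutex> locker(m_lock);
            processUnverifiedStackTraces(locker); // witness passed down
        }

    private:
        // The parameter is unused at runtime; it exists so the signature
        // proves the caller holds m_lock, mirroring the new
        // processUnverifiedStackTraces(const AbstractLocker&) signature.
        void processUnverifiedStackTraces(const std::lock_guard<std::mutex>&)
        {
            // ... walk and record the collected stack traces ...
        }

        std::mutex m_lock;
    };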
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.h
r251468 → r252298

      JS_EXPORT_PRIVATE void noticeCurrentThreadAsJSCExecutionThread();
      void noticeCurrentThreadAsJSCExecutionThread(const AbstractLocker&);
 -    void processUnverifiedStackTraces(); // You should call this only after acquiring the lock.
 +    void processUnverifiedStackTraces(const AbstractLocker&);
      void setStopWatch(const AbstractLocker&, Ref<Stopwatch>&& stopwatch) { m_stopwatch = WTFMove(stopwatch); }
      void pause(const AbstractLocker&);
trunk/Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp
r251886 → r252298

      WebAssemblyFunction* function = new (NotNull, allocateCell<WebAssemblyFunction>(vm.heap)) WebAssemblyFunction(vm, globalObject, structure, jsEntrypoint, wasmToWasmEntrypointLoadLocation, signatureIndex);
      function->finishCreation(vm, executable, length, name, instance);
 -    ASSERT_WITH_MESSAGE(!function->isLargeAllocation(), "WebAssemblyFunction should be allocated not in large allocation since it is JSCallee.");
      return function;
  }