Changeset 227617 in webkit
- Timestamp: Jan 25, 2018 11:32:00 AM
- Location: trunk/Source
- Files: 12 added, 58 edited
trunk/Source/JavaScriptCore/ChangeLog
2018-01-25  Filip Pizlo  <fpizlo@apple.com>

JSC GC should support TLCs (thread local caches)
https://bugs.webkit.org/show_bug.cgi?id=181559

Reviewed by Mark Lam and Saam Barati.

This is a big step towards object distancing by site origin. This patch implements TLCs, or thread-local caches, which allow each thread to allocate from its own free lists. It also means that any given thread can context-switch TLCs. This will allow us to do separate allocation for separate site origins. Eventually, once we reshape how MarkedBlock looks, this will allow us to have a hard distancing constraint between objects from different origins.

In this new design, every "size class" is represented as a BlockDirectory (formerly known as MarkedAllocator, prior to r226822). This contains a bag of blocks allocated using some aligned memory allocator (which roughly represents which cage you came out of), and anyone using the same allocator can share those blocks; while a block is in that BlockDirectory, it has the size and type of that directory. Previously, each BlockDirectory had exactly one FreeList. Now, each BlockDirectory has a doubly-linked list of LocalAllocators, each of which has a FreeList.

To decide which LocalAllocator to allocate out of, we need a ThreadLocalCache and a BlockDirectory. The directory gives us an offset within the ThreadLocalCache, which we simply call the Allocator (a POD type that contains a 32-bit offset). Each allocation starts by figuring out which Allocator it wants (often we have this information at JIT time). Then the allocation loads its ThreadLocalCache::Data from a fast TLS slot, and adds the Allocator offset to the ThreadLocalCache::Data to get the LocalAllocator. Note that we use offsets rather than indices to keep the math on each allocation cheap (if LocalAllocator had a weird size, every allocation would have to do an imul). A minimal standalone sketch of this lookup appears after the file list below.

This is a definite slow-down on GC-heavy benchmarks, but by a small margin, and only on unusually heavy tests. For example, boyer and splay are both 3% regressed, but the Octane geomean is just fine. The JetStream score regressed by 0.5% with p = 0.08 (so maybe there is something there, but it is not significant according to our threshold).

Relanding after fixing an ARM64 bug in AssemblyHelpers::emitAllocateWithNonNullAllocator(). That function needs to be careful to avoid using the scratch register, because the FTL calls it in disallow-scratch-register mode.
39 40 * JavaScriptCore.xcodeproj/project.pbxproj: 41 * Sources.txt: 42 * b3/B3LowerToAir.cpp: 43 * b3/B3PatchpointSpecial.cpp: 44 (JSC::B3::PatchpointSpecial::admitsStack): 45 * b3/B3StackmapSpecial.cpp: 46 (JSC::B3::StackmapSpecial::forEachArgImpl): 47 (JSC::B3::StackmapSpecial::isArgValidForRep): 48 * b3/B3StackmapValue.cpp: 49 (JSC::B3::StackmapValue::appendSomeRegisterWithClobber): 50 * b3/B3StackmapValue.h: 51 * b3/B3Validate.cpp: 52 * b3/B3ValueRep.cpp: 53 (JSC::B3::ValueRep::addUsedRegistersTo const): 54 (JSC::B3::ValueRep::dump const): 55 (WTF::printInternal): 56 * b3/B3ValueRep.h: 57 (JSC::B3::ValueRep::ValueRep): 58 * bytecode/AccessCase.cpp: 59 (JSC::AccessCase::generateImpl): 60 * bytecode/ObjectAllocationProfile.h: 61 (JSC::ObjectAllocationProfile::ObjectAllocationProfile): 62 (JSC::ObjectAllocationProfile::clear): 63 * bytecode/ObjectAllocationProfileInlines.h: 64 (JSC::ObjectAllocationProfile::initializeProfile): 65 * dfg/DFGSpeculativeJIT.cpp: 66 (JSC::DFG::SpeculativeJIT::emitAllocateRawObject): 67 (JSC::DFG::SpeculativeJIT::compileMakeRope): 68 (JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage): 69 (JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage): 70 (JSC::DFG::SpeculativeJIT::compileCreateThis): 71 (JSC::DFG::SpeculativeJIT::compileNewObject): 72 * dfg/DFGSpeculativeJIT.h: 73 (JSC::DFG::SpeculativeJIT::emitAllocateJSCell): 74 (JSC::DFG::SpeculativeJIT::emitAllocateJSObject): 75 * ftl/FTLAbstractHeapRepository.h: 76 * ftl/FTLLowerDFGToB3.cpp: 77 (JSC::FTL::DFG::LowerDFGToB3::compileMakeRope): 78 (JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject): 79 (JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl): 80 (JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell): 81 (JSC::FTL::DFG::LowerDFGToB3::allocateObject): 82 (JSC::FTL::DFG::LowerDFGToB3::allocatorForSize): 83 (JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject): 84 (JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedCell): 85 * heap/Allocator.cpp: Added. 86 (JSC::Allocator::cellSize const): 87 * heap/Allocator.h: Added. 88 (JSC::Allocator::Allocator): 89 (JSC::Allocator::offset const): 90 (JSC::Allocator::operator== const): 91 (JSC::Allocator::operator!= const): 92 (JSC::Allocator::operator bool const): 93 * heap/AllocatorInlines.h: Added. 94 (JSC::Allocator::allocate const): 95 (JSC::Allocator::tryAllocate const): 96 * heap/BlockDirectory.cpp: 97 (JSC::BlockDirectory::BlockDirectory): 98 (JSC::BlockDirectory::findBlockForAllocation): 99 (JSC::BlockDirectory::stopAllocating): 100 (JSC::BlockDirectory::prepareForAllocation): 101 (JSC::BlockDirectory::stopAllocatingForGood): 102 (JSC::BlockDirectory::resumeAllocating): 103 (JSC::BlockDirectory::endMarking): 104 (JSC::BlockDirectory::isFreeListedCell): 105 (JSC::BlockDirectory::didConsumeFreeList): Deleted. 106 (JSC::BlockDirectory::tryAllocateWithoutCollecting): Deleted. 107 (JSC::BlockDirectory::allocateIn): Deleted. 108 (JSC::BlockDirectory::tryAllocateIn): Deleted. 109 (JSC::BlockDirectory::doTestCollectionsIfNeeded): Deleted. 110 (JSC::BlockDirectory::allocateSlowCase): Deleted. 111 * heap/BlockDirectory.h: 112 (JSC::BlockDirectory::cellKind const): 113 (JSC::BlockDirectory::allocator const): 114 (JSC::BlockDirectory::freeList const): Deleted. 115 (JSC::BlockDirectory::offsetOfFreeList): Deleted. 116 (JSC::BlockDirectory::offsetOfCellSize): Deleted. 117 * heap/BlockDirectoryInlines.h: 118 (JSC::BlockDirectory::isFreeListedCell const): Deleted. 119 (JSC::BlockDirectory::allocate): Deleted. 
120 * heap/CompleteSubspace.cpp: 121 (JSC::CompleteSubspace::CompleteSubspace): 122 (JSC::CompleteSubspace::allocatorFor): 123 (JSC::CompleteSubspace::allocate): 124 (JSC::CompleteSubspace::allocateNonVirtual): 125 (JSC::CompleteSubspace::allocatorForSlow): 126 (JSC::CompleteSubspace::allocateSlow): 127 (JSC::CompleteSubspace::tryAllocateSlow): 128 * heap/CompleteSubspace.h: 129 (JSC::CompleteSubspace::allocatorForSizeStep): 130 (JSC::CompleteSubspace::allocatorForNonVirtual): 131 * heap/FreeList.h: 132 * heap/GCDeferralContext.h: 133 * heap/Heap.cpp: 134 (JSC::Heap::Heap): 135 (JSC::Heap::lastChanceToFinalize): 136 * heap/Heap.h: 137 (JSC::Heap::threadLocalCacheLayout): 138 * heap/IsoCellSet.h: 139 * heap/IsoSubspace.cpp: 140 (JSC::IsoSubspace::IsoSubspace): 141 (JSC::IsoSubspace::allocatorFor): 142 (JSC::IsoSubspace::allocate): 143 (JSC::IsoSubspace::allocateNonVirtual): 144 * heap/IsoSubspace.h: 145 (JSC::IsoSubspace::allocatorForNonVirtual): 146 * heap/LocalAllocator.cpp: Added. 147 (JSC::LocalAllocator::LocalAllocator): 148 (JSC::LocalAllocator::reset): 149 (JSC::LocalAllocator::~LocalAllocator): 150 (JSC::LocalAllocator::stopAllocating): 151 (JSC::LocalAllocator::resumeAllocating): 152 (JSC::LocalAllocator::prepareForAllocation): 153 (JSC::LocalAllocator::stopAllocatingForGood): 154 (JSC::LocalAllocator::allocateSlowCase): 155 (JSC::LocalAllocator::didConsumeFreeList): 156 (JSC::LocalAllocator::tryAllocateWithoutCollecting): 157 (JSC::LocalAllocator::allocateIn): 158 (JSC::LocalAllocator::tryAllocateIn): 159 (JSC::LocalAllocator::doTestCollectionsIfNeeded): 160 (JSC::LocalAllocator::isFreeListedCell const): 161 * heap/LocalAllocator.h: Added. 162 (JSC::LocalAllocator::offsetOfFreeList): 163 (JSC::LocalAllocator::offsetOfCellSize): 164 * heap/LocalAllocatorInlines.h: Added. 165 (JSC::LocalAllocator::allocate): 166 * heap/MarkedSpace.cpp: 167 (JSC::MarkedSpace::stopAllocatingForGood): 168 * heap/MarkedSpace.h: 169 * heap/SlotVisitor.cpp: 170 * heap/SlotVisitor.h: 171 * heap/Subspace.h: 172 * heap/ThreadLocalCache.cpp: Added. 173 (JSC::ThreadLocalCache::create): 174 (JSC::ThreadLocalCache::ThreadLocalCache): 175 (JSC::ThreadLocalCache::~ThreadLocalCache): 176 (JSC::ThreadLocalCache::allocateData): 177 (JSC::ThreadLocalCache::destroyData): 178 (JSC::ThreadLocalCache::installSlow): 179 (JSC::ThreadLocalCache::installData): 180 (JSC::ThreadLocalCache::allocatorSlow): 181 (JSC::ThreadLocalCache::destructor): 182 * heap/ThreadLocalCache.h: Added. 183 (JSC::ThreadLocalCache::offsetOfSize): 184 (JSC::ThreadLocalCache::offsetOfFirstAllocator): 185 * heap/ThreadLocalCacheInlines.h: Added. 186 (JSC::ThreadLocalCache::getImpl): 187 (JSC::ThreadLocalCache::get): 188 (JSC::ThreadLocalCache::install): 189 (JSC::ThreadLocalCache::allocator): 190 (JSC::ThreadLocalCache::tryGetAllocator): 191 * heap/ThreadLocalCacheLayout.cpp: Added. 192 (JSC::ThreadLocalCacheLayout::ThreadLocalCacheLayout): 193 (JSC::ThreadLocalCacheLayout::~ThreadLocalCacheLayout): 194 (JSC::ThreadLocalCacheLayout::allocateOffset): 195 (JSC::ThreadLocalCacheLayout::snapshot): 196 (JSC::ThreadLocalCacheLayout::directory): 197 * heap/ThreadLocalCacheLayout.h: Added. 
198 * jit/AssemblyHelpers.cpp: 199 (JSC::AssemblyHelpers::emitAllocateWithNonNullAllocator): 200 (JSC::AssemblyHelpers::emitAllocate): 201 (JSC::AssemblyHelpers::emitAllocateVariableSized): 202 * jit/AssemblyHelpers.h: 203 (JSC::AssemblyHelpers::vm): 204 (JSC::AssemblyHelpers::emitAllocateJSCell): 205 (JSC::AssemblyHelpers::emitAllocateJSObject): 206 (JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize): 207 (JSC::AssemblyHelpers::emitAllocateWithNonNullAllocator): Deleted. 208 (JSC::AssemblyHelpers::emitAllocate): Deleted. 209 (JSC::AssemblyHelpers::emitAllocateVariableSized): Deleted. 210 * jit/JITOpcodes.cpp: 211 (JSC::JIT::emit_op_new_object): 212 (JSC::JIT::emit_op_create_this): 213 * jit/JITOpcodes32_64.cpp: 214 (JSC::JIT::emit_op_new_object): 215 (JSC::JIT::emit_op_create_this): 216 * runtime/ButterflyInlines.h: 217 (JSC::Butterfly::createUninitialized): 218 (JSC::Butterfly::tryCreate): 219 (JSC::Butterfly::growArrayRight): 220 * runtime/DirectArguments.cpp: 221 (JSC::DirectArguments::overrideThings): 222 * runtime/GenericArgumentsInlines.h: 223 (JSC::GenericArguments<Type>::initModifiedArgumentsDescriptor): 224 * runtime/HashMapImpl.h: 225 (JSC::HashMapBuffer::create): 226 * runtime/JSArray.cpp: 227 (JSC::JSArray::tryCreateUninitializedRestricted): 228 (JSC::JSArray::unshiftCountSlowCase): 229 * runtime/JSArray.h: 230 (JSC::JSArray::tryCreate): 231 * runtime/JSArrayBufferView.cpp: 232 (JSC::JSArrayBufferView::ConstructionContext::ConstructionContext): 233 * runtime/JSCellInlines.h: 234 (JSC::tryAllocateCellHelper): 235 * runtime/JSGlobalObject.cpp: 236 (JSC::JSGlobalObject::JSGlobalObject): 237 * runtime/JSGlobalObject.h: 238 (JSC::JSGlobalObject::threadLocalCache const): 239 * runtime/JSLock.cpp: 240 (JSC::JSLock::didAcquireLock): 241 * runtime/Options.h: 242 * runtime/RegExpMatchesArray.h: 243 (JSC::tryCreateUninitializedRegExpMatchesArray): 244 * runtime/VM.cpp: 245 (JSC::VM::VM): 246 * runtime/VM.h: 247 * runtime/VMEntryScope.cpp: 248 (JSC::VMEntryScope::VMEntryScope): 249 1 250 2018-01-25 Commit Queue <commit-queue@webkit.org> 2 251 -
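The offset arithmetic described in the ChangeLog can be modeled in a few lines. This is a hedged, standalone sketch, not JSC's actual code: TLCData, tlsData, and the fixed-size buffer are illustrative stand-ins for ThreadLocalCache::Data and the fast TLS slot, and the real LocalAllocator holds a FreeList rather than a bare pointer.

    // Minimal model of the TLC fast path: Allocator is a 32-bit byte offset,
    // and the lookup is "load thread-local Data, add offset" (no imul).
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <new>

    struct LocalAllocator {
        void* freeListHead { nullptr }; // each LocalAllocator owns its own free list
        unsigned cellSize { 0 };
    };

    // Stand-in for ThreadLocalCache::Data: LocalAllocators live at byte offsets
    // inside one contiguous allocation.
    struct TLCData {
        size_t size { 0 };
        alignas(LocalAllocator) char allocators[4096];
    };

    static thread_local TLCData* tlsData = nullptr; // stand-in for the fast TLS slot

    // POD wrapper around the 32-bit offset handed out per BlockDirectory.
    struct Allocator {
        uint32_t offset { UINT32_MAX };
        explicit operator bool() const { return offset != UINT32_MAX; }
    };

    inline LocalAllocator* localAllocator(Allocator allocator)
    {
        // The whole per-allocation lookup: one TLS load plus one add.
        return reinterpret_cast<LocalAllocator*>(tlsData->allocators + allocator.offset);
    }

    int main()
    {
        TLCData data;
        tlsData = &data;

        // Pretend the BlockDirectory for 32-byte cells was assigned offset 0.
        Allocator cellAllocator { 0 };
        auto* local = new (data.allocators + cellAllocator.offset) LocalAllocator;
        local->cellSize = 32;
        data.size = sizeof(LocalAllocator);

        std::printf("cell size via TLC lookup: %u\n", localAllocator(cellAllocator)->cellSize);
        return 0;
    }

The real code also has slow paths (ThreadLocalCache::installSlow, ThreadLocalCache::allocatorSlow in the file list above) that presumably cover a cache that is not yet installed or an offset the thread's Data does not cover yet; the sketch ignores those.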
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Adds the Xcode project bookkeeping (PBXBuildFile, PBXFileReference, PBXGroup, and Headers build phase entries) for the new files: Allocator.h, AllocatorInlines.h, LocalAllocator.cpp, LocalAllocator.h, LocalAllocatorInlines.h, ThreadLocalCache.cpp, ThreadLocalCache.h, ThreadLocalCacheInlines.h, ThreadLocalCacheLayout.cpp, and ThreadLocalCacheLayout.h under heap/, plus JITAllocator.h under jit/. Allocator.h, LocalAllocator.h, ThreadLocalCache.h, ThreadLocalCacheLayout.h, and JITAllocator.h are added to the Headers build phase with the Private attribute; AllocatorInlines.h and LocalAllocatorInlines.h are added without it.
trunk/Source/JavaScriptCore/Sources.txt
  heap/AlignedMemoryAllocator.cpp
+ heap/Allocator.cpp
  heap/BlockDirectory.cpp
  heap/CellAttributes.cpp
…
  heap/JITStubRoutineSet.cpp
  heap/LargeAllocation.cpp
+ heap/LocalAllocator.cpp
  heap/MachineStackMarker.cpp
  heap/MarkStack.cpp
…
  heap/SynchronousStopTheWorldMutatorScheduler.cpp
  heap/Synchronousness.cpp
+ heap/ThreadLocalCache.cpp
+ heap/ThreadLocalCacheLayout.cpp
  heap/VisitRaceKey.cpp
  heap/Weak.cpp
trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp
      arg = tmp(value.value());
      break;
+ case ValueRep::SomeRegisterWithClobber: {
+     Tmp dstTmp = m_code.newTmp(value.value()->resultBank());
+     append(relaxedMoveForType(value.value()->type()), immOrTmp(value.value()), dstTmp);
+     arg = dstTmp;
+     break;
+ }
  case ValueRep::LateRegister:
  case ValueRep::Register:
trunk/Source/JavaScriptCore/b3/B3PatchpointSpecial.cpp
      return true;
  case ValueRep::SomeRegister:
+ case ValueRep::SomeRegisterWithClobber:
  case ValueRep::SomeEarlyRegister:
  case ValueRep::Register:
trunk/Source/JavaScriptCore/b3/B3StackmapSpecial.cpp
- * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
…
      role = Arg::Use;
      break;
+ case ValueRep::SomeRegisterWithClobber:
+     role = Arg::UseDef;
+     break;
  case ValueRep::LateRegister:
      role = Arg::LateUse;
…
  // original stackmap value across the Special operation.
  if (!Arg::isLateUse(role) && optionalDefArgWidth && *optionalDefArgWidth < child.value()->resultWidth()) {
+     // The role can only be some kind of def if we did SomeRegisterWithClobber, which is
+     // only allowed for patchpoints. Patchpoints don't use the defArgWidth feature.
+     RELEASE_ASSERT(!Arg::isAnyDef(role));
+
      if (Arg::isWarmUse(role))
          role = Arg::LateUse;
…
      return true;
  case ValueRep::SomeRegister:
+ case ValueRep::SomeRegisterWithClobber:
  case ValueRep::SomeEarlyRegister:
      return arg.isTmp();
trunk/Source/JavaScriptCore/b3/B3StackmapValue.cpp
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
…
  }
 
+ void StackmapValue::appendSomeRegisterWithClobber(Value* value)
+ {
+     append(ConstrainedValue(value, ValueRep::SomeRegisterWithClobber));
+ }
+
  void StackmapValue::setConstrainedChild(unsigned index, const ConstrainedValue& constrainedValue)
  {
trunk/Source/JavaScriptCore/b3/B3StackmapValue.h
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
…
  // to SomeRegister.
  void appendSomeRegister(Value*);
+ void appendSomeRegisterWithClobber(Value*);
 
  const Vector<ValueRep>& reps() const { return m_reps; }
trunk/Source/JavaScriptCore/b3/B3Validate.cpp
r227609 r227617 433 433 if (value->type() == Void) 434 434 VALIDATE(value->as<PatchpointValue>()->resultConstraint == ValueRep::WarmAny, ("At ", *value)); 435 else { 436 switch (value->as<PatchpointValue>()->resultConstraint.kind()) { 437 case ValueRep::WarmAny: 438 case ValueRep::SomeRegister: 439 case ValueRep::SomeEarlyRegister: 440 case ValueRep::Register: 441 case ValueRep::StackArgument: 442 break; 443 default: 444 VALIDATE(false, ("At ", *value)); 445 break; 446 } 447 435 else 448 436 validateStackmapConstraint(value, ConstrainedValue(value, value->as<PatchpointValue>()->resultConstraint), ConstraintRole::Def); 449 }450 437 validateStackmap(value); 451 438 break; … … 572 559 switch (value.rep().kind()) { 573 560 case ValueRep::WarmAny: 574 case ValueRep::ColdAny:575 case ValueRep::LateColdAny:576 561 case ValueRep::SomeRegister: 577 562 case ValueRep::StackArgument: 563 break; 564 case ValueRep::LateColdAny: 565 case ValueRep::ColdAny: 566 VALIDATE(role == ConstraintRole::Use, ("At ", *context, ": ", value)); 567 break; 568 case ValueRep::SomeRegisterWithClobber: 569 VALIDATE(role == ConstraintRole::Use, ("At ", *context, ": ", value)); 570 VALIDATE(context->as<PatchpointValue>(), ("At ", *context)); 578 571 break; 579 572 case ValueRep::SomeEarlyRegister: … … 582 575 case ValueRep::Register: 583 576 case ValueRep::LateRegister: 577 if (value.rep().kind() == ValueRep::LateRegister) 578 VALIDATE(role == ConstraintRole::Use, ("At ", *context, ": ", value)); 584 579 if (value.rep().reg().isGPR()) 585 580 VALIDATE(isInt(value.value()->type()), ("At ", *context, ": ", value)); -
trunk/Source/JavaScriptCore/b3/B3ValueRep.cpp
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
…
  case LateColdAny:
  case SomeRegister:
+ case SomeRegisterWithClobber:
  case SomeEarlyRegister:
  case Constant:
…
  case LateColdAny:
  case SomeRegister:
+ case SomeRegisterWithClobber:
  case SomeEarlyRegister:
      return;
…
      out.print("SomeRegister");
      return;
+ case ValueRep::SomeRegisterWithClobber:
+     out.print("SomeRegisterWithClobber");
+     return;
  case ValueRep::SomeEarlyRegister:
      out.print("SomeEarlyRegister");
trunk/Source/JavaScriptCore/b3/B3ValueRep.h
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
…
      // register that this claims to clobber!
      SomeRegister,
+
+     // As an input representation, this means that B3 should pick some register but that this
+     // register is then clobbered with garbage. This only works for patchpoints.
+     SomeRegisterWithClobber,
 
      // As an input representation, this tells us that B3 should pick some register, but implies
…
      : m_kind(kind)
  {
-     ASSERT(kind == WarmAny || kind == ColdAny || kind == LateColdAny || kind == SomeRegister || kind == SomeEarlyRegister);
+     ASSERT(kind == WarmAny || kind == ColdAny || kind == LateColdAny || kind == SomeRegister || kind == SomeRegisterWithClobber || kind == SomeEarlyRegister);
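The comment added above is the whole contract of the new kind. The toy model below (not the real B3 types; names and the role mapping are simplified stand-ins) captures the two rules visible in this changeset: the chosen register is treated as use-and-def, so nothing may rely on its contents after the patchpoint, and the validator only admits the constraint as an input to a patchpoint.

    #include <cassert>
    #include <cstdio>

    enum class RepKind { WarmAny, SomeRegister, SomeRegisterWithClobber, SomeEarlyRegister, Register, LateRegister };
    enum class Role { Use, UseDef, LateUse };

    // Mirrors the role mapping B3StackmapSpecial::forEachArgImpl now performs:
    // a clobbered input register is both read and destroyed, i.e. a UseDef.
    Role roleForInput(RepKind kind)
    {
        switch (kind) {
        case RepKind::SomeRegisterWithClobber:
            return Role::UseDef;
        case RepKind::LateRegister:
            return Role::LateUse;
        default:
            return Role::Use;
        }
    }

    // Mirrors the rule added to B3Validate.cpp: the constraint is input-only
    // and only legal on patchpoints.
    bool isValidConstraint(RepKind kind, bool isPatchpoint, bool isInput)
    {
        if (kind == RepKind::SomeRegisterWithClobber)
            return isPatchpoint && isInput;
        return true;
    }

    int main()
    {
        assert(roleForInput(RepKind::SomeRegisterWithClobber) == Role::UseDef);
        assert(isValidConstraint(RepKind::SomeRegisterWithClobber, true, true));
        assert(!isValidConstraint(RepKind::SomeRegisterWithClobber, false, true));
        std::printf("SomeRegisterWithClobber: UseDef input, patchpoints only\n");
        return 0;
    }

The FTL uses this below for the allocator register of allocateHeapCell when the allocator is not a compile-time constant, since the emitted allocation path overwrites that register.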
trunk/Source/JavaScriptCore/bytecode/AccessCase.cpp
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
…
  if (allocatingInline) {
-     BlockDirectory* allocator = vm.jsValueGigacageAuxiliarySpace.allocatorFor(newSize, AllocatorForMode::AllocatorIfExists);
-
-     if (!allocator) {
-         // Yuck, this case would suck!
-         slowPath.append(jit.jump());
-     }
-
-     jit.move(CCallHelpers::TrustedImmPtr(allocator), scratchGPR2);
-     jit.emitAllocate(scratchGPR, allocator, scratchGPR2, scratchGPR3, slowPath);
+     Allocator allocator = vm.jsValueGigacageAuxiliarySpace.allocatorFor(newSize, AllocatorForMode::AllocatorIfExists);
+
+     jit.emitAllocate(scratchGPR, JITAllocator::constant(allocator), scratchGPR2, scratchGPR3, slowPath);
      jit.addPtr(CCallHelpers::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR);
trunk/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
  ObjectAllocationProfile()
-     : m_allocator(0)
-     , m_inlineCapacity(0)
+     : m_inlineCapacity(0)
  {
  }
…
  void clear()
  {
-     m_allocator = nullptr;
+     m_allocator = Allocator();
      m_structure.clear();
      m_inlineCapacity = 0;
…
  unsigned possibleDefaultPropertyCount(VM&, JSObject* prototype);
 
- BlockDirectory* m_allocator; // Precomputed to make things easier for generated code.
+ Allocator m_allocator; // Precomputed to make things easier for generated code.
  WriteBarrier<Structure> m_structure;
  unsigned m_inlineCapacity;
trunk/Source/JavaScriptCore/bytecode/ObjectAllocationProfileInlines.h
  if (Structure* structure = executable->cachedPolyProtoStructure()) {
      RELEASE_ASSERT(structure->typeInfo().type() == FinalObjectType);
-     m_allocator = nullptr;
+     m_allocator = Allocator();
      m_structure.set(vm, owner, structure);
      m_inlineCapacity = structure->inlineCapacity();
…
  size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
- BlockDirectory* allocator = vm.cellSpace.allocatorForNonVirtual(allocationSize, AllocatorForMode::EnsureAllocator);
+ Allocator allocator = vm.cellSpace.allocatorForNonVirtual(allocationSize, AllocatorForMode::EnsureAllocator);
 
  // Take advantage of extra inline capacity available in the size class.
  if (allocator) {
-     size_t slop = (allocator->cellSize() - allocationSize) / sizeof(WriteBarrier<Unknown>);
+     size_t slop = (allocator.cellSize(vm.heap) - allocationSize) / sizeof(WriteBarrier<Unknown>);
      inlineCapacity += slop;
      if (inlineCapacity > JSFinalObject::maxInlineCapacity())
…
  if (isPolyProto) {
      ASSERT(structure->hasPolyProto());
-     m_allocator = nullptr;
+     m_allocator = Allocator();
      executable->setCachedPolyProtoStructure(vm, structure);
  } else {
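The slop computation in initializeProfile rounds an object's requested size up to the cell size of the size class it will actually occupy and converts the leftover bytes into extra inline property slots. Here is a standalone illustration with made-up constants; the header size, slot size, size-class step, and capacity cap are stand-ins, not JSC's real values.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    constexpr size_t objectHeaderSize = 16;     // stand-in for JSFinalObject's fixed header
    constexpr size_t slotSize = 8;              // stand-in for sizeof(WriteBarrier<Unknown>)
    constexpr size_t sizeClassStep = 16;        // pretend size classes are multiples of 16 bytes
    constexpr unsigned maxInlineCapacity = 100; // stand-in for JSFinalObject::maxInlineCapacity()

    size_t allocationSize(unsigned inlineCapacity)
    {
        return objectHeaderSize + inlineCapacity * slotSize;
    }

    size_t sizeClassCellSize(size_t bytes)
    {
        // Round up to the cell size of the size class this request lands in.
        return (bytes + sizeClassStep - 1) / sizeClassStep * sizeClassStep;
    }

    int main()
    {
        unsigned inlineCapacity = 3;
        size_t requested = allocationSize(inlineCapacity); // 16 + 24 = 40 bytes
        size_t cellSize = sizeClassCellSize(requested);    // rounds up to 48 bytes
        unsigned slop = (cellSize - requested) / slotSize; // one extra slot for free
        inlineCapacity = std::min(inlineCapacity + slop, maxInlineCapacity);
        std::printf("requested %zu bytes, cell size %zu, final inline capacity %u\n",
            requested, cellSize, inlineCapacity);
        return 0;
    }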
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
r227609 r227617 114 114 115 115 if (size) { 116 if (BlockDirectory* allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists)) { 117 m_jit.move(TrustedImmPtr(allocator), scratchGPR); 118 m_jit.emitAllocate(storageGPR, allocator, scratchGPR, scratch2GPR, slowCases); 116 if (Allocator allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists)) { 117 m_jit.emitAllocate(storageGPR, JITAllocator::constant(allocator), scratchGPR, scratch2GPR, slowCases); 119 118 120 119 m_jit.addPtr( … … 129 128 130 129 size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity); 131 BlockDirectory* allocatorPtr = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 132 if (allocatorPtr) { 133 m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR); 130 Allocator allocator = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 131 if (allocator) { 134 132 uint32_t mask = WTF::computeIndexingMask(vectorLength); 135 emitAllocateJSObject(resultGPR, allocatorPtr, scratchGPR, TrustedImmPtr(structure), storageGPR, TrustedImm32(mask), scratch2GPR, slowCases);133 emitAllocateJSObject(resultGPR, JITAllocator::constant(allocator), scratchGPR, TrustedImmPtr(structure), storageGPR, TrustedImm32(mask), scratch2GPR, slowCases); 136 134 m_jit.emitInitializeInlineStorage(resultGPR, structure->inlineCapacity()); 137 135 } else … … 4209 4207 4210 4208 JITCompiler::JumpList slowPath; 4211 BlockDirectory* blockDirectory = subspaceFor<JSRopeString>(*m_jit.vm())->allocatorForNonVirtual(sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists); 4212 m_jit.move(TrustedImmPtr(blockDirectory), allocatorGPR); 4213 emitAllocateJSCell(resultGPR, blockDirectory, allocatorGPR, TrustedImmPtr(m_jit.graph().registerStructure(m_jit.vm()->stringStructure.get())), scratchGPR, slowPath); 4209 Allocator allocatorValue = subspaceFor<JSRopeString>(*m_jit.vm())->allocatorForNonVirtual(sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists); 4210 emitAllocateJSCell(resultGPR, JITAllocator::constant(allocatorValue), allocatorGPR, TrustedImmPtr(m_jit.graph().registerStructure(m_jit.vm()->stringStructure.get())), scratchGPR, slowPath); 4214 4211 4215 4212 m_jit.storePtr(TrustedImmPtr(0), JITCompiler::Address(resultGPR, JSString::offsetOfValue())); … … 8385 8382 size_t size = initialOutOfLineCapacity * sizeof(JSValue); 8386 8383 8387 BlockDirectory*allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists);8384 Allocator allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists); 8388 8385 8389 8386 if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) { … … 8410 8407 GPRReg scratchGPR3 = scratch3.gpr(); 8411 8408 8412 m_jit.move(TrustedImmPtr(allocator), scratchGPR2);8413 8409 JITCompiler::JumpList slowPath; 8414 m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath);8410 m_jit.emitAllocate(scratchGPR1, JITAllocator::constant(allocator), scratchGPR2, scratchGPR3, slowPath); 8415 8411 m_jit.addPtr(JITCompiler::TrustedImm32(size + sizeof(IndexingHeader)), scratchGPR1); 8416 8412 … … 8430 8426 ASSERT(newSize == node->transition()->next->outOfLineCapacity() * sizeof(JSValue)); 8431 8427 8432 BlockDirectory*allocator = 
m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(newSize, AllocatorForMode::AllocatorIfExists);8428 Allocator allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(newSize, AllocatorForMode::AllocatorIfExists); 8433 8429 8434 8430 if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) { … … 8458 8454 8459 8455 JITCompiler::JumpList slowPath; 8460 m_jit.move(TrustedImmPtr(allocator), scratchGPR2); 8461 m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath); 8456 m_jit.emitAllocate(scratchGPR1, JITAllocator::constant(allocator), scratchGPR2, scratchGPR3, slowPath); 8462 8457 8463 8458 m_jit.addPtr(JITCompiler::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR1); … … 11449 11444 m_jit.loadPtr(JITCompiler::Address(calleeGPR, JSFunction::offsetOfRareData()), rareDataGPR); 11450 11445 slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, rareDataGPR)); 11451 m_jit.load Ptr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR);11446 m_jit.load32(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR); 11452 11447 m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR); 11453 11448 11454 slowPath.append(m_jit.branch TestPtr(MacroAssembler::Zero, allocatorGPR));11449 slowPath.append(m_jit.branch32(MacroAssembler::Equal, allocatorGPR, TrustedImm32(Allocator().offset()))); 11455 11450 11456 11451 auto butterfly = TrustedImmPtr(nullptr); 11457 11452 auto mask = TrustedImm32(0); 11458 emitAllocateJSObject(resultGPR, nullptr, allocatorGPR, structureGPR, butterfly, mask, scratchGPR, slowPath);11453 emitAllocateJSObject(resultGPR, JITAllocator::variable(), allocatorGPR, structureGPR, butterfly, mask, scratchGPR, slowPath); 11459 11454 11460 11455 m_jit.loadPtr(JITCompiler::Address(calleeGPR, JSFunction::offsetOfRareData()), rareDataGPR); … … 11482 11477 RegisteredStructure structure = node->structure(); 11483 11478 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); 11484 BlockDirectory* allocatorPtr = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 11485 11486 m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR); 11487 auto butterfly = TrustedImmPtr(nullptr); 11488 auto mask = TrustedImm32(0); 11489 emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), butterfly, mask, scratchGPR, slowPath); 11490 m_jit.emitInitializeInlineStorage(resultGPR, structure->inlineCapacity()); 11491 m_jit.mutatorFence(*m_jit.vm()); 11479 Allocator allocatorValue = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 11480 11481 if (!allocatorValue) 11482 slowPath.append(m_jit.jump()); 11483 else { 11484 auto butterfly = TrustedImmPtr(nullptr); 11485 auto mask = TrustedImm32(0); 11486 emitAllocateJSObject(resultGPR, JITAllocator::constant(allocatorValue), allocatorGPR, TrustedImmPtr(structure), butterfly, mask, scratchGPR, slowPath); 11487 m_jit.emitInitializeInlineStorage(resultGPR, structure->inlineCapacity()); 11488 m_jit.mutatorFence(*m_jit.vm()); 11489 } 11492 11490 11493 11491 addSlowPathGenerator(slowPathCall(slowPath, this, 
operationNewObject, resultGPR, structure)); -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
- * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2018 Apple Inc. All rights reserved.
…
  template <typename StructureType> // StructureType can be GPR or ImmPtr.
  void emitAllocateJSCell(
-     GPRReg resultGPR, BlockDirectory* allocator, GPRReg allocatorGPR, StructureType structure,
+     GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, StructureType structure,
      GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
  {
…
  template <typename StructureType, typename StorageType, typename MaskType> // StructureType, StorageType, and MaskType can be GPR or ImmPtr.
  void emitAllocateJSObject(
-     GPRReg resultGPR, BlockDirectory* allocator, GPRReg allocatorGPR, StructureType structure,
+     GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, StructureType structure,
      StorageType storage, MaskType mask, GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
  {
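emitAllocateJSCell and emitAllocateJSObject now take a JITAllocator instead of a BlockDirectory*. JITAllocator.h is new in this changeset and its body is not reproduced on this page, so the following is only a guess at its shape, inferred from the call sites above and below (JITAllocator::constant(...), JITAllocator::variable(), isConstant(), allocator()): it records whether the Allocator offset is known at JIT time and can be baked into the generated code, or is only available in a register at run time.

    #include <cstdint>
    #include <cstdio>

    struct Allocator {
        uint32_t offset { UINT32_MAX };
        explicit operator bool() const { return offset != UINT32_MAX; }
    };

    // Hypothetical sketch of the JITAllocator idea; not the real header.
    class JITAllocator {
    public:
        static JITAllocator constant(Allocator allocator)
        {
            JITAllocator result;
            result.m_isConstant = true;
            result.m_allocator = allocator;
            return result;
        }

        static JITAllocator variable()
        {
            JITAllocator result;
            result.m_isConstant = false;
            return result;
        }

        bool isConstant() const { return m_isConstant; }
        Allocator allocator() const { return m_allocator; }

    private:
        bool m_isConstant { true };
        Allocator m_allocator;
    };

    int main()
    {
        // A constant allocator lets the JIT emit an immediate offset (and bail
        // statically if it is null); a variable one must be read from a register.
        JITAllocator known = JITAllocator::constant(Allocator { 64 });
        JITAllocator unknown = JITAllocator::variable();
        std::printf("known: constant=%d offset=%u; unknown: constant=%d\n",
            known.isConstant(), known.allocator().offset, unknown.isConstant());
        return 0;
    }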
trunk/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
- * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2018 Apple Inc. All rights reserved.
…
  #define FOR_EACH_INDEXED_ABSTRACT_HEAP(macro) \
      macro(ArrayStorage_vector, ArrayStorage::vectorOffset(), sizeof(WriteBarrier<Unknown>)) \
-     macro(CompleteSubspace_allocatorForSizeStep, CompleteSubspace::offsetOfAllocatorForSizeStep(), sizeof(BlockDirectory*)) \
+     macro(CompleteSubspace_allocatorForSizeStep, CompleteSubspace::offsetOfAllocatorForSizeStep(), sizeof(Allocator)) \
      macro(DirectArguments_storage, DirectArguments::storageOffset(), sizeof(EncodedJSValue)) \
      macro(JSLexicalEnvironment_variables, JSLexicalEnvironment::offsetOfVariables(), sizeof(EncodedJSValue)) \
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r227609 r227617 5877 5877 LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath); 5878 5878 5879 BlockDirectory*allocator = subspaceFor<JSRopeString>(vm())->allocatorForNonVirtual(sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists);5879 Allocator allocator = subspaceFor<JSRopeString>(vm())->allocatorForNonVirtual(sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists); 5880 5880 5881 5881 LValue result = allocateCell( 5882 m_out.constInt Ptr(allocator), vm().stringStructure.get(), slowPath);5882 m_out.constInt32(allocator.offset()), vm().stringStructure.get(), slowPath); 5883 5883 5884 5884 m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value); … … 9910 9910 if (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType())) { 9911 9911 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); 9912 BlockDirectory*cellAllocator = subspaceFor<JSFinalObject>(vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists);9912 Allocator cellAllocator = subspaceFor<JSFinalObject>(vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 9913 9913 9914 9914 bool hasIndexingHeader = hasIndexedProperties(structure->indexingType()); … … 9969 9969 LValue mask = computeButterflyIndexingMask(vectorLength); 9970 9970 LValue fastObjectValue = allocateObject( 9971 m_out.constInt Ptr(cellAllocator), structure, fastButterflyValue, mask, slowPath);9971 m_out.constInt32(cellAllocator.offset()), structure, fastButterflyValue, mask, slowPath); 9972 9972 9973 9973 ValueFromBlock fastObject = m_out.anchor(fastObjectValue); … … 10956 10956 10957 10957 size_t sizeInBytes = sizeInValues * sizeof(JSValue); 10958 BlockDirectory*allocator = vm().jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(sizeInBytes, AllocatorForMode::AllocatorIfExists);10959 LValue startOfStorage = allocateHeapCell(m_out.constInt Ptr(allocator), slowPath);10958 Allocator allocator = vm().jsValueGigacageAuxiliarySpace.allocatorForNonVirtual(sizeInBytes, AllocatorForMode::AllocatorIfExists); 10959 LValue startOfStorage = allocateHeapCell(m_out.constInt32(allocator.offset()), slowPath); 10960 10960 ValueFromBlock fastButterfly = m_out.anchor( 10961 10961 m_out.add(m_out.constIntPtr(sizeInBytes + sizeof(IndexingHeader)), startOfStorage)); … … 11989 11989 LValue allocateHeapCell(LValue allocator, LBasicBlock slowPath) 11990 11990 { 11991 BlockDirectory* actualAllocator = nullptr; 11992 if (allocator->hasIntPtr()) 11993 actualAllocator = bitwise_cast<BlockDirectory*>(allocator->asIntPtr()); 11994 11995 if (!actualAllocator) { 11991 JITAllocator actualAllocator; 11992 if (allocator->hasInt32()) 11993 actualAllocator = JITAllocator::constant(Allocator(allocator->asInt32())); 11994 else 11995 actualAllocator = JITAllocator::variable(); 11996 11997 if (actualAllocator.isConstant()) { 11998 if (!actualAllocator.allocator()) { 11999 LBasicBlock haveAllocator = m_out.newBlock(); 12000 LBasicBlock lastNext = m_out.insertNewBlocksBefore(haveAllocator); 12001 m_out.jump(slowPath); 12002 m_out.appendTo(haveAllocator, lastNext); 12003 return m_out.intPtrZero; 12004 } 12005 } else { 11996 12006 // This means that either we know that the allocator is null or we don't know what the 11997 12007 // allocator is. In either case, we need the null check. 
11998 12008 LBasicBlock haveAllocator = m_out.newBlock(); 11999 12009 LBasicBlock lastNext = m_out.insertNewBlocksBefore(haveAllocator); 12000 m_out.branch(allocator, usually(haveAllocator), rarely(slowPath)); 12010 m_out.branch( 12011 m_out.notEqual(allocator, m_out.constInt32(Allocator().offset())), 12012 usually(haveAllocator), rarely(slowPath)); 12001 12013 m_out.appendTo(haveAllocator, lastNext); 12002 12014 } … … 12008 12020 PatchpointValue* patchpoint = m_out.patchpoint(pointerType()); 12009 12021 patchpoint->effects.terminal = true; 12010 patchpoint->appendSomeRegister(allocator); 12022 if (actualAllocator.isConstant()) 12023 patchpoint->numGPScratchRegisters++; 12024 else 12025 patchpoint->appendSomeRegisterWithClobber(allocator); 12011 12026 patchpoint->numGPScratchRegisters++; 12012 12027 patchpoint->resultConstraint = ValueRep::SomeEarlyRegister; … … 12018 12033 [=] (CCallHelpers& jit, const StackmapGenerationParams& params) { 12019 12034 CCallHelpers::JumpList jumpToSlowPath; 12035 12036 GPRReg allocatorGPR; 12037 if (actualAllocator.isConstant()) 12038 allocatorGPR = params.gpScratch(1); 12039 else 12040 allocatorGPR = params[1].gpr(); 12020 12041 12021 12042 // We use a patchpoint to emit the allocation path because whenever we mess with … … 12026 12047 // all of the compiler tiers. 12027 12048 jit.emitAllocateWithNonNullAllocator( 12028 params[0].gpr(), actualAllocator, params[1].gpr(), params.gpScratch(0),12049 params[0].gpr(), actualAllocator, allocatorGPR, params.gpScratch(0), 12029 12050 jumpToSlowPath); 12030 12051 … … 12118 12139 size_t size, StructureType structure, LValue butterfly, LValue indexingMask, LBasicBlock slowPath) 12119 12140 { 12120 BlockDirectory*allocator = subspaceFor<ClassType>(vm())->allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists);12121 return allocateObject(m_out.constInt Ptr(allocator), structure, butterfly, indexingMask, slowPath);12141 Allocator allocator = subspaceFor<ClassType>(vm())->allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists); 12142 return allocateObject(m_out.constInt32(allocator.offset()), structure, butterfly, indexingMask, slowPath); 12122 12143 } 12123 12144 … … 12138 12159 size_t actualSize = size->asIntPtr(); 12139 12160 12140 BlockDirectory*actualAllocator = actualSubspace->allocatorForNonVirtual(actualSize, AllocatorForMode::AllocatorIfExists);12161 Allocator actualAllocator = actualSubspace->allocatorForNonVirtual(actualSize, AllocatorForMode::AllocatorIfExists); 12141 12162 if (!actualAllocator) { 12142 12163 LBasicBlock continuation = m_out.newBlock(); … … 12144 12165 m_out.jump(slowPath); 12145 12166 m_out.appendTo(continuation, lastNext); 12146 return m_out.int PtrZero;12167 return m_out.int32Zero; 12147 12168 } 12148 12169 12149 return m_out.constInt Ptr(actualAllocator);12170 return m_out.constInt32(actualAllocator.offset()); 12150 12171 } 12151 12172 … … 12166 12187 m_out.appendTo(continuation, lastNext); 12167 12188 12168 return m_out.load Ptr(12189 return m_out.load32( 12169 12190 m_out.baseIndex( 12170 12191 m_heaps.CompleteSubspace_allocatorForSizeStep, … … 12181 12202 LValue size, RegisteredStructure structure, LValue butterfly, LValue butterflyIndexingMask, LBasicBlock slowPath) 12182 12203 { 12183 LValue allocator = allocatorForSize( 12184 *subspaceFor<ClassType>(vm()), size, slowPath); 12204 LValue allocator = allocatorForSize(*subspaceFor<ClassType>(vm()), size, slowPath); 12185 12205 return allocateObject(allocator, structure, butterfly, butterflyIndexingMask, slowPath); 
12186 12206 } … … 12190 12210 LValue size, Structure* structure, LBasicBlock slowPath) 12191 12211 { 12192 LValue allocator = allocatorForSize( 12193 *subspaceFor<ClassType>(vm()), size, slowPath); 12212 LValue allocator = allocatorForSize(*subspaceFor<ClassType>(vm()), size, slowPath); 12194 12213 return allocateCell(allocator, structure, slowPath); 12195 12214 } … … 12198 12217 { 12199 12218 size_t allocationSize = JSFinalObject::allocationSize(structure.get()->inlineCapacity()); 12200 BlockDirectory*allocator = subspaceFor<JSFinalObject>(vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists);12219 Allocator allocator = subspaceFor<JSFinalObject>(vm())->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 12201 12220 12202 12221 // FIXME: If the allocator is null, we could simply emit a normal C call to the allocator … … 12210 12229 12211 12230 ValueFromBlock fastResult = m_out.anchor(allocateObject( 12212 m_out.constInt Ptr(allocator), structure, m_out.intPtrZero, m_out.int32Zero, slowPath));12231 m_out.constInt32(allocator.offset()), structure, m_out.intPtrZero, m_out.int32Zero, slowPath)); 12213 12232 12214 12233 m_out.jump(continuation); -
trunk/Source/JavaScriptCore/heap/BlockDirectory.cpp
r227609 r227617 27 27 #include "BlockDirectory.h" 28 28 29 #include "AllocatingScope.h"30 29 #include "BlockDirectoryInlines.h" 31 30 #include "GCActivityCallback.h" … … 40 39 namespace JSC { 41 40 42 static constexpr bool tradeDestructorBlocks = true;43 44 41 BlockDirectory::BlockDirectory(Heap* heap, size_t cellSize) 45 : m_freeList(cellSize) 46 , m_currentBlock(0) 47 , m_lastActiveBlock(0) 48 , m_cellSize(static_cast<unsigned>(cellSize)) 42 : m_cellSize(static_cast<unsigned>(cellSize)) 49 43 , m_heap(heap) 50 44 { 45 heap->threadLocalCacheLayout().allocateOffset(this); 51 46 } 52 47 … … 82 77 } 83 78 84 void BlockDirectory::didConsumeFreeList() 85 { 86 if (m_currentBlock) 87 m_currentBlock->didConsumeFreeList(); 88 89 m_freeList.clear(); 90 m_currentBlock = nullptr; 91 } 92 93 void* BlockDirectory::tryAllocateWithoutCollecting() 94 { 95 SuperSamplerScope superSamplerScope(false); 96 97 ASSERT(!m_currentBlock); 98 ASSERT(m_freeList.allocationWillFail()); 99 100 for (;;) { 101 m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true); 102 if (m_allocationCursor >= m_blocks.size()) 103 break; 104 105 setIsCanAllocateButNotEmpty(NoLockingNecessary, m_allocationCursor, false); 106 107 if (void* result = tryAllocateIn(m_blocks[m_allocationCursor])) 108 return result; 109 } 110 111 if (Options::stealEmptyBlocksFromOtherAllocators() 112 && (tradeDestructorBlocks || !needsDestruction())) { 113 if (MarkedBlock::Handle* block = m_subspace->findEmptyBlockToSteal()) { 114 RELEASE_ASSERT(block->alignedMemoryAllocator() == m_subspace->alignedMemoryAllocator()); 115 116 block->sweep(nullptr); 117 118 // It's good that this clears canAllocateButNotEmpty as well as all other bits, 119 // because there is a remote chance that a block may have both canAllocateButNotEmpty 120 // and empty set at the same time. 121 block->removeFromDirectory(); 122 addBlock(block); 123 return allocateIn(block); 124 } 125 } 126 127 return nullptr; 128 } 129 130 void* BlockDirectory::allocateIn(MarkedBlock::Handle* block) 131 { 132 void* result = tryAllocateIn(block); 133 RELEASE_ASSERT(result); 134 return result; 135 } 136 137 void* BlockDirectory::tryAllocateIn(MarkedBlock::Handle* block) 138 { 139 ASSERT(block); 140 ASSERT(!block->isFreeListed()); 141 142 block->sweep(&m_freeList); 143 144 // It's possible to stumble on a completely full block. Marking tries to retire these, but 145 // that algorithm is racy and may forget to do it sometimes. 
146 if (m_freeList.allocationWillFail()) { 147 ASSERT(block->isFreeListed()); 148 block->unsweepWithNoNewlyAllocated(); 149 ASSERT(!block->isFreeListed()); 150 ASSERT(!isEmpty(NoLockingNecessary, block)); 151 ASSERT(!isCanAllocateButNotEmpty(NoLockingNecessary, block)); 79 MarkedBlock::Handle* BlockDirectory::findBlockForAllocation() 80 { 81 m_allocationCursor = (m_canAllocateButNotEmpty | m_empty).findBit(m_allocationCursor, true); 82 if (m_allocationCursor >= m_blocks.size()) 152 83 return nullptr; 153 } 154 155 m_currentBlock = block; 156 157 void* result = m_freeList.allocate( 158 [] () -> HeapCell* { 159 RELEASE_ASSERT_NOT_REACHED(); 160 return nullptr; 161 }); 162 setIsEden(NoLockingNecessary, m_currentBlock, true); 163 markedSpace().didAllocateInBlock(m_currentBlock); 164 return result; 165 } 166 167 ALWAYS_INLINE void BlockDirectory::doTestCollectionsIfNeeded(GCDeferralContext* deferralContext) 168 { 169 if (!Options::slowPathAllocsBetweenGCs()) 170 return; 171 172 static unsigned allocationCount = 0; 173 if (!allocationCount) { 174 if (!m_heap->isDeferred()) { 175 if (deferralContext) 176 deferralContext->m_shouldGC = true; 177 else 178 m_heap->collectNow(Sync, CollectionScope::Full); 179 } 180 } 181 if (++allocationCount >= Options::slowPathAllocsBetweenGCs()) 182 allocationCount = 0; 183 } 184 185 void* BlockDirectory::allocateSlowCase(GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 186 { 187 SuperSamplerScope superSamplerScope(false); 188 ASSERT(m_heap->vm()->currentThreadIsHoldingAPILock()); 189 doTestCollectionsIfNeeded(deferralContext); 190 191 ASSERT(!markedSpace().isIterating()); 192 m_heap->didAllocate(m_freeList.originalSize()); 193 194 didConsumeFreeList(); 195 196 AllocatingScope helpingHeap(*m_heap); 197 198 m_heap->collectIfNecessaryOrDefer(deferralContext); 199 200 // Goofy corner case: the GC called a callback and now this directory has a currentBlock. This only 201 // happens when running WebKit tests, which inject a callback into the GC's finalization. 
202 if (UNLIKELY(m_currentBlock)) 203 return allocate(deferralContext, failureMode); 204 205 void* result = tryAllocateWithoutCollecting(); 206 207 if (LIKELY(result != 0)) 208 return result; 209 210 MarkedBlock::Handle* block = tryAllocateBlock(); 211 if (!block) { 212 if (failureMode == AllocationFailureMode::Assert) 213 RELEASE_ASSERT_NOT_REACHED(); 214 else 215 return nullptr; 216 } 217 addBlock(block); 218 result = allocateIn(block); 219 ASSERT(result); 220 return result; 84 85 setIsCanAllocateButNotEmpty(NoLockingNecessary, m_allocationCursor, false); 86 return m_blocks[m_allocationCursor]; 221 87 } 222 88 … … 314 180 if (false) 315 181 dataLog(RawPointer(this), ": BlockDirectory::stopAllocating!\n"); 316 ASSERT(!m_lastActiveBlock); 317 if (!m_currentBlock) { 318 ASSERT(m_freeList.allocationWillFail()); 319 return; 320 } 321 322 m_currentBlock->stopAllocating(m_freeList); 323 m_lastActiveBlock = m_currentBlock; 324 m_currentBlock = 0; 325 m_freeList.clear(); 182 m_localAllocators.forEach( 183 [&] (LocalAllocator* allocator) { 184 allocator->stopAllocating(); 185 }); 326 186 } 327 187 328 188 void BlockDirectory::prepareForAllocation() 329 189 { 330 m_lastActiveBlock = nullptr; 331 m_currentBlock = nullptr; 332 m_freeList.clear(); 333 190 m_localAllocators.forEach( 191 [&] (LocalAllocator* allocator) { 192 allocator->prepareForAllocation(); 193 }); 194 334 195 m_allocationCursor = 0; 335 196 m_emptyCursor = 0; … … 345 206 } 346 207 208 void BlockDirectory::stopAllocatingForGood() 209 { 210 if (false) 211 dataLog(RawPointer(this), ": BlockDirectory::stopAllocatingForGood!\n"); 212 213 m_localAllocators.forEach( 214 [&] (LocalAllocator* allocator) { 215 allocator->stopAllocatingForGood(); 216 }); 217 218 auto locker = holdLock(m_localAllocatorsLock); 219 while (!m_localAllocators.isEmpty()) 220 m_localAllocators.begin()->remove(); 221 } 222 347 223 void BlockDirectory::lastChanceToFinalize() 348 224 { … … 355 231 void BlockDirectory::resumeAllocating() 356 232 { 357 if (!m_lastActiveBlock) 358 return; 359 360 m_lastActiveBlock->resumeAllocating(m_freeList); 361 m_currentBlock = m_lastActiveBlock; 362 m_lastActiveBlock = nullptr; 233 m_localAllocators.forEach( 234 [&] (LocalAllocator* allocator) { 235 allocator->resumeAllocating(); 236 }); 363 237 } 364 238 … … 380 254 // vectors. 381 255 382 if (! tradeDestructorBlocks&& needsDestruction()) {256 if (!Options::tradeDestructorBlocks() && needsDestruction()) { 383 257 ASSERT(m_empty.isEmpty()); 384 258 m_canAllocateButNotEmpty = m_live & ~m_markingRetired; … … 513 387 } 514 388 389 bool BlockDirectory::isFreeListedCell(const void* target) 390 { 391 bool result = false; 392 m_localAllocators.forEach( 393 [&] (LocalAllocator* allocator) { 394 result |= allocator->isFreeListedCell(target); 395 }); 396 return result; 397 } 398 515 399 } // namespace JSC 516 400 -
trunk/Source/JavaScriptCore/heap/BlockDirectory.h
  #include "AllocationFailureMode.h"
+ #include "Allocator.h"
  #include "CellAttributes.h"
  #include "FreeList.h"
+ #include "LocalAllocator.h"
  #include "MarkedBlock.h"
  #include <wtf/DataLog.h>
…
  class MarkedSpace;
  class LLIntOffsetsExtractor;
+ class ThreadLocalCacheLayout;
 
  #define FOR_EACH_BLOCK_DIRECTORY_BIT(macro) \
…
  public:
-     static ptrdiff_t offsetOfFreeList();
-     static ptrdiff_t offsetOfCellSize();
-
      BlockDirectory(Heap*, size_t cellSize);
      void setSubspace(Subspace*);
…
      void prepareForAllocation();
      void stopAllocating();
+     void stopAllocatingForGood();
      void resumeAllocating();
      void beginMarkingForFullCollection();
…
      DestructionMode destruction() const { return m_attributes.destruction; }
      HeapCell::Kind cellKind() const { return m_attributes.cellKind; }
-     void* allocate(GCDeferralContext*, AllocationFailureMode);
      Heap* heap() { return m_heap; }
 
-     bool isFreeListedCell(const void* target) const;
+     bool isFreeListedCell(const void* target);
 
      template<typename Functor> void forEachBlock(const Functor&);
…
      MarkedSpace& markedSpace() const;
 
-     const FreeList& freeList() const { return m_freeList; }
+     Allocator allocator() const { return Allocator(m_tlcOffset); }
 
      void dump(PrintStream&) const;
…
  private:
+     friend class LocalAllocator;
      friend class IsoCellSet;
      friend class MarkedBlock;
-
-     JS_EXPORT_PRIVATE void* allocateSlowCase(GCDeferralContext*, AllocationFailureMode failureMode);
-     void didConsumeFreeList();
-     void* tryAllocateWithoutCollecting();
+     friend class ThreadLocalCacheLayout;
+
+     MarkedBlock::Handle* findBlockForAllocation();
+
      MarkedBlock::Handle* tryAllocateBlock();
-     void* tryAllocateIn(MarkedBlock::Handle*);
-     void* allocateIn(MarkedBlock::Handle*);
-     ALWAYS_INLINE void doTestCollectionsIfNeeded(GCDeferralContext*);
-
-     FreeList m_freeList;
 
      Vector<MarkedBlock::Handle*> m_blocks;
…
      size_t m_unsweptCursor { 0 }; // Points to the next block that is a candidate for incremental sweeping.
 
-     MarkedBlock::Handle* m_currentBlock;
-     MarkedBlock::Handle* m_lastActiveBlock;
-
-     Lock m_lock;
      unsigned m_cellSize;
      CellAttributes m_attributes;
…
      BlockDirectory* m_nextDirectoryInSubspace { nullptr };
      BlockDirectory* m_nextDirectoryInAlignedMemoryAllocator { nullptr };
+
+     Lock m_localAllocatorsLock;
+     size_t m_tlcOffset;
+     SentinelLinkedList<LocalAllocator, BasicRawSentinelNode<LocalAllocator>> m_localAllocators;
  };
 
- inline ptrdiff_t BlockDirectory::offsetOfFreeList()
- {
-     return OBJECT_OFFSETOF(BlockDirectory, m_freeList);
- }
-
- inline ptrdiff_t BlockDirectory::offsetOfCellSize()
- {
-     return OBJECT_OFFSETOF(BlockDirectory, m_cellSize);
- }
-
  } // namespace JSC
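Each BlockDirectory now carries m_tlcOffset, handed out by the heap's ThreadLocalCacheLayout when the directory is constructed, and allocator() simply wraps that offset. Below is a simplified, standalone model of that handshake; the slot size and the layout's bookkeeping are stand-ins, and the real layout lives on the Heap and is consulted by ThreadLocalCache when sizing its Data.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct BlockDirectory; // forward declaration; stands in for JSC's class

    struct LocalAllocatorSlot {
        static constexpr size_t size = 64; // pretend sizeof(LocalAllocator) is 64
    };

    class ThreadLocalCacheLayout {
    public:
        // Each new BlockDirectory gets the next byte offset; every thread's cache
        // reserves a LocalAllocator slot at that offset for this directory.
        size_t allocateOffset(BlockDirectory* directory)
        {
            size_t offset = m_directories.size() * LocalAllocatorSlot::size;
            m_directories.push_back(directory);
            return offset;
        }

        size_t size() const { return m_directories.size() * LocalAllocatorSlot::size; }
        BlockDirectory* directory(size_t offset) const { return m_directories[offset / LocalAllocatorSlot::size]; }

    private:
        std::vector<BlockDirectory*> m_directories;
    };

    struct BlockDirectory {
        explicit BlockDirectory(ThreadLocalCacheLayout& layout)
            : tlcOffset(layout.allocateOffset(this))
        {
        }
        size_t tlcOffset; // stand-in for m_tlcOffset, wrapped by allocator()
    };

    int main()
    {
        ThreadLocalCacheLayout layout;
        BlockDirectory a(layout), b(layout);
        std::printf("a at offset %zu, b at offset %zu, layout size %zu\n",
            a.tlcOffset, b.tlcOffset, layout.size());
        return 0;
    }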
trunk/Source/JavaScriptCore/heap/BlockDirectoryInlines.h
r227609 r227617 32 32 namespace JSC { 33 33 34 inline bool BlockDirectory::isFreeListedCell(const void* target) const35 {36 return m_freeList.contains(bitwise_cast<HeapCell*>(target));37 }38 39 ALWAYS_INLINE void* BlockDirectory::allocate(GCDeferralContext* deferralContext, AllocationFailureMode failureMode)40 {41 return m_freeList.allocate(42 [&] () -> HeapCell* {43 sanitizeStackForVM(heap()->vm());44 return static_cast<HeapCell*>(allocateSlowCase(deferralContext, failureMode));45 });46 }47 48 34 template <typename Functor> inline void BlockDirectory::forEachBlock(const Functor& functor) 49 35 { -
trunk/Source/JavaScriptCore/heap/CompleteSubspace.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2017 Apple Inc. All rights reserved.2 * Copyright (C) 2017-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 27 27 #include "Subspace.h" 28 28 29 #include "AlignedMemoryAllocator.h" 30 #include "AllocatorInlines.h" 29 31 #include "BlockDirectoryInlines.h" 30 32 #include "JSCInlines.h" 33 #include "LocalAllocatorInlines.h" 31 34 #include "MarkedBlockInlines.h" 32 35 #include "PreventCollectionScope.h" 33 36 #include "SubspaceInlines.h" 37 #include "ThreadLocalCacheInlines.h" 34 38 35 39 namespace JSC { … … 39 43 { 40 44 initialize(heapCellType, alignedMemoryAllocator); 41 for (size_t i = MarkedSpace::numSizeClasses; i--;)42 m_allocatorForSizeStep[i] = nullptr;43 45 } 44 46 … … 47 49 } 48 50 49 BlockDirectory*CompleteSubspace::allocatorFor(size_t size, AllocatorForMode mode)51 Allocator CompleteSubspace::allocatorFor(size_t size, AllocatorForMode mode) 50 52 { 51 53 return allocatorForNonVirtual(size, mode); 52 54 } 53 55 54 void* CompleteSubspace::allocate( size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)56 void* CompleteSubspace::allocate(VM& vm, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 55 57 { 56 return allocateNonVirtual( size, deferralContext, failureMode);58 return allocateNonVirtual(vm, size, deferralContext, failureMode); 57 59 } 58 60 59 void* CompleteSubspace::allocateNonVirtual( size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)61 void* CompleteSubspace::allocateNonVirtual(VM& vm, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 60 62 { 61 void *result;62 if (BlockDirectory* allocator = allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists))63 result = allocator->allocate(deferralContext, failureMode);64 else65 result = allocateSlow(size, deferralContext, failureMode);66 return result;63 Allocator allocator = allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists); 64 return allocator.tryAllocate( 65 vm, deferralContext, failureMode, 66 [&] () { 67 return allocateSlow(vm, size, deferralContext, failureMode); 68 }); 67 69 } 68 70 69 BlockDirectory*CompleteSubspace::allocatorForSlow(size_t size)71 Allocator CompleteSubspace::allocatorForSlow(size_t size) 70 72 { 71 73 size_t index = MarkedSpace::sizeClassToIndex(size); 72 74 size_t sizeClass = MarkedSpace::s_sizeClassForSizeStep[index]; 73 75 if (!sizeClass) 74 return nullptr;76 return Allocator(); 75 77 76 78 // This is written in such a way that it's OK for the JIT threads to end up here if they want … … 83 85 // enough: it will have 84 86 auto locker = holdLock(m_space.directoryLock()); 85 if ( BlockDirectory*allocator = m_allocatorForSizeStep[index])87 if (Allocator allocator = m_allocatorForSizeStep[index]) 86 88 return allocator; 87 89 … … 99 101 break; 100 102 101 m_allocatorForSizeStep[index] = directory ;103 m_allocatorForSizeStep[index] = directory->allocator(); 102 104 103 105 if (!index--) … … 108 110 WTF::storeStoreFence(); 109 111 m_firstDirectory = directory; 110 return directory ;112 return directory->allocator(); 111 113 } 112 114 113 void* CompleteSubspace::allocateSlow( size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)115 void* CompleteSubspace::allocateSlow(VM& vm, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 114 116 { 115 void* result = tryAllocateSlow( size, 
deferralContext);117 void* result = tryAllocateSlow(vm, size, deferralContext); 116 118 if (failureMode == AllocationFailureMode::Assert) 117 119 RELEASE_ASSERT(result); … … 119 121 } 120 122 121 void* CompleteSubspace::tryAllocateSlow( size_t size, GCDeferralContext* deferralContext)123 void* CompleteSubspace::tryAllocateSlow(VM& vm, size_t size, GCDeferralContext* deferralContext) 122 124 { 123 sanitizeStackForVM( m_space.heap()->vm());125 sanitizeStackForVM(&vm); 124 126 125 if ( BlockDirectory*allocator = allocatorFor(size, AllocatorForMode::EnsureAllocator))126 return allocator ->allocate(deferralContext, AllocationFailureMode::ReturnNull);127 if (Allocator allocator = allocatorFor(size, AllocatorForMode::EnsureAllocator)) 128 return allocator.allocate(vm, deferralContext, AllocationFailureMode::ReturnNull); 127 129 128 130 if (size <= Options::largeAllocationCutoff() … … 133 135 } 134 136 135 m_space.heap()->collectIfNecessaryOrDefer(deferralContext);137 vm.heap.collectIfNecessaryOrDefer(deferralContext); 136 138 137 139 size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(size); 138 LargeAllocation* allocation = LargeAllocation::tryCreate( *m_space.m_heap, size, this);140 LargeAllocation* allocation = LargeAllocation::tryCreate(vm.heap, size, this); 139 141 if (!allocation) 140 142 return nullptr; 141 143 142 144 m_space.m_largeAllocations.append(allocation); 143 m_space.m_heap->didAllocate(size);145 vm.heap.didAllocate(size); 144 146 m_space.m_capacity += size; 145 147 -
trunk/Source/JavaScriptCore/heap/CompleteSubspace.h
r227609 r227617 40 40 // FIXME: Currently subspaces speak of BlockDirectories as "allocators", but that's temporary. 41 41 // https://bugs.webkit.org/show_bug.cgi?id=181559 42 BlockDirectory*allocatorFor(size_t, AllocatorForMode) override;43 BlockDirectory*allocatorForNonVirtual(size_t, AllocatorForMode);42 Allocator allocatorFor(size_t, AllocatorForMode) override; 43 Allocator allocatorForNonVirtual(size_t, AllocatorForMode); 44 44 45 void* allocate( size_t, GCDeferralContext*, AllocationFailureMode) override;46 JS_EXPORT_PRIVATE void* allocateNonVirtual( size_t, GCDeferralContext*, AllocationFailureMode);45 void* allocate(VM&, size_t, GCDeferralContext*, AllocationFailureMode) override; 46 JS_EXPORT_PRIVATE void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode); 47 47 48 48 static ptrdiff_t offsetOfAllocatorForSizeStep() { return OBJECT_OFFSETOF(CompleteSubspace, m_allocatorForSizeStep); } 49 49 50 BlockDirectory** allocatorForSizeStep() { return &m_allocatorForSizeStep[0]; }50 Allocator* allocatorForSizeStep() { return &m_allocatorForSizeStep[0]; } 51 51 52 52 private: 53 BlockDirectory*allocatorForSlow(size_t);53 Allocator allocatorForSlow(size_t); 54 54 55 55 // These slow paths are concerned with large allocations and allocator creation. 56 void* allocateSlow( size_t, GCDeferralContext*, AllocationFailureMode);57 void* tryAllocateSlow( size_t, GCDeferralContext*);56 void* allocateSlow(VM&, size_t, GCDeferralContext*, AllocationFailureMode); 57 void* tryAllocateSlow(VM&, size_t, GCDeferralContext*); 58 58 59 std::array< BlockDirectory*, MarkedSpace::numSizeClasses> m_allocatorForSizeStep;59 std::array<Allocator, MarkedSpace::numSizeClasses> m_allocatorForSizeStep; 60 60 Vector<std::unique_ptr<BlockDirectory>> m_directories; 61 61 }; 62 62 63 ALWAYS_INLINE BlockDirectory*CompleteSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode mode)63 ALWAYS_INLINE Allocator CompleteSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode mode) 64 64 { 65 65 if (size <= MarkedSpace::largeCutoff) { 66 BlockDirectory*result = m_allocatorForSizeStep[MarkedSpace::sizeClassToIndex(size)];66 Allocator result = m_allocatorForSizeStep[MarkedSpace::sizeClassToIndex(size)]; 67 67 switch (mode) { 68 68 case AllocatorForMode::MustAlreadyHaveAllocator: … … 79 79 } 80 80 RELEASE_ASSERT(mode != AllocatorForMode::MustAlreadyHaveAllocator); 81 return nullptr;81 return Allocator(); 82 82 } 83 83 -
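As the CompleteSubspace hunks above show, allocate/allocateNonVirtual/allocateSlow now take `VM&` as their first argument, and the non-virtual path funnels through `Allocator::tryAllocate` with a slow-path functor. AllocatorInlines.h itself is not reproduced in this changeset, so the following is only a guessed shape of that dispatch, with every type mocked; the argument order and the lambda-based fallback are the parts taken from the diff.

```cpp
#include <cstddef>

// Mocked stand-ins for VM, GCDeferralContext, AllocationFailureMode and Allocator;
// only the argument order and the functor-based slow path mirror the changeset.
struct MockVM {};
struct MockDeferralContext {};
enum class MockFailureMode { Assert, ReturnNull };

struct MockAllocator {
    template<typename SlowPathFunc>
    void* tryAllocate(MockVM&, MockDeferralContext*, MockFailureMode, const SlowPathFunc& slowPath) const
    {
        // A real Allocator would first try the calling thread's LocalAllocator free list;
        // the functor covers the "no allocator for this size class yet" and GC cases.
        return slowPath();
    }
};

void* allocateNonVirtualSketch(MockVM& vm, size_t, MockDeferralContext* deferralContext, MockFailureMode failureMode)
{
    MockAllocator allocator; // allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists)
    return allocator.tryAllocate(vm, deferralContext, failureMode, [&]() -> void* {
        return nullptr; // allocateSlow(vm, size, deferralContext, failureMode) in the real code
    });
}
```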
trunk/Source/JavaScriptCore/heap/FreeList.h
r227609 r227617 58 58 59 59 class FreeList { 60 WTF_MAKE_NONCOPYABLE(FreeList);61 62 60 public: 63 61 FreeList(unsigned cellSize); -
trunk/Source/JavaScriptCore/heap/GCDeferralContext.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2016 Apple Inc. All rights reserved.2 * Copyright (C) 2016-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 30 30 class Heap; 31 31 class BlockDirectory; 32 class LocalAllocator; 32 33 33 34 class GCDeferralContext { 34 35 friend class Heap; 35 36 friend class BlockDirectory; 37 friend class LocalAllocator; 36 38 public: 37 39 inline GCDeferralContext(Heap&); -
trunk/Source/JavaScriptCore/heap/Heap.cpp
r227609 r227617 69 69 #include "SweepingScope.h" 70 70 #include "SynchronousStopTheWorldMutatorScheduler.h" 71 #include "ThreadLocalCacheLayout.h" 71 72 #include "TypeProfiler.h" 72 73 #include "TypeProfilerLog.h" … … 313 314 , m_threadLock(Box<Lock>::create()) 314 315 , m_threadCondition(AutomaticThreadCondition::create()) 316 , m_threadLocalCacheLayout(std::make_unique<ThreadLocalCacheLayout>()) 315 317 { 316 318 m_worldState.store(0); … … 447 449 448 450 m_arrayBuffers.lastChanceToFinalize(); 449 m_objectSpace.stopAllocating ();451 m_objectSpace.stopAllocatingForGood(); 450 452 m_objectSpace.lastChanceToFinalize(); 451 453 releaseDelayedReleasedObjects(); -
trunk/Source/JavaScriptCore/heap/Heap.h
r227609 r227617 85 85 class StopIfNecessaryTimer; 86 86 class SweepingScope; 87 class ThreadLocalCacheLayout; 87 88 class VM; 88 89 class WeakGCMapBase; … … 373 374 template<typename Func> 374 375 void forEachSlotVisitor(const Func&); 376 377 ThreadLocalCacheLayout& threadLocalCacheLayout() { return *m_threadLocalCacheLayout; } 375 378 376 379 private: … … 715 718 CurrentThreadState* m_currentThreadState { nullptr }; 716 719 WTF::Thread* m_currentThread { nullptr }; // It's OK if this becomes a dangling pointer. 720 721 std::unique_ptr<ThreadLocalCacheLayout> m_threadLocalCacheLayout; 717 722 }; 718 723 -
trunk/Source/JavaScriptCore/heap/IsoCellSet.h
r227609 r227617 26 26 #pragma once 27 27 28 #include "MarkedBlock.h" 28 29 #include <wtf/Bitmap.h> 29 30 #include <wtf/ConcurrentVector.h> 30 31 #include <wtf/FastBitVector.h> 32 #include <wtf/SentinelLinkedList.h> 33 #include <wtf/SharedTask.h> 31 34 32 35 namespace JSC { -
trunk/Source/JavaScriptCore/heap/IsoSubspace.cpp
r227609 r227617 27 27 #include "IsoSubspace.h" 28 28 29 #include "AllocatorInlines.h" 29 30 #include "BlockDirectoryInlines.h" 30 31 #include "IsoAlignedMemoryAllocator.h" 32 #include "LocalAllocatorInlines.h" 33 #include "ThreadLocalCacheInlines.h" 31 34 32 35 namespace JSC { … … 36 39 , m_size(size) 37 40 , m_directory(&heap, WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(size)) 41 , m_allocator(m_directory.allocator()) 38 42 , m_isoAlignedMemoryAllocator(std::make_unique<IsoAlignedMemoryAllocator>()) 39 43 { … … 51 55 } 52 56 53 BlockDirectory*IsoSubspace::allocatorFor(size_t size, AllocatorForMode mode)57 Allocator IsoSubspace::allocatorFor(size_t size, AllocatorForMode mode) 54 58 { 55 59 return allocatorForNonVirtual(size, mode); 56 60 } 57 61 58 void* IsoSubspace::allocate( size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)62 void* IsoSubspace::allocate(VM& vm, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 59 63 { 60 return allocateNonVirtual( size, deferralContext, failureMode);64 return allocateNonVirtual(vm, size, deferralContext, failureMode); 61 65 } 62 66 63 void* IsoSubspace::allocateNonVirtual( size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)67 void* IsoSubspace::allocateNonVirtual(VM& vm, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 64 68 { 65 69 RELEASE_ASSERT(size == this->size()); 66 void* result = m_ directory.allocate(deferralContext, failureMode);70 void* result = m_allocator.allocate(vm, deferralContext, failureMode); 67 71 return result; 68 72 } -
trunk/Source/JavaScriptCore/heap/IsoSubspace.h
r227609 r227617 42 42 size_t size() const { return m_size; } 43 43 44 BlockDirectory*allocatorFor(size_t, AllocatorForMode) override;45 BlockDirectory*allocatorForNonVirtual(size_t, AllocatorForMode);44 Allocator allocatorFor(size_t, AllocatorForMode) override; 45 Allocator allocatorForNonVirtual(size_t, AllocatorForMode); 46 46 47 void* allocate( size_t, GCDeferralContext*, AllocationFailureMode) override;48 JS_EXPORT_PRIVATE void* allocateNonVirtual( size_t, GCDeferralContext*, AllocationFailureMode);47 void* allocate(VM&, size_t, GCDeferralContext*, AllocationFailureMode) override; 48 JS_EXPORT_PRIVATE void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode); 49 49 50 50 private: … … 57 57 size_t m_size; 58 58 BlockDirectory m_directory; 59 Allocator m_allocator; 59 60 std::unique_ptr<IsoAlignedMemoryAllocator> m_isoAlignedMemoryAllocator; 60 61 SentinelLinkedList<IsoCellSet, BasicRawSentinelNode<IsoCellSet>> m_cellSets; 61 62 }; 62 63 63 inline BlockDirectory*IsoSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode)64 inline Allocator IsoSubspace::allocatorForNonVirtual(size_t size, AllocatorForMode) 64 65 { 65 66 RELEASE_ASSERT(size == this->size()); 66 return &m_directory;67 return m_allocator; 67 68 } 68 69 -
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
r227609 r227617 400 400 return; 401 401 402 RELEASE_ASSERT(!m_isFreeListed); 403 RELEASE_ASSERT(!isAllocated()); 402 if (m_isFreeListed) { 403 dataLog("FATAL: ", RawPointer(this), "->sweep: block is free-listed.\n"); 404 RELEASE_ASSERT_NOT_REACHED(); 405 } 406 407 if (isAllocated()) { 408 dataLog("FATAL: ", RawPointer(this), "->sweep: block is allocated.\n"); 409 RELEASE_ASSERT_NOT_REACHED(); 410 } 404 411 405 412 if (space()->isMarking()) -
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
r227609 r227617 303 303 } 304 304 305 void MarkedSpace::stopAllocatingForGood() 306 { 307 ASSERT(!isIterating()); 308 forEachDirectory( 309 [&] (BlockDirectory& directory) -> IterationStatus { 310 directory.stopAllocatingForGood(); 311 return IterationStatus::Continue; 312 }); 313 } 314 305 315 void MarkedSpace::prepareForConservativeScan() 306 316 { -
trunk/Source/JavaScriptCore/heap/MarkedSpace.h
r227609 r227617 95 95 Heap* heap() const { return m_heap; } 96 96 97 void lastChanceToFinalize(); // You must call stopAllocating before you call this.97 void lastChanceToFinalize(); // Must call stopAllocatingForGood first. 98 98 void freeMemory(); 99 99 … … 112 112 113 113 void stopAllocating(); 114 void stopAllocatingForGood(); 114 115 void resumeAllocating(); // If we just stopped allocation but we didn't do a collection, we need to resume allocation. 115 116 -
trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp
r227609 r227617 38 38 #include "JSString.h" 39 39 #include "JSCInlines.h" 40 #include "MarkingConstraintSolver.h" 40 41 #include "SlotVisitorInlines.h" 41 42 #include "StopIfNecessaryTimer.h" -
trunk/Source/JavaScriptCore/heap/SlotVisitor.h
r227609 r227617 32 32 #include <wtf/Forward.h> 33 33 #include <wtf/MonotonicTime.h> 34 #include <wtf/SharedTask.h> 34 35 #include <wtf/text/CString.h> 35 36 … … 42 43 class HeapSnapshotBuilder; 43 44 class MarkedBlock; 45 class MarkingConstraint; 44 46 class MarkingConstraintSolver; 45 47 class UnconditionalFinalizer; -
trunk/Source/JavaScriptCore/heap/Subspace.h
r227609 r227617 28 28 #include "AllocationFailureMode.h" 29 29 #include "AllocatorForMode.h" 30 #include "Allocator.h" 30 31 #include "MarkedBlock.h" 31 32 #include "MarkedSpace.h" … … 57 58 void destroy(VM&, JSCell*); 58 59 59 virtual BlockDirectory*allocatorFor(size_t, AllocatorForMode) = 0;60 virtual void* allocate( size_t, GCDeferralContext*, AllocationFailureMode) = 0;60 virtual Allocator allocatorFor(size_t, AllocatorForMode) = 0; 61 virtual void* allocate(VM&, size_t, GCDeferralContext*, AllocationFailureMode) = 0; 61 62 62 63 void prepareForAllocation(); -
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2011-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2011-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 582 582 } 583 583 #endif 584 585 void AssemblyHelpers::emitAllocateWithNonNullAllocator(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath) 586 { 587 // NOTE: This is carefully written so that we can call it while we disallow scratch 588 // register usage. 589 590 if (Options::forceGCSlowPaths()) { 591 slowPath.append(jump()); 592 return; 593 } 594 595 Jump popPath; 596 Jump done; 597 598 #if ENABLE(FAST_TLS_JIT) 599 loadFromTLSPtr(fastTLSOffsetForKey(WTF_GC_TLC_KEY), scratchGPR); 600 #else 601 loadPtr(&vm().threadLocalCacheData, scratchGPR); 602 #endif 603 if (!isX86()) 604 load32(Address(scratchGPR, ThreadLocalCache::offsetOfSizeInData()), resultGPR); 605 if (allocator.isConstant()) { 606 if (isX86()) 607 slowPath.append(branch32(BelowOrEqual, Address(scratchGPR, ThreadLocalCache::offsetOfSizeInData()), TrustedImm32(allocator.allocator().offset()))); 608 else 609 slowPath.append(branch32(BelowOrEqual, resultGPR, TrustedImm32(allocator.allocator().offset()))); 610 addPtr(TrustedImm32(ThreadLocalCache::offsetOfFirstAllocatorInData() + allocator.allocator().offset()), scratchGPR, allocatorGPR); 611 } else { 612 if (isX86()) 613 slowPath.append(branch32(BelowOrEqual, Address(scratchGPR, ThreadLocalCache::offsetOfSizeInData()), allocatorGPR)); 614 else 615 slowPath.append(branch32(BelowOrEqual, resultGPR, allocatorGPR)); 616 addPtr(TrustedImm32(ThreadLocalCache::offsetOfFirstAllocatorInData()), allocatorGPR); 617 addPtr(scratchGPR, allocatorGPR); 618 } 619 620 load32(Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfRemaining()), resultGPR); 621 popPath = branchTest32(Zero, resultGPR); 622 if (allocator.isConstant()) 623 add32(TrustedImm32(-allocator.allocator().cellSize(vm().heap)), resultGPR, scratchGPR); 624 else { 625 if (isX86()) { 626 move(resultGPR, scratchGPR); 627 sub32(Address(allocatorGPR, LocalAllocator::offsetOfCellSize()), scratchGPR); 628 } else { 629 load32(Address(allocatorGPR, LocalAllocator::offsetOfCellSize()), scratchGPR); 630 sub32(resultGPR, scratchGPR, scratchGPR); 631 } 632 } 633 negPtr(resultGPR); 634 store32(scratchGPR, Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfRemaining())); 635 Address payloadEndAddr = Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfPayloadEnd()); 636 if (isX86()) 637 addPtr(payloadEndAddr, resultGPR); 638 else { 639 loadPtr(payloadEndAddr, scratchGPR); 640 addPtr(scratchGPR, resultGPR); 641 } 642 643 done = jump(); 644 645 popPath.link(this); 646 647 loadPtr(Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfScrambledHead()), resultGPR); 648 if (isX86()) 649 xorPtr(Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfSecret()), resultGPR); 650 else { 651 loadPtr(Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfSecret()), scratchGPR); 652 xorPtr(scratchGPR, resultGPR); 653 } 654 slowPath.append(branchTestPtr(Zero, resultGPR)); 655 656 // The object is half-allocated: we have what we know is a fresh object, but 657 // it's still on the GC's free list. 
658 loadPtr(Address(resultGPR), scratchGPR); 659 storePtr(scratchGPR, Address(allocatorGPR, LocalAllocator::offsetOfFreeList() + FreeList::offsetOfScrambledHead())); 660 661 done.link(this); 662 } 663 664 void AssemblyHelpers::emitAllocate(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath) 665 { 666 if (allocator.isConstant()) { 667 if (!allocator.allocator()) { 668 slowPath.append(jump()); 669 return; 670 } 671 } 672 emitAllocateWithNonNullAllocator(resultGPR, allocator, allocatorGPR, scratchGPR, slowPath); 673 } 674 675 void AssemblyHelpers::emitAllocateVariableSized(GPRReg resultGPR, CompleteSubspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath) 676 { 677 static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two."); 678 679 unsigned stepShift = getLSBSet(MarkedSpace::sizeStep); 680 681 add32(TrustedImm32(MarkedSpace::sizeStep - 1), allocationSize, scratchGPR1); 682 urshift32(TrustedImm32(stepShift), scratchGPR1); 683 slowPath.append(branch32(Above, scratchGPR1, TrustedImm32(MarkedSpace::largeCutoff >> stepShift))); 684 move(TrustedImmPtr(subspace.allocatorForSizeStep() - 1), scratchGPR2); 685 load32(BaseIndex(scratchGPR2, scratchGPR1, TimesFour), scratchGPR1); 686 687 emitAllocate(resultGPR, JITAllocator::variable(), scratchGPR1, scratchGPR2, slowPath); 688 } 584 689 585 690 void AssemblyHelpers::restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(EntryFrame*& topEntryFrame) -
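To make the new inline allocation path easier to follow, here is the logic that emitAllocateWithNonNullAllocator emits, rewritten as plain C++. This is a hedged reading of the assembly above, not code from the changeset: the struct layouts are mocked, and the scrambled free-list head is shown simply as an XOR with a per-list secret.

```cpp
#include <cstdint>

// Hypothetical mocks of the structures the emitted code pokes at.
struct MockFreeList {
    uintptr_t scrambledHead; // head pointer XORed with `secret`
    uintptr_t secret;
    char* payloadEnd;        // end of the bump-allocation payload
    int32_t remaining;       // bytes left for bump allocation
};
struct MockLocalAllocator {
    MockFreeList freeList;
    uint32_t cellSize;
};
struct MockTLCData {
    uint32_t size;
    char* firstAllocator;
};

// Roughly what the JIT fast path does for a constant Allocator offset.
inline void* inlineAllocateSketch(MockTLCData* tlc, uint32_t allocatorOffset, bool& tookSlowPath)
{
    tookSlowPath = false;
    if (allocatorOffset >= tlc->size) { // branch32(BelowOrEqual, sizeInData, offset)
        tookSlowPath = true;
        return nullptr;
    }
    auto* local = reinterpret_cast<MockLocalAllocator*>(tlc->firstAllocator + allocatorOffset);

    // Bump path: carve the next cell off the end of the current block's payload.
    if (int32_t remaining = local->freeList.remaining) {
        local->freeList.remaining = remaining - static_cast<int32_t>(local->cellSize);
        return local->freeList.payloadEnd - remaining;
    }

    // Pop path: unscramble the free-list head; an empty list means slow path.
    uintptr_t head = local->freeList.scrambledHead ^ local->freeList.secret;
    if (!head) {
        tookSlowPath = true;
        return nullptr;
    }
    // The popped cell's first word links to the rest of the free list;
    // store it back as the new (still scrambled) head.
    local->freeList.scrambledHead = *reinterpret_cast<uintptr_t*>(head);
    return reinterpret_cast<void*>(head);
}
```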
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2011-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2011-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 33 33 #include "Heap.h" 34 34 #include "InlineCallFrame.h" 35 #include "JITAllocator.h" 35 36 #include "JITCode.h" 36 37 #include "MacroAssembler.h" … … 60 61 61 62 CodeBlock* codeBlock() { return m_codeBlock; } 63 VM& vm() { return *m_codeBlock->vm(); } 62 64 AssemblerType_T& assembler() { return m_assembler; } 63 65 … … 1492 1494 // Call this if you know that the value held in allocatorGPR is non-null. This DOES NOT mean 1493 1495 // that allocator is non-null; allocator can be null as a signal that we don't know what the 1494 // value of allocatorGPR is. 1495 void emitAllocateWithNonNullAllocator(GPRReg resultGPR, BlockDirectory* allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath) 1496 { 1497 // NOTE: This is carefully written so that we can call it while we disallow scratch 1498 // register usage. 1499 1500 if (Options::forceGCSlowPaths()) { 1501 slowPath.append(jump()); 1502 return; 1503 } 1504 1505 Jump popPath; 1506 Jump done; 1507 1508 load32(Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfRemaining()), resultGPR); 1509 popPath = branchTest32(Zero, resultGPR); 1510 if (allocator) 1511 add32(TrustedImm32(-allocator->cellSize()), resultGPR, scratchGPR); 1512 else { 1513 if (isX86()) { 1514 move(resultGPR, scratchGPR); 1515 sub32(Address(allocatorGPR, BlockDirectory::offsetOfCellSize()), scratchGPR); 1516 } else { 1517 load32(Address(allocatorGPR, BlockDirectory::offsetOfCellSize()), scratchGPR); 1518 sub32(resultGPR, scratchGPR, scratchGPR); 1519 } 1520 } 1521 negPtr(resultGPR); 1522 store32(scratchGPR, Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfRemaining())); 1523 Address payloadEndAddr = Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfPayloadEnd()); 1524 if (isX86()) 1525 addPtr(payloadEndAddr, resultGPR); 1526 else { 1527 loadPtr(payloadEndAddr, scratchGPR); 1528 addPtr(scratchGPR, resultGPR); 1529 } 1530 1531 done = jump(); 1532 1533 popPath.link(this); 1534 1535 loadPtr(Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfScrambledHead()), resultGPR); 1536 if (isX86()) 1537 xorPtr(Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfSecret()), resultGPR); 1538 else { 1539 loadPtr(Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfSecret()), scratchGPR); 1540 xorPtr(scratchGPR, resultGPR); 1541 } 1542 slowPath.append(branchTestPtr(Zero, resultGPR)); 1543 1544 // The object is half-allocated: we have what we know is a fresh object, but 1545 // it's still on the GC's free list. 1546 loadPtr(Address(resultGPR), scratchGPR); 1547 storePtr(scratchGPR, Address(allocatorGPR, BlockDirectory::offsetOfFreeList() + FreeList::offsetOfScrambledHead())); 1548 1549 done.link(this); 1550 } 1551 1552 void emitAllocate(GPRReg resultGPR, BlockDirectory* allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath) 1553 { 1554 if (!allocator) 1555 slowPath.append(branchTestPtr(Zero, allocatorGPR)); 1556 emitAllocateWithNonNullAllocator(resultGPR, allocator, allocatorGPR, scratchGPR, slowPath); 1557 } 1496 // value of allocatorGPR is. Additionally, if the allocator is not null, then there is no need 1497 // to populate allocatorGPR - this code will ignore the contents of allocatorGPR. 
1498 void emitAllocateWithNonNullAllocator(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath); 1499 1500 void emitAllocate(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, GPRReg scratchGPR, JumpList& slowPath); 1558 1501 1559 1502 template<typename StructureType> 1560 void emitAllocateJSCell(GPRReg resultGPR, BlockDirectory*allocator, GPRReg allocatorGPR, StructureType structure, GPRReg scratchGPR, JumpList& slowPath)1503 void emitAllocateJSCell(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, StructureType structure, GPRReg scratchGPR, JumpList& slowPath) 1561 1504 { 1562 1505 emitAllocate(resultGPR, allocator, allocatorGPR, scratchGPR, slowPath); … … 1565 1508 1566 1509 template<typename StructureType, typename StorageType, typename MaskType> 1567 void emitAllocateJSObject(GPRReg resultGPR, BlockDirectory*allocator, GPRReg allocatorGPR, StructureType structure, StorageType storage, MaskType mask, GPRReg scratchGPR, JumpList& slowPath)1510 void emitAllocateJSObject(GPRReg resultGPR, const JITAllocator& allocator, GPRReg allocatorGPR, StructureType structure, StorageType storage, MaskType mask, GPRReg scratchGPR, JumpList& slowPath) 1568 1511 { 1569 1512 emitAllocateJSCell(resultGPR, allocator, allocatorGPR, structure, scratchGPR, slowPath); … … 1577 1520 GPRReg scratchGPR2, JumpList& slowPath, size_t size) 1578 1521 { 1579 BlockDirectory* allocator = subspaceFor<ClassType>(vm)->allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists); 1580 if (!allocator) { 1581 slowPath.append(jump()); 1582 return; 1583 } 1584 move(TrustedImmPtr(allocator), scratchGPR1); 1585 emitAllocateJSObject(resultGPR, allocator, scratchGPR1, structure, storage, mask, scratchGPR2, slowPath); 1522 Allocator allocator = subspaceFor<ClassType>(vm)->allocatorForNonVirtual(size, AllocatorForMode::AllocatorIfExists); 1523 emitAllocateJSObject(resultGPR, JITAllocator::constant(allocator), scratchGPR1, structure, storage, mask, scratchGPR2, slowPath); 1586 1524 } 1587 1525 … … 1594 1532 // allocationSize can be aliased with any of the other input GPRs. If it's not aliased then it 1595 1533 // won't be clobbered. 1596 void emitAllocateVariableSized(GPRReg resultGPR, CompleteSubspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath) 1597 { 1598 static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two."); 1599 1600 unsigned stepShift = getLSBSet(MarkedSpace::sizeStep); 1601 1602 add32(TrustedImm32(MarkedSpace::sizeStep - 1), allocationSize, scratchGPR1); 1603 urshift32(TrustedImm32(stepShift), scratchGPR1); 1604 slowPath.append(branch32(Above, scratchGPR1, TrustedImm32(MarkedSpace::largeCutoff >> stepShift))); 1605 move(TrustedImmPtr(subspace.allocatorForSizeStep() - 1), scratchGPR2); 1606 loadPtr(BaseIndex(scratchGPR2, scratchGPR1, timesPtr()), scratchGPR1); 1607 1608 emitAllocate(resultGPR, nullptr, scratchGPR1, scratchGPR2, slowPath); 1609 } 1534 void emitAllocateVariableSized(GPRReg resultGPR, CompleteSubspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath); 1610 1535 1611 1536 template<typename ClassType, typename StructureType> -
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2009-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2009-2018 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> 4 4 * … … 82 82 Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); 83 83 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); 84 BlockDirectory*allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists);84 Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 85 85 86 86 RegisterID resultReg = regT0; … … 88 88 RegisterID scratchReg = regT2; 89 89 90 move(TrustedImmPtr(allocator), allocatorReg); 91 if (allocator) 92 addSlowCase(Jump()); 93 JumpList slowCases; 94 auto butterfly = TrustedImmPtr(nullptr); 95 auto mask = TrustedImm32(0); 96 emitAllocateJSObject(resultReg, allocator, allocatorReg, TrustedImmPtr(structure), butterfly, mask, scratchReg, slowCases); 97 emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); 98 addSlowCase(slowCases); 99 emitPutVirtualRegister(currentInstruction[1].u.operand); 90 if (!allocator) 91 addSlowCase(jump()); 92 else { 93 JumpList slowCases; 94 auto butterfly = TrustedImmPtr(nullptr); 95 auto mask = TrustedImm32(0); 96 emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, mask, scratchReg, slowCases); 97 emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); 98 addSlowCase(slowCases); 99 emitPutVirtualRegister(currentInstruction[1].u.operand); 100 } 100 101 } 101 102 … … 767 768 loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); 768 769 addSlowCase(branchTestPtr(Zero, rareDataReg)); 769 load Ptr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg);770 load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); 770 771 loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); 771 addSlowCase(branch TestPtr(Zero, allocatorReg));772 addSlowCase(branch32(Equal, allocatorReg, TrustedImm32(Allocator().offset()))); 772 773 773 774 loadPtr(cachedFunction, cachedFunctionReg); … … 779 780 auto butterfly = TrustedImmPtr(nullptr); 780 781 auto mask = TrustedImm32(0); 781 emitAllocateJSObject(resultReg, nullptr, allocatorReg, structureReg, butterfly, mask, scratchReg, slowCases);782 emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, mask, scratchReg, slowCases); 782 783 emitGetVirtualRegister(callee, scratchReg); 783 784 loadPtr(Address(scratchReg, JSFunction::offsetOfRareData()), scratchReg); -
trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2009-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2009-2018 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2010 Patrick Gansterer <paroga@paroga.com> 4 4 * … … 80 80 Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure(); 81 81 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity()); 82 BlockDirectory*allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists);82 Allocator allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorForNonVirtual(allocationSize, AllocatorForMode::AllocatorIfExists); 83 83 84 84 RegisterID resultReg = returnValueGPR; … … 86 86 RegisterID scratchReg = regT3; 87 87 88 move(TrustedImmPtr(allocator), allocatorReg); 89 if (allocator) 90 addSlowCase(Jump()); 91 JumpList slowCases; 92 auto butterfly = TrustedImmPtr(nullptr); 93 auto mask = TrustedImm32(0); 94 emitAllocateJSObject(resultReg, allocator, allocatorReg, TrustedImmPtr(structure), butterfly, mask, scratchReg, slowCases); 95 emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); 96 addSlowCase(slowCases); 97 emitStoreCell(currentInstruction[1].u.operand, resultReg); 88 if (!allocator) 89 addSlowCase(jump()); 90 else { 91 JumpList slowCases; 92 auto butterfly = TrustedImmPtr(nullptr); 93 auto mask = TrustedImm32(0); 94 emitAllocateJSObject(resultReg, JITAllocator::constant(allocator), allocatorReg, TrustedImmPtr(structure), butterfly, mask, scratchReg, slowCases); 95 emitInitializeInlineStorage(resultReg, structure->inlineCapacity()); 96 addSlowCase(slowCases); 97 emitStoreCell(currentInstruction[1].u.operand, resultReg); 98 } 98 99 } 99 100 … … 857 858 loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); 858 859 addSlowCase(branchTestPtr(Zero, rareDataReg)); 859 load Ptr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg);860 load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); 860 861 loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); 861 addSlowCase(branch TestPtr(Zero, allocatorReg));862 addSlowCase(branch32(Equal, allocatorReg, TrustedImm32(Allocator().offset()))); 862 863 863 864 loadPtr(cachedFunction, cachedFunctionReg); … … 869 870 auto butterfly = TrustedImmPtr(nullptr); 870 871 auto mask = TrustedImm32(0); 871 emitAllocateJSObject(resultReg, nullptr, allocatorReg, structureReg, butterfly, mask, scratchReg, slowCases);872 emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, mask, scratchReg, slowCases); 872 873 addSlowCase(slowCases); 873 874 emitStoreCell(currentInstruction[1].u.operand, resultReg); -
trunk/Source/JavaScriptCore/runtime/ButterflyInlines.h
r227609 r227617 80 80 { 81 81 size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes); 82 void* base = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( size, nullptr, AllocationFailureMode::Assert);82 void* base = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, size, nullptr, AllocationFailureMode::Assert); 83 83 Butterfly* result = fromBase(base, preCapacity, propertyCapacity); 84 84 return result; … … 88 88 { 89 89 size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes); 90 void* base = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( size, nullptr, AllocationFailureMode::ReturnNull);90 void* base = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, size, nullptr, AllocationFailureMode::ReturnNull); 91 91 if (!base) 92 92 return nullptr; … … 166 166 size_t oldSize = totalSize(0, propertyCapacity, hadIndexingHeader, oldIndexingPayloadSizeInBytes); 167 167 size_t newSize = totalSize(0, propertyCapacity, true, newIndexingPayloadSizeInBytes); 168 void* newBase = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( newSize, nullptr, AllocationFailureMode::ReturnNull);168 void* newBase = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, newSize, nullptr, AllocationFailureMode::ReturnNull); 169 169 if (!newBase) 170 170 return nullptr; -
trunk/Source/JavaScriptCore/runtime/DirectArguments.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2015-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2015-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 119 119 putDirect(vm, vm.propertyNames->iteratorSymbol, globalObject()->arrayProtoValuesFunction(), static_cast<unsigned>(PropertyAttribute::DontEnum)); 120 120 121 void* backingStore = vm.gigacageAuxiliarySpace(m_mappedArguments.kind).allocateNonVirtual( mappedArgumentsSize(), nullptr, AllocationFailureMode::Assert);121 void* backingStore = vm.gigacageAuxiliarySpace(m_mappedArguments.kind).allocateNonVirtual(vm, mappedArgumentsSize(), nullptr, AllocationFailureMode::Assert); 122 122 bool* overrides = static_cast<bool*>(backingStore); 123 123 m_mappedArguments.set(vm, this, overrides); -
trunk/Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2015-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2015-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 264 264 265 265 if (argsLength) { 266 void* backingStore = vm.gigacageAuxiliarySpace(m_modifiedArgumentsDescriptor.kind).allocateNonVirtual( WTF::roundUpToMultipleOf<8>(argsLength), nullptr, AllocationFailureMode::Assert);266 void* backingStore = vm.gigacageAuxiliarySpace(m_modifiedArgumentsDescriptor.kind).allocateNonVirtual(vm, WTF::roundUpToMultipleOf<8>(argsLength), nullptr, AllocationFailureMode::Assert); 267 267 bool* modifiedArguments = static_cast<bool*>(backingStore); 268 268 m_modifiedArgumentsDescriptor.set(vm, this, modifiedArguments); -
trunk/Source/JavaScriptCore/runtime/HashMapImpl.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2016-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2016-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 208 208 auto scope = DECLARE_THROW_SCOPE(vm); 209 209 size_t allocationSize = HashMapBuffer::allocationSize(capacity); 210 void* data = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( allocationSize, nullptr, AllocationFailureMode::ReturnNull);210 void* data = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, allocationSize, nullptr, AllocationFailureMode::ReturnNull); 211 211 if (!data) { 212 212 throwOutOfMemoryError(exec, scope); -
trunk/Source/JavaScriptCore/runtime/JSArray.cpp
r227609 r227617 1 1 /* 2 2 * Copyright (C) 1999-2000 Harri Porten (porten@kde.org) 3 * Copyright (C) 2003-201 7Apple Inc. All rights reserved.3 * Copyright (C) 2003-2018 Apple Inc. All rights reserved. 4 4 * Copyright (C) 2003 Peter Kelly (pmk@post.com) 5 5 * Copyright (C) 2006 Alexey Proskuryakov (ap@nypop.com) … … 82 82 83 83 unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, initialLength); 84 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)), deferralContext, AllocationFailureMode::ReturnNull); 84 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( 85 vm, 86 Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)), 87 deferralContext, AllocationFailureMode::ReturnNull); 85 88 if (UNLIKELY(!temp)) 86 89 return nullptr; … … 99 102 static const unsigned indexBias = 0; 100 103 unsigned vectorLength = ArrayStorage::optimalVectorLength(indexBias, structure, initialLength); 101 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(Butterfly::totalSize(indexBias, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)), deferralContext, AllocationFailureMode::ReturnNull); 104 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( 105 vm, 106 Butterfly::totalSize(indexBias, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)), 107 deferralContext, AllocationFailureMode::ReturnNull); 102 108 if (UNLIKELY(!temp)) 103 109 return nullptr; … … 369 375 } else { 370 376 size_t newSize = Butterfly::totalSize(0, propertyCapacity, true, ArrayStorage::sizeFor(desiredCapacity)); 371 newAllocBase = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(newSize, nullptr, AllocationFailureMode::ReturnNull); 377 newAllocBase = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( 378 vm, newSize, nullptr, AllocationFailureMode::ReturnNull); 372 379 if (!newAllocBase) 373 380 return false; -
trunk/Source/JavaScriptCore/runtime/JSArray.h
r227609 r227617 1 1 /* 2 2 * Copyright (C) 1999-2000 Harri Porten (porten@kde.org) 3 * Copyright (C) 2003-201 7Apple Inc. All rights reserved.3 * Copyright (C) 2003-2018 Apple Inc. All rights reserved. 4 4 * 5 5 * This library is free software; you can redistribute it and/or … … 239 239 unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, vectorLengthHint); 240 240 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( 241 vm, 241 242 Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)), 242 243 nullptr, AllocationFailureMode::ReturnNull); -
trunk/Source/JavaScriptCore/runtime/JSArrayBufferView.cpp
r227609 r227617 67 67 size_t size = sizeOf(length, elementSize); 68 68 if (size) { 69 temp = vm.primitiveGigacageAuxiliarySpace.allocateNonVirtual( size, nullptr, AllocationFailureMode::ReturnNull);69 temp = vm.primitiveGigacageAuxiliarySpace.allocateNonVirtual(vm, size, nullptr, AllocationFailureMode::ReturnNull); 70 70 if (!temp) 71 71 return; -
trunk/Source/JavaScriptCore/runtime/JSCellInlines.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2012-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2012-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 146 146 ALWAYS_INLINE void* tryAllocateCellHelper(Heap& heap, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode) 147 147 { 148 VM& vm = *heap.vm(); 148 149 ASSERT(deferralContext || !DisallowGC::isInEffectOnCurrentThread()); 149 150 ASSERT(size >= sizeof(T)); 150 JSCell* result = static_cast<JSCell*>(subspaceFor<T>( *heap.vm())->allocateNonVirtual(size, deferralContext, failureMode));151 JSCell* result = static_cast<JSCell*>(subspaceFor<T>(vm)->allocateNonVirtual(vm, size, deferralContext, failureMode)); 151 152 if (failureMode == AllocationFailureMode::ReturnNull && !result) 152 153 return nullptr; 153 154 #if ENABLE(GC_VALIDATION) 154 ASSERT(! heap.vm()->isInitializingObject());155 heap.vm()->setInitializingObjectClass(T::info());155 ASSERT(!vm.isInitializingObject()); 156 vm.setInitializingObjectClass(T::info()); 156 157 #endif 157 158 result->clearStructure(); -
trunk/Source/JavaScriptCore/runtime/JSGlobalObject.cpp
r227609 r227617 336 336 } 337 337 338 JSGlobalObject::JSGlobalObject(VM& vm, Structure* structure, const GlobalObjectMethodTable* globalObjectMethodTable )338 JSGlobalObject::JSGlobalObject(VM& vm, Structure* structure, const GlobalObjectMethodTable* globalObjectMethodTable, RefPtr<ThreadLocalCache> threadLocalCache) 339 339 : Base(vm, structure, 0) 340 340 , m_vm(vm) … … 354 354 , m_runtimeFlags() 355 355 , m_globalObjectMethodTable(globalObjectMethodTable ? globalObjectMethodTable : &s_globalObjectMethodTable) 356 , m_threadLocalCache(threadLocalCache ? WTFMove(threadLocalCache) : vm.defaultThreadLocalCache) 356 357 { 357 358 } -
trunk/Source/JavaScriptCore/runtime/JSGlobalObject.h
r227609 r227617 492 492 493 493 protected: 494 JS_EXPORT_PRIVATE explicit JSGlobalObject(VM&, Structure*, const GlobalObjectMethodTable* = 0 );494 JS_EXPORT_PRIVATE explicit JSGlobalObject(VM&, Structure*, const GlobalObjectMethodTable* = 0, RefPtr<ThreadLocalCache> = nullptr); 495 495 496 496 JS_EXPORT_PRIVATE void finishCreation(VM&); … … 894 894 void setWrapperMap(JSWrapperMap* map) { m_wrapperMap = map; } 895 895 #endif 896 897 ThreadLocalCache& threadLocalCache() const { return *m_threadLocalCache.get(); } 896 898 897 899 protected: … … 925 927 RetainPtr<JSWrapperMap> m_wrapperMap; 926 928 #endif 929 930 RefPtr<ThreadLocalCache> m_threadLocalCache; 927 931 }; 928 932 -
trunk/Source/JavaScriptCore/runtime/JSLock.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2005-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2005-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * This library is free software; you can redistribute it and/or … … 29 29 #include "MachineStackMarker.h" 30 30 #include "SamplingProfiler.h" 31 #include "ThreadLocalCacheInlines.h" 31 32 #include "WasmMachineThreads.h" 32 33 #include <thread> … … 148 149 m_vm->setLastStackTop(thread.savedLastStackTop()); 149 150 ASSERT(thread.stack().contains(m_vm->lastStackTop())); 150 151 152 m_vm->defaultThreadLocalCache->install(*m_vm); 153 151 154 m_vm->heap.machineThreads().addCurrentThread(); 152 155 #if ENABLE(WEBASSEMBLY) -
trunk/Source/JavaScriptCore/runtime/Options.h
r227609 r227617 237 237 v(bool, useBumpAllocator, true, Normal, nullptr) \ 238 238 v(bool, stealEmptyBlocksFromOtherAllocators, true, Normal, nullptr) \ 239 v(bool, tradeDestructorBlocks, true, Normal, nullptr) \ 239 240 v(bool, eagerlyUpdateTopCallFrame, false, Normal, nullptr) \ 240 241 \ -
trunk/Source/JavaScriptCore/runtime/RegExpMatchesArray.h
r227609 r227617 1 1 /* 2 * Copyright (C) 2008-201 7Apple Inc. All Rights Reserved.2 * Copyright (C) 2008-2018 Apple Inc. All Rights Reserved. 3 3 * 4 4 * This library is free software; you can redistribute it and/or … … 43 43 JSGlobalObject* globalObject = structure->globalObject(); 44 44 bool createUninitialized = globalObject->isOriginalArrayStructure(structure); 45 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual( Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)), deferralContext, AllocationFailureMode::ReturnNull);45 void* temp = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)), deferralContext, AllocationFailureMode::ReturnNull); 46 46 if (UNLIKELY(!temp)) 47 47 return nullptr; -
trunk/Source/JavaScriptCore/runtime/VM.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2008-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2008-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 122 122 #include "StructureInlines.h" 123 123 #include "TestRunnerUtils.h" 124 #include "ThreadLocalCacheInlines.h" 124 125 #include "ThunkGenerators.h" 125 126 #include "TypeProfiler.h" … … 307 308 setLastStackTop(stack.origin()); 308 309 310 defaultThreadLocalCache = ThreadLocalCache::create(heap); 311 defaultThreadLocalCache->install(*this); 312 309 313 // Need to be careful to keep everything consistent here 310 314 JSLockHolder lock(this); … … 495 499 m_apiLock->willDestroyVM(this); 496 500 heap.lastChanceToFinalize(); 501 502 #if !ENABLE(FAST_TLS_JIT) 503 ThreadLocalCache::destructor(threadLocalCacheData); 504 #endif 497 505 498 506 delete interpreter; -
trunk/Source/JavaScriptCore/runtime/VM.h
r227609 r227617 55 55 #include "VMEntryRecord.h" 56 56 #include "VMTraps.h" 57 #include "ThreadLocalCache.h" 57 58 #include "WasmContext.h" 58 59 #include "Watchpoint.h" … … 657 658 JSObject* stringRecursionCheckFirstObject { nullptr }; 658 659 HashSet<JSObject*> stringRecursionCheckVisitedObjects; 660 661 #if !ENABLE(FAST_TLS_JIT) 662 ThreadLocalCache::Data* threadLocalCacheData { nullptr }; 663 #endif 664 RefPtr<ThreadLocalCache> defaultThreadLocalCache; 659 665 660 666 LocalTimeOffsetCache localTimeOffsetCache; -
trunk/Source/JavaScriptCore/runtime/VMEntryScope.cpp
r227609 r227617 1 1 /* 2 * Copyright (C) 2013-201 7Apple Inc. All rights reserved.2 * Copyright (C) 2013-2018 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 30 30 #include "Options.h" 31 31 #include "SamplingProfiler.h" 32 #include "ThreadLocalCacheInlines.h" 32 33 #include "VM.h" 33 34 #include "Watchdog.h" … … 41 42 , m_globalObject(globalObject) 42 43 { 44 globalObject->threadLocalCache().install(vm); 43 45 ASSERT(!DisallowVMReentry::isInEffectOnCurrentThread()); 44 46 ASSERT(Thread::current().stack().isGrowingDownward()); -
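Taken together, the VM.cpp, JSLock.cpp and VMEntryScope.cpp hunks show when a ThreadLocalCache becomes the current one: the VM creates and installs a default cache at construction, the JSLock re-installs that default when a thread acquires the VM, and entering a global object installs that object's cache. Below is a hedged sketch of that switching, with the fast-TLS slot (WTF_GC_TLC_KEY) versus the VM::threadLocalCacheData field mocked as a single thread_local pointer; the real ThreadLocalCache::install presumably publishes its Data pointer into one of those two locations, which is what the JIT reads.

```cpp
#include <memory>

// Hypothetical stand-ins for ThreadLocalCache and VM.
struct MockTLC {
    struct Data { } data;
    void install() { s_current = &data; } // publish this cache as the thread's current one
    static thread_local Data* s_current;
};
thread_local MockTLC::Data* MockTLC::s_current = nullptr;

struct MockVM {
    std::shared_ptr<MockTLC> defaultThreadLocalCache = std::make_shared<MockTLC>();
};

// JSLock::didAcquireLock analogue: fall back to the VM's default cache.
void onAcquireLock(MockVM& vm) { vm.defaultThreadLocalCache->install(); }

// VMEntryScope analogue: the global object being entered decides which cache is current.
void onEnterGlobalObject(MockTLC& globalObjectCache) { globalObjectCache.install(); }
```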
trunk/Source/WTF/ChangeLog
r227609 r227617 1 2018-01-25 Filip Pizlo <fpizlo@apple.com> 2 3 JSC GC should support TLCs (thread local caches) 4 https://bugs.webkit.org/show_bug.cgi?id=181559 5 6 Reviewed by Mark Lam and Saam Barati. 7 8 * wtf/Bitmap.h: Just fixing a compile error. 9 1 10 2018-01-25 Commit Queue <commit-queue@webkit.org> 2 11 -
trunk/Source/WTF/wtf/Bitmap.h
r227609 r227617 22 22 #include <array> 23 23 #include <wtf/Atomics.h> 24 #include <wtf/HashFunctions.h> 24 25 #include <wtf/StdLibExtras.h> 25 26 #include <stdint.h>