Changeset 212778 in webkit
- Timestamp: Feb 21, 2017 4:58:15 PM
- Location: trunk
- Files: 14 added, 3 deleted, 41 edited
trunk/JSTests/ChangeLog
r212717 → r212778

2017-02-20  Filip Pizlo  <fpizlo@apple.com>

        The collector thread should only start when the mutator doesn't have heap access
        https://bugs.webkit.org/show_bug.cgi?id=167737

        Reviewed by Keith Miller.

        Add versions of splay that flash heap access, to simulate what might happen if a third-party app
        was running concurrent GC. In this case, we might actually start the collector thread.

        * stress/splay-flash-access-1ms.js: Added.
        (performance.now):
        (this.Setup.setup.setup):
        (this.TearDown.tearDown.tearDown):
        (Benchmark):
        (BenchmarkResult):
        (BenchmarkResult.prototype.valueOf):
        (BenchmarkSuite):
        (alert):
        (Math.random):
        (BenchmarkSuite.ResetRNG):
        (RunStep):
        (BenchmarkSuite.RunSuites):
        (BenchmarkSuite.CountBenchmarks):
        (BenchmarkSuite.GeometricMean):
        (BenchmarkSuite.GeometricMeanTime):
        (BenchmarkSuite.AverageAbovePercentile):
        (BenchmarkSuite.GeometricMeanLatency):
        (BenchmarkSuite.FormatScore):
        (BenchmarkSuite.prototype.NotifyStep):
        (BenchmarkSuite.prototype.NotifyResult):
        (BenchmarkSuite.prototype.NotifyError):
        (BenchmarkSuite.prototype.RunSingleBenchmark):
        (RunNextSetup):
        (RunNextBenchmark):
        (RunNextTearDown):
        (BenchmarkSuite.prototype.RunStep):
        (GeneratePayloadTree):
        (GenerateKey):
        (SplayUpdateStats):
        (InsertNewNode):
        (SplaySetup):
        (SplayTearDown):
        (SplayRun):
        (SplayTree):
        (SplayTree.prototype.isEmpty):
        (SplayTree.prototype.insert):
        (SplayTree.prototype.remove):
        (SplayTree.prototype.find):
        (SplayTree.prototype.findMax):
        (SplayTree.prototype.findGreatestLessThan):
        (SplayTree.prototype.exportKeys):
        (SplayTree.prototype.splay_):
        (SplayTree.Node):
        (SplayTree.Node.prototype.traverse_):
        (jscSetUp):
        (jscTearDown):
        (jscRun):
        (averageAbovePercentile):
        (printPercentile):
        * stress/splay-flash-access.js: Added.
        (performance.now):
        (this.Setup.setup.setup):
        (this.TearDown.tearDown.tearDown):
        (Benchmark):
        (BenchmarkResult):
        (BenchmarkResult.prototype.valueOf):
        (BenchmarkSuite):
        (alert):
        (Math.random):
        (BenchmarkSuite.ResetRNG):
        (RunStep):
        (BenchmarkSuite.RunSuites):
        (BenchmarkSuite.CountBenchmarks):
        (BenchmarkSuite.GeometricMean):
        (BenchmarkSuite.GeometricMeanTime):
        (BenchmarkSuite.AverageAbovePercentile):
        (BenchmarkSuite.GeometricMeanLatency):
        (BenchmarkSuite.FormatScore):
        (BenchmarkSuite.prototype.NotifyStep):
        (BenchmarkSuite.prototype.NotifyResult):
        (BenchmarkSuite.prototype.NotifyError):
        (BenchmarkSuite.prototype.RunSingleBenchmark):
        (RunNextSetup):
        (RunNextBenchmark):
        (RunNextTearDown):
        (BenchmarkSuite.prototype.RunStep):
        (GeneratePayloadTree):
        (GenerateKey):
        (SplayUpdateStats):
        (InsertNewNode):
        (SplaySetup):
        (SplayTearDown):
        (SplayRun):
        (SplayTree):
        (SplayTree.prototype.isEmpty):
        (SplayTree.prototype.insert):
        (SplayTree.prototype.remove):
        (SplayTree.prototype.find):
        (SplayTree.prototype.findMax):
        (SplayTree.prototype.findGreatestLessThan):
        (SplayTree.prototype.exportKeys):
        (SplayTree.prototype.splay_):
        (SplayTree.Node):
        (SplayTree.Node.prototype.traverse_):
        (jscSetUp):
        (jscTearDown):
        (jscRun):
        (averageAbovePercentile):
        (printPercentile):

2017-02-21  Ryan Haddad  <ryanhaddad@apple.com>
trunk/Source/JavaScriptCore/CMakeLists.txt
r212775 → r212778
      heap/CodeBlockSet.cpp
      heap/CollectionScope.cpp
+     heap/CollectorPhase.cpp
      heap/ConservativeRoots.cpp
      heap/DeferGC.cpp
      ...
      heap/FreeList.cpp
      heap/GCActivityCallback.cpp
+     heap/GCConductor.cpp
      heap/GCLogging.cpp
      heap/HandleSet.cpp
      ...
      heap/HeapSnapshot.cpp
      heap/HeapSnapshotBuilder.cpp
-     heap/HeapStatistics.cpp
      heap/HeapTimer.cpp
      heap/HeapVerifier.cpp
trunk/Source/JavaScriptCore/ChangeLog
r212775 → r212778

2017-02-20  Filip Pizlo  <fpizlo@apple.com>

        The collector thread should only start when the mutator doesn't have heap access
        https://bugs.webkit.org/show_bug.cgi?id=167737

        Reviewed by Keith Miller.

        This turns the collector thread's workflow into a state machine, so that the mutator thread can
        run it directly. This reduces the amount of synchronization we do with the collector thread, and
        means that most apps will never start the collector thread. The collector thread will still start
        when we need to finish collecting and we don't have heap access.

        In this new world, "stopping the world" means relinquishing control of collection to the mutator.
        This means tracking who is conducting collection. I use the GCConductor enum to say who is
        conducting. It's either GCConductor::Mutator or GCConductor::Collector. I use the term "conn" to
        refer to the concept of conducting (having the conn, relinquishing the conn, taking the conn).
        So, stopping the world means giving the mutator the conn. Releasing heap access means giving the
        collector the conn.

        This meant bringing back the conservative scan of the calling thread. It turns out that this
        scan was too slow to be called on each GC increment because apparently setjmp() now does system
        calls. So, I wrote our own callee-save register saving for the GC. Then I had doubts about
        whether or not it was correct, so I also made it so that the GC only rarely asks for the register
        state. I think we still want to use my register saving code instead of setjmp because setjmp
        seems to save things we don't need, and that could make us overly conservative.

        It turns out that this new scheduling discipline makes the old space-time scheduler perform
        better than the new stochastic space-time scheduler on systems with fewer than 4 cores. This is
        because the mutator having the conn enables us to time the mutator<->collector context switches
        by polling. The OS is never involved, so we can use super precise timing. This allows the old
        space-time scheduler to shine like it hadn't before.

        The splay results imply that this is all a good thing. On 2-core systems, this reduces pause
        times by 40% and increases throughput by about 5%. On 1-core systems, this reduces pause times by
        half and reduces throughput by 8%. On 4-or-more-core systems, this doesn't seem to have much
        effect.

        * CMakeLists.txt:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::visitChildren):
        * dfg/DFGWorklist.cpp:
        (JSC::DFG::Worklist::ThreadBody::ThreadBody):
        (JSC::DFG::Worklist::dump):
        (JSC::DFG::numberOfWorklists):
        (JSC::DFG::ensureWorklistForIndex):
        (JSC::DFG::existingWorklistForIndexOrNull):
        (JSC::DFG::existingWorklistForIndex):
        * dfg/DFGWorklist.h:
        (JSC::DFG::numberOfWorklists): Deleted.
        (JSC::DFG::ensureWorklistForIndex): Deleted.
        (JSC::DFG::existingWorklistForIndexOrNull): Deleted.
        (JSC::DFG::existingWorklistForIndex): Deleted.
        * heap/CollectingScope.h: Added.
        (JSC::CollectingScope::CollectingScope):
        (JSC::CollectingScope::~CollectingScope):
        * heap/CollectorPhase.cpp: Added.
        (JSC::worldShouldBeSuspended):
        (WTF::printInternal):
        * heap/CollectorPhase.h: Added.
        * heap/EdenGCActivityCallback.cpp:
        (JSC::EdenGCActivityCallback::lastGCLength):
        * heap/FullGCActivityCallback.cpp:
        (JSC::FullGCActivityCallback::doCollection):
        (JSC::FullGCActivityCallback::lastGCLength):
        * heap/GCConductor.cpp: Added.
        (JSC::gcConductorShortName):
        (WTF::printInternal):
        * heap/GCConductor.h: Added.
        * heap/GCFinalizationCallback.cpp: Added.
        (JSC::GCFinalizationCallback::GCFinalizationCallback):
        (JSC::GCFinalizationCallback::~GCFinalizationCallback):
        * heap/GCFinalizationCallback.h: Added.
        (JSC::GCFinalizationCallbackFuncAdaptor::GCFinalizationCallbackFuncAdaptor):
        (JSC::createGCFinalizationCallback):
        * heap/Heap.cpp:
        (JSC::Heap::Thread::Thread):
        (JSC::Heap::Heap):
        (JSC::Heap::lastChanceToFinalize):
        (JSC::Heap::gatherStackRoots):
        (JSC::Heap::updateObjectCounts):
        (JSC::Heap::sweepSynchronously):
        (JSC::Heap::collectAllGarbage):
        (JSC::Heap::collectAsync):
        (JSC::Heap::collectSync):
        (JSC::Heap::shouldCollectInCollectorThread):
        (JSC::Heap::collectInCollectorThread):
        (JSC::Heap::checkConn):
        (JSC::Heap::runNotRunningPhase):
        (JSC::Heap::runBeginPhase):
        (JSC::Heap::runFixpointPhase):
        (JSC::Heap::runConcurrentPhase):
        (JSC::Heap::runReloopPhase):
        (JSC::Heap::runEndPhase):
        (JSC::Heap::changePhase):
        (JSC::Heap::finishChangingPhase):
        (JSC::Heap::stopThePeriphery):
        (JSC::Heap::resumeThePeriphery):
        (JSC::Heap::stopTheMutator):
        (JSC::Heap::resumeTheMutator):
        (JSC::Heap::stopIfNecessarySlow):
        (JSC::Heap::collectInMutatorThread):
        (JSC::Heap::waitForCollector):
        (JSC::Heap::acquireAccessSlow):
        (JSC::Heap::releaseAccessSlow):
        (JSC::Heap::relinquishConn):
        (JSC::Heap::finishRelinquishingConn):
        (JSC::Heap::handleNeedFinalize):
        (JSC::Heap::notifyThreadStopping):
        (JSC::Heap::finalize):
        (JSC::Heap::addFinalizationCallback):
        (JSC::Heap::requestCollection):
        (JSC::Heap::waitForCollection):
        (JSC::Heap::updateAllocationLimits):
        (JSC::Heap::didFinishCollection):
        (JSC::Heap::collectIfNecessaryOrDefer):
        (JSC::Heap::notifyIsSafeToCollect):
        (JSC::Heap::preventCollection):
        (JSC::Heap::performIncrement):
        (JSC::Heap::markToFixpoint): Deleted.
        (JSC::Heap::shouldCollectInThread): Deleted.
        (JSC::Heap::collectInThread): Deleted.
        (JSC::Heap::stopTheWorld): Deleted.
        (JSC::Heap::resumeTheWorld): Deleted.
        * heap/Heap.h:
        (JSC::Heap::machineThreads):
        (JSC::Heap::lastFullGCLength):
        (JSC::Heap::lastEdenGCLength):
        (JSC::Heap::increaseLastFullGCLength):
        * heap/HeapInlines.h:
        (JSC::Heap::mutatorIsStopped): Deleted.
        * heap/HeapStatistics.cpp: Removed.
        * heap/HeapStatistics.h: Removed.
        * heap/HelpingGCScope.h: Removed.
        * heap/IncrementalSweeper.cpp:
        (JSC::IncrementalSweeper::stopSweeping):
        (JSC::IncrementalSweeper::willFinishSweeping): Deleted.
        * heap/IncrementalSweeper.h:
        * heap/MachineStackMarker.cpp:
        (JSC::MachineThreads::gatherFromCurrentThread):
        (JSC::MachineThreads::gatherConservativeRoots):
        (JSC::callWithCurrentThreadState):
        * heap/MachineStackMarker.h:
        * heap/MarkedAllocator.cpp:
        (JSC::MarkedAllocator::allocateSlowCaseImpl):
        * heap/MarkedBlock.cpp:
        (JSC::MarkedBlock::Handle::sweep):
        * heap/MarkedSpace.cpp:
        (JSC::MarkedSpace::sweep):
        * heap/MutatorState.cpp:
        (WTF::printInternal):
        * heap/MutatorState.h:
        * heap/RegisterState.h: Added.
        * heap/RunningScope.h: Added.
        (JSC::RunningScope::RunningScope):
        (JSC::RunningScope::~RunningScope):
        * heap/SlotVisitor.cpp:
        (JSC::SlotVisitor::SlotVisitor):
        (JSC::SlotVisitor::drain):
        (JSC::SlotVisitor::drainFromShared):
        (JSC::SlotVisitor::drainInParallelPassively):
        (JSC::SlotVisitor::donateAll):
        (JSC::SlotVisitor::donate):
        * heap/SlotVisitor.h:
        (JSC::SlotVisitor::codeName):
        * heap/StochasticSpaceTimeMutatorScheduler.cpp:
        (JSC::StochasticSpaceTimeMutatorScheduler::beginCollection):
        (JSC::StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall):
        (JSC::StochasticSpaceTimeMutatorScheduler::timeToStop):
        * heap/SweepingScope.h: Added.
        (JSC::SweepingScope::SweepingScope):
        (JSC::SweepingScope::~SweepingScope):
        * jit/JITWorklist.cpp:
        (JSC::JITWorklist::Thread::Thread):
        * jsc.cpp:
        (GlobalObject::finishCreation):
        (functionFlashHeapAccess):
        * runtime/InitializeThreading.cpp:
        (JSC::initializeThreading):
        * runtime/JSCellInlines.h:
        (JSC::JSCell::classInfo):
        * runtime/Options.cpp:
        (JSC::overrideDefaults):
        * runtime/Options.h:
        * runtime/TestRunnerUtils.cpp:
        (JSC::finalizeStatsAtEndOfTesting):

2017-02-21  Saam Barati  <sbarati@apple.com>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r212775 → r212778 (changed lines only; unchanged project entries elided with "...")

PBXBuildFile entries:
+ 0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C63A91E4FA42C00C13839 /* RunningScope.h */; settings = {ATTRIBUTES = (Private, ); }; };
- 0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */; };
+ 0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E81E43D3470006AB08 /* GCConductor.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */; };
+ 0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */; };
+ 0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */; };
+ 0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */; };
- 14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; };
+ 14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; settings = {COMPILER_FLAGS = "-fno-optimize-sibling-calls"; }; };
- C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */; };
- C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */ = {isa = PBXBuildFile; fileRef = C24D31E1161CD695002AA4DB /* HeapStatistics.h */; settings = {ATTRIBUTES = (Private, ); }; };
...

PBXFileReference entries:
+ 0F2C63A91E4FA42C00C13839 /* RunningScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RunningScope.h; sourceTree = "<group>"; };
- 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HelpingGCScope.h; sourceTree = "<group>"; };
+ 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CollectorPhase.cpp; sourceTree = "<group>"; };
+ 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectorPhase.h; sourceTree = "<group>"; };
+ 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCConductor.cpp; sourceTree = "<group>"; };
+ 0FD0E5E81E43D3470006AB08 /* GCConductor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCConductor.h; sourceTree = "<group>"; };
+ 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SweepingScope.h; sourceTree = "<group>"; };
+ 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterState.h; sourceTree = "<group>"; };
+ 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectingScope.h; sourceTree = "<group>"; };
- C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapStatistics.cpp; sourceTree = "<group>"; };
- C24D31E1161CD695002AA4DB /* HeapStatistics.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapStatistics.h; sourceTree = "<group>"; };
...

heap/ group:
+ 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */,
+ 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */,
+ 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */,
+ 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */,
+ 0FD0E5E81E43D3470006AB08 /* GCConductor.h */,
- C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */,
- C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
- 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */,
+ 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */,
+ 0F2C63A91E4FA42C00C13839 /* RunningScope.h */,
+ 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */,
...

Headers build phase:
+ 0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */,
+ 0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */,
+ 0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */,
- C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */,
- 0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */,
+ 0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */,
+ 0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */,
+ 0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */,
...

Sources build phase:
- C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */,
+ 0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */,
+ 0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */,
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r212616 → r212778
      if (m_instructions.size()) {
          unsigned refCount = m_instructions.refCount();
-         RELEASE_ASSERT(refCount);
+         if (!refCount) {
+             dataLog("CodeBlock: ", RawPointer(this), "\n");
+             dataLog("m_instructions.data(): ", RawPointer(m_instructions.data()), "\n");
+             dataLog("refCount: ", refCount, "\n");
+             RELEASE_ASSERT_NOT_REACHED();
+         }
          visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
      }
trunk/Source/JavaScriptCore/dfg/DFGWorklist.cpp
r212616 → r212778
      class Worklist::ThreadBody : public AutomaticThread {
      public:
-         ThreadBody(const LockHolder& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
+         ThreadBody(const AbstractLocker& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
              : AutomaticThread(locker, lock, condition)
              , m_worklist(worklist)
      ...
      protected:
-         PollResult poll(const LockHolder& locker) override
+         PollResult poll(const AbstractLocker& locker) override
          {
              if (m_worklist.m_queue.isEmpty())
      ...
-         void threadIsStopping(const LockHolder&) override
+         void threadIsStopping(const AbstractLocker&) override
          {
              // We're holding the Worklist::m_lock, so we should be careful not to deadlock.
      ...
-     void Worklist::dump(const LockHolder&, PrintStream& out) const
+     void Worklist::dump(const AbstractLocker&, PrintStream& out) const
      {
          out.print(
      ...
+     unsigned numberOfWorklists() { return 2; }
+
+     Worklist& ensureWorklistForIndex(unsigned index)
+     {
+         switch (index) {
+         case 0:
+             return ensureGlobalDFGWorklist();
+         case 1:
+             return ensureGlobalFTLWorklist();
+         default:
+             RELEASE_ASSERT_NOT_REACHED();
+             return ensureGlobalDFGWorklist();
+         }
+     }
+
+     Worklist* existingWorklistForIndexOrNull(unsigned index)
+     {
+         switch (index) {
+         case 0:
+             return existingGlobalDFGWorklistOrNull();
+         case 1:
+             return existingGlobalFTLWorklistOrNull();
+         default:
+             RELEASE_ASSERT_NOT_REACHED();
+             return 0;
+         }
+     }
+
+     Worklist& existingWorklistForIndex(unsigned index)
+     {
+         Worklist* result = existingWorklistForIndexOrNull(index);
+         RELEASE_ASSERT(result);
+         return *result;
+     }
+
      void completeAllPlansForVM(VM& vm)
      {
trunk/Source/JavaScriptCore/dfg/DFGWorklist.h
r212616 → r212778
      void removeAllReadyPlansForVM(VM&, Vector<RefPtr<Plan>, 8>&);

-     void dump(const LockHolder&, PrintStream&) const;
+     void dump(const AbstractLocker&, PrintStream&) const;

      CString m_threadName;
      ...
      // Simplify doing things for all worklists.
-     inline unsigned numberOfWorklists() { return 2; }
-     inline Worklist& ensureWorklistForIndex(unsigned index)
-     {
-         switch (index) {
-         case 0:
-             return ensureGlobalDFGWorklist();
-         case 1:
-             return ensureGlobalFTLWorklist();
-         default:
-             RELEASE_ASSERT_NOT_REACHED();
-             return ensureGlobalDFGWorklist();
-         }
-     }
-     inline Worklist* existingWorklistForIndexOrNull(unsigned index)
-     {
-         switch (index) {
-         case 0:
-             return existingGlobalDFGWorklistOrNull();
-         case 1:
-             return existingGlobalFTLWorklistOrNull();
-         default:
-             RELEASE_ASSERT_NOT_REACHED();
-             return 0;
-         }
-     }
-     inline Worklist& existingWorklistForIndex(unsigned index)
-     {
-         Worklist* result = existingWorklistForIndexOrNull(index);
-         RELEASE_ASSERT(result);
-         return *result;
-     }
+     unsigned numberOfWorklists();
+     Worklist& ensureWorklistForIndex(unsigned index);
+     Worklist* existingWorklistForIndexOrNull(unsigned index);
+     Worklist& existingWorklistForIndex(unsigned index);

      #endif // ENABLE(DFG_JIT)
trunk/Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp
r212616 r212778 1 1 /* 2 * Copyright (C) 2014 , 2016Apple Inc. All rights reserved.2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 46 46 double EdenGCActivityCallback::lastGCLength() 47 47 { 48 return m_vm->heap.lastEdenGCLength() ;48 return m_vm->heap.lastEdenGCLength().seconds(); 49 49 } 50 50 -
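The change above reflects lastEdenGCLength() now returning WTF's Seconds value type rather than a raw double, so the callback must convert explicitly with .seconds(). A minimal sketch of such a unit-safe duration wrapper, far smaller than WTF's real Seconds class:

```cpp
// A strongly typed duration: raw doubles cannot be mistaken for it, and
// callers must say which unit they want when converting back out.
class Seconds {
public:
    constexpr Seconds() = default;
    constexpr explicit Seconds(double value)
        : m_value(value)
    {
    }

    constexpr double seconds() const { return m_value; }
    constexpr double milliseconds() const { return m_value * 1000; }

    constexpr Seconds operator+(Seconds other) const { return Seconds(m_value + other.m_value); }
    constexpr bool operator<(Seconds other) const { return m_value < other.m_value; }

private:
    double m_value { 0 };
};
```

The explicit constructor is what forces call sites like `increaseLastFullGCLength(Seconds(pagingTimeOut))` in the next hunk to name the unit at the boundary, instead of silently passing a unitless double.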
trunk/Source/JavaScriptCore/heap/FullGCActivityCallback.cpp
r212616 r212778 1 1 /* 2 * Copyright (C) 2014 , 2016Apple Inc. All rights reserved.2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 51 51 if (heap.isPagedOut(startTime + pagingTimeOut)) { 52 52 cancel(); 53 heap.increaseLastFullGCLength( pagingTimeOut);53 heap.increaseLastFullGCLength(Seconds(pagingTimeOut)); 54 54 return; 55 55 } … … 61 61 double FullGCActivityCallback::lastGCLength() 62 62 { 63 return m_vm->heap.lastFullGCLength() ;63 return m_vm->heap.lastFullGCLength().seconds(); 64 64 } 65 65 -
trunk/Source/JavaScriptCore/heap/Heap.cpp
r212624 r212778 24 24 #include "CodeBlock.h" 25 25 #include "CodeBlockSetInlines.h" 26 #include "CollectingScope.h" 26 27 #include "ConservativeRoots.h" 27 28 #include "DFGWorklistInlines.h" … … 38 39 #include "HeapProfiler.h" 39 40 #include "HeapSnapshot.h" 40 #include "HeapStatistics.h"41 41 #include "HeapVerifier.h" 42 #include "HelpingGCScope.h"43 42 #include "IncrementalSweeper.h" 44 43 #include "Interpreter.h" … … 49 48 #include "JSLock.h" 50 49 #include "JSVirtualMachineInternal.h" 50 #include "MachineStackMarker.h" 51 51 #include "MarkedSpaceInlines.h" 52 52 #include "MarkingConstraintSet.h" … … 58 58 #include "StochasticSpaceTimeMutatorScheduler.h" 59 59 #include "StopIfNecessaryTimer.h" 60 #include "SweepingScope.h" 60 61 #include "SynchronousStopTheWorldMutatorScheduler.h" 61 62 #include "TypeProfilerLog.h" … … 207 208 class Heap::Thread : public AutomaticThread { 208 209 public: 209 Thread(const LockHolder& locker, Heap& heap)210 Thread(const AbstractLocker& locker, Heap& heap) 210 211 : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition) 211 212 , m_heap(heap) … … 214 215 215 216 protected: 216 PollResult poll(const LockHolder& locker) override217 PollResult poll(const AbstractLocker& locker) override 217 218 { 218 219 if (m_heap.m_threadShouldStop) { … … 220 221 return PollResult::Stop; 221 222 } 222 if (m_heap.shouldCollectIn Thread(locker))223 if (m_heap.shouldCollectInCollectorThread(locker)) 223 224 return PollResult::Work; 224 225 return PollResult::Wait; … … 227 228 WorkResult work() override 228 229 { 229 m_heap.collectIn Thread();230 m_heap.collectInCollectorThread(); 230 231 return WorkResult::Continue; 231 232 } … … 258 259 , m_extraMemorySize(0) 259 260 , m_deprecatedExtraMemorySize(0) 260 , m_machineThreads( this)261 , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this ))262 , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this ))261 , m_machineThreads(std::make_unique<MachineThreads>(this)) 262 , 
m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this, "C")) 263 , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this, "M")) 263 264 , m_mutatorMarkStack(std::make_unique<MarkStackArray>()) 264 265 , m_raceMarkStack(std::make_unique<MarkStackArray>()) … … 334 335 void Heap::lastChanceToFinalize() 335 336 { 337 MonotonicTime before; 338 if (Options::logGC()) { 339 before = MonotonicTime::now(); 340 dataLog("[GC<", RawPointer(this), ">: shutdown "); 341 } 342 336 343 RELEASE_ASSERT(!m_vm->entryScope); 337 344 RELEASE_ASSERT(m_mutatorState == MutatorState::Running); … … 346 353 } 347 354 348 // Carefully bring the thread down. We need to use waitForCollector() until we know that there 349 // won't be any other collections. 355 if (Options::logGC()) 356 dataLog("1"); 357 358 // Prevent new collections from being started. This is probably not even necessary, since we're not 359 // going to call into anything that starts collections. Still, this makes the algorithm more 360 // obviously sound. 361 m_isSafeToCollect = false; 362 363 if (Options::logGC()) 364 dataLog("2"); 365 366 bool isCollecting; 367 { 368 auto locker = holdLock(*m_threadLock); 369 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 370 isCollecting = m_lastServedTicket < m_lastGrantedTicket; 371 } 372 if (isCollecting) { 373 if (Options::logGC()) 374 dataLog("...]\n"); 375 376 // Wait for the current collection to finish. 377 waitForCollector( 378 [&] (const AbstractLocker&) -> bool { 379 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 380 return m_lastServedTicket == m_lastGrantedTicket; 381 }); 382 383 if (Options::logGC()) 384 dataLog("[GC<", RawPointer(this), ">: shutdown "); 385 } 386 if (Options::logGC()) 387 dataLog("3"); 388 389 RELEASE_ASSERT(m_requests.isEmpty()); 390 RELEASE_ASSERT(m_lastServedTicket == m_lastGrantedTicket); 391 392 // Carefully bring the thread down. 
350 393 bool stopped = false; 351 394 { 352 395 LockHolder locker(*m_threadLock); 353 396 stopped = m_thread->tryStop(locker); 354 if (!stopped) {355 m_threadShouldStop = true;397 m_threadShouldStop = true; 398 if (!stopped) 356 399 m_threadCondition->notifyOne(locker); 357 } 358 } 359 if (!stopped) { 360 waitForCollector( 361 [&] (const LockHolder&) -> bool { 362 return m_threadIsStopping; 363 }); 364 // It's now safe to join the thread, since we know that there will not be any more collections. 400 } 401 402 if (Options::logGC()) 403 dataLog("4"); 404 405 if (!stopped) 365 406 m_thread->join(); 366 } 407 408 if (Options::logGC()) 409 dataLog("5 "); 367 410 368 411 m_arrayBuffers.lastChanceToFinalize(); … … 373 416 374 417 sweepAllLogicallyEmptyWeakBlocks(); 418 419 if (Options::logGC()) 420 dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n"); 375 421 } 376 422 … … 526 572 } 527 573 528 void Heap::markToFixpoint(double gcStartTime)529 {530 TimingScope markToFixpointTimingScope(*this, "Heap::markToFixpoint");531 532 if (m_collectionScope == CollectionScope::Full) {533 m_opaqueRoots.clear();534 m_collectorSlotVisitor->clearMarkStacks();535 m_mutatorMarkStack->clear();536 }537 538 RELEASE_ASSERT(m_raceMarkStack->isEmpty());539 540 beginMarking();541 542 forEachSlotVisitor(543 [&] (SlotVisitor& visitor) {544 visitor.didStartMarking();545 });546 547 m_parallelMarkersShouldExit = false;548 549 m_helperClient.setFunction(550 [this] () {551 SlotVisitor* slotVisitor;552 {553 LockHolder locker(m_parallelSlotVisitorLock);554 if (m_availableParallelSlotVisitors.isEmpty()) {555 std::unique_ptr<SlotVisitor> newVisitor =556 std::make_unique<SlotVisitor>(*this);557 558 if (Options::optimizeParallelSlotVisitorsForStoppedMutator())559 newVisitor->optimizeForStoppedMutator();560 561 newVisitor->didStartMarking();562 563 slotVisitor = newVisitor.get();564 m_parallelSlotVisitors.append(WTFMove(newVisitor));565 } else566 slotVisitor = 
m_availableParallelSlotVisitors.takeLast();567 }568 569 WTF::registerGCThread(GCThreadType::Helper);570 571 {572 ParallelModeEnabler parallelModeEnabler(*slotVisitor);573 slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);574 }575 576 {577 LockHolder locker(m_parallelSlotVisitorLock);578 m_availableParallelSlotVisitors.append(slotVisitor);579 }580 });581 582 SlotVisitor& slotVisitor = *m_collectorSlotVisitor;583 584 m_constraintSet->didStartMarking();585 586 m_scheduler->beginCollection();587 if (Options::logGC())588 m_scheduler->log();589 590 // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"591 // checks because bootstrap would have put things into the visitor. So, we should fall592 // through to draining.593 594 if (!slotVisitor.didReachTermination()) {595 dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");596 dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");597 dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");598 dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");599 dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");600 dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");601 dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");602 dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");603 RELEASE_ASSERT_NOT_REACHED();604 }605 606 for (;;) {607 if (Options::logGC())608 dataLog("v=", bytesVisited() / 1024, "kb o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");609 610 if (slotVisitor.didReachTermination()) {611 m_scheduler->didReachTermination();612 613 assertSharedMarkStacksEmpty();614 615 slotVisitor.mergeIfNecessary();616 for (auto& parallelVisitor : 
m_parallelSlotVisitors)617 parallelVisitor->mergeIfNecessary();618 619 // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,620 // we don't have to execute root constraints again unless the mutator did run. At a621 // minimum, we could use this for work estimates - but it's probably more than just an622 // estimate.623 // https://bugs.webkit.org/show_bug.cgi?id=166828624 625 // FIXME: We should take advantage of the fact that we could timeout. This only comes626 // into play if we're executing constraints for the first time. But that will matter627 // when we have deep stacks or a lot of DOM stuff.628 // https://bugs.webkit.org/show_bug.cgi?id=166831629 630 // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also631 // add their own using Heap::addMarkingConstraint().632 bool converged =633 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());634 if (converged && slotVisitor.isEmpty()) {635 assertSharedMarkStacksEmpty();636 break;637 }638 639 m_scheduler->didExecuteConstraints();640 }641 642 if (Options::logGC())643 dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");644 645 {646 ParallelModeEnabler enabler(slotVisitor);647 slotVisitor.drainInParallel(m_scheduler->timeToResume());648 }649 650 m_scheduler->synchronousDrainingDidStall();651 652 if (slotVisitor.didReachTermination())653 continue;654 655 if (!m_scheduler->shouldResume())656 continue;657 658 m_scheduler->willResume();659 660 if (Options::logGC()) {661 double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();662 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");663 }664 665 // Forgive the mutator for its past failures to keep up.666 // FIXME: Figure out if moving this to different places results in perf changes.667 m_incrementBalance = 0;668 669 resumeTheWorld();670 671 {672 ParallelModeEnabler 
enabler(slotVisitor);673 slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());674 }675 676 stopTheWorld();677 678 if (Options::logGC())679 dataLog("[GC: ");680 681 m_scheduler->didStop();682 683 if (Options::logGC())684 m_scheduler->log();685 }686 687 m_scheduler->endCollection();688 689 {690 std::lock_guard<Lock> lock(m_markingMutex);691 m_parallelMarkersShouldExit = true;692 m_markingConditionVariable.notifyAll();693 }694 m_helperClient.finish();695 696 iterateExecutingAndCompilingCodeBlocks(697 [&] (CodeBlock* codeBlock) {698 writeBarrier(codeBlock);699 });700 701 updateObjectCounts(gcStartTime);702 endMarking();703 }704 705 574 void Heap::gatherStackRoots(ConservativeRoots& roots) 706 575 { 707 m_machineThreads .gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);576 m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState); 708 577 } 709 578 … … 805 674 } 806 675 807 void Heap::updateObjectCounts(double gcStartTime) 808 { 809 if (Options::logGC() == GCLogging::Verbose) { 810 dataLogF("\nNumber of live Objects after GC %lu, took %.6f secs\n", static_cast<unsigned long>(visitCount()), WTF::monotonicallyIncreasingTime() - gcStartTime); 811 } 812 676 void Heap::updateObjectCounts() 677 { 813 678 if (m_collectionScope == CollectionScope::Full) 814 679 m_totalBytesVisited = 0; … … 1034 899 double before = 0; 1035 900 if (Options::logGC()) { 1036 dataLog(" [Full sweep: ", capacity() / 1024, "kb ");901 dataLog("Full sweep: ", capacity() / 1024, "kb "); 1037 902 before = currentTimeMS(); 1038 903 } … … 1041 906 if (Options::logGC()) { 1042 907 double after = currentTimeMS(); 1043 dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms ]");908 dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms"); 1044 909 } 1045 910 } … … 1054 919 DeferGCForAWhile deferGC(*this); 1055 920 if (UNLIKELY(Options::useImmortalObjects())) 1056 sweeper()-> willFinishSweeping();921 
sweeper()->stopSweeping(); 1057 922 1058 923 bool alreadySweptInCollectSync = Options::sweepSynchronously(); 1059 924 if (!alreadySweptInCollectSync) { 925 if (Options::logGC()) 926 dataLog("[GC<", RawPointer(this), ">: "); 1060 927 sweepSynchronously(); 1061 928 if (Options::logGC()) 1062 dataLog(" \n");929 dataLog("]\n"); 1063 930 } 1064 931 m_objectSpace.assertNoUnswept(); … … 1109 976 } 1110 977 1111 bool Heap::shouldCollectIn Thread(const LockHolder&)978 bool Heap::shouldCollectInCollectorThread(const AbstractLocker&) 1112 979 { 1113 980 RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket)); 1114 981 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 1115 982 1116 return !m_requests.isEmpty(); 1117 } 1118 1119 void Heap::collectInThread() 983 if (false) 984 dataLog("Mutator has the conn = ", !!(m_worldState.load() & mutatorHasConnBit), "\n"); 985 986 return !m_requests.isEmpty() && !(m_worldState.load() & mutatorHasConnBit); 987 } 988 989 void Heap::collectInCollectorThread() 990 { 991 for (;;) { 992 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Collector, nullptr); 993 switch (result) { 994 case RunCurrentPhaseResult::Finished: 995 return; 996 case RunCurrentPhaseResult::Continue: 997 break; 998 case RunCurrentPhaseResult::NeedCurrentThreadState: 999 RELEASE_ASSERT_NOT_REACHED(); 1000 break; 1001 } 1002 } 1003 } 1004 1005 void Heap::checkConn(GCConductor conn) 1006 { 1007 switch (conn) { 1008 case GCConductor::Mutator: 1009 RELEASE_ASSERT(m_worldState.load() & mutatorHasConnBit); 1010 return; 1011 case GCConductor::Collector: 1012 RELEASE_ASSERT(!(m_worldState.load() & mutatorHasConnBit)); 1013 return; 1014 } 1015 RELEASE_ASSERT_NOT_REACHED(); 1016 } 1017 1018 auto Heap::runCurrentPhase(GCConductor conn, CurrentThreadState* currentThreadState) -> RunCurrentPhaseResult 1019 { 1020 checkConn(conn); 1021 m_currentThreadState = currentThreadState; 1022 1023 // If the collector transfers the conn to the mutator, 
it leaves us in between phases. 1024 if (!finishChangingPhase(conn)) { 1025 // A mischievous mutator could repeatedly relinquish the conn back to us. We try to avoid doing 1026 // this, but it's probably not the end of the world if it did happen. 1027 if (false) 1028 dataLog("Conn bounce-back.\n"); 1029 return RunCurrentPhaseResult::Finished; 1030 } 1031 1032 bool result = false; 1033 switch (m_currentPhase) { 1034 case CollectorPhase::NotRunning: 1035 result = runNotRunningPhase(conn); 1036 break; 1037 1038 case CollectorPhase::Begin: 1039 result = runBeginPhase(conn); 1040 break; 1041 1042 case CollectorPhase::Fixpoint: 1043 if (!currentThreadState && conn == GCConductor::Mutator) 1044 return RunCurrentPhaseResult::NeedCurrentThreadState; 1045 1046 result = runFixpointPhase(conn); 1047 break; 1048 1049 case CollectorPhase::Concurrent: 1050 result = runConcurrentPhase(conn); 1051 break; 1052 1053 case CollectorPhase::Reloop: 1054 result = runReloopPhase(conn); 1055 break; 1056 1057 case CollectorPhase::End: 1058 result = runEndPhase(conn); 1059 break; 1060 } 1061 1062 return result ? RunCurrentPhaseResult::Continue : RunCurrentPhaseResult::Finished; 1063 } 1064 1065 NEVER_INLINE bool Heap::runNotRunningPhase(GCConductor conn) 1066 { 1067 // Check m_requests since the mutator calls this to poll what's going on. 
1068 { 1069 auto locker = holdLock(*m_threadLock); 1070 if (m_requests.isEmpty()) 1071 return false; 1072 } 1073 1074 return changePhase(conn, CollectorPhase::Begin); 1075 } 1076 1077 NEVER_INLINE bool Heap::runBeginPhase(GCConductor conn) 1120 1078 { 1121 1079 m_currentGCStartTime = MonotonicTime::now(); 1122 1080 1123 1081 std::optional<CollectionScope> scope; 1124 1082 { … … 1127 1085 scope = m_requests.first(); 1128 1086 } 1129 1130 SuperSamplerScope superSamplerScope(false); 1131 TimingScope collectImplTimingScope(scope, "Heap::collectInThread"); 1132 1133 #if ENABLE(ALLOCATION_LOGGING) 1134 dataLogF("JSC GC starting collection.\n"); 1135 #endif 1136 1137 stopTheWorld(); 1138 1139 if (false) 1140 dataLog("GC START!\n"); 1141 1142 MonotonicTime before; 1143 if (Options::logGC()) { 1144 dataLog("[GC: START ", capacity() / 1024, "kb "); 1145 before = MonotonicTime::now(); 1146 } 1147 1148 double gcStartTime; 1149 1150 ASSERT(m_isSafeToCollect); 1087 1088 if (Options::logGC()) 1089 dataLog("[GC<", RawPointer(this), ">: START ", gcConductorShortName(conn), " ", capacity() / 1024, "kb "); 1090 1091 m_beforeGC = MonotonicTime::now(); 1092 1151 1093 if (m_collectionScope) { 1152 1094 dataLog("Collection scope already set during GC: ", *m_collectionScope, "\n"); 1153 1095 RELEASE_ASSERT_NOT_REACHED(); 1154 1096 } 1155 1097 1156 1098 willStartCollection(scope); 1157 collectImplTimingScope.setScope(*this); 1158 1159 gcStartTime = WTF::monotonicallyIncreasingTime(); 1099 1160 1100 if (m_verifier) { 1161 1101 // Verify that live objects from the last GC cycle haven't been corrupted by … … 1166 1106 m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking); 1167 1107 } 1168 1108 1169 1109 prepareForMarking(); 1170 1171 markToFixpoint(gcStartTime); 1172 1110 1111 if (m_collectionScope == CollectionScope::Full) { 1112 m_opaqueRoots.clear(); 1113 m_collectorSlotVisitor->clearMarkStacks(); 1114 m_mutatorMarkStack->clear(); 1115 } 1116 1117 
RELEASE_ASSERT(m_raceMarkStack->isEmpty()); 1118 1119 beginMarking(); 1120 1121 forEachSlotVisitor( 1122 [&] (SlotVisitor& visitor) { 1123 visitor.didStartMarking(); 1124 }); 1125 1126 m_parallelMarkersShouldExit = false; 1127 1128 m_helperClient.setFunction( 1129 [this] () { 1130 SlotVisitor* slotVisitor; 1131 { 1132 LockHolder locker(m_parallelSlotVisitorLock); 1133 if (m_availableParallelSlotVisitors.isEmpty()) { 1134 std::unique_ptr<SlotVisitor> newVisitor = std::make_unique<SlotVisitor>( 1135 *this, toCString("P", m_parallelSlotVisitors.size() + 1)); 1136 1137 if (Options::optimizeParallelSlotVisitorsForStoppedMutator()) 1138 newVisitor->optimizeForStoppedMutator(); 1139 1140 newVisitor->didStartMarking(); 1141 1142 slotVisitor = newVisitor.get(); 1143 m_parallelSlotVisitors.append(WTFMove(newVisitor)); 1144 } else 1145 slotVisitor = m_availableParallelSlotVisitors.takeLast(); 1146 } 1147 1148 WTF::registerGCThread(GCThreadType::Helper); 1149 1150 { 1151 ParallelModeEnabler parallelModeEnabler(*slotVisitor); 1152 slotVisitor->drainFromShared(SlotVisitor::SlaveDrain); 1153 } 1154 1155 { 1156 LockHolder locker(m_parallelSlotVisitorLock); 1157 m_availableParallelSlotVisitors.append(slotVisitor); 1158 } 1159 }); 1160 1161 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1162 1163 m_constraintSet->didStartMarking(); 1164 1165 m_scheduler->beginCollection(); 1166 if (Options::logGC()) 1167 m_scheduler->log(); 1168 1169 // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()" 1170 // checks because bootstrap would have put things into the visitor. So, we should fall 1171 // through to draining. 
1172 1173 if (!slotVisitor.didReachTermination()) { 1174 dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n"); 1175 dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n"); 1176 dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n"); 1177 dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n"); 1178 dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n"); 1179 dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n"); 1180 dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n"); 1181 dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n"); 1182 RELEASE_ASSERT_NOT_REACHED(); 1183 } 1184 1185 return changePhase(conn, CollectorPhase::Fixpoint); 1186 } 1187 1188 NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn) 1189 { 1190 RELEASE_ASSERT(conn == GCConductor::Collector || m_currentThreadState); 1191 1192 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1193 1194 if (Options::logGC()) { 1195 HashMap<const char*, size_t> visitMap; 1196 forEachSlotVisitor( 1197 [&] (SlotVisitor& slotVisitor) { 1198 visitMap.add(slotVisitor.codeName(), slotVisitor.bytesVisited() / 1024); 1199 }); 1200 1201 auto perVisitorDump = sortedMapDump( 1202 visitMap, 1203 [] (const char* a, const char* b) -> bool { 1204 return strcmp(a, b) < 0; 1205 }, 1206 ":", " "); 1207 1208 dataLog("v=", bytesVisited() / 1024, "kb (", perVisitorDump, ") o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " "); 1209 } 1210 1211 if (slotVisitor.didReachTermination()) { 1212 m_scheduler->didReachTermination(); 1213 1214 assertSharedMarkStacksEmpty(); 1215 1216 slotVisitor.mergeIfNecessary(); 1217 for (auto& parallelVisitor : m_parallelSlotVisitors) 1218 parallelVisitor->mergeIfNecessary(); 1219 
1220 // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely, 1221 // we don't have to execute root constraints again unless the mutator did run. At a 1222 // minimum, we could use this for work estimates - but it's probably more than just an 1223 // estimate. 1224 // https://bugs.webkit.org/show_bug.cgi?id=166828 1225 1226 // FIXME: We should take advantage of the fact that we could timeout. This only comes 1227 // into play if we're executing constraints for the first time. But that will matter 1228 // when we have deep stacks or a lot of DOM stuff. 1229 // https://bugs.webkit.org/show_bug.cgi?id=166831 1230 1231 // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also 1232 // add their own using Heap::addMarkingConstraint(). 1233 bool converged = 1234 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity()); 1235 if (converged && slotVisitor.isEmpty()) { 1236 assertSharedMarkStacksEmpty(); 1237 return changePhase(conn, CollectorPhase::End); 1238 } 1239 1240 m_scheduler->didExecuteConstraints(); 1241 } 1242 1243 if (Options::logGC()) 1244 dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " "); 1245 1246 { 1247 ParallelModeEnabler enabler(slotVisitor); 1248 slotVisitor.drainInParallel(m_scheduler->timeToResume()); 1249 } 1250 1251 m_scheduler->synchronousDrainingDidStall(); 1252 1253 if (slotVisitor.didReachTermination()) 1254 return true; // This is like relooping to the top if runFixpointPhase(). 1255 1256 if (!m_scheduler->shouldResume()) 1257 return true; 1258 1259 m_scheduler->willResume(); 1260 1261 if (Options::logGC()) { 1262 double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds(); 1263 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n"); 1264 } 1265 1266 // Forgive the mutator for its past failures to keep up. 
1267 // FIXME: Figure out if moving this to different places results in perf changes. 1268 m_incrementBalance = 0; 1269 1270 return changePhase(conn, CollectorPhase::Concurrent); 1271 } 1272 1273 NEVER_INLINE bool Heap::runConcurrentPhase(GCConductor conn) 1274 { 1275 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1276 1277 switch (conn) { 1278 case GCConductor::Mutator: { 1279 // When the mutator has the conn, we poll runConcurrentPhase() on every time someone says 1280 // stopIfNecessary(), so on every allocation slow path. When that happens we poll if it's time 1281 // to stop and do some work. 1282 if (slotVisitor.didReachTermination() 1283 || m_scheduler->shouldStop()) 1284 return changePhase(conn, CollectorPhase::Reloop); 1285 1286 // We could be coming from a collector phase that stuffed our SlotVisitor, so make sure we donate 1287 // everything. This is super cheap if the SlotVisitor is already empty. 1288 slotVisitor.donateAll(); 1289 return false; 1290 } 1291 case GCConductor::Collector: { 1292 { 1293 ParallelModeEnabler enabler(slotVisitor); 1294 slotVisitor.drainInParallelPassively(m_scheduler->timeToStop()); 1295 } 1296 return changePhase(conn, CollectorPhase::Reloop); 1297 } } 1298 1299 RELEASE_ASSERT_NOT_REACHED(); 1300 return false; 1301 } 1302 1303 NEVER_INLINE bool Heap::runReloopPhase(GCConductor conn) 1304 { 1305 if (Options::logGC()) 1306 dataLog("[GC<", RawPointer(this), ">: ", gcConductorShortName(conn), " "); 1307 1308 m_scheduler->didStop(); 1309 1310 if (Options::logGC()) 1311 m_scheduler->log(); 1312 1313 return changePhase(conn, CollectorPhase::Fixpoint); 1314 } 1315 1316 NEVER_INLINE bool Heap::runEndPhase(GCConductor conn) 1317 { 1318 m_scheduler->endCollection(); 1319 1320 { 1321 auto locker = holdLock(m_markingMutex); 1322 m_parallelMarkersShouldExit = true; 1323 m_markingConditionVariable.notifyAll(); 1324 } 1325 m_helperClient.finish(); 1326 1327 iterateExecutingAndCompilingCodeBlocks( 1328 [&] (CodeBlock* codeBlock) { 1329 
writeBarrier(codeBlock); 1330 }); 1331 1332 updateObjectCounts(); 1333 endMarking(); 1334 1173 1335 if (m_verifier) { 1174 1336 m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking); … … 1196 1358 updateAllocationLimits(); 1197 1359 1198 didFinishCollection( gcStartTime);1360 didFinishCollection(); 1199 1361 1200 1362 if (m_verifier) { … … 1209 1371 1210 1372 if (Options::logGC()) { 1211 MonotonicTime after = MonotonicTime::now(); 1212 double thisPauseMS = (after - m_stopTime).milliseconds(); 1213 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), "ms END]\n"); 1373 double thisPauseMS = (m_afterGC - m_stopTime).milliseconds(); 1374 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (m_afterGC - m_beforeGC).milliseconds(), "ms END]\n"); 1214 1375 } 1215 1376 1216 1377 { 1217 LockHolder locker(*m_threadLock);1378 auto locker = holdLock(*m_threadLock); 1218 1379 m_requests.removeFirst(); 1219 1380 m_lastServedTicket++; … … 1226 1387 1227 1388 setNeedFinalize(); 1228 resumeTheWorld(); 1229 1389 1230 1390 m_lastGCStartTime = m_currentGCStartTime; 1231 1391 m_lastGCEndTime = MonotonicTime::now(); 1232 } 1233 1234 void Heap::stopTheWorld() 1235 { 1236 RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped); 1237 waitWhileNeedFinalize(); 1238 stopTheMutator(); 1392 1393 return changePhase(conn, CollectorPhase::NotRunning); 1394 } 1395 1396 bool Heap::changePhase(GCConductor conn, CollectorPhase nextPhase) 1397 { 1398 checkConn(conn); 1399 1400 m_nextPhase = nextPhase; 1401 1402 return finishChangingPhase(conn); 1403 } 1404 1405 NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn) 1406 { 1407 checkConn(conn); 1408 1409 if (m_nextPhase == m_currentPhase) 1410 return true; 1411 1412 if (false) 1413 dataLog(conn, ": Going to phase: ", m_nextPhase, " (from ", m_currentPhase, ")\n"); 1414 1415 bool suspendedBefore = worldShouldBeSuspended(m_currentPhase); 1416 bool 
suspendedAfter = worldShouldBeSuspended(m_nextPhase); 1417 1418 if (suspendedBefore != suspendedAfter) { 1419 if (suspendedBefore) { 1420 RELEASE_ASSERT(!suspendedAfter); 1421 1422 resumeThePeriphery(); 1423 if (conn == GCConductor::Collector) 1424 resumeTheMutator(); 1425 else 1426 handleNeedFinalize(); 1427 } else { 1428 RELEASE_ASSERT(!suspendedBefore); 1429 RELEASE_ASSERT(suspendedAfter); 1430 1431 if (conn == GCConductor::Collector) { 1432 waitWhileNeedFinalize(); 1433 if (!stopTheMutator()) { 1434 if (false) 1435 dataLog("Returning false.\n"); 1436 return false; 1437 } 1438 } else { 1439 sanitizeStackForVM(m_vm); 1440 handleNeedFinalize(); 1441 } 1442 stopThePeriphery(conn); 1443 } 1444 } 1445 1446 m_currentPhase = m_nextPhase; 1447 return true; 1448 } 1449 1450 void Heap::stopThePeriphery(GCConductor conn) 1451 { 1452 if (m_collectorBelievesThatTheWorldIsStopped) { 1453 dataLog("FATAL: world already stopped.\n"); 1454 RELEASE_ASSERT_NOT_REACHED(); 1455 } 1239 1456 1240 1457 if (m_mutatorDidRun) … … 1242 1459 1243 1460 m_mutatorDidRun = false; 1244 1461 1245 1462 suspendCompilerThreads(); 1246 1463 m_collectorBelievesThatTheWorldIsStopped = true; … … 1254 1471 { 1255 1472 DeferGCForAWhile awhile(*this); 1256 if (JITWorklist::instance()->completeAllForVM(*m_vm)) 1473 if (JITWorklist::instance()->completeAllForVM(*m_vm) 1474 && conn == GCConductor::Collector) 1257 1475 setGCDidJIT(); 1258 1476 } … … 1267 1485 } 1268 1486 1269 void Heap::resumeTheWorld()1487 NEVER_INLINE void Heap::resumeThePeriphery() 1270 1488 { 1271 1489 // Calling resumeAllocating does the Right Thing depending on whether this is the end of a … … 1278 1496 m_barriersExecuted = 0; 1279 1497 1280 RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped); 1498 if (!m_collectorBelievesThatTheWorldIsStopped) { 1499 dataLog("Fatal: collector does not believe that the world is stopped.\n"); 1500 RELEASE_ASSERT_NOT_REACHED(); 1501 } 1281 1502 m_collectorBelievesThatTheWorldIsStopped = false; 1282 
1503 … … 1316 1537 1317 1538 resumeCompilerThreads(); 1318 resumeTheMutator(); 1319 } 1320 1321 void Heap::stopTheMutator() 1539 } 1540 1541 bool Heap::stopTheMutator() 1322 1542 { 1323 1543 for (;;) { 1324 1544 unsigned oldState = m_worldState.load(); 1325 if ((oldState & stoppedBit) 1326 && (oldState & shouldStopBit)) 1327 return; 1328 1329 // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could 1330 // switch to that if it turned out to be less confusing, but then it would not give the 1331 // mutator the opportunity to react to the world being stopped. 1332 if (oldState & mutatorWaitingBit) { 1333 if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit)) 1334 ParkingLot::unparkAll(&m_worldState); 1545 if (oldState & stoppedBit) { 1546 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1547 RELEASE_ASSERT(!(oldState & mutatorWaitingBit)); 1548 RELEASE_ASSERT(!(oldState & mutatorHasConnBit)); 1549 return true; 1550 } 1551 1552 if (oldState & mutatorHasConnBit) { 1553 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1554 RELEASE_ASSERT(!(oldState & stoppedBit)); 1555 return false; 1556 } 1557 1558 if (!(oldState & hasAccessBit)) { 1559 RELEASE_ASSERT(!(oldState & mutatorHasConnBit)); 1560 RELEASE_ASSERT(!(oldState & mutatorWaitingBit)); 1561 // We can stop the world instantly. 1562 if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit)) 1563 return true; 1335 1564 continue; 1336 1565 } 1337 1566 1338 if (!(oldState & hasAccessBit) 1339 || (oldState & stoppedBit)) { 1340 // We can stop the world instantly. 1341 if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit)) 1342 return; 1343 continue; 1344 } 1345 1567 // Transfer the conn to the mutator and bail. 
1346 1568 RELEASE_ASSERT(oldState & hasAccessBit); 1347 1569 RELEASE_ASSERT(!(oldState & stoppedBit)); 1348 m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit); 1349 m_stopIfNecessaryTimer->scheduleSoon(); 1350 ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit); 1351 } 1352 } 1353 1354 void Heap::resumeTheMutator() 1355 { 1570 unsigned newState = (oldState | mutatorHasConnBit) & ~mutatorWaitingBit; 1571 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1572 if (false) 1573 dataLog("Handed off the conn.\n"); 1574 m_stopIfNecessaryTimer->scheduleSoon(); 1575 ParkingLot::unparkAll(&m_worldState); 1576 return false; 1577 } 1578 } 1579 } 1580 1581 NEVER_INLINE void Heap::resumeTheMutator() 1582 { 1583 if (false) 1584 dataLog("Resuming the mutator.\n"); 1356 1585 for (;;) { 1357 1586 unsigned oldState = m_worldState.load(); 1358 RELEASE_ASSERT(oldState & shouldStopBit); 1359 1360 if (!(oldState & hasAccessBit)) { 1361 // We can resume the world instantly. 1362 if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) { 1363 ParkingLot::unparkAll(&m_worldState); 1364 return; 1365 } 1366 continue; 1367 } 1368 1369 // We can tell the world to resume. 
1370 if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) { 1587 if (!!(oldState & hasAccessBit) != !(oldState & stoppedBit)) { 1588 dataLog("Fatal: hasAccess = ", !!(oldState & hasAccessBit), ", stopped = ", !!(oldState & stoppedBit), "\n"); 1589 RELEASE_ASSERT_NOT_REACHED(); 1590 } 1591 if (oldState & mutatorHasConnBit) { 1592 dataLog("Fatal: mutator has the conn.\n"); 1593 RELEASE_ASSERT_NOT_REACHED(); 1594 } 1595 1596 if (!(oldState & stoppedBit)) { 1597 if (false) 1598 dataLog("Returning because not stopped.\n"); 1599 return; 1600 } 1601 1602 if (m_worldState.compareExchangeWeak(oldState, oldState & ~stoppedBit)) { 1603 if (false) 1604 dataLog("CASing and returning.\n"); 1371 1605 ParkingLot::unparkAll(&m_worldState); 1372 1606 return; … … 1390 1624 { 1391 1625 RELEASE_ASSERT(oldState & hasAccessBit); 1626 RELEASE_ASSERT(!(oldState & stoppedBit)); 1392 1627 1393 1628 // It's possible for us to wake up with finalization already requested but the world not yet 1394 1629 // resumed. If that happens, we can't run finalization yet. 1395 if (!(oldState & stoppedBit) 1396 && handleNeedFinalize(oldState)) 1630 if (handleNeedFinalize(oldState)) 1397 1631 return true; 1398 1399 if (!(oldState & shouldStopBit) && !m_scheduler->shouldStop()) { 1400 if (!(oldState & stoppedBit)) 1401 return false; 1402 m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit); 1403 return true; 1404 } 1405 1406 sanitizeStackForVM(m_vm); 1407 1408 if (verboseStop) { 1409 dataLog("Stopping!\n"); 1410 WTFReportBacktrace(); 1411 } 1412 m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit); 1413 ParkingLot::unparkAll(&m_worldState); 1414 ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit); 1415 return true; 1632 1633 // FIXME: When entering the concurrent phase, we could arrange for this branch not to fire, and then 1634 // have the SlotVisitor do things to the m_worldState to make this branch fire again. 
That would 1635 // prevent us from polling this so much. Ideally, stopIfNecessary would ignore the mutatorHasConnBit 1636 // and there would be some other bit indicating whether we were in some GC phase other than the 1637 // NotRunning or Concurrent ones. 1638 if (oldState & mutatorHasConnBit) 1639 collectInMutatorThread(); 1640 1641 return false; 1642 } 1643 1644 NEVER_INLINE void Heap::collectInMutatorThread() 1645 { 1646 CollectingScope collectingScope(*this); 1647 for (;;) { 1648 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, nullptr); 1649 switch (result) { 1650 case RunCurrentPhaseResult::Finished: 1651 return; 1652 case RunCurrentPhaseResult::Continue: 1653 break; 1654 case RunCurrentPhaseResult::NeedCurrentThreadState: 1655 sanitizeStackForVM(m_vm); 1656 auto lambda = [&] (CurrentThreadState& state) { 1657 for (;;) { 1658 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, &state); 1659 switch (result) { 1660 case RunCurrentPhaseResult::Finished: 1661 return; 1662 case RunCurrentPhaseResult::Continue: 1663 break; 1664 case RunCurrentPhaseResult::NeedCurrentThreadState: 1665 RELEASE_ASSERT_NOT_REACHED(); 1666 break; 1667 } 1668 } 1669 }; 1670 callWithCurrentThreadState(scopedLambda<void(CurrentThreadState&)>(WTFMove(lambda))); 1671 return; 1672 } 1673 } 1416 1674 } 1417 1675 … … 1426 1684 if (!done) { 1427 1685 setMutatorWaiting(); 1686 1428 1687 // At this point, the collector knows that we intend to wait, and he will clear the 1429 1688 // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit … … 1432 1691 } 1433 1692 } 1434 1693 1435 1694 // If we're in a stop-the-world scenario, we need to wait for that even if done is true. 1436 1695 unsigned oldState = m_worldState.load(); … … 1438 1697 continue; 1439 1698 1699 // FIXME: We wouldn't need this if stopIfNecessarySlow() had a mode where it knew to just 1700 // do the collection. 
1701 relinquishConn(); 1702 1440 1703 if (done) { 1441 1704 clearMutatorWaiting(); // Clean up just in case. … … 1454 1717 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1455 1718 1456 if (oldState & shouldStopBit) { 1457 RELEASE_ASSERT(oldState & stoppedBit); 1719 if (oldState & stoppedBit) { 1458 1720 if (verboseStop) { 1459 1721 dataLog("Stopping in acquireAccess!\n"); … … 1471 1733 handleNeedFinalize(); 1472 1734 m_mutatorDidRun = true; 1735 stopIfNecessary(); 1473 1736 return; 1474 1737 } … … 1480 1743 for (;;) { 1481 1744 unsigned oldState = m_worldState.load(); 1482 RELEASE_ASSERT(oldState & hasAccessBit); 1483 RELEASE_ASSERT(!(oldState & stoppedBit)); 1745 if (!(oldState & hasAccessBit)) { 1746 dataLog("FATAL: Attempting to release access but the mutator does not have access.\n"); 1747 RELEASE_ASSERT_NOT_REACHED(); 1748 } 1749 if (oldState & stoppedBit) { 1750 dataLog("FATAL: Attempting to release access but the mutator is stopped.\n"); 1751 RELEASE_ASSERT_NOT_REACHED(); 1752 } 1484 1753 1485 1754 if (handleNeedFinalize(oldState)) 1486 1755 continue; 1487 1756 1488 if (oldState & shouldStopBit) { 1489 unsigned newState = (oldState & ~hasAccessBit) | stoppedBit; 1490 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1491 ParkingLot::unparkAll(&m_worldState); 1492 return; 1493 } 1494 continue; 1495 } 1496 1497 RELEASE_ASSERT(!(oldState & shouldStopBit)); 1498 1499 if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit)) 1757 unsigned newState = oldState & ~(hasAccessBit | mutatorHasConnBit); 1758 1759 if ((oldState & mutatorHasConnBit) 1760 && m_nextPhase != m_currentPhase) { 1761 // This means that the collector thread had given us the conn so that we would do something 1762 // for it. Stop ourselves as we release access. This ensures that acquireAccess blocks. In 1763 // the meantime, since we're handing the conn over, the collector will be awoken and it is 1764 // sure to have work to do. 
1765 newState |= stoppedBit; 1766 } 1767 1768 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1769 if (oldState & mutatorHasConnBit) 1770 finishRelinquishingConn(); 1500 1771 return; 1501 } 1772 } 1773 } 1774 } 1775 1776 bool Heap::relinquishConn(unsigned oldState) 1777 { 1778 RELEASE_ASSERT(oldState & hasAccessBit); 1779 RELEASE_ASSERT(!(oldState & stoppedBit)); 1780 1781 if (!(oldState & mutatorHasConnBit)) 1782 return false; // Done. 1783 1784 if (m_threadShouldStop) 1785 return false; 1786 1787 if (!m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorHasConnBit)) 1788 return true; // Loop around. 1789 1790 finishRelinquishingConn(); 1791 return true; 1792 } 1793 1794 void Heap::finishRelinquishingConn() 1795 { 1796 if (false) 1797 dataLog("Relinquished the conn.\n"); 1798 1799 sanitizeStackForVM(m_vm); 1800 1801 auto locker = holdLock(*m_threadLock); 1802 if (!m_requests.isEmpty()) 1803 m_threadCondition->notifyOne(locker); 1804 ParkingLot::unparkAll(&m_worldState); 1805 } 1806 1807 void Heap::relinquishConn() 1808 { 1809 while (relinquishConn(m_worldState.load())) { } 1502 1810 } 1503 1811 … … 1514 1822 } 1515 1823 1516 bool Heap::handleNeedFinalize(unsigned oldState)1824 NEVER_INLINE bool Heap::handleNeedFinalize(unsigned oldState) 1517 1825 { 1518 1826 RELEASE_ASSERT(oldState & hasAccessBit); … … 1581 1889 } 1582 1890 1583 void Heap::notifyThreadStopping(const LockHolder&)1891 void Heap::notifyThreadStopping(const AbstractLocker&) 1584 1892 { 1585 1893 m_threadIsStopping = true; … … 1593 1901 if (Options::logGC()) { 1594 1902 before = MonotonicTime::now(); 1595 dataLog("[GC : finalize ");1903 dataLog("[GC<", RawPointer(this), ">: finalize "); 1596 1904 } 1597 1905 1598 1906 { 1599 HelpingGCScope helpingGCScope(*this);1907 SweepingScope helpingGCScope(*this); 1600 1908 deleteUnmarkedCompiledCode(); 1601 1909 deleteSourceProviderCaches(); … … 1605 1913 if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache()) 1606 1914 cache->clear(); 
1607 1915 1608 1916 if (Options::sweepSynchronously()) 1609 1917 sweepSynchronously(); … … 1623 1931 1624 1932 LockHolder locker(*m_threadLock); 1933 // We may be able to steal the conn. That only works if the collector is definitely not running 1934 // right now. This is an optimization that prevents the collector thread from ever starting in most 1935 // cases. 1936 ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 1937 if (m_lastServedTicket == m_lastGrantedTicket) { 1938 if (false) 1939 dataLog("Taking the conn.\n"); 1940 m_worldState.exchangeOr(mutatorHasConnBit); 1941 } 1942 1625 1943 m_requests.append(scope); 1626 1944 m_lastGrantedTicket++; 1627 m_threadCondition->notifyOne(locker); 1945 if (!(m_worldState.load() & mutatorHasConnBit)) 1946 m_threadCondition->notifyOne(locker); 1628 1947 return m_lastGrantedTicket; 1629 1948 } … … 1632 1951 { 1633 1952 waitForCollector( 1634 [&] (const LockHolder&) -> bool {1953 [&] (const AbstractLocker&) -> bool { 1635 1954 return m_lastServedTicket >= ticket; 1636 1955 }); … … 1771 2090 dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n"); 1772 2091 1773 if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())1774 HeapStatistics::exitWithFailure();1775 1776 2092 if (m_collectionScope == CollectionScope::Full) { 1777 2093 // To avoid pathological GC churn in very small and very large heaps, we set … … 1826 2142 } 1827 2143 1828 void Heap::didFinishCollection( double gcStartTime)1829 { 1830 double gcEndTime = WTF::monotonicallyIncreasingTime();2144 void Heap::didFinishCollection() 2145 { 2146 m_afterGC = MonotonicTime::now(); 1831 2147 CollectionScope scope = *m_collectionScope; 1832 2148 if (scope == CollectionScope::Full) 1833 m_lastFullGCLength = gcEndTime - gcStartTime;2149 m_lastFullGCLength = m_afterGC - m_beforeGC; 1834 2150 else 1835 m_lastEdenGCLength = gcEndTime - gcStartTime;2151 m_lastEdenGCLength = m_afterGC - m_beforeGC; 1836 2152 1837 2153 #if 
ENABLE(RESOURCE_USAGE) 1838 2154 ASSERT(externalMemorySize() <= extraMemorySize()); 1839 2155 #endif 1840 1841 if (Options::recordGCPauseTimes())1842 HeapStatistics::recordGCPauseTime(gcStartTime, gcEndTime);1843 1844 if (Options::dumpObjectStatistics())1845 HeapStatistics::dumpObjectStatistics(this);1846 2156 1847 2157 if (HeapProfiler* heapProfiler = m_vm->heapProfiler()) { … … 2071 2381 if (!m_isSafeToCollect) 2072 2382 return; 2073 if (mutatorState() == MutatorState::HelpingGC) 2383 switch (mutatorState()) { 2384 case MutatorState::Running: 2385 case MutatorState::Allocating: 2386 break; 2387 case MutatorState::Sweeping: 2388 case MutatorState::Collecting: 2074 2389 return; 2390 } 2075 2391 if (!Options::useGC()) 2076 2392 return; … … 2081 2397 else if (isDeferred()) 2082 2398 m_didDeferGCWork = true; 2083 else {2399 else 2084 2400 stopIfNecessary(); 2085 // FIXME: Check if the scheduler wants us to stop.2086 // https://bugs.webkit.org/show_bug.cgi?id=1668272087 }2088 2401 } 2089 2402 … … 2100 2413 else if (isDeferred()) 2101 2414 m_didDeferGCWork = true; 2102 else 2415 else { 2103 2416 collectAsync(); 2417 stopIfNecessary(); // This will immediately start the collection if we have the conn. 2418 } 2104 2419 } 2105 2420 … … 2304 2619 void Heap::notifyIsSafeToCollect() 2305 2620 { 2621 MonotonicTime before; 2622 if (Options::logGC()) { 2623 before = MonotonicTime::now(); 2624 dataLog("[GC<", RawPointer(this), ">: starting "); 2625 } 2626 2306 2627 addCoreConstraints(); 2307 2628 … … 2338 2659 }); 2339 2660 } 2661 2662 if (Options::logGC()) 2663 dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n"); 2340 2664 } 2341 2665 … … 2350 2674 // Wait for all collections to finish. 
2351 2675 waitForCollector( 2352 [&] (const LockHolder&) -> bool {2676 [&] (const AbstractLocker&) -> bool { 2353 2677 ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 2354 2678 return m_lastServedTicket == m_lastGrantedTicket; … … 2403 2727 targetBytes = std::min(targetBytes, Options::gcIncrementMaxBytes()); 2404 2728 2405 MonotonicTime before;2406 if (Options::logGC()) {2407 dataLog("[GC: increment t=", targetBytes / 1024, "kb ");2408 before = MonotonicTime::now();2409 }2410 2411 2729 SlotVisitor& slotVisitor = *m_mutatorSlotVisitor; 2412 2730 ParallelModeEnabler parallelModeEnabler(slotVisitor); … … 2414 2732 // incrementBalance may go negative here because it'll remember how many bytes we overshot. 2415 2733 m_incrementBalance -= bytesVisited; 2416 2417 if (Options::logGC()) {2418 MonotonicTime after = MonotonicTime::now();2419 dataLog("p=", (after - before).milliseconds(), "ms b=", m_incrementBalance / 1024, "kb]\n");2420 }2421 2734 } 2422 2735 -
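The `requestCollection()` hunk above adds a "steal the conn" optimization: if every previously granted ticket has already been served, the collector thread is idle, so the mutator takes the conn and drives the collection itself rather than waking the thread. A minimal single-threaded sketch of that ticket bookkeeping (the names mirror the patch, but `TicketedHeap` is illustrative, not the real JSC `Heap`):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative model of the ticket/conn bookkeeping, not the real JSC Heap.
constexpr unsigned mutatorHasConnBit = 1u << 0;

struct TicketedHeap {
    std::atomic<unsigned> worldState { 0 };
    uint64_t lastServedTicket { 0 };
    uint64_t lastGrantedTicket { 0 };
    bool collectorNotified { false };

    uint64_t requestCollection() {
        // If the collector has served every granted ticket, it is idle, so
        // the mutator can take the conn and run the GC itself.
        if (lastServedTicket == lastGrantedTicket)
            worldState.fetch_or(mutatorHasConnBit);
        lastGrantedTicket++;
        // Only wake the collector thread if the mutator did not take the conn.
        if (!(worldState.load() & mutatorHasConnBit))
            collectorNotified = true;
        return lastGrantedTicket;
    }
};
```

In the idle case the collector thread is never notified, which is what "prevents the collector thread from ever starting in most cases".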
trunk/Source/JavaScriptCore/heap/Heap.h
r212616 r212778 25 25 #include "CellState.h" 26 26 #include "CollectionScope.h" 27 #include "CollectorPhase.h" 27 28 #include "DeleteAllCodeEffort.h" 29 #include "GCConductor.h" 28 30 #include "GCIncomingRefCountedSet.h" 29 31 #include "HandleSet.h" … … 31 33 #include "HeapObserver.h" 32 34 #include "ListableHandler.h" 33 #include "MachineStackMarker.h"34 35 #include "MarkedBlock.h" 35 36 #include "MarkedBlockSet.h" … … 54 55 class CodeBlock; 55 56 class CodeBlockSet; 57 class CollectingScope; 58 class ConservativeRoots; 56 59 class GCDeferralContext; 57 60 class EdenGCActivityCallback; … … 63 66 class HeapProfiler; 64 67 class HeapVerifier; 65 class HelpingGCScope;66 68 class IncrementalSweeper; 67 69 class JITStubRoutine; … … 70 72 class JSValue; 71 73 class LLIntOffsetsExtractor; 74 class MachineThreads; 72 75 class MarkStackArray; 73 76 class MarkedAllocator; … … 76 79 class MarkingConstraintSet; 77 80 class MutatorScheduler; 81 class RunningScope; 78 82 class SlotVisitor; 79 83 class SpaceTimeMutatorScheduler; 80 84 class StopIfNecessaryTimer; 85 class SweepingScope; 81 86 class VM; 87 struct CurrentThreadState; 82 88 83 89 namespace DFG { … … 132 138 133 139 MarkedSpace& objectSpace() { return m_objectSpace; } 134 MachineThreads& machineThreads() { return m_machineThreads; }140 MachineThreads& machineThreads() { return *m_machineThreads; } 135 141 136 142 SlotVisitor& collectorSlotVisitor() { return *m_collectorSlotVisitor; } … … 148 154 std::optional<CollectionScope> collectionScope() const { return m_collectionScope; } 149 155 bool hasHeapAccess() const; 150 bool mutatorIsStopped() const;151 156 bool collectorBelievesThatTheWorldIsStopped() const; 152 157 … … 230 235 void didFinishIterating(); 231 236 232 doublelastFullGCLength() const { return m_lastFullGCLength; }233 doublelastEdenGCLength() const { return m_lastEdenGCLength; }234 void increaseLastFullGCLength( doubleamount) { m_lastFullGCLength += amount; }237 Seconds lastFullGCLength() const { return 
m_lastFullGCLength; } 238 Seconds lastEdenGCLength() const { return m_lastEdenGCLength; } 239 void increaseLastFullGCLength(Seconds amount) { m_lastFullGCLength += amount; } 235 240 236 241 size_t sizeBeforeLastEdenCollection() const { return m_sizeBeforeLastEdenCollect; } … … 320 325 void stopIfNecessary(); 321 326 327 // This gives the conn to the collector. 328 void relinquishConn(); 329 322 330 bool mayNeedToStop(); 323 331 … … 344 352 JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef); 345 353 #endif // USE(CF) 346 354 347 355 private: 348 356 friend class AllocatingScope; 349 357 friend class CodeBlock; 358 friend class CollectingScope; 350 359 friend class DeferGC; 351 360 friend class DeferGCForAWhile; … … 356 365 friend class HeapUtil; 357 366 friend class HeapVerifier; 358 friend class HelpingGCScope;359 367 friend class JITStubRoutine; 360 368 friend class LLIntOffsetsExtractor; … … 362 370 friend class MarkedAllocator; 363 371 friend class MarkedBlock; 372 friend class RunningScope; 364 373 friend class SlotVisitor; 365 374 friend class SpaceTimeMutatorScheduler; 366 375 friend class StochasticSpaceTimeMutatorScheduler; 376 friend class SweepingScope; 367 377 friend class IncrementalSweeper; 368 378 friend class HeapStatistics; … … 383 393 JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t); 384 394 385 bool shouldCollectInThread(const LockHolder&); 386 void collectInThread(); 387 388 void stopTheWorld(); 389 void resumeTheWorld(); 390 391 void stopTheMutator(); 395 bool shouldCollectInCollectorThread(const AbstractLocker&); 396 void collectInCollectorThread(); 397 398 void checkConn(GCConductor); 399 400 enum class RunCurrentPhaseResult { 401 Finished, 402 Continue, 403 NeedCurrentThreadState 404 }; 405 RunCurrentPhaseResult runCurrentPhase(GCConductor, CurrentThreadState*); 406 407 // Returns true if we should keep doing things. 
408 bool runNotRunningPhase(GCConductor); 409 bool runBeginPhase(GCConductor); 410 bool runFixpointPhase(GCConductor); 411 bool runConcurrentPhase(GCConductor); 412 bool runReloopPhase(GCConductor); 413 bool runEndPhase(GCConductor); 414 bool changePhase(GCConductor, CollectorPhase); 415 bool finishChangingPhase(GCConductor); 416 417 void collectInMutatorThread(); 418 419 void stopThePeriphery(GCConductor); 420 void resumeThePeriphery(); 421 422 // Returns true if the mutator is stopped, false if the mutator has the conn now. 423 bool stopTheMutator(); 392 424 void resumeTheMutator(); 393 425 … … 402 434 403 435 bool handleGCDidJIT(unsigned); 436 void handleGCDidJIT(); 437 404 438 bool handleNeedFinalize(unsigned); 405 void handleGCDidJIT();406 439 void handleNeedFinalize(); 440 441 bool relinquishConn(unsigned); 442 void finishRelinquishingConn(); 407 443 408 444 void setGCDidJIT(); … … 412 448 void setMutatorWaiting(); 413 449 void clearMutatorWaiting(); 414 void notifyThreadStopping(const LockHolder&);450 void notifyThreadStopping(const AbstractLocker&); 415 451 416 452 typedef uint64_t Ticket; … … 422 458 void prepareForMarking(); 423 459 424 void markToFixpoint(double gcStartTime);425 460 void gatherStackRoots(ConservativeRoots&); 426 461 void gatherJSStackRoots(ConservativeRoots&); … … 429 464 void visitCompilerWorklistWeakReferences(); 430 465 void removeDeadCompilerWorklistEntries(); 431 void updateObjectCounts( double gcStartTime);466 void updateObjectCounts(); 432 467 void endMarking(); 433 468 … … 444 479 JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*); 445 480 void updateAllocationLimits(); 446 void didFinishCollection( double gcStartTime);481 void didFinishCollection(); 447 482 void resumeCompilerThreads(); 448 483 void gatherExtraHeapSnapshotData(HeapProfiler&); … … 511 546 std::unique_ptr<HashSet<MarkedArgumentBuffer*>> m_markListSet; 512 547 513 MachineThreadsm_machineThreads;548 std::unique_ptr<MachineThreads> m_machineThreads; 514 549 515 
550 std::unique_ptr<SlotVisitor> m_collectorSlotVisitor; … … 545 580 546 581 VM* m_vm; 547 doublem_lastFullGCLength;548 doublem_lastEdenGCLength;582 Seconds m_lastFullGCLength; 583 Seconds m_lastEdenGCLength; 549 584 550 585 Vector<ExecutableBase*> m_executables; … … 602 637 std::unique_ptr<MutatorScheduler> m_scheduler; 603 638 604 static const unsigned shouldStopBit = 1u << 0u;605 static const unsigned stoppedBit = 1u << 1u; 639 static const unsigned mutatorHasConnBit = 1u << 0u; // Must also be protected by threadLock. 640 static const unsigned stoppedBit = 1u << 1u; // Only set when !hasAccessBit 606 641 static const unsigned hasAccessBit = 1u << 2u; 607 642 static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid. … … 610 645 Atomic<unsigned> m_worldState; 611 646 bool m_collectorBelievesThatTheWorldIsStopped { false }; 647 MonotonicTime m_beforeGC; 648 MonotonicTime m_afterGC; 612 649 MonotonicTime m_stopTime; 613 650 … … 615 652 Ticket m_lastServedTicket { 0 }; 616 653 Ticket m_lastGrantedTicket { 0 }; 654 CollectorPhase m_currentPhase { CollectorPhase::NotRunning }; 655 CollectorPhase m_nextPhase { CollectorPhase::NotRunning }; 617 656 bool m_threadShouldStop { false }; 618 657 bool m_threadIsStopping { false }; … … 633 672 634 673 uintptr_t m_barriersExecuted { 0 }; 674 675 CurrentThreadState* m_currentThreadState { nullptr }; 635 676 }; 636 677 -
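The reworked `m_worldState` bits above drive the new `stopTheMutator()` contract: it returns true if the mutator ends up stopped and false if the mutator has been handed the conn. A compilable sketch of that CAS state machine, using the bit values from the patch (the `mutatorWaitingBit` value is an assumption, since its definition is outside this hunk):

```cpp
#include <atomic>
#include <cassert>

// Illustrative re-creation of the new stopTheMutator() state machine; the
// bit names match the patch, the surrounding class does not.
constexpr unsigned mutatorHasConnBit  = 1u << 0;
constexpr unsigned stoppedBit         = 1u << 1; // only set when !hasAccessBit
constexpr unsigned hasAccessBit       = 1u << 2;
constexpr unsigned mutatorWaitingBit  = 1u << 4; // value is an assumption

// Returns true if the mutator is stopped, false if it now holds the conn.
inline bool stopTheMutatorSketch(std::atomic<unsigned>& worldState)
{
    for (;;) {
        unsigned oldState = worldState.load();
        if (oldState & stoppedBit)
            return true;                  // Already stopped.
        if (oldState & mutatorHasConnBit)
            return false;                 // Mutator is driving the GC.
        if (!(oldState & hasAccessBit)) {
            // No heap access: we can stop the world instantly.
            if (worldState.compare_exchange_weak(oldState, oldState | stoppedBit))
                return true;
            continue;
        }
        // Mutator has access: transfer the conn instead of parking it.
        unsigned newState = (oldState | mutatorHasConnBit) & ~mutatorWaitingBit;
        if (worldState.compare_exchange_weak(oldState, newState))
            return false;
    }
}
```

This matches the three cases in the `Heap::stopTheMutator()` hunk: stopped, conn already transferred, and instant stop when the mutator lacks heap access.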
trunk/Source/JavaScriptCore/heap/HeapInlines.h
r212616 r212778 62 62 } 63 63 64 inline bool Heap::mutatorIsStopped() const65 {66 unsigned state = m_worldState.load();67 bool shouldStop = state & shouldStopBit;68 bool stopped = state & stoppedBit;69 // I only got it right when I considered all four configurations of shouldStop/stopped:70 // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we71 // should return false.72 // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested73 // that we resume, so we should return false.74 // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.75 // The mutator should be able to do whatever it wants in this state, as if we were not76 // stopped. So return false.77 // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is78 // stopped, so return true.79 return shouldStop & stopped;80 }81 82 64 inline bool Heap::collectorBelievesThatTheWorldIsStopped() const 83 65 { -
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
r212616 r212778 98 98 } 99 99 100 void IncrementalSweeper:: willFinishSweeping()100 void IncrementalSweeper::stopSweeping() 101 101 { 102 102 m_currentAllocator = nullptr; -
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.h
r212616 r212778 38 38 JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*); 39 39 40 void startSweeping();40 JS_EXPORT_PRIVATE void startSweeping(); 41 41 42 42 JS_EXPORT_PRIVATE void doWork() override; 43 43 bool sweepNextBlock(); 44 void willFinishSweeping();44 JS_EXPORT_PRIVATE void stopSweeping(); 45 45 46 46 private: -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp
r212616 r212778 1 1 /* 2 * Copyright (C) 2003-20 09, 2015-2016Apple Inc. All rights reserved.2 * Copyright (C) 2003-2017 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2007 Eric Seidel <eric@webkit.org> 4 4 * Copyright (C) 2009 Acision BV. All rights reserved. … … 312 312 delete t; 313 313 } 314 } 315 316 SUPPRESS_ASAN 317 void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState& currentThreadState) 318 { 319 if (currentThreadState.registerState) { 320 void* registersBegin = currentThreadState.registerState; 321 void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(currentThreadState.registerState + 1))); 322 conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks); 323 } 324 325 conservativeRoots.add(currentThreadState.stackTop, currentThreadState.stackOrigin, jitStubRoutines, codeBlocks); 314 326 } 315 327 … … 1021 1033 } 1022 1034 1023 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks) 1024 { 1035 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState) 1036 { 1037 if (currentThreadState) 1038 gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, *currentThreadState); 1039 1025 1040 size_t size; 1026 1041 size_t capacity = 0; … … 1037 1052 } 1038 1053 1054 NEVER_INLINE int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>& lambda) 1055 { 1056 DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(state); 1057 lambda(state); 1058 return 42; // Suppress tail call optimization. 1059 } 1060 1039 1061 } // namespace JSC -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.h
r212616 r212778 2 2 * Copyright (C) 1999-2000 Harri Porten (porten@kde.org) 3 3 * Copyright (C) 2001 Peter Kelly (pmk@post.com) 4 * Copyright (C) 2003-20 09, 2015-2016Apple Inc. All rights reserved.4 * Copyright (C) 2003-2017 Apple Inc. All rights reserved. 5 5 * 6 6 * This library is free software; you can redistribute it and/or … … 22 22 #pragma once 23 23 24 #include <setjmp.h>24 #include "RegisterState.h" 25 25 #include <wtf/Lock.h> 26 26 #include <wtf/Noncopyable.h> 27 #include <wtf/ScopedLambda.h> 27 28 #include <wtf/ThreadSpecific.h> 28 29 … … 58 59 class JITStubRoutineSet; 59 60 61 struct CurrentThreadState { 62 void* stackOrigin { nullptr }; 63 void* stackTop { nullptr }; 64 RegisterState* registerState { nullptr }; 65 }; 66 60 67 class MachineThreads { 61 68 WTF_MAKE_NONCOPYABLE(MachineThreads); 62 69 public: 63 typedef jmp_buf RegisterState;64 65 70 MachineThreads(Heap*); 66 71 ~MachineThreads(); 67 72 68 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet& );73 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*); 69 74 70 75 JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads. … … 146 151 147 152 private: 153 void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState&); 154 148 155 void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*); 149 156 bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*); … … 162 169 }; 163 170 171 #define DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(stateName) \ 172 CurrentThreadState stateName; \ 173 stateName.stackTop = &stateName; \ 174 stateName.stackOrigin = wtfThreadData().stack().origin(); \ 175 ALLOCATE_AND_GET_REGISTER_STATE(stateName ## _registerState); \ 176 stateName.registerState = &stateName ## _registerState 177 178 // The return value is meaningless. 
We just use it to suppress tail call optimization. 179 int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>&); 180 164 181 } // namespace JSC 165 182 166 #if COMPILER(GCC_OR_CLANG)167 #define REGISTER_BUFFER_ALIGNMENT __attribute__ ((aligned (sizeof(void*))))168 #else169 #define REGISTER_BUFFER_ALIGNMENT170 #endif171 172 // ALLOCATE_AND_GET_REGISTER_STATE() is a macro so that it is always "inlined" even in debug builds.173 #if COMPILER(MSVC)174 #pragma warning(push)175 #pragma warning(disable: 4611)176 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \177 MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \178 setjmp(registers)179 #pragma warning(pop)180 #else181 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \182 MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \183 setjmp(registers)184 #endif -
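The `DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE` macro above spills callee-saved registers into a `setjmp` buffer so the conservative scan can treat them as potential roots, and `callWithCurrentThreadState()` returns a dummy value to keep its frame (and thus the captured state) live. A simplified sketch of that pattern, with an illustrative struct standing in for `CurrentThreadState`:

```cpp
#include <cassert>
#include <csetjmp>

// Illustrative version of the current-thread capture in the patch: setjmp
// spills register contents into a buffer that a conservative root scan could
// walk as raw words. ThreadStateSketch loosely mirrors CurrentThreadState.
struct ThreadStateSketch {
    void* stackTop { nullptr };
    jmp_buf registers;
};

// Like callWithCurrentThreadState(): the meaningless return value keeps the
// compiler from tail-calling the functor, so the captured frame stays live.
template<typename Func>
int callWithThreadStateSketch(const Func& func)
{
    ThreadStateSketch state;
    state.stackTop = &state;       // approximate top of this thread's stack
    setjmp(state.registers);       // capture register contents; never longjmp'd
    func(state);
    return 42;                     // suppresses tail-call optimization
}
```

The real macro additionally records the stack origin from `wtfThreadData()`; this sketch only shows the register-capture and tail-call-suppression ideas.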
trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp
r212616 r212778 217 217 m_heap->collectIfNecessaryOrDefer(deferralContext); 218 218 219 // Goofy corner case: the GC called a callback and now this allocator has a currentBlock. This only 220 // happens when running WebKit tests, which inject a callback into the GC's finalization. 221 if (UNLIKELY(m_currentBlock)) { 222 if (crashOnFailure) 223 return allocate(deferralContext); 224 return tryAllocate(deferralContext); 225 } 226 219 227 void* result = tryAllocateWithoutCollecting(); 220 228 -
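The "goofy corner case" hunk above handles a GC callback (injected by WebKit tests into finalization) that leaves the allocator with a `currentBlock` mid-slow-path, in which case the slow path must retry the fast path instead of carving a fresh block. A toy model of that restart logic (names are illustrative, not the `MarkedAllocator` API):

```cpp
#include <cassert>
#include <functional>

// Toy model of the slow-path restart above: a GC callback (here, `gcHook`)
// may install a current block mid-slow-path, and the allocator must then
// retry the fast path rather than carve a fresh block.
struct ToyAllocator {
    int* currentBlock { nullptr };
    std::function<void(ToyAllocator&)> gcHook;
    int freshBlocksCarved { 0 };

    int* allocateSlow() {
        if (gcHook)
            gcHook(*this);           // like collectIfNecessaryOrDefer() running
                                     // a test-injected finalization callback
        if (currentBlock)
            return currentBlock;     // goofy corner case: retry the fast path
        freshBlocksCarved++;
        static int fresh;
        return &fresh;
    }
};
```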
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
r212616 r212778 27 27 #include "MarkedBlock.h" 28 28 29 #include "HelpingGCScope.h"30 29 #include "JSCell.h" 31 30 #include "JSDestructibleObject.h" … … 33 32 #include "MarkedBlockInlines.h" 34 33 #include "SuperSampler.h" 34 #include "SweepingScope.h" 35 35 36 36 namespace JSC { … … 410 410 FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode) 411 411 { 412 // FIXME: Maybe HelpingGCScope should just be called SweepScope? 413 HelpingGCScope helpingGCScope(*heap()); 412 SweepingScope sweepingScope(*heap()); 414 413 415 414 m_allocator->setIsUnswept(NoLockingNecessary, this, false); -
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
r212616 r212778 226 226 void MarkedSpace::sweep() 227 227 { 228 m_heap->sweeper()-> willFinishSweeping();228 m_heap->sweeper()->stopSweeping(); 229 229 forEachAllocator( 230 230 [&] (MarkedAllocator& allocator) -> IterationStatus { -
trunk/Source/JavaScriptCore/heap/MutatorState.cpp
r212616 r212778 42 42 out.print("Allocating"); 43 43 return; 44 case MutatorState::HelpingGC: 45 out.print("HelpingGC"); 44 case MutatorState::Sweeping: 45 out.print("Sweeping"); 46 return; 47 case MutatorState::Collecting: 48 out.print("Collecting"); 46 49 return; 47 50 } -
trunk/Source/JavaScriptCore/heap/MutatorState.h
r212616 r212778 35 35 Allocating, 36 36 37 // The mutator was asked by the GC to do some work. 38 HelpingGC 37 // The mutator is sweeping. 38 Sweeping, 39 40 // The mutator is collecting. 41 Collecting 39 42 }; 40 43 -
trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp
r212616 r212778 39 39 #include "JSCInlines.h" 40 40 #include "SlotVisitorInlines.h" 41 #include "StopIfNecessaryTimer.h" 41 42 #include "SuperSampler.h" 42 43 #include "VM.h" … … 76 77 #endif 77 78 78 SlotVisitor::SlotVisitor(Heap& heap )79 SlotVisitor::SlotVisitor(Heap& heap, CString codeName) 79 80 : m_bytesVisited(0) 80 81 , m_visitCount(0) … … 82 83 , m_markingVersion(MarkedSpace::initialVersion) 83 84 , m_heap(heap) 85 , m_codeName(codeName) 84 86 #if !ASSERT_DISABLED 85 87 , m_isCheckingForDefaultMarkViolation(false) … … 470 472 } 471 473 472 void SlotVisitor::drain(MonotonicTime timeout) 473 { 474 RELEASE_ASSERT(m_isInParallelMode); 474 NEVER_INLINE void SlotVisitor::drain(MonotonicTime timeout) 475 { 476 if (!m_isInParallelMode) { 477 dataLog("FATAL: attempting to drain when not in parallel mode.\n"); 478 RELEASE_ASSERT_NOT_REACHED(); 479 } 475 480 476 481 auto locker = holdLock(m_rightToRun); … … 582 587 } 583 588 584 SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)589 NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout) 585 590 { 586 591 ASSERT(m_isInParallelMode); … … 617 622 return SharedDrainResult::TimedOut; 618 623 619 if (didReachTermination(locker)) 624 if (didReachTermination(locker)) { 620 625 m_heap.m_markingConditionVariable.notifyAll(); 626 627 // If we're in concurrent mode, then we know that the mutator will eventually do 628 // the right thing because: 629 // - It's possible that the collector has the conn. In that case, the collector will 630 // wake up from the notification above. This will happen if the app released heap 631 // access. Native apps can spend a lot of time with heap access released. 632 // - It's possible that the mutator will allocate soon. Then it will check if we 633 // reached termination. This is the most likely outcome in programs that allocate 634 // a lot. 
635 // - WebCore never releases access. But WebCore has a runloop. The runloop will check 636 // if we reached termination. 637 // So, this tells the runloop that it's got things to do. 638 m_heap.m_stopIfNecessaryTimer->scheduleSoon(); 639 } 621 640 622 641 auto isReady = [&] () -> bool { … … 660 679 ASSERT(Options::numberOfGCMarkers()); 661 680 662 if (!m_heap.hasHeapAccess() 681 if (Options::numberOfGCMarkers() == 1 682 || (m_heap.m_worldState.load() & Heap::mutatorWaitingBit) 683 || !m_heap.hasHeapAccess() 663 684 || m_heap.collectorBelievesThatTheWorldIsStopped()) { 664 685 // This is an optimization over drainInParallel() when we have a concurrent mutator but … … 685 706 void SlotVisitor::donateAll() 686 707 { 708 if (isEmpty()) 709 return; 710 687 711 donateAll(holdLock(m_heap.m_markingMutex)); 688 712 } … … 756 780 void SlotVisitor::donate() 757 781 { 758 ASSERT(m_isInParallelMode); 782 if (!m_isInParallelMode) { 783 dataLog("FATAL: Attempting to donate when not in parallel mode.\n"); 784 RELEASE_ASSERT_NOT_REACHED(); 785 } 786 759 787 if (Options::numberOfGCMarkers() == 1) 760 788 return; -
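The comment above hinges on the termination check in parallel draining: marking is only done when no marker is active and the shared mark stack is empty, and the last drainer must both wake waiters and poke the mutator's stop-if-necessary timer so a conn-holding mutator notices. A toy model of that accounting (not the `SlotVisitor` API):

```cpp
#include <cassert>
#include <vector>

// Toy model of the parallel-drain termination check: done only when no
// marker is active and the shared mark stack is empty. Names are
// illustrative.
struct MarkingStateSketch {
    int activeMarkers { 0 };
    std::vector<int> sharedStack;
    bool stopIfNecessaryTimerScheduled { false };

    bool didReachTermination() const {
        return !activeMarkers && sharedStack.empty();
    }

    // What the last drainer does: on reaching termination, schedule the
    // mutator's stop-if-necessary timer so a conn-holding mutator reacts.
    void finishDraining() {
        activeMarkers--;
        if (didReachTermination())
            stopIfNecessaryTimerScheduled = true;
    }
};
```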
trunk/Source/JavaScriptCore/heap/SlotVisitor.h
r212616 r212778 57 57 58 58 public: 59 SlotVisitor(Heap& );59 SlotVisitor(Heap&, CString codeName); 60 60 ~SlotVisitor(); 61 61 … … 168 168 169 169 void donateAll(); 170 171 const char* codeName() const { return m_codeName.data(); } 170 172 171 173 private: … … 228 230 bool m_canOptimizeForStoppedMutator { false }; 229 231 Lock m_rightToRun; 232 233 CString m_codeName; 230 234 231 235 public: -
trunk/Source/JavaScriptCore/heap/StochasticSpaceTimeMutatorScheduler.cpp
r212616 r212778
 77  77      Options::concurrentGCMaxHeadroom() *
 78  78      std::max<double>(m_bytesAllocatedThisCycleAtTheBeginning, m_heap.m_maxEdenSize);
     79  +
     80  +    if (Options::logGC())
     81  +        dataLog("ca=", m_bytesAllocatedThisCycleAtTheBeginning / 1024, "kb h=", (m_bytesAllocatedThisCycleAtTheEnd - m_bytesAllocatedThisCycleAtTheBeginning) / 1024, "kb ");
     82  +
 79  83      m_beforeConstraints = MonotonicTime::now();
 80  84  }
… …
118 122
119 123      double resumeProbability = mutatorUtilization(snapshot);
    124  +    if (resumeProbability < Options::epsilonMutatorUtilization()) {
    125  +        m_plannedResumeTime = MonotonicTime::infinity();
    126  +        return;
    127  +    }
    128  +
120 129      bool shouldResume = m_random.get() < resumeProbability;
121 130
… …
136 145          return MonotonicTime::now();
137 146      case Resumed: {
138      -        // Once we're running, we keep going.
139      -        // FIXME: Maybe force stop when we run out of headroom?
    147  +        // Once we're running, we keep going unless we run out of headroom.
    148  +        Snapshot snapshot(*this);
    149  +        if (mutatorUtilization(snapshot) < Options::epsilonMutatorUtilization())
    150  +            return MonotonicTime::now();
140 151          return MonotonicTime::infinity();
141 152      } }
-
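The scheduler change above floors the stochastic resume decision: below an epsilon mutator utilization, the scheduler stays stopped instead of resuming with a vanishing probability and immediately re-stopping. A simplified sketch of that decision (function name and the "delay" return convention are illustrative, not JSC's actual API; the epsilon default matches the new `epsilonMutatorUtilization` Option):

```cpp
#include <limits>

// Sketch of the resume decision. The scheduler resumes the mutator with
// probability equal to its target utilization, except that utilizations below
// epsilon mean "never resume this cycle" (planned resume time = infinity).
constexpr double epsilonMutatorUtilization = 0.01; // default of the new Option

double plannedResumeDelay(double mutatorUtilization, double randomInUnitInterval)
{
    if (mutatorUtilization < epsilonMutatorUtilization)
        return std::numeric_limits<double>::infinity(); // stay stopped
    bool shouldResume = randomInUnitInterval < mutatorUtilization;
    return shouldResume ? 0.0 : std::numeric_limits<double>::infinity();
}
```

The same epsilon is applied in the `Resumed` case: once running, the mutator is forced to stop again as soon as its utilization drops below the floor, which is the "run out of headroom" condition the old FIXME asked about.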
trunk/Source/JavaScriptCore/jit/JITWorklist.cpp
r212616 r212778 100 100 class JITWorklist::Thread : public AutomaticThread { 101 101 public: 102 Thread(const LockHolder& locker, JITWorklist& worklist)102 Thread(const AbstractLocker& locker, JITWorklist& worklist) 103 103 : AutomaticThread(locker, worklist.m_lock, worklist.m_condition) 104 104 , m_worklist(worklist) … … 108 108 109 109 protected: 110 PollResult poll(const LockHolder&) override110 PollResult poll(const AbstractLocker&) override 111 111 { 112 112 RELEASE_ASSERT(m_worklist.m_numAvailableThreads); -
trunk/Source/JavaScriptCore/jsc.cpp
r212620 r212778
  39   39  #include "HeapProfiler.h"
  40   40  #include "HeapSnapshotBuilder.h"
  41        -#include "HeapStatistics.h"
  42   41  #include "InitializeThreading.h"
  43   42  #include "Interpreter.h"
… …
1085 1084  static EncodedJSValue JSC_HOST_CALL functionWaitForReport(ExecState*);
1086 1085  static EncodedJSValue JSC_HOST_CALL functionHeapCapacity(ExecState*);
     1086  +static EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState*);
1087 1087
1088 1088  struct Script {
… …
1367 1367
1368 1368      addFunction(vm, "heapCapacity", functionHeapCapacity, 0);
     1369  +    addFunction(vm, "flashHeapAccess", functionFlashHeapAccess, 0);
1369 1370  }
1370 1371
… …
2649 2650  }
2650 2651
     2652  +EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState* exec)
     2653  +{
     2654  +    VM& vm = exec->vm();
     2655  +    auto scope = DECLARE_THROW_SCOPE(vm);
     2656  +
     2657  +    vm.heap.releaseAccess();
     2658  +    if (exec->argumentCount() >= 1) {
     2659  +        double ms = exec->argument(0).toNumber(exec);
     2660  +        RETURN_IF_EXCEPTION(scope, encodedJSValue());
     2661  +        sleep(Seconds::fromMilliseconds(ms));
     2662  +    }
     2663  +    vm.heap.acquireAccess();
     2664  +    return JSValue::encode(jsUndefined());
     2665  +}
     2666  +
2651 2667  template<typename ValueType>
2652 2668  typename std::enable_if<!std::is_fundamental<ValueType>::value>::type addOption(VM&, JSObject*, Identifier, ValueType) { }
-
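The new `flashHeapAccess` shell function follows the release-sleep-reacquire pattern: a mutator that is about to block releases heap access so the collector thread may start, then reacquires access before touching the heap again. A self-contained sketch of that pattern with a stand-in heap (`FakeHeap` and its members are invented for illustration; the real `Heap::releaseAccess`/`acquireAccess` do far more):

```cpp
#include <chrono>
#include <thread>

// Stand-in for JSC's Heap access protocol: while hasAccess is false, a
// concurrent collector thread would be free to run.
struct FakeHeap {
    bool hasAccess { true };
    void releaseAccess() { hasAccess = false; }
    void acquireAccess() { hasAccess = true; }
};

// Mirrors what functionFlashHeapAccess does: drop access, optionally sleep,
// then take access back before returning to JS.
void flashHeapAccess(FakeHeap& heap, double ms)
{
    heap.releaseAccess();
    if (ms > 0)
        std::this_thread::sleep_for(std::chrono::duration<double, std::milli>(ms));
    heap.acquireAccess();
}
```

In the added splay-flash-access tests, calling this from JS periodically simulates a native app that spends time with heap access released, which is exactly the situation in which the collector thread is now allowed to start.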
trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp
r212616 r212778 32 32 #include "ExecutableAllocator.h" 33 33 #include "Heap.h" 34 #include "HeapStatistics.h"35 34 #include "Identifier.h" 36 35 #include "JSDateMath.h" … … 61 60 WTF::initializeGCThreads(); 62 61 Options::initialize(); 63 if (Options::recordGCPauseTimes())64 HeapStatistics::initialize();65 62 #if ENABLE(WRITE_BARRIER_PROFILING) 66 63 WriteBarrierCounters::initialize(); -
trunk/Source/JavaScriptCore/runtime/JSCellInlines.h
r212616 r212778
281 281      // independent of whether the mutator thread is sweeping or not. Hence, we also check for ownerThread() !=
282 282      // std::this_thread::get_id() to allow the GC thread or JIT threads to pass this assertion.
283      -    ASSERT(vm.heap.mutatorState() == MutatorState::Running || vm.apiLock().ownerThread() != std::this_thread::get_id());
    283  +    ASSERT(vm.heap.mutatorState() != MutatorState::Sweeping || vm.apiLock().ownerThread() != std::this_thread::get_id());
284 284      return structure(vm)->classInfo();
285 285  }
-
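The assertion flip above widens what the mutator may do: instead of requiring the mutator to be exactly `Running`, `classInfo()` is now permitted in every mutator state except `Sweeping` (or from a non-mutator thread such as a GC or JIT thread). The predicate can be sketched on its own; the enumerator list here is illustrative, chosen only to show the before/after difference:

```cpp
// Illustrative mutator states; the real MutatorState enum may differ.
enum class MutatorState { Running, Allocating, Sweeping, Collecting };

// New rule: only the Sweeping state is off-limits for the mutator thread.
// Non-mutator threads (GC thread, JIT threads) always pass.
bool mayAccessClassInfo(MutatorState state, bool isMutatorThread)
{
    return state != MutatorState::Sweeping || !isMutatorThread;
}
```

Under the old `== Running` check, a mutator in any other state would assert; under the new check, only a sweeping mutator does, which matches the comment's rationale that sweeping is the one activity during which class info is unreliable.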
trunk/Source/JavaScriptCore/runtime/Options.cpp
r212616 r212778
318 318          Options::concurrentGCMaxHeadroom() = 1.4;
319 319          Options::minimumGCPauseMS() = 1;
320      -        Options::gcIncrementScale() = 1;
    320  +        Options::useStochasticMutatorScheduler() = false;
    321  +        if (WTF::numberOfProcessorCores() <= 1)
    322  +            Options::gcIncrementScale() = 1;
    323  +        else
    324  +            Options::gcIncrementScale() = 0;
321 325      }
322 326  }
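The Options change above makes the GC increment scale depend on core count: on a uniprocessor the mutator must perform incremental marking work itself, while on a multi-core machine the concurrent collector thread carries that load. A one-line sketch of the heuristic (the function name is invented; the values mirror the patch):

```cpp
// Uniprocessor: mutator does increments of marking (scale 1).
// Multi-core: the concurrent collector marks, so the mutator does none (scale 0).
double gcIncrementScaleForCores(unsigned numberOfProcessorCores)
{
    return numberOfProcessorCores <= 1 ? 1.0 : 0.0;
}
```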
trunk/Source/JavaScriptCore/runtime/Options.h
r212616 r212778 201 201 v(double, minimumMutatorUtilization, 0, Normal, nullptr) \ 202 202 v(double, maximumMutatorUtilization, 0.7, Normal, nullptr) \ 203 v(double, epsilonMutatorUtilization, 0.01, Normal, nullptr) \ 203 204 v(double, concurrentGCMaxHeadroom, 1.5, Normal, nullptr) \ 204 205 v(double, concurrentGCPeriodMS, 2, Normal, nullptr) \ … … 345 346 v(bool, useImmortalObjects, false, Normal, "debugging option to keep all objects alive forever") \ 346 347 v(bool, sweepSynchronously, false, Normal, "debugging option to sweep all dead objects synchronously at GC end before resuming mutator") \ 347 v(bool, dumpObjectStatistics, false, Normal, nullptr) \348 348 v(unsigned, maxSingleAllocationSize, 0, Configurable, "debugging option to limit individual allocations to a max size (0 = limit not set, N = limit size in bytes)") \ 349 349 \ -
trunk/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
r212616 r212778 1 1 /* 2 * Copyright (C) 2013-201 4, 2016Apple Inc. All rights reserved.2 * Copyright (C) 2013-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 29 29 #include "CodeBlock.h" 30 30 #include "FunctionCodeBlock.h" 31 #include "HeapStatistics.h"32 31 #include "JSCInlines.h" 33 32 #include "LLIntData.h" … … 161 160 void finalizeStatsAtEndOfTesting() 162 161 { 163 if (Options::logHeapStatisticsAtExit())164 HeapStatistics::reportSuccess();165 162 if (Options::reportLLIntStats()) 166 163 LLInt::Data::finalizeStats(); -
trunk/Source/WTF/ChangeLog
r212757 r212778 1 2017-02-20 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 Extend the use of AbstractLocker so that we can use more locking idioms. 9 10 * wtf/AutomaticThread.cpp: 11 (WTF::AutomaticThreadCondition::notifyOne): 12 (WTF::AutomaticThreadCondition::notifyAll): 13 (WTF::AutomaticThreadCondition::add): 14 (WTF::AutomaticThreadCondition::remove): 15 (WTF::AutomaticThreadCondition::contains): 16 (WTF::AutomaticThread::AutomaticThread): 17 (WTF::AutomaticThread::tryStop): 18 (WTF::AutomaticThread::isWaiting): 19 (WTF::AutomaticThread::notify): 20 (WTF::AutomaticThread::start): 21 (WTF::AutomaticThread::threadIsStopping): 22 * wtf/AutomaticThread.h: 23 * wtf/NumberOfCores.cpp: 24 (WTF::numberOfProcessorCores): 25 * wtf/ParallelHelperPool.cpp: 26 (WTF::ParallelHelperClient::finish): 27 (WTF::ParallelHelperClient::claimTask): 28 (WTF::ParallelHelperPool::Thread::Thread): 29 (WTF::ParallelHelperPool::didMakeWorkAvailable): 30 (WTF::ParallelHelperPool::hasClientWithTask): 31 (WTF::ParallelHelperPool::getClientWithTask): 32 * wtf/ParallelHelperPool.h: 33 1 34 2017-02-21 John Wilander <wilander@apple.com> 2 35 -
trunk/Source/WTF/wtf/AutomaticThread.cpp
r212616 r212778 46 46 } 47 47 48 void AutomaticThreadCondition::notifyOne(const LockHolder& locker)48 void AutomaticThreadCondition::notifyOne(const AbstractLocker& locker) 49 49 { 50 50 for (AutomaticThread* thread : m_threads) { … … 65 65 } 66 66 67 void AutomaticThreadCondition::notifyAll(const LockHolder& locker)67 void AutomaticThreadCondition::notifyAll(const AbstractLocker& locker) 68 68 { 69 69 m_condition.notifyAll(); … … 82 82 } 83 83 84 void AutomaticThreadCondition::add(const LockHolder&, AutomaticThread* thread)84 void AutomaticThreadCondition::add(const AbstractLocker&, AutomaticThread* thread) 85 85 { 86 86 ASSERT(!m_threads.contains(thread)); … … 88 88 } 89 89 90 void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)90 void AutomaticThreadCondition::remove(const AbstractLocker&, AutomaticThread* thread) 91 91 { 92 92 m_threads.removeFirst(thread); … … 94 94 } 95 95 96 bool AutomaticThreadCondition::contains(const LockHolder&, AutomaticThread* thread)96 bool AutomaticThreadCondition::contains(const AbstractLocker&, AutomaticThread* thread) 97 97 { 98 98 return m_threads.contains(thread); 99 99 } 100 100 101 AutomaticThread::AutomaticThread(const LockHolder& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)101 AutomaticThread::AutomaticThread(const AbstractLocker& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition) 102 102 : m_lock(lock) 103 103 , m_condition(condition) … … 119 119 } 120 120 121 bool AutomaticThread::tryStop(const LockHolder&)121 bool AutomaticThread::tryStop(const AbstractLocker&) 122 122 { 123 123 if (!m_isRunning) … … 129 129 } 130 130 131 bool AutomaticThread::isWaiting(const LockHolder& locker)131 bool AutomaticThread::isWaiting(const AbstractLocker& locker) 132 132 { 133 133 return hasUnderlyingThread(locker) && m_isWaiting; 134 134 } 135 135 136 bool AutomaticThread::notify(const LockHolder& locker)136 bool AutomaticThread::notify(const AbstractLocker& locker) 137 137 
{ 138 138 ASSERT_UNUSED(locker, hasUnderlyingThread(locker)); … … 148 148 } 149 149 150 void AutomaticThread::start(const LockHolder&)150 void AutomaticThread::start(const AbstractLocker&) 151 151 { 152 152 RELEASE_ASSERT(m_isRunning); … … 170 170 } 171 171 172 auto stopImpl = [&] (const LockHolder& locker) {172 auto stopImpl = [&] (const AbstractLocker& locker) { 173 173 thread->threadIsStopping(locker); 174 174 thread->m_hasUnderlyingThread = false; 175 175 }; 176 176 177 auto stopPermanently = [&] (const LockHolder& locker) {177 auto stopPermanently = [&] (const AbstractLocker& locker) { 178 178 m_isRunning = false; 179 179 m_isRunningCondition.notifyAll(); … … 181 181 }; 182 182 183 auto stopForTimeout = [&] (const LockHolder& locker) {183 auto stopForTimeout = [&] (const AbstractLocker& locker) { 184 184 stopImpl(locker); 185 185 }; … … 228 228 } 229 229 230 void AutomaticThread::threadIsStopping(const LockHolder&)230 void AutomaticThread::threadIsStopping(const AbstractLocker&) 231 231 { 232 232 } -
trunk/Source/WTF/wtf/AutomaticThread.h
r212616 r212778 76 76 WTF_EXPORT_PRIVATE ~AutomaticThreadCondition(); 77 77 78 WTF_EXPORT_PRIVATE void notifyOne(const LockHolder&);79 WTF_EXPORT_PRIVATE void notifyAll(const LockHolder&);78 WTF_EXPORT_PRIVATE void notifyOne(const AbstractLocker&); 79 WTF_EXPORT_PRIVATE void notifyAll(const AbstractLocker&); 80 80 81 81 // You can reuse this condition for other things, just as you would any other condition. … … 91 91 WTF_EXPORT_PRIVATE AutomaticThreadCondition(); 92 92 93 void add(const LockHolder&, AutomaticThread*);94 void remove(const LockHolder&, AutomaticThread*);95 bool contains(const LockHolder&, AutomaticThread*);93 void add(const AbstractLocker&, AutomaticThread*); 94 void remove(const AbstractLocker&, AutomaticThread*); 95 bool contains(const AbstractLocker&, AutomaticThread*); 96 96 97 97 Condition m_condition; … … 114 114 115 115 // Sometimes it's possible to optimize for the case that there is no underlying thread. 116 bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }116 bool hasUnderlyingThread(const AbstractLocker&) const { return m_hasUnderlyingThread; } 117 117 118 118 // This attempts to quickly stop the thread. This will succeed if the thread happens to not be … … 120 120 // thread is to first try this, and if that doesn't work, to tell the thread using your own 121 121 // mechanism (set some flag and then notify the condition). 122 bool tryStop(const LockHolder&);122 bool tryStop(const AbstractLocker&); 123 123 124 bool isWaiting(const LockHolder&);124 bool isWaiting(const AbstractLocker&); 125 125 126 bool notify(const LockHolder&);126 bool notify(const AbstractLocker&); 127 127 128 128 void join(); … … 131 131 // This logically creates the thread, but in reality the thread won't be created until someone 132 132 // calls AutomaticThreadCondition::notifyOne() or notifyAll(). 
133 AutomaticThread(const LockHolder&, Box<Lock>, RefPtr<AutomaticThreadCondition>);133 AutomaticThread(const AbstractLocker&, Box<Lock>, RefPtr<AutomaticThreadCondition>); 134 134 135 135 // To understand PollResult and WorkResult, imagine that poll() and work() are being called like … … 160 160 161 161 enum class PollResult { Work, Stop, Wait }; 162 virtual PollResult poll(const LockHolder&) = 0;162 virtual PollResult poll(const AbstractLocker&) = 0; 163 163 164 164 enum class WorkResult { Continue, Stop }; … … 169 169 // can be sure that the default ones don't do anything (so you don't need a super call). 170 170 virtual void threadDidStart(); 171 virtual void threadIsStopping(const LockHolder&);171 virtual void threadIsStopping(const AbstractLocker&); 172 172 173 173 private: 174 174 friend class AutomaticThreadCondition; 175 175 176 void start(const LockHolder&);176 void start(const AbstractLocker&); 177 177 178 178 Box<Lock> m_lock; -
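The repeated `LockHolder` → `AbstractLocker` substitutions in this header follow WTF's lock-proof idiom: a `const AbstractLocker&` parameter is a compile-time witness that *some* lock is held, without fixing which RAII holder type acquired it, so more locking idioms can call these functions. A minimal sketch with stand-in types (WTF's real `AbstractLocker`/`LockHolder` differ in detail):

```cpp
#include <mutex>

// Empty base class: possessing one proves "a lock is held" to the type system.
class AbstractLocker { };

// One concrete way of taking a lock. Other holder types could derive from
// AbstractLocker too, which is the point of widening the parameter type.
class LockHolder : public AbstractLocker {
public:
    explicit LockHolder(std::mutex& lock) : m_lock(lock) { m_lock.lock(); }
    ~LockHolder() { m_lock.unlock(); }
    LockHolder(const LockHolder&) = delete;
private:
    std::mutex& m_lock;
};

// The callee only cares that the caller holds the lock, not how it was taken.
int readCounter(const AbstractLocker&, const int& counter) { return counter; }
```

Taking `const AbstractLocker&` instead of `const LockHolder&` is purely a static discipline: the parameter is never used at runtime, but a caller cannot produce one without having constructed some locker.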
trunk/Source/WTF/wtf/NumberOfCores.cpp
r212616 r212778
48 48      if (s_numberOfCores > 0)
49 49          return s_numberOfCores;
    50  +
    51  +    if (const char* coresEnv = getenv("WTF_numberOfProcessorCores")) {
    52  +        unsigned numberOfCores;
    53  +        if (sscanf(coresEnv, "%u", &numberOfCores) == 1) {
    54  +            s_numberOfCores = numberOfCores;
    55  +            return s_numberOfCores;
    56  +        } else
    57  +            fprintf(stderr, "WARNING: failed to parse WTF_numberOfProcessorCores=%s\n", coresEnv);
    58  +    }
50 59
51 60  #if OS(DARWIN)
-
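The `WTF_numberOfProcessorCores` environment override added above lets tests pretend the machine has a given core count (which the Options heuristics in this patch now consult). The parsing logic, extracted into a self-contained function for illustration (the function name and fallback parameter are invented; the env-var name and `sscanf` handling mirror the diff):

```cpp
#include <cstdio>
#include <cstdlib>

// Returns the core count from WTF_numberOfProcessorCores if it is set and
// parses as an unsigned integer; otherwise warns (on parse failure) and
// returns the detected fallback value.
int coresOverrideFromEnvironment(int fallback)
{
    if (const char* coresEnv = getenv("WTF_numberOfProcessorCores")) {
        unsigned numberOfCores;
        if (sscanf(coresEnv, "%u", &numberOfCores) == 1)
            return static_cast<int>(numberOfCores);
        fprintf(stderr, "WARNING: failed to parse WTF_numberOfProcessorCores=%s\n", coresEnv);
    }
    return fallback;
}
```

For example, `WTF_numberOfProcessorCores=1 jsc test.js` would exercise the uniprocessor GC configuration on a multi-core machine.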
trunk/Source/WTF/wtf/ParallelHelperPool.cpp
r212616 r212778 1 1 /* 2 * Copyright (C) 2015-201 6Apple Inc. All rights reserved.2 * Copyright (C) 2015-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 89 89 } 90 90 91 void ParallelHelperClient::finish(const LockHolder&)91 void ParallelHelperClient::finish(const AbstractLocker&) 92 92 { 93 93 m_task = nullptr; … … 96 96 } 97 97 98 RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const LockHolder&)98 RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const AbstractLocker&) 99 99 { 100 100 if (!m_task) … … 171 171 class ParallelHelperPool::Thread : public AutomaticThread { 172 172 public: 173 Thread(const LockHolder& locker, ParallelHelperPool& pool)173 Thread(const AbstractLocker& locker, ParallelHelperPool& pool) 174 174 : AutomaticThread(locker, pool.m_lock, pool.m_workAvailableCondition) 175 175 , m_pool(pool) … … 178 178 179 179 protected: 180 PollResult poll(const LockHolder& locker) override180 PollResult poll(const AbstractLocker& locker) override 181 181 { 182 182 if (m_pool.m_isDying) … … 204 204 }; 205 205 206 void ParallelHelperPool::didMakeWorkAvailable(const LockHolder& locker)206 void ParallelHelperPool::didMakeWorkAvailable(const AbstractLocker& locker) 207 207 { 208 208 while (m_numThreads > m_threads.size()) … … 211 211 } 212 212 213 bool ParallelHelperPool::hasClientWithTask(const LockHolder& locker)213 bool ParallelHelperPool::hasClientWithTask(const AbstractLocker& locker) 214 214 { 215 215 return !!getClientWithTask(locker); 216 216 } 217 217 218 ParallelHelperClient* ParallelHelperPool::getClientWithTask(const LockHolder&)218 ParallelHelperClient* ParallelHelperPool::getClientWithTask(const AbstractLocker&) 219 219 { 220 220 // We load-balance by being random. -
trunk/Source/WTF/wtf/ParallelHelperPool.h
r212616 r212778 1 1 /* 2 * Copyright (C) 2015-201 6Apple Inc. All rights reserved.2 * Copyright (C) 2015-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 169 169 friend class ParallelHelperPool; 170 170 171 void finish(const LockHolder&);172 RefPtr<SharedTask<void ()>> claimTask(const LockHolder&);171 void finish(const AbstractLocker&); 172 RefPtr<SharedTask<void ()>> claimTask(const AbstractLocker&); 173 173 void runTask(RefPtr<SharedTask<void ()>>); 174 174 … … 194 194 friend class Thread; 195 195 196 void didMakeWorkAvailable(const LockHolder&);197 198 bool hasClientWithTask(const LockHolder&);199 ParallelHelperClient* getClientWithTask(const LockHolder&);200 ParallelHelperClient* waitForClientWithTask(const LockHolder&);196 void didMakeWorkAvailable(const AbstractLocker&); 197 198 bool hasClientWithTask(const AbstractLocker&); 199 ParallelHelperClient* getClientWithTask(const AbstractLocker&); 200 ParallelHelperClient* waitForClientWithTask(const AbstractLocker&); 201 201 202 202 Box<Lock> m_lock; // AutomaticThread wants this in a box for safety. -
trunk/Source/WebCore/ChangeLog
r212776 r212778 1 2017-02-20 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 Added new tests in JSTests. 9 10 The WebCore changes involve: 11 12 - Refactoring around new header discipline. 13 14 - Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline. 15 16 * ForwardingHeaders/heap/GCFinalizationCallback.h: Added. 17 * ForwardingHeaders/heap/IncrementalSweeper.h: Added. 18 * ForwardingHeaders/heap/MachineStackMarker.h: Added. 19 * ForwardingHeaders/heap/RunningScope.h: Added. 20 * bindings/js/CommonVM.cpp: 21 * testing/Internals.cpp: 22 (WebCore::Internals::parserMetaData): 23 (WebCore::Internals::isReadableStreamDisturbed): 24 (WebCore::Internals::isGCRunning): 25 (WebCore::Internals::addGCFinalizationCallback): 26 (WebCore::Internals::stopSweeping): 27 (WebCore::Internals::startSweeping): 28 * testing/Internals.h: 29 * testing/Internals.idl: 30 1 31 2017-02-20 Simon Fraser <simon.fraser@apple.com> 2 32 -
trunk/Source/WebCore/bindings/js/CommonVM.cpp
r212616 r212778
31 31  #include "WebCoreJSClientData.h"
32 32  #include <heap/HeapInlines.h>
   33  +#include "heap/MachineStackMarker.h"
33 34  #include <runtime/VM.h>
34 35  #include <wtf/MainThread.h>
-
trunk/Tools/ChangeLog
r212776 r212778 1 2017-02-20 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 Make more tests collect continuously. 9 10 * Scripts/run-jsc-stress-tests: 11 1 12 2017-02-20 Simon Fraser <simon.fraser@apple.com> 2 13 -
trunk/Tools/Scripts/run-jsc-stress-tests
r212616 r212778
1469 1469
1470 1470  def runNoisyTestNoCJIT
1471       -    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS))
     1471  +    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
1472 1472  end
1473 1473
1474 1474  def runNoisyTestEagerNoCJIT
1475       -    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS))
     1475  +    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
1476 1476  end
1477 1477