Changeset 212466 in webkit
- Timestamp:
- Feb 16, 2017 2:33:37 PM
- Location:
- trunk
- Files:
-
- 18 added
- 3 deleted
- 45 edited
trunk/JSTests/ChangeLog
r212430 r212466 1 2017-02-10 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 Add versions of splay that flash heap access, to simulate what might happen if a third-party app 9 was running concurrent GC. In this case, we might actually start the collector thread. 10 11 * stress/splay-flash-access-1ms.js: Added. 12 (performance.now): 13 (this.Setup.setup.setup): 14 (this.TearDown.tearDown.tearDown): 15 (Benchmark): 16 (BenchmarkResult): 17 (BenchmarkResult.prototype.valueOf): 18 (BenchmarkSuite): 19 (alert): 20 (Math.random): 21 (BenchmarkSuite.ResetRNG): 22 (RunStep): 23 (BenchmarkSuite.RunSuites): 24 (BenchmarkSuite.CountBenchmarks): 25 (BenchmarkSuite.GeometricMean): 26 (BenchmarkSuite.GeometricMeanTime): 27 (BenchmarkSuite.AverageAbovePercentile): 28 (BenchmarkSuite.GeometricMeanLatency): 29 (BenchmarkSuite.FormatScore): 30 (BenchmarkSuite.prototype.NotifyStep): 31 (BenchmarkSuite.prototype.NotifyResult): 32 (BenchmarkSuite.prototype.NotifyError): 33 (BenchmarkSuite.prototype.RunSingleBenchmark): 34 (RunNextSetup): 35 (RunNextBenchmark): 36 (RunNextTearDown): 37 (BenchmarkSuite.prototype.RunStep): 38 (GeneratePayloadTree): 39 (GenerateKey): 40 (SplayUpdateStats): 41 (InsertNewNode): 42 (SplaySetup): 43 (SplayTearDown): 44 (SplayRun): 45 (SplayTree): 46 (SplayTree.prototype.isEmpty): 47 (SplayTree.prototype.insert): 48 (SplayTree.prototype.remove): 49 (SplayTree.prototype.find): 50 (SplayTree.prototype.findMax): 51 (SplayTree.prototype.findGreatestLessThan): 52 (SplayTree.prototype.exportKeys): 53 (SplayTree.prototype.splay_): 54 (SplayTree.Node): 55 (SplayTree.Node.prototype.traverse_): 56 (jscSetUp): 57 (jscTearDown): 58 (jscRun): 59 (averageAbovePercentile): 60 (printPercentile): 61 * stress/splay-flash-access.js: Added. 
62 (performance.now): 63 (this.Setup.setup.setup): 64 (this.TearDown.tearDown.tearDown): 65 (Benchmark): 66 (BenchmarkResult): 67 (BenchmarkResult.prototype.valueOf): 68 (BenchmarkSuite): 69 (alert): 70 (Math.random): 71 (BenchmarkSuite.ResetRNG): 72 (RunStep): 73 (BenchmarkSuite.RunSuites): 74 (BenchmarkSuite.CountBenchmarks): 75 (BenchmarkSuite.GeometricMean): 76 (BenchmarkSuite.GeometricMeanTime): 77 (BenchmarkSuite.AverageAbovePercentile): 78 (BenchmarkSuite.GeometricMeanLatency): 79 (BenchmarkSuite.FormatScore): 80 (BenchmarkSuite.prototype.NotifyStep): 81 (BenchmarkSuite.prototype.NotifyResult): 82 (BenchmarkSuite.prototype.NotifyError): 83 (BenchmarkSuite.prototype.RunSingleBenchmark): 84 (RunNextSetup): 85 (RunNextBenchmark): 86 (RunNextTearDown): 87 (BenchmarkSuite.prototype.RunStep): 88 (GeneratePayloadTree): 89 (GenerateKey): 90 (SplayUpdateStats): 91 (InsertNewNode): 92 (SplaySetup): 93 (SplayTearDown): 94 (SplayRun): 95 (SplayTree): 96 (SplayTree.prototype.isEmpty): 97 (SplayTree.prototype.insert): 98 (SplayTree.prototype.remove): 99 (SplayTree.prototype.find): 100 (SplayTree.prototype.findMax): 101 (SplayTree.prototype.findGreatestLessThan): 102 (SplayTree.prototype.exportKeys): 103 (SplayTree.prototype.splay_): 104 (SplayTree.Node): 105 (SplayTree.Node.prototype.traverse_): 106 (jscSetUp): 107 (jscTearDown): 108 (jscRun): 109 (averageAbovePercentile): 110 (printPercentile): 111 1 112 2017-02-16 Yusuke Suzuki <utatane.tea@gmail.com> 2 113 -
trunk/LayoutTests/ChangeLog
r212465 r212466 1 2017-02-11 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 When running in WebCore, the JSC GC may find itself completing draining in the parallel helpers 9 at a time when the main thread runloop is idle. If the mutator has the conn, then there will not 10 be any GC threads to receive the notification from the shared mark stack condition variable. So 11 nobody will know that we need to reloop. 12 13 Fortunately, the SlotVisitor now knows that it has to schedule the stopIfNecessary timer in 14 addition to notifying the condition variable. 15 16 This adds a variant of splay that quickly builds up a big enough heap to cause significant GCs to 17 happen and then waits until a GC is running. When it's running, it registers a callback to the 18 GC's finalize phase. When the callback runs, it finishes the test. This is a barely-sound test 19 that uses a lot of white box API from Internals, but it proves that the SlotVisitor's runloop 20 ping works: if I comment it out, this test will always fail. Otherwise it always succeeds. 21 22 * js/dom/gc-slot-visitor-parallel-drain-pings-runloop-when-done.html: Added. 23 1 24 2017-02-16 Jiewen Tan <jiewen_tan@apple.com> 2 25 -
trunk/Source/JavaScriptCore/CMakeLists.txt
r212453 r212466 475 475 heap/CodeBlockSet.cpp 476 476 heap/CollectionScope.cpp 477 heap/CollectorPhase.cpp 477 478 heap/ConservativeRoots.cpp 478 479 heap/DeferGC.cpp … … 482 483 heap/FreeList.cpp 483 484 heap/GCActivityCallback.cpp 485 heap/GCConductor.cpp 486 heap/GCFinalizationCallback.cpp 484 487 heap/GCLogging.cpp 485 488 heap/HandleSet.cpp … … 491 494 heap/HeapSnapshot.cpp 492 495 heap/HeapSnapshotBuilder.cpp 493 heap/HeapStatistics.cpp494 496 heap/HeapTimer.cpp 495 497 heap/HeapVerifier.cpp -
trunk/Source/JavaScriptCore/ChangeLog
r212464 r212466 1 2017-02-10 Filip Pizlo <fpizlo@apple.com> 2 3 The collector thread should only start when the mutator doesn't have heap access 4 https://bugs.webkit.org/show_bug.cgi?id=167737 5 6 Reviewed by Keith Miller. 7 8 This turns the collector thread's workflow into a state machine, so that the mutator thread can 9 run it directly. This reduces the amount of synchronization we do with the collector thread, and 10 means that most apps will never start the collector thread. The collector thread will still start 11 when we need to finish collecting and we don't have heap access. 12 13 In this new world, "stopping the world" means relinquishing control of collection to the mutator. 14 This means tracking who is conducting collection. I use the GCConductor enum to say who is 15 conducting. It's either GCConductor::Mutator or GCConductor::Collector. I use the term "conn" to 16 refer to the concept of conducting (having the conn, relinquishing the conn, taking the conn). 17 So, stopping the world means giving the mutator the conn. Releasing heap access means giving the 18 collector the conn. 19 20 This meant bringing back the conservative scan of the calling thread. It turns out that this 21 scan was too slow to be called on each GC increment because apparently setjmp() now does system 22 calls. So, I wrote our own callee save register saving for the GC. Then I had doubts about 23 whether or not it was correct, so I also made it so that the GC only rarely asks for the register 24 state. I think we still want to use my register saving code instead of setjmp because setjmp 25 seems to save things we don't need, and that could make us overly conservative. 26 27 It turns out that this new scheduling discipline makes the old space-time scheduler perform 28 better than the new stochastic space-time scheduler on systems with fewer than 4 cores. This is 29 because the mutator having the conn enables us to time the mutator<->collector context switches 30 by polling. 
The OS is never involved. So, we can use super precise timing. This allows the old 31 space-time scheduler to shine like it hadn't before. 32 33 The splay results imply that this is all a good thing. On 2-core systems, this reduces pause 34 times by 40% and it increases throughput about 5%. On 1-core systems, this reduces pause times by 35 half and reduces throughput by 8%. On 4-or-more-core systems, this doesn't seem to have much 36 effect. 37 38 * CMakeLists.txt: 39 * JavaScriptCore.xcodeproj/project.pbxproj: 40 * dfg/DFGWorklist.cpp: 41 (JSC::DFG::Worklist::ThreadBody::ThreadBody): 42 (JSC::DFG::Worklist::dump): 43 (JSC::DFG::numberOfWorklists): 44 (JSC::DFG::ensureWorklistForIndex): 45 (JSC::DFG::existingWorklistForIndexOrNull): 46 (JSC::DFG::existingWorklistForIndex): 47 * dfg/DFGWorklist.h: 48 (JSC::DFG::numberOfWorklists): Deleted. 49 (JSC::DFG::ensureWorklistForIndex): Deleted. 50 (JSC::DFG::existingWorklistForIndexOrNull): Deleted. 51 (JSC::DFG::existingWorklistForIndex): Deleted. 52 * heap/CollectingScope.h: Added. 53 (JSC::CollectingScope::CollectingScope): 54 (JSC::CollectingScope::~CollectingScope): 55 * heap/CollectorPhase.cpp: Added. 56 (JSC::worldShouldBeSuspended): 57 (WTF::printInternal): 58 * heap/CollectorPhase.h: Added. 59 * heap/EdenGCActivityCallback.cpp: 60 (JSC::EdenGCActivityCallback::lastGCLength): 61 * heap/FullGCActivityCallback.cpp: 62 (JSC::FullGCActivityCallback::doCollection): 63 (JSC::FullGCActivityCallback::lastGCLength): 64 * heap/GCConductor.cpp: Added. 65 (JSC::gcConductorShortName): 66 (WTF::printInternal): 67 * heap/GCConductor.h: Added. 
68 * heap/Heap.cpp: 69 (JSC::Heap::Thread::Thread): 70 (JSC::Heap::Heap): 71 (JSC::Heap::lastChanceToFinalize): 72 (JSC::Heap::gatherStackRoots): 73 (JSC::Heap::updateObjectCounts): 74 (JSC::Heap::shouldCollectInCollectorThread): 75 (JSC::Heap::collectInCollectorThread): 76 (JSC::Heap::checkConn): 77 (JSC::Heap::runCurrentPhase): 78 (JSC::Heap::runNotRunningPhase): 79 (JSC::Heap::runBeginPhase): 80 (JSC::Heap::runFixpointPhase): 81 (JSC::Heap::runConcurrentPhase): 82 (JSC::Heap::runReloopPhase): 83 (JSC::Heap::runEndPhase): 84 (JSC::Heap::changePhase): 85 (JSC::Heap::finishChangingPhase): 86 (JSC::Heap::stopThePeriphery): 87 (JSC::Heap::resumeThePeriphery): 88 (JSC::Heap::stopTheMutator): 89 (JSC::Heap::resumeTheMutator): 90 (JSC::Heap::stopIfNecessarySlow): 91 (JSC::Heap::collectInMutatorThread): 92 (JSC::Heap::collectInMutatorThreadImpl): 93 (JSC::Heap::waitForCollector): 94 (JSC::Heap::acquireAccessSlow): 95 (JSC::Heap::releaseAccessSlow): 96 (JSC::Heap::relinquishConn): 97 (JSC::Heap::finishRelinquishingConn): 98 (JSC::Heap::handleNeedFinalize): 99 (JSC::Heap::notifyThreadStopping): 100 (JSC::Heap::finalize): 101 (JSC::Heap::requestCollection): 102 (JSC::Heap::waitForCollection): 103 (JSC::Heap::updateAllocationLimits): 104 (JSC::Heap::didFinishCollection): 105 (JSC::Heap::collectIfNecessaryOrDefer): 106 (JSC::Heap::preventCollection): 107 (JSC::Heap::performIncrement): 108 (JSC::Heap::markToFixpoint): Deleted. 109 (JSC::Heap::shouldCollectInThread): Deleted. 110 (JSC::Heap::collectInThread): Deleted. 111 (JSC::Heap::stopTheWorld): Deleted. 112 (JSC::Heap::resumeTheWorld): Deleted. 113 * heap/Heap.h: 114 (JSC::Heap::machineThreads): 115 (JSC::Heap::lastFullGCLength): 116 (JSC::Heap::lastEdenGCLength): 117 (JSC::Heap::increaseLastFullGCLength): 118 * heap/HeapInlines.h: 119 (JSC::Heap::mutatorIsStopped): Deleted. 120 * heap/HeapStatistics.cpp: Removed. 121 * heap/HeapStatistics.h: Removed. 122 * heap/HelpingGCScope.h: Removed. 
123 * heap/MachineStackMarker.cpp: 124 (JSC::MachineThreads::gatherFromCurrentThread): 125 (JSC::MachineThreads::gatherConservativeRoots): 126 * heap/MachineStackMarker.h: 127 * heap/MarkedBlock.cpp: 128 (JSC::MarkedBlock::Handle::sweep): 129 * heap/MutatorState.cpp: 130 (WTF::printInternal): 131 * heap/MutatorState.h: 132 * heap/RegisterState.h: Added. 133 * heap/SlotVisitor.cpp: 134 (JSC::SlotVisitor::drainFromShared): 135 (JSC::SlotVisitor::drainInParallelPassively): 136 (JSC::SlotVisitor::donateAll): 137 * heap/StochasticSpaceTimeMutatorScheduler.cpp: 138 (JSC::StochasticSpaceTimeMutatorScheduler::beginCollection): 139 (JSC::StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall): 140 (JSC::StochasticSpaceTimeMutatorScheduler::timeToStop): 141 * heap/SweepingScope.h: Added. 142 (JSC::SweepingScope::SweepingScope): 143 (JSC::SweepingScope::~SweepingScope): 144 * jit/JITWorklist.cpp: 145 (JSC::JITWorklist::Thread::Thread): 146 * jsc.cpp: 147 (GlobalObject::finishCreation): 148 (functionFlashHeapAccess): 149 * runtime/InitializeThreading.cpp: 150 (JSC::initializeThreading): 151 * runtime/JSCellInlines.h: 152 (JSC::JSCell::classInfo): 153 * runtime/Options.cpp: 154 (JSC::overrideDefaults): 155 * runtime/Options.h: 156 * runtime/TestRunnerUtils.cpp: 157 (JSC::finalizeStatsAtEndOfTesting): 158 1 159 2017-02-16 Anders Carlsson <andersca@apple.com> 2 160 -
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r212453 r212466 270 270 0F2BDC4F15228BF300CD8910 /* DFGValueSource.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC4E15228BE700CD8910 /* DFGValueSource.cpp */; }; 271 271 0F2BDC5115228FFD00CD8910 /* DFGVariableEvent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */; }; 272 0F2C63A71E4F8FD300C13839 /* GCFinalizationCallback.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2C63A51E4F8FD100C13839 /* GCFinalizationCallback.cpp */; }; 273 0F2C63A81E4F8FD500C13839 /* GCFinalizationCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C63A61E4F8FD100C13839 /* GCFinalizationCallback.h */; settings = {ATTRIBUTES = (Private, ); }; }; 274 0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C63A91E4FA42C00C13839 /* RunningScope.h */; settings = {ATTRIBUTES = (Private, ); }; }; 272 275 0F2D4DDD19832D34007D4B19 /* DebuggerScope.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */; }; 273 276 0F2D4DDE19832D34007D4B19 /* DebuggerScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */; settings = {ATTRIBUTES = (Private, ); }; }; … … 635 638 0FA762061DB9243100B7A2FD /* MutatorState.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA762021DB9242300B7A2FD /* MutatorState.cpp */; }; 636 639 0FA762071DB9243300B7A2FD /* MutatorState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762031DB9242300B7A2FD /* MutatorState.h */; settings = {ATTRIBUTES = (Private, ); }; }; 637 0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */; };638 640 0FA7620B1DB959F900B7A2FD /* AllocatingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */; }; 639 641 0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8E918B413C80052371D /* 
Reg.cpp */; }; … … 736 738 0FCEFADF180738C000472CE4 /* FTLLocation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */; }; 737 739 0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFADE180738C000472CE4 /* FTLLocation.h */; settings = {ATTRIBUTES = (Private, ); }; }; 740 0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */; settings = {ATTRIBUTES = (Private, ); }; }; 741 0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E81E43D3470006AB08 /* GCConductor.h */; settings = {ATTRIBUTES = (Private, ); }; }; 742 0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */; }; 743 0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */; }; 744 0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */; }; 745 0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */; settings = {ATTRIBUTES = (Private, ); }; }; 746 0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */; }; 738 747 0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD2C92316D01EE900C7803F /* StructureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; }; 739 748 0FD3C82614115D4000FD81CB /* DFGDriver.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */; }; … … 1206 1215 14B8EC720A5652090062BE54 /* CoreFoundation.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 6560A4CF04B3B3E7008AE952 /* 
CoreFoundation.framework */; }; 1207 1216 14BA78F113AAB88F005B7C2C /* SlotVisitor.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA78F013AAB88F005B7C2C /* SlotVisitor.h */; settings = {ATTRIBUTES = (Private, ); }; }; 1208 14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; };1217 14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; settings = {COMPILER_FLAGS = "-fno-optimize-sibling-calls"; }; }; 1209 1218 14BA7A9813AADFF8005B7C2C /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA7A9613AADFF8005B7C2C /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; }; 1210 1219 14BD59C50A3E8F9F00BAF59C /* JavaScriptCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 932F5BD90822A1C700736975 /* JavaScriptCore.framework */; }; … … 2178 2187 C225494315F7DBAA0065E898 /* SlotVisitor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C225494215F7DBAA0065E898 /* SlotVisitor.cpp */; }; 2179 2188 C22B31B9140577D700DB475A /* SamplingCounter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F77008E1402FDD60078EB39 /* SamplingCounter.h */; settings = {ATTRIBUTES = (Private, ); }; }; 2180 C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */; };2181 C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */ = {isa = PBXBuildFile; fileRef = C24D31E1161CD695002AA4DB /* HeapStatistics.h */; settings = {ATTRIBUTES = (Private, ); }; };2182 2189 C25D709B16DE99F400FCA6BC /* JSManagedValue.mm in Sources */ = {isa = PBXBuildFile; fileRef = C25D709916DE99F400FCA6BC /* JSManagedValue.mm */; }; 2183 2190 C25D709C16DE99F400FCA6BC /* JSManagedValue.h in Headers */ = {isa = PBXBuildFile; fileRef = C25D709A16DE99F400FCA6BC /* JSManagedValue.h */; settings = {ATTRIBUTES = (Public, ); }; }; … … 2746 2753 0F2BDC4E15228BE700CD8910 /* 
DFGValueSource.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGValueSource.cpp; path = dfg/DFGValueSource.cpp; sourceTree = "<group>"; }; 2747 2754 0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableEvent.cpp; path = dfg/DFGVariableEvent.cpp; sourceTree = "<group>"; }; 2755 0F2C63A51E4F8FD100C13839 /* GCFinalizationCallback.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCFinalizationCallback.cpp; sourceTree = "<group>"; }; 2756 0F2C63A61E4F8FD100C13839 /* GCFinalizationCallback.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCFinalizationCallback.h; sourceTree = "<group>"; }; 2757 0F2C63A91E4FA42C00C13839 /* RunningScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RunningScope.h; sourceTree = "<group>"; }; 2748 2758 0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DebuggerScope.cpp; sourceTree = "<group>"; }; 2749 2759 0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebuggerScope.h; sourceTree = "<group>"; }; … … 3105 3115 0FA762021DB9242300B7A2FD /* MutatorState.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MutatorState.cpp; sourceTree = "<group>"; }; 3106 3116 0FA762031DB9242300B7A2FD /* MutatorState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MutatorState.h; sourceTree = "<group>"; }; 3107 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HelpingGCScope.h; sourceTree = "<group>"; };3108 3117 
0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AllocatingScope.h; sourceTree = "<group>"; }; 3109 3118 0FA7A8E918B413C80052371D /* Reg.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Reg.cpp; sourceTree = "<group>"; }; … … 3218 3227 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLocation.cpp; path = ftl/FTLLocation.cpp; sourceTree = "<group>"; }; 3219 3228 0FCEFADE180738C000472CE4 /* FTLLocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLocation.h; path = ftl/FTLLocation.h; sourceTree = "<group>"; }; 3229 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CollectorPhase.cpp; sourceTree = "<group>"; }; 3230 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectorPhase.h; sourceTree = "<group>"; }; 3231 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCConductor.cpp; sourceTree = "<group>"; }; 3232 0FD0E5E81E43D3470006AB08 /* GCConductor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCConductor.h; sourceTree = "<group>"; }; 3233 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SweepingScope.h; sourceTree = "<group>"; }; 3234 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterState.h; sourceTree = "<group>"; }; 3235 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */ = {isa = PBXFileReference; 
fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectingScope.h; sourceTree = "<group>"; }; 3220 3236 0FD2C92316D01EE900C7803F /* StructureInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureInlines.h; sourceTree = "<group>"; }; 3221 3237 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDriver.cpp; path = dfg/DFGDriver.cpp; sourceTree = "<group>"; }; … … 4691 4707 C2181FC118A948FB0025A235 /* JSExportTests.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = JSExportTests.mm; path = API/tests/JSExportTests.mm; sourceTree = "<group>"; }; 4692 4708 C225494215F7DBAA0065E898 /* SlotVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SlotVisitor.cpp; sourceTree = "<group>"; }; 4693 C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapStatistics.cpp; sourceTree = "<group>"; };4694 C24D31E1161CD695002AA4DB /* HeapStatistics.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapStatistics.h; sourceTree = "<group>"; };4695 4709 C25D709916DE99F400FCA6BC /* JSManagedValue.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = JSManagedValue.mm; sourceTree = "<group>"; }; 4696 4710 C25D709A16DE99F400FCA6BC /* JSManagedValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSManagedValue.h; sourceTree = "<group>"; }; … … 5755 5769 0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */, 5756 5770 0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */, 5771 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */, 5757 5772 0FA762001DB9242300B7A2FD /* CollectionScope.cpp */, 5758 5773 0FA762011DB9242300B7A2FD /* 
CollectionScope.h */, 5774 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */, 5775 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */, 5759 5776 146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */, 5760 5777 149DAAF212EB559D0083B12B /* ConservativeRoots.h */, … … 5774 5791 2AACE63B18CA5A0300ED0191 /* GCActivityCallback.h */, 5775 5792 BCBE2CAD14E985AA000593AD /* GCAssertions.h */, 5793 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */, 5794 0FD0E5E81E43D3470006AB08 /* GCConductor.h */, 5776 5795 0FB4767C1D99AEA7008EA6CB /* GCDeferralContext.h */, 5777 5796 0FB4767D1D99AEA7008EA6CB /* GCDeferralContextInlines.h */, 5797 0F2C63A51E4F8FD100C13839 /* GCFinalizationCallback.cpp */, 5798 0F2C63A61E4F8FD100C13839 /* GCFinalizationCallback.h */, 5778 5799 0F2B66A817B6B53D00A7AE3F /* GCIncomingRefCounted.h */, 5779 5800 0F2B66A917B6B53D00A7AE3F /* GCIncomingRefCountedInlines.h */, … … 5809 5830 A5311C341C77CEAC00E6B1B6 /* HeapSnapshotBuilder.cpp */, 5810 5831 A5311C351C77CEAC00E6B1B6 /* HeapSnapshotBuilder.h */, 5811 C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */,5812 C24D31E1161CD695002AA4DB /* HeapStatistics.h */,5813 5832 C2E526BB1590EF000054E48D /* HeapTimer.cpp */, 5814 5833 C2E526BC1590EF000054E48D /* HeapTimer.h */, … … 5816 5835 FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */, 5817 5836 FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */, 5818 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */,5819 5837 C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */, 5820 5838 C25F8BCC157544A900245B71 /* IncrementalSweeper.h */, … … 5854 5872 ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */, 5855 5873 0FBB73B61DEF3AAC002C009E /* PreventCollectionScope.h */, 5874 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */, 5856 5875 0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */, 5876 0F2C63A91E4FA42C00C13839 /* RunningScope.h */, 5857 5877 C225494215F7DBAA0065E898 /* SlotVisitor.cpp */, 5858 5878 14BA78F013AAB88F005B7C2C /* SlotVisitor.h */, … … 5869 5889 0F7DF1321E2970D50095951B /* 
Subspace.h */, 5870 5890 0F7DF1331E2970D50095951B /* SubspaceInlines.h */, 5891 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */, 5871 5892 0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */, 5872 5893 0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */, … … 7989 8010 DC9A0C201D2D9CB30085124E /* B3CaseCollection.h in Headers */, 7990 8011 DC9A0C1F1D2D9CB10085124E /* B3CaseCollectionInlines.h in Headers */, 8012 0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */, 7991 8013 0F338DFA1BE96AA80013C88F /* B3CCallValue.h in Headers */, 7992 8014 0F33FCFB1C1625BE00323F67 /* B3CFG.h in Headers */, … … 8264 8286 0F2017801DCADC3500EA5950 /* DFGFlowIndexing.h in Headers */, 8265 8287 0F2017821DCADD4200EA5950 /* DFGFlowMap.h in Headers */, 8288 0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */, 8266 8289 0F9D339717FFC4E60073C2BC /* DFGFlushedAt.h in Headers */, 8267 8290 A7D89CF817A0B8CC00773AD8 /* DFGFlushFormat.h in Headers */, … … 8556 8579 A54C2AB11C6544F200A18D78 /* HeapSnapshot.h in Headers */, 8557 8580 A5311C361C77CEC500E6B1B6 /* HeapSnapshotBuilder.h in Headers */, 8558 C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */,8559 8581 C2E526BE1590EF000054E48D /* HeapTimer.h in Headers */, 8582 0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */, 8560 8583 0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */, 8561 8584 FE7BA6101A1A7CEC00F1F7B4 /* HeapVerifier.h in Headers */, 8562 0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */,8563 8585 0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */, 8564 8586 DC2143071CA32E55000A8869 /* ICStats.h in Headers */, … … 8621 8643 A18193E41B4E0CDB00FC1029 /* IntlCollatorPrototype.lut.h in Headers */, 8622 8644 A1587D6E1B4DC14100D69849 /* IntlDateTimeFormat.h in Headers */, 8645 0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */, 8623 8646 A1587D701B4DC14100D69849 /* IntlDateTimeFormatConstructor.h in Headers */, 8624 8647 
A1587D751B4DC1C600D69849 /* IntlDateTimeFormatConstructor.lut.h in Headers */, … … 8630 8653 A1D793011B43864B004516F5 /* IntlNumberFormatPrototype.h in Headers */, 8631 8654 A125846F1B45A36000CC7F6C /* IntlNumberFormatPrototype.lut.h in Headers */, 8655 0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */, 8632 8656 A12BBFF21B044A8B00664B69 /* IntlObject.h in Headers */, 8633 8657 708EBE241CE8F35800453146 /* IntlObjectInlines.h in Headers */, … … 9095 9119 2AF7382D18BBBF92008A5A37 /* StructureIDTable.h in Headers */, 9096 9120 0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */, 9121 0F2C63A81E4F8FD500C13839 /* GCFinalizationCallback.h in Headers */, 9097 9122 C2FE18A416BAEC4000AF3061 /* StructureRareData.h in Headers */, 9098 9123 C20BA92D16BB1C1500B3AEA2 /* StructureRareDataInlines.h in Headers */, … … 9210 9235 AD2FCC161DB59CB200B3E736 /* WebAssemblyCompileErrorConstructor.lut.h in Headers */, 9211 9236 AD2FCBEF1DB58DAD00B3E736 /* WebAssemblyCompileErrorPrototype.h in Headers */, 9237 0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */, 9212 9238 AD2FCC171DB59CB200B3E736 /* WebAssemblyCompileErrorPrototype.lut.h in Headers */, 9213 9239 AD4937D41DDD27DE0077C807 /* WebAssemblyFunction.h in Headers */, … … 10184 10210 A54C2AB01C6544EE00A18D78 /* HeapSnapshot.cpp in Sources */, 10185 10211 A5311C371C77CECA00E6B1B6 /* HeapSnapshotBuilder.cpp in Sources */, 10186 C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */,10187 10212 C2E526BD1590EF000054E48D /* HeapTimer.cpp in Sources */, 10188 10213 FE7BA60F1A1A7CEC00F1F7B4 /* HeapVerifier.cpp in Sources */, … … 10249 10274 FE187A0E1C030D640038BBCA /* JITDivGenerator.cpp in Sources */, 10250 10275 0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */, 10276 0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */, 10251 10277 0FB14E1E18124ACE009B6B4D /* JITInlineCacheGenerator.cpp in Sources */, 10252 10278 FE3A06BD1C11040D00390FDD /* JITLeftShiftGenerator.cpp in Sources */, … … 
10496 10522 6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */, 10497 10523 0FC3141518146D7000033232 /* RegisterSet.cpp in Sources */, 10524 0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */, 10498 10525 A57D23ED1891B5540031C7FA /* RegularExpression.cpp in Sources */, 10499 10526 992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */, … … 10558 10585 705B41AD1A6E501E00716757 /* SymbolConstructor.cpp in Sources */, 10559 10586 705B41AF1A6E501E00716757 /* SymbolObject.cpp in Sources */, 10587 0F2C63A71E4F8FD300C13839 /* GCFinalizationCallback.cpp in Sources */, 10560 10588 705B41B11A6E501E00716757 /* SymbolPrototype.cpp in Sources */, 10561 10589 0F919D2815856773004A4E7D /* SymbolTable.cpp in Sources */, -
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r212365 r212466 2533 2533 if (m_instructions.size()) { 2534 2534 unsigned refCount = m_instructions.refCount(); 2535 RELEASE_ASSERT(refCount); 2535 if (!refCount) { 2536 dataLog("CodeBlock: ", RawPointer(this), "\n"); 2537 dataLog("m_instructions.data(): ", RawPointer(m_instructions.data()), "\n"); 2538 dataLog("refCount: ", refCount, "\n"); 2539 RELEASE_ASSERT_NOT_REACHED(); 2540 } 2536 2541 visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount); 2537 2542 } -
trunk/Source/JavaScriptCore/dfg/DFGWorklist.cpp
r212365 r212466 41 41 class Worklist::ThreadBody : public AutomaticThread { 42 42 public: 43 ThreadBody(const LockHolder& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)43 ThreadBody(const AbstractLocker& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority) 44 44 : AutomaticThread(locker, lock, condition) 45 45 , m_worklist(worklist) … … 50 50 51 51 protected: 52 PollResult poll(const LockHolder& locker) override52 PollResult poll(const AbstractLocker& locker) override 53 53 { 54 54 if (m_worklist.m_queue.isEmpty()) … … 151 151 } 152 152 153 void threadIsStopping(const LockHolder&) override153 void threadIsStopping(const AbstractLocker&) override 154 154 { 155 155 // We're holding the Worklist::m_lock, so we should be careful not to deadlock. … … 480 480 } 481 481 482 void Worklist::dump(const LockHolder&, PrintStream& out) const482 void Worklist::dump(const AbstractLocker&, PrintStream& out) const 483 483 { 484 484 out.print( … … 536 536 } 537 537 538 unsigned numberOfWorklists() { return 2; } 539 540 Worklist& ensureWorklistForIndex(unsigned index) 541 { 542 switch (index) { 543 case 0: 544 return ensureGlobalDFGWorklist(); 545 case 1: 546 return ensureGlobalFTLWorklist(); 547 default: 548 RELEASE_ASSERT_NOT_REACHED(); 549 return ensureGlobalDFGWorklist(); 550 } 551 } 552 553 Worklist* existingWorklistForIndexOrNull(unsigned index) 554 { 555 switch (index) { 556 case 0: 557 return existingGlobalDFGWorklistOrNull(); 558 case 1: 559 return existingGlobalFTLWorklistOrNull(); 560 default: 561 RELEASE_ASSERT_NOT_REACHED(); 562 return 0; 563 } 564 } 565 566 Worklist& existingWorklistForIndex(unsigned index) 567 { 568 Worklist* result = existingWorklistForIndexOrNull(index); 569 RELEASE_ASSERT(result); 570 return *result; 571 } 572 538 573 void completeAllPlansForVM(VM& vm) 539 574 { -
trunk/Source/JavaScriptCore/dfg/DFGWorklist.h
r212365 → r212466

        void removeAllReadyPlansForVM(VM&, Vector<RefPtr<Plan>, 8>&);
-       void dump(const LockHolder&, PrintStream&) const;
+       void dump(const AbstractLocker&, PrintStream&) const;
        CString m_threadName;
    …
    // Simplify doing things for all worklists.
-   inline unsigned numberOfWorklists() { return 2; }
-   inline Worklist& ensureWorklistForIndex(unsigned index)
-   {
-       switch (index) {
-       case 0:
-           return ensureGlobalDFGWorklist();
-       case 1:
-           return ensureGlobalFTLWorklist();
-       default:
-           RELEASE_ASSERT_NOT_REACHED();
-           return ensureGlobalDFGWorklist();
-       }
-   }
-   inline Worklist* existingWorklistForIndexOrNull(unsigned index)
-   {
-       switch (index) {
-       case 0:
-           return existingGlobalDFGWorklistOrNull();
-       case 1:
-           return existingGlobalFTLWorklistOrNull();
-       default:
-           RELEASE_ASSERT_NOT_REACHED();
-           return 0;
-       }
-   }
-   inline Worklist& existingWorklistForIndex(unsigned index)
-   {
-       Worklist* result = existingWorklistForIndexOrNull(index);
-       RELEASE_ASSERT(result);
-       return *result;
-   }
+   unsigned numberOfWorklists();
+   Worklist& ensureWorklistForIndex(unsigned index);
+   Worklist* existingWorklistForIndexOrNull(unsigned index);
+   Worklist& existingWorklistForIndex(unsigned index);

    #endif // ENABLE(DFG_JIT)
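The header now exposes the worklists through a dense index space (0 = DFG, 1 = FTL), so callers can iterate all worklists uniformly. A minimal sketch of that dispatch shape; the Worklist struct and tier field here are simplified stand-ins, not JSC's actual DFG::Worklist.

```cpp
#include <cassert>

// Simplified stand-ins for the two lazily created global worklists; the real
// ones are DFG::Worklist instances built by ensureGlobalDFGWorklist() and
// ensureGlobalFTLWorklist().
struct Worklist {
    int tier; // 0 = DFG, 1 = FTL
};

static Worklist* dfgWorklist;
static Worklist* ftlWorklist;

Worklist& ensureGlobalDFGWorklist()
{
    if (!dfgWorklist)
        dfgWorklist = new Worklist { 0 };
    return *dfgWorklist;
}

Worklist& ensureGlobalFTLWorklist()
{
    if (!ftlWorklist)
        ftlWorklist = new Worklist { 1 };
    return *ftlWorklist;
}

// Mirrors the shape of the helpers this changeset moves out of line: a dense
// index space so callers can write
// "for (unsigned i = 0; i < numberOfWorklists(); ++i)".
unsigned numberOfWorklists() { return 2; }

Worklist& ensureWorklistForIndex(unsigned index)
{
    switch (index) {
    case 0:
        return ensureGlobalDFGWorklist();
    case 1:
        return ensureGlobalFTLWorklist();
    default:
        assert(!"worklist index out of range"); // RELEASE_ASSERT_NOT_REACHED() in JSC
        return ensureGlobalDFGWorklist();
    }
}
```

Moving the definitions out of the header (declarations only, bodies in the .cpp) also keeps the inline-function bloat out of every translation unit that includes DFGWorklist.h.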
trunk/Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp
r210849 → r212466

    /*
-    * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
+    * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
     *
     * Redistribution and use in source and binary forms, with or without
    …
    double EdenGCActivityCallback::lastGCLength()
    {
-       return m_vm->heap.lastEdenGCLength();
+       return m_vm->heap.lastEdenGCLength().seconds();
    }
trunk/Source/JavaScriptCore/heap/FullGCActivityCallback.cpp
r208306 → r212466

    /*
-    * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
+    * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
     *
     * Redistribution and use in source and binary forms, with or without
    …
        if (heap.isPagedOut(startTime + pagingTimeOut)) {
            cancel();
-           heap.increaseLastFullGCLength(pagingTimeOut);
+           heap.increaseLastFullGCLength(Seconds(pagingTimeOut));
            return;
        }
    …
    double FullGCActivityCallback::lastGCLength()
    {
-       return m_vm->heap.lastFullGCLength();
+       return m_vm->heap.lastFullGCLength().seconds();
    }
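Both activity callbacks now traffic in WTF::Seconds rather than raw doubles, converting only at the boundary (`lastFullGCLength().seconds()`, `Seconds(pagingTimeOut)`). A minimal sketch of such a unit-safe duration type; this is purely illustrative and far smaller than WTF's actual Seconds class.

```cpp
#include <cassert>

// Sketch of a Seconds-like value type: one canonical unit internally, with
// explicit conversion at call sites so a bare double can never be mistaken
// for milliseconds (or vice versa).
class Seconds {
public:
    Seconds() = default;
    explicit Seconds(double value)
        : m_value(value)
    {
    }

    double seconds() const { return m_value; }
    double milliseconds() const { return m_value * 1000; }

    Seconds operator+(Seconds other) const { return Seconds(m_value + other.m_value); }
    bool operator<(Seconds other) const { return m_value < other.m_value; }

private:
    double m_value { 0 };
};
```

The explicit constructor is the point: `increaseLastFullGCLength(Seconds(pagingTimeOut))` forces the caller to say which unit the raw number is in.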
trunk/Source/JavaScriptCore/heap/Heap.cpp
r212310 r212466 24 24 #include "CodeBlock.h" 25 25 #include "CodeBlockSetInlines.h" 26 #include "CollectingScope.h" 26 27 #include "ConservativeRoots.h" 27 28 #include "DFGWorklistInlines.h" … … 30 31 #include "FullGCActivityCallback.h" 31 32 #include "GCActivityCallback.h" 33 #include "GCFinalizationCallback.h" 32 34 #include "GCIncomingRefCountedSetInlines.h" 33 35 #include "GCSegmentedArrayInlines.h" … … 38 40 #include "HeapProfiler.h" 39 41 #include "HeapSnapshot.h" 40 #include "HeapStatistics.h"41 42 #include "HeapVerifier.h" 42 #include "HelpingGCScope.h"43 43 #include "IncrementalSweeper.h" 44 44 #include "Interpreter.h" … … 49 49 #include "JSLock.h" 50 50 #include "JSVirtualMachineInternal.h" 51 #include "MachineStackMarker.h" 51 52 #include "MarkedSpaceInlines.h" 52 53 #include "MarkingConstraintSet.h" … … 58 59 #include "StochasticSpaceTimeMutatorScheduler.h" 59 60 #include "StopIfNecessaryTimer.h" 61 #include "SweepingScope.h" 60 62 #include "SynchronousStopTheWorldMutatorScheduler.h" 61 63 #include "TypeProfilerLog.h" … … 207 209 class Heap::Thread : public AutomaticThread { 208 210 public: 209 Thread(const LockHolder& locker, Heap& heap)211 Thread(const AbstractLocker& locker, Heap& heap) 210 212 : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition) 211 213 , m_heap(heap) … … 214 216 215 217 protected: 216 PollResult poll(const LockHolder& locker) override218 PollResult poll(const AbstractLocker& locker) override 217 219 { 218 220 if (m_heap.m_threadShouldStop) { … … 220 222 return PollResult::Stop; 221 223 } 222 if (m_heap.shouldCollectIn Thread(locker))224 if (m_heap.shouldCollectInCollectorThread(locker)) 223 225 return PollResult::Work; 224 226 return PollResult::Wait; … … 227 229 WorkResult work() override 228 230 { 229 m_heap.collectIn Thread();231 m_heap.collectInCollectorThread(); 230 232 return WorkResult::Continue; 231 233 } … … 258 260 , m_extraMemorySize(0) 259 261 , m_deprecatedExtraMemorySize(0) 260 , m_machineThreads( 
this)261 , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this ))262 , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this ))262 , m_machineThreads(std::make_unique<MachineThreads>(this)) 263 , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this, "C")) 264 , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this, "M")) 263 265 , m_mutatorMarkStack(std::make_unique<MarkStackArray>()) 264 266 , m_raceMarkStack(std::make_unique<MarkStackArray>()) … … 334 336 void Heap::lastChanceToFinalize() 335 337 { 338 MonotonicTime before; 339 if (Options::logGC()) { 340 before = MonotonicTime::now(); 341 dataLog("[GC<", RawPointer(this), ">: shutdown "); 342 } 343 336 344 RELEASE_ASSERT(!m_vm->entryScope); 337 345 RELEASE_ASSERT(m_mutatorState == MutatorState::Running); … … 346 354 } 347 355 348 // Carefully bring the thread down. We need to use waitForCollector() until we know that there 349 // won't be any other collections. 356 if (Options::logGC()) 357 dataLog("1"); 358 359 // Prevent new collections from being started. This is probably not even necessary, since we're not 360 // going to call into anything that starts collections. Still, this makes the algorithm more 361 // obviously sound. 362 m_isSafeToCollect = false; 363 364 if (Options::logGC()) 365 dataLog("2"); 366 367 bool isCollecting; 368 { 369 auto locker = holdLock(*m_threadLock); 370 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 371 isCollecting = m_lastServedTicket < m_lastGrantedTicket; 372 } 373 if (isCollecting) { 374 if (Options::logGC()) 375 dataLog("...]\n"); 376 377 // Wait for the current collection to finish. 
378 waitForCollector( 379 [&] (const AbstractLocker&) -> bool { 380 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 381 return m_lastServedTicket == m_lastGrantedTicket; 382 }); 383 384 if (Options::logGC()) 385 dataLog("[GC<", RawPointer(this), ">: shutdown "); 386 } 387 if (Options::logGC()) 388 dataLog("3"); 389 390 RELEASE_ASSERT(m_requests.isEmpty()); 391 RELEASE_ASSERT(m_lastServedTicket == m_lastGrantedTicket); 392 393 // Carefully bring the thread down. 350 394 bool stopped = false; 351 395 { 352 396 LockHolder locker(*m_threadLock); 353 397 stopped = m_thread->tryStop(locker); 354 if (!stopped) {355 m_threadShouldStop = true;398 m_threadShouldStop = true; 399 if (!stopped) 356 400 m_threadCondition->notifyOne(locker); 357 } 358 } 359 if (!stopped) { 360 waitForCollector( 361 [&] (const LockHolder&) -> bool { 362 return m_threadIsStopping; 363 }); 364 // It's now safe to join the thread, since we know that there will not be any more collections. 401 } 402 403 if (Options::logGC()) 404 dataLog("4"); 405 406 if (!stopped) 365 407 m_thread->join(); 366 } 408 409 if (Options::logGC()) 410 dataLog("5 "); 367 411 368 412 m_arrayBuffers.lastChanceToFinalize(); … … 373 417 374 418 sweepAllLogicallyEmptyWeakBlocks(); 419 420 if (Options::logGC()) 421 dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n"); 375 422 } 376 423 … … 526 573 } 527 574 528 void Heap::markToFixpoint(double gcStartTime)529 {530 TimingScope markToFixpointTimingScope(*this, "Heap::markToFixpoint");531 532 if (m_collectionScope == CollectionScope::Full) {533 m_opaqueRoots.clear();534 m_collectorSlotVisitor->clearMarkStacks();535 m_mutatorMarkStack->clear();536 }537 538 RELEASE_ASSERT(m_raceMarkStack->isEmpty());539 540 beginMarking();541 542 forEachSlotVisitor(543 [&] (SlotVisitor& visitor) {544 visitor.didStartMarking();545 });546 547 m_parallelMarkersShouldExit = false;548 549 m_helperClient.setFunction(550 [this] () {551 SlotVisitor* slotVisitor;552 {553 LockHolder 
locker(m_parallelSlotVisitorLock);554 if (m_availableParallelSlotVisitors.isEmpty()) {555 std::unique_ptr<SlotVisitor> newVisitor =556 std::make_unique<SlotVisitor>(*this);557 558 if (Options::optimizeParallelSlotVisitorsForStoppedMutator())559 newVisitor->optimizeForStoppedMutator();560 561 newVisitor->didStartMarking();562 563 slotVisitor = newVisitor.get();564 m_parallelSlotVisitors.append(WTFMove(newVisitor));565 } else566 slotVisitor = m_availableParallelSlotVisitors.takeLast();567 }568 569 WTF::registerGCThread(GCThreadType::Helper);570 571 {572 ParallelModeEnabler parallelModeEnabler(*slotVisitor);573 slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);574 }575 576 {577 LockHolder locker(m_parallelSlotVisitorLock);578 m_availableParallelSlotVisitors.append(slotVisitor);579 }580 });581 582 SlotVisitor& slotVisitor = *m_collectorSlotVisitor;583 584 m_constraintSet->didStartMarking();585 586 m_scheduler->beginCollection();587 if (Options::logGC())588 m_scheduler->log();589 590 // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"591 // checks because bootstrap would have put things into the visitor. 
So, we should fall592 // through to draining.593 594 if (!slotVisitor.didReachTermination()) {595 dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");596 dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");597 dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");598 dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");599 dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");600 dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");601 dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");602 dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");603 RELEASE_ASSERT_NOT_REACHED();604 }605 606 for (;;) {607 if (Options::logGC())608 dataLog("v=", bytesVisited() / 1024, "kb o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");609 610 if (slotVisitor.didReachTermination()) {611 m_scheduler->didReachTermination();612 613 assertSharedMarkStacksEmpty();614 615 slotVisitor.mergeIfNecessary();616 for (auto& parallelVisitor : m_parallelSlotVisitors)617 parallelVisitor->mergeIfNecessary();618 619 // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,620 // we don't have to execute root constraints again unless the mutator did run. At a621 // minimum, we could use this for work estimates - but it's probably more than just an622 // estimate.623 // https://bugs.webkit.org/show_bug.cgi?id=166828624 625 // FIXME: We should take advantage of the fact that we could timeout. This only comes626 // into play if we're executing constraints for the first time. But that will matter627 // when we have deep stacks or a lot of DOM stuff.628 // https://bugs.webkit.org/show_bug.cgi?id=166831629 630 // Wondering what this does? 
Look at Heap::addCoreConstraints(). The DOM and others can also631 // add their own using Heap::addMarkingConstraint().632 bool converged =633 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());634 if (converged && slotVisitor.isEmpty()) {635 assertSharedMarkStacksEmpty();636 break;637 }638 639 m_scheduler->didExecuteConstraints();640 }641 642 if (Options::logGC())643 dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");644 645 {646 ParallelModeEnabler enabler(slotVisitor);647 slotVisitor.drainInParallel(m_scheduler->timeToResume());648 }649 650 m_scheduler->synchronousDrainingDidStall();651 652 if (slotVisitor.didReachTermination())653 continue;654 655 if (!m_scheduler->shouldResume())656 continue;657 658 m_scheduler->willResume();659 660 if (Options::logGC()) {661 double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();662 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");663 }664 665 // Forgive the mutator for its past failures to keep up.666 // FIXME: Figure out if moving this to different places results in perf changes.667 m_incrementBalance = 0;668 669 resumeTheWorld();670 671 {672 ParallelModeEnabler enabler(slotVisitor);673 slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());674 }675 676 stopTheWorld();677 678 if (Options::logGC())679 dataLog("[GC: ");680 681 m_scheduler->didStop();682 683 if (Options::logGC())684 m_scheduler->log();685 }686 687 m_scheduler->endCollection();688 689 {690 std::lock_guard<Lock> lock(m_markingMutex);691 m_parallelMarkersShouldExit = true;692 m_markingConditionVariable.notifyAll();693 }694 m_helperClient.finish();695 696 iterateExecutingAndCompilingCodeBlocks(697 [&] (CodeBlock* codeBlock) {698 writeBarrier(codeBlock);699 });700 701 updateObjectCounts(gcStartTime);702 endMarking();703 }704 705 575 void Heap::gatherStackRoots(ConservativeRoots& roots) 706 576 { 707 
m_machineThreads .gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);577 m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState); 708 578 } 709 579 … … 805 675 } 806 676 807 void Heap::updateObjectCounts(double gcStartTime) 808 { 809 if (Options::logGC() == GCLogging::Verbose) { 810 dataLogF("\nNumber of live Objects after GC %lu, took %.6f secs\n", static_cast<unsigned long>(visitCount()), WTF::monotonicallyIncreasingTime() - gcStartTime); 811 } 812 677 void Heap::updateObjectCounts() 678 { 813 679 if (m_collectionScope == CollectionScope::Full) 814 680 m_totalBytesVisited = 0; … … 1034 900 double before = 0; 1035 901 if (Options::logGC()) { 1036 dataLog(" [Full sweep: ", capacity() / 1024, "kb ");902 dataLog("Full sweep: ", capacity() / 1024, "kb "); 1037 903 before = currentTimeMS(); 1038 904 } … … 1041 907 if (Options::logGC()) { 1042 908 double after = currentTimeMS(); 1043 dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms ]");909 dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms"); 1044 910 } 1045 911 } … … 1054 920 DeferGCForAWhile deferGC(*this); 1055 921 if (UNLIKELY(Options::useImmortalObjects())) 1056 sweeper()-> willFinishSweeping();922 sweeper()->stopSweeping(); 1057 923 1058 924 bool alreadySweptInCollectSync = Options::sweepSynchronously(); 1059 925 if (!alreadySweptInCollectSync) { 926 if (Options::logGC()) 927 dataLog("[GC<", RawPointer(this), ">: "); 1060 928 sweepSynchronously(); 1061 929 if (Options::logGC()) 1062 dataLog(" \n");930 dataLog("]\n"); 1063 931 } 1064 932 m_objectSpace.assertNoUnswept(); … … 1109 977 } 1110 978 1111 bool Heap::shouldCollectIn Thread(const LockHolder&)979 bool Heap::shouldCollectInCollectorThread(const AbstractLocker&) 1112 980 { 1113 981 RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket)); 1114 982 RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 1115 983 1116 return 
!m_requests.isEmpty(); 1117 } 1118 1119 void Heap::collectInThread() 984 if (false) 985 dataLog("Mutator has the conn = ", !!(m_worldState.load() & mutatorHasConnBit), "\n"); 986 987 return !m_requests.isEmpty() && !(m_worldState.load() & mutatorHasConnBit); 988 } 989 990 void Heap::collectInCollectorThread() 991 { 992 for (;;) { 993 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Collector, nullptr); 994 switch (result) { 995 case RunCurrentPhaseResult::Finished: 996 return; 997 case RunCurrentPhaseResult::Continue: 998 break; 999 case RunCurrentPhaseResult::NeedCurrentThreadState: 1000 RELEASE_ASSERT_NOT_REACHED(); 1001 break; 1002 } 1003 } 1004 } 1005 1006 void Heap::checkConn(GCConductor conn) 1007 { 1008 switch (conn) { 1009 case GCConductor::Mutator: 1010 RELEASE_ASSERT(m_worldState.load() & mutatorHasConnBit); 1011 return; 1012 case GCConductor::Collector: 1013 RELEASE_ASSERT(!(m_worldState.load() & mutatorHasConnBit)); 1014 return; 1015 } 1016 RELEASE_ASSERT_NOT_REACHED(); 1017 } 1018 1019 auto Heap::runCurrentPhase(GCConductor conn, CurrentThreadState* currentThreadState) -> RunCurrentPhaseResult 1020 { 1021 checkConn(conn); 1022 m_currentThreadState = currentThreadState; 1023 1024 // If the collector transfers the conn to the mutator, it leaves us in between phases. 1025 if (!finishChangingPhase(conn)) { 1026 // A mischevious mutator could repeatedly relinquish the conn back to us. We try to avoid doing 1027 // this, but it's probably not the end of the world if it did happen. 
1028 if (false) 1029 dataLog("Conn bounce-back.\n"); 1030 return RunCurrentPhaseResult::Finished; 1031 } 1032 1033 bool result = false; 1034 switch (m_currentPhase) { 1035 case CollectorPhase::NotRunning: 1036 result = runNotRunningPhase(conn); 1037 break; 1038 1039 case CollectorPhase::Begin: 1040 result = runBeginPhase(conn); 1041 break; 1042 1043 case CollectorPhase::Fixpoint: 1044 if (!currentThreadState && conn == GCConductor::Mutator) 1045 return RunCurrentPhaseResult::NeedCurrentThreadState; 1046 1047 result = runFixpointPhase(conn); 1048 break; 1049 1050 case CollectorPhase::Concurrent: 1051 result = runConcurrentPhase(conn); 1052 break; 1053 1054 case CollectorPhase::Reloop: 1055 result = runReloopPhase(conn); 1056 break; 1057 1058 case CollectorPhase::End: 1059 result = runEndPhase(conn); 1060 break; 1061 } 1062 1063 return result ? RunCurrentPhaseResult::Continue : RunCurrentPhaseResult::Finished; 1064 } 1065 1066 NEVER_INLINE bool Heap::runNotRunningPhase(GCConductor conn) 1067 { 1068 // Check m_requests since the mutator calls this to poll what's going on. 
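runCurrentPhase() above turns the collector into an explicit state machine so that either holder of the conn (mutator or collector thread) can drive it one phase at a time. A toy sketch of that dispatch loop follows; the phase names mirror the patch, but the handler bodies and the two-round convergence are invented placeholders, not JSC's actual logic.

```cpp
#include <cassert>
#include <vector>

enum class CollectorPhase { NotRunning, Begin, Fixpoint, Concurrent, Reloop, End };

// Simplified stand-in for Heap::runCurrentPhase(): each handler either
// advances to the next phase and reports "keep going", or reports that the
// cycle is finished. The real code additionally tracks which side has the
// conn and can bail out mid-cycle to hand control over.
struct MiniCollector {
    CollectorPhase m_currentPhase { CollectorPhase::NotRunning };
    std::vector<CollectorPhase> trace;
    int m_fixpointIterations { 0 };

    bool changePhase(CollectorPhase next)
    {
        m_currentPhase = next;
        return true; // more phases to run
    }

    bool runCurrentPhase()
    {
        trace.push_back(m_currentPhase);
        switch (m_currentPhase) {
        case CollectorPhase::NotRunning:
            return changePhase(CollectorPhase::Begin);
        case CollectorPhase::Begin:
            return changePhase(CollectorPhase::Fixpoint);
        case CollectorPhase::Fixpoint:
            // Pretend marking converges after two rounds of constraints.
            if (++m_fixpointIterations < 2)
                return changePhase(CollectorPhase::Concurrent);
            return changePhase(CollectorPhase::End);
        case CollectorPhase::Concurrent:
            return changePhase(CollectorPhase::Reloop);
        case CollectorPhase::Reloop:
            return changePhase(CollectorPhase::Fixpoint);
        case CollectorPhase::End:
            m_currentPhase = CollectorPhase::NotRunning;
            return false; // collection cycle complete
        }
        return false;
    }

    void collect()
    {
        while (runCurrentPhase()) { }
    }
};
```

Because each step is resumable, the mutator can poll the machine from its allocation slow path when it holds the conn, instead of the collector thread owning the whole cycle.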
1069 { 1070 auto locker = holdLock(*m_threadLock); 1071 if (m_requests.isEmpty()) 1072 return false; 1073 } 1074 1075 return changePhase(conn, CollectorPhase::Begin); 1076 } 1077 1078 NEVER_INLINE bool Heap::runBeginPhase(GCConductor conn) 1120 1079 { 1121 1080 m_currentGCStartTime = MonotonicTime::now(); 1122 1081 1123 1082 std::optional<CollectionScope> scope; 1124 1083 { … … 1127 1086 scope = m_requests.first(); 1128 1087 } 1129 1130 SuperSamplerScope superSamplerScope(false); 1131 TimingScope collectImplTimingScope(scope, "Heap::collectInThread"); 1132 1133 #if ENABLE(ALLOCATION_LOGGING) 1134 dataLogF("JSC GC starting collection.\n"); 1135 #endif 1136 1137 stopTheWorld(); 1138 1139 if (false) 1140 dataLog("GC START!\n"); 1141 1142 MonotonicTime before; 1143 if (Options::logGC()) { 1144 dataLog("[GC: START ", capacity() / 1024, "kb "); 1145 before = MonotonicTime::now(); 1146 } 1147 1148 double gcStartTime; 1149 1150 ASSERT(m_isSafeToCollect); 1088 1089 if (Options::logGC()) 1090 dataLog("[GC<", RawPointer(this), ">: START ", gcConductorShortName(conn), " ", capacity() / 1024, "kb "); 1091 1092 m_beforeGC = MonotonicTime::now(); 1093 1151 1094 if (m_collectionScope) { 1152 1095 dataLog("Collection scope already set during GC: ", *m_collectionScope, "\n"); 1153 1096 RELEASE_ASSERT_NOT_REACHED(); 1154 1097 } 1155 1098 1156 1099 willStartCollection(scope); 1157 collectImplTimingScope.setScope(*this); 1158 1159 gcStartTime = WTF::monotonicallyIncreasingTime(); 1100 1160 1101 if (m_verifier) { 1161 1102 // Verify that live objects from the last GC cycle haven't been corrupted by … … 1166 1107 m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking); 1167 1108 } 1168 1109 1169 1110 prepareForMarking(); 1170 1171 markToFixpoint(gcStartTime); 1172 1111 1112 if (m_collectionScope == CollectionScope::Full) { 1113 m_opaqueRoots.clear(); 1114 m_collectorSlotVisitor->clearMarkStacks(); 1115 m_mutatorMarkStack->clear(); 1116 } 1117 1118 
RELEASE_ASSERT(m_raceMarkStack->isEmpty()); 1119 1120 beginMarking(); 1121 1122 forEachSlotVisitor( 1123 [&] (SlotVisitor& visitor) { 1124 visitor.didStartMarking(); 1125 }); 1126 1127 m_parallelMarkersShouldExit = false; 1128 1129 m_helperClient.setFunction( 1130 [this] () { 1131 SlotVisitor* slotVisitor; 1132 { 1133 LockHolder locker(m_parallelSlotVisitorLock); 1134 if (m_availableParallelSlotVisitors.isEmpty()) { 1135 std::unique_ptr<SlotVisitor> newVisitor = std::make_unique<SlotVisitor>( 1136 *this, toCString("P", m_parallelSlotVisitors.size() + 1)); 1137 1138 if (Options::optimizeParallelSlotVisitorsForStoppedMutator()) 1139 newVisitor->optimizeForStoppedMutator(); 1140 1141 newVisitor->didStartMarking(); 1142 1143 slotVisitor = newVisitor.get(); 1144 m_parallelSlotVisitors.append(WTFMove(newVisitor)); 1145 } else 1146 slotVisitor = m_availableParallelSlotVisitors.takeLast(); 1147 } 1148 1149 WTF::registerGCThread(GCThreadType::Helper); 1150 1151 { 1152 ParallelModeEnabler parallelModeEnabler(*slotVisitor); 1153 slotVisitor->drainFromShared(SlotVisitor::SlaveDrain); 1154 } 1155 1156 { 1157 LockHolder locker(m_parallelSlotVisitorLock); 1158 m_availableParallelSlotVisitors.append(slotVisitor); 1159 } 1160 }); 1161 1162 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1163 1164 m_constraintSet->didStartMarking(); 1165 1166 m_scheduler->beginCollection(); 1167 if (Options::logGC()) 1168 m_scheduler->log(); 1169 1170 // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()" 1171 // checks because bootstrap would have put things into the visitor. So, we should fall 1172 // through to draining. 
1173 1174 if (!slotVisitor.didReachTermination()) { 1175 dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n"); 1176 dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n"); 1177 dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n"); 1178 dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n"); 1179 dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n"); 1180 dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n"); 1181 dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n"); 1182 dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n"); 1183 RELEASE_ASSERT_NOT_REACHED(); 1184 } 1185 1186 return changePhase(conn, CollectorPhase::Fixpoint); 1187 } 1188 1189 NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn) 1190 { 1191 RELEASE_ASSERT(conn == GCConductor::Collector || m_currentThreadState); 1192 1193 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1194 1195 if (Options::logGC()) { 1196 HashMap<const char*, size_t> visitMap; 1197 forEachSlotVisitor( 1198 [&] (SlotVisitor& slotVisitor) { 1199 visitMap.add(slotVisitor.codeName(), slotVisitor.bytesVisited() / 1024); 1200 }); 1201 1202 auto perVisitorDump = sortedMapDump( 1203 visitMap, 1204 [] (const char* a, const char* b) -> bool { 1205 return strcmp(a, b) < 0; 1206 }, 1207 ":", " "); 1208 1209 dataLog("v=", bytesVisited() / 1024, "kb (", perVisitorDump, ") o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " "); 1210 } 1211 1212 if (slotVisitor.didReachTermination()) { 1213 m_scheduler->didReachTermination(); 1214 1215 assertSharedMarkStacksEmpty(); 1216 1217 slotVisitor.mergeIfNecessary(); 1218 for (auto& parallelVisitor : m_parallelSlotVisitors) 1219 parallelVisitor->mergeIfNecessary(); 1220 
1221 // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely, 1222 // we don't have to execute root constraints again unless the mutator did run. At a 1223 // minimum, we could use this for work estimates - but it's probably more than just an 1224 // estimate. 1225 // https://bugs.webkit.org/show_bug.cgi?id=166828 1226 1227 // FIXME: We should take advantage of the fact that we could timeout. This only comes 1228 // into play if we're executing constraints for the first time. But that will matter 1229 // when we have deep stacks or a lot of DOM stuff. 1230 // https://bugs.webkit.org/show_bug.cgi?id=166831 1231 1232 // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also 1233 // add their own using Heap::addMarkingConstraint(). 1234 bool converged = 1235 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity()); 1236 if (converged && slotVisitor.isEmpty()) { 1237 assertSharedMarkStacksEmpty(); 1238 return changePhase(conn, CollectorPhase::End); 1239 } 1240 1241 m_scheduler->didExecuteConstraints(); 1242 } 1243 1244 if (Options::logGC()) 1245 dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " "); 1246 1247 { 1248 ParallelModeEnabler enabler(slotVisitor); 1249 slotVisitor.drainInParallel(m_scheduler->timeToResume()); 1250 } 1251 1252 m_scheduler->synchronousDrainingDidStall(); 1253 1254 if (slotVisitor.didReachTermination()) 1255 return true; // This is like relooping to the top if runFixpointPhase(). 1256 1257 if (!m_scheduler->shouldResume()) 1258 return true; 1259 1260 m_scheduler->willResume(); 1261 1262 if (Options::logGC()) { 1263 double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds(); 1264 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n"); 1265 } 1266 1267 // Forgive the mutator for its past failures to keep up. 
1268 // FIXME: Figure out if moving this to different places results in perf changes. 1269 m_incrementBalance = 0; 1270 1271 return changePhase(conn, CollectorPhase::Concurrent); 1272 } 1273 1274 NEVER_INLINE bool Heap::runConcurrentPhase(GCConductor conn) 1275 { 1276 SlotVisitor& slotVisitor = *m_collectorSlotVisitor; 1277 1278 ParallelModeEnabler enabler(slotVisitor); 1279 1280 switch (conn) { 1281 case GCConductor::Mutator: { 1282 // When the mutator has the conn, we poll runConcurrentPhase() on every time someone says 1283 // stopIfNecessary(), so on every allocation slow path. When that happens we poll if it's time 1284 // to stop and do some work. 1285 if (slotVisitor.didReachTermination() 1286 || m_scheduler->shouldStop()) 1287 return changePhase(conn, CollectorPhase::Reloop); 1288 1289 // We could be coming from a collector phase that stuffed our SlotVisitor, so make sure we donate 1290 // everything. This is super cheap if the SlotVisitor is already empty. 1291 slotVisitor.donateAll(); 1292 return false; 1293 } 1294 case GCConductor::Collector: { 1295 slotVisitor.drainInParallelPassively(m_scheduler->timeToStop()); 1296 return changePhase(conn, CollectorPhase::Reloop); 1297 } } 1298 1299 RELEASE_ASSERT_NOT_REACHED(); 1300 return false; 1301 } 1302 1303 NEVER_INLINE bool Heap::runReloopPhase(GCConductor conn) 1304 { 1305 if (Options::logGC()) 1306 dataLog("[GC<", RawPointer(this), ">: ", gcConductorShortName(conn), " "); 1307 1308 m_scheduler->didStop(); 1309 1310 if (Options::logGC()) 1311 m_scheduler->log(); 1312 1313 return changePhase(conn, CollectorPhase::Fixpoint); 1314 } 1315 1316 NEVER_INLINE bool Heap::runEndPhase(GCConductor conn) 1317 { 1318 m_scheduler->endCollection(); 1319 1320 { 1321 auto locker = holdLock(m_markingMutex); 1322 m_parallelMarkersShouldExit = true; 1323 m_markingConditionVariable.notifyAll(); 1324 } 1325 m_helperClient.finish(); 1326 1327 iterateExecutingAndCompilingCodeBlocks( 1328 [&] (CodeBlock* codeBlock) { 1329 
writeBarrier(codeBlock); 1330 }); 1331 1332 updateObjectCounts(); 1333 endMarking(); 1334 1173 1335 if (m_verifier) { 1174 1336 m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking); … … 1196 1358 updateAllocationLimits(); 1197 1359 1198 didFinishCollection( gcStartTime);1360 didFinishCollection(); 1199 1361 1200 1362 if (m_verifier) { … … 1209 1371 1210 1372 if (Options::logGC()) { 1211 MonotonicTime after = MonotonicTime::now(); 1212 double thisPauseMS = (after - m_stopTime).milliseconds(); 1213 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), "ms END]\n"); 1373 double thisPauseMS = (m_afterGC - m_stopTime).milliseconds(); 1374 dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (m_afterGC - m_beforeGC).milliseconds(), "ms END]\n"); 1214 1375 } 1215 1376 1216 1377 { 1217 LockHolder locker(*m_threadLock);1378 auto locker = holdLock(*m_threadLock); 1218 1379 m_requests.removeFirst(); 1219 1380 m_lastServedTicket++; … … 1226 1387 1227 1388 setNeedFinalize(); 1228 resumeTheWorld(); 1229 1389 1230 1390 m_lastGCStartTime = m_currentGCStartTime; 1231 1391 m_lastGCEndTime = MonotonicTime::now(); 1232 } 1233 1234 void Heap::stopTheWorld() 1235 { 1236 RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped); 1237 waitWhileNeedFinalize(); 1238 stopTheMutator(); 1392 1393 return changePhase(conn, CollectorPhase::NotRunning); 1394 } 1395 1396 bool Heap::changePhase(GCConductor conn, CollectorPhase nextPhase) 1397 { 1398 checkConn(conn); 1399 1400 m_nextPhase = nextPhase; 1401 1402 return finishChangingPhase(conn); 1403 } 1404 1405 NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn) 1406 { 1407 checkConn(conn); 1408 1409 if (m_nextPhase == m_currentPhase) 1410 return true; 1411 1412 if (false) 1413 dataLog(conn, ": Going to phase: ", m_nextPhase, " (from ", m_currentPhase, ")\n"); 1414 1415 bool suspendedBefore = worldShouldBeSuspended(m_currentPhase); 1416 bool 
suspendedAfter = worldShouldBeSuspended(m_nextPhase); 1417 1418 if (suspendedBefore != suspendedAfter) { 1419 if (suspendedBefore) { 1420 RELEASE_ASSERT(!suspendedAfter); 1421 1422 resumeThePeriphery(); 1423 if (conn == GCConductor::Collector) 1424 resumeTheMutator(); 1425 else 1426 handleNeedFinalize(); 1427 } else { 1428 RELEASE_ASSERT(!suspendedBefore); 1429 RELEASE_ASSERT(suspendedAfter); 1430 1431 if (conn == GCConductor::Collector) { 1432 waitWhileNeedFinalize(); 1433 if (!stopTheMutator()) { 1434 if (false) 1435 dataLog("Returning false.\n"); 1436 return false; 1437 } 1438 } else { 1439 sanitizeStackForVM(m_vm); 1440 handleNeedFinalize(); 1441 } 1442 stopThePeriphery(conn); 1443 } 1444 } 1445 1446 m_currentPhase = m_nextPhase; 1447 return true; 1448 } 1449 1450 void Heap::stopThePeriphery(GCConductor conn) 1451 { 1452 if (m_collectorBelievesThatTheWorldIsStopped) { 1453 dataLog("FATAL: world already stopped.\n"); 1454 RELEASE_ASSERT_NOT_REACHED(); 1455 } 1239 1456 1240 1457 if (m_mutatorDidRun) … … 1242 1459 1243 1460 m_mutatorDidRun = false; 1244 1461 1245 1462 suspendCompilerThreads(); 1246 1463 m_collectorBelievesThatTheWorldIsStopped = true; … … 1254 1471 { 1255 1472 DeferGCForAWhile awhile(*this); 1256 if (JITWorklist::instance()->completeAllForVM(*m_vm)) 1473 if (JITWorklist::instance()->completeAllForVM(*m_vm) 1474 && conn == GCConductor::Collector) 1257 1475 setGCDidJIT(); 1258 1476 } … … 1267 1485 } 1268 1486 1269 void Heap::resumeTheWorld()1487 NEVER_INLINE void Heap::resumeThePeriphery() 1270 1488 { 1271 1489 // Calling resumeAllocating does the Right Thing depending on whether this is the end of a … … 1278 1496 m_barriersExecuted = 0; 1279 1497 1280 RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped); 1498 if (!m_collectorBelievesThatTheWorldIsStopped) { 1499 dataLog("Fatal: collector does not believe that the world is stopped.\n"); 1500 RELEASE_ASSERT_NOT_REACHED(); 1501 } 1281 1502 m_collectorBelievesThatTheWorldIsStopped = false; 1282 
1503 … … 1316 1537 1317 1538 resumeCompilerThreads(); 1318 resumeTheMutator(); 1319 } 1320 1321 void Heap::stopTheMutator() 1539 } 1540 1541 bool Heap::stopTheMutator() 1322 1542 { 1323 1543 for (;;) { 1324 1544 unsigned oldState = m_worldState.load(); 1325 if ((oldState & stoppedBit) 1326 && (oldState & shouldStopBit)) 1327 return; 1328 1329 // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could 1330 // switch to that if it turned out to be less confusing, but then it would not give the 1331 // mutator the opportunity to react to the world being stopped. 1332 if (oldState & mutatorWaitingBit) { 1333 if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit)) 1334 ParkingLot::unparkAll(&m_worldState); 1545 if (oldState & stoppedBit) { 1546 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1547 RELEASE_ASSERT(!(oldState & mutatorWaitingBit)); 1548 RELEASE_ASSERT(!(oldState & mutatorHasConnBit)); 1549 return true; 1550 } 1551 1552 if (oldState & mutatorHasConnBit) { 1553 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1554 RELEASE_ASSERT(!(oldState & stoppedBit)); 1555 return false; 1556 } 1557 1558 if (!(oldState & hasAccessBit)) { 1559 RELEASE_ASSERT(!(oldState & mutatorHasConnBit)); 1560 RELEASE_ASSERT(!(oldState & mutatorWaitingBit)); 1561 // We can stop the world instantly. 1562 if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit)) 1563 return true; 1335 1564 continue; 1336 1565 } 1337 1566 1338 if (!(oldState & hasAccessBit) 1339 || (oldState & stoppedBit)) { 1340 // We can stop the world instantly. 1341 if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit)) 1342 return; 1343 continue; 1344 } 1345 1567 // Transfer the conn to the mutator and bail. 
1346 1568 RELEASE_ASSERT(oldState & hasAccessBit); 1347 1569 RELEASE_ASSERT(!(oldState & stoppedBit)); 1348 m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit); 1349 m_stopIfNecessaryTimer->scheduleSoon(); 1350 ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit); 1351 } 1352 } 1353 1354 void Heap::resumeTheMutator() 1355 { 1570 unsigned newState = (oldState | mutatorHasConnBit) & ~mutatorWaitingBit; 1571 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1572 if (false) 1573 dataLog("Handed off the conn.\n"); 1574 m_stopIfNecessaryTimer->scheduleSoon(); 1575 ParkingLot::unparkAll(&m_worldState); 1576 return false; 1577 } 1578 } 1579 } 1580 1581 NEVER_INLINE void Heap::resumeTheMutator() 1582 { 1583 if (false) 1584 dataLog("Resuming the mutator.\n"); 1356 1585 for (;;) { 1357 1586 unsigned oldState = m_worldState.load(); 1358 RELEASE_ASSERT(oldState & shouldStopBit); 1359 1360 if (!(oldState & hasAccessBit)) { 1361 // We can resume the world instantly. 1362 if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) { 1363 ParkingLot::unparkAll(&m_worldState); 1364 return; 1365 } 1366 continue; 1367 } 1368 1369 // We can tell the world to resume. 
1370 if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) { 1587 if (!!(oldState & hasAccessBit) != !(oldState & stoppedBit)) { 1588 dataLog("Fatal: hasAccess = ", !!(oldState & hasAccessBit), ", stopped = ", !!(oldState & stoppedBit), "\n"); 1589 RELEASE_ASSERT_NOT_REACHED(); 1590 } 1591 if (oldState & mutatorHasConnBit) { 1592 dataLog("Fatal: mutator has the conn.\n"); 1593 RELEASE_ASSERT_NOT_REACHED(); 1594 } 1595 1596 if (!(oldState & stoppedBit)) { 1597 if (false) 1598 dataLog("Returning because not stopped.\n"); 1599 return; 1600 } 1601 1602 if (m_worldState.compareExchangeWeak(oldState, oldState & ~stoppedBit)) { 1603 if (false) 1604 dataLog("CASing and returning.\n"); 1371 1605 ParkingLot::unparkAll(&m_worldState); 1372 1606 return; … … 1390 1624 { 1391 1625 RELEASE_ASSERT(oldState & hasAccessBit); 1626 RELEASE_ASSERT(!(oldState & stoppedBit)); 1392 1627 1393 1628 // It's possible for us to wake up with finalization already requested but the world not yet 1394 1629 // resumed. If that happens, we can't run finalization yet. 1395 if (!(oldState & stoppedBit) 1396 && handleNeedFinalize(oldState)) 1630 if (handleNeedFinalize(oldState)) 1397 1631 return true; 1398 1399 if (!(oldState & shouldStopBit) && !m_scheduler->shouldStop()) { 1400 if (!(oldState & stoppedBit)) 1401 return false; 1402 m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit); 1403 return true; 1404 } 1405 1406 sanitizeStackForVM(m_vm); 1407 1408 if (verboseStop) { 1409 dataLog("Stopping!\n"); 1410 WTFReportBacktrace(); 1411 } 1412 m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit); 1413 ParkingLot::unparkAll(&m_worldState); 1414 ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit); 1415 return true; 1632 1633 // FIXME: When entering the concurrent phase, we could arrange for this branch not to fire, and then 1634 // have the SlotVisitor do things to the m_worldState to make this branch fire again. 
That would 1635 // prevent us from polling this so much. Ideally, stopIfNecessary would ignore the mutatorHasConnBit 1636 // and there would be some other bit indicating whether we were in some GC phase other than the 1637 // NotRunning or Concurrent ones. 1638 if (oldState & mutatorHasConnBit) 1639 collectInMutatorThread(); 1640 1641 return false; 1642 } 1643 1644 NEVER_INLINE void Heap::collectInMutatorThread() 1645 { 1646 CollectingScope collectingScope(*this); 1647 for (;;) { 1648 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, nullptr); 1649 switch (result) { 1650 case RunCurrentPhaseResult::Finished: 1651 return; 1652 case RunCurrentPhaseResult::Continue: 1653 break; 1654 case RunCurrentPhaseResult::NeedCurrentThreadState: 1655 sanitizeStackForVM(m_vm); 1656 auto lambda = [&] (CurrentThreadState& state) { 1657 for (;;) { 1658 RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, &state); 1659 switch (result) { 1660 case RunCurrentPhaseResult::Finished: 1661 return; 1662 case RunCurrentPhaseResult::Continue: 1663 break; 1664 case RunCurrentPhaseResult::NeedCurrentThreadState: 1665 RELEASE_ASSERT_NOT_REACHED(); 1666 break; 1667 } 1668 } 1669 }; 1670 callWithCurrentThreadState(scopedLambda<void(CurrentThreadState&)>(WTFMove(lambda))); 1671 return; 1672 } 1673 } 1416 1674 } 1417 1675 … … 1426 1684 if (!done) { 1427 1685 setMutatorWaiting(); 1686 1428 1687 // At this point, the collector knows that we intend to wait, and he will clear the 1429 1688 // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit … … 1432 1691 } 1433 1692 } 1434 1693 1435 1694 // If we're in a stop-the-world scenario, we need to wait for that even if done is true. 1436 1695 unsigned oldState = m_worldState.load(); … … 1438 1697 continue; 1439 1698 1699 // FIXME: We wouldn't need this if stopIfNecessarySlow() had a mode where it knew to just 1700 // do the collection. 
1701 relinquishConn(); 1702 1440 1703 if (done) { 1441 1704 clearMutatorWaiting(); // Clean up just in case. … … 1454 1717 RELEASE_ASSERT(!(oldState & hasAccessBit)); 1455 1718 1456 if (oldState & shouldStopBit) { 1457 RELEASE_ASSERT(oldState & stoppedBit); 1719 if (oldState & stoppedBit) { 1458 1720 if (verboseStop) { 1459 1721 dataLog("Stopping in acquireAccess!\n"); … … 1471 1733 handleNeedFinalize(); 1472 1734 m_mutatorDidRun = true; 1735 stopIfNecessary(); 1473 1736 return; 1474 1737 } … … 1480 1743 for (;;) { 1481 1744 unsigned oldState = m_worldState.load(); 1482 RELEASE_ASSERT(oldState & hasAccessBit); 1483 RELEASE_ASSERT(!(oldState & stoppedBit)); 1745 if (!(oldState & hasAccessBit)) { 1746 dataLog("FATAL: Attempting to release access but the mutator does not have access.\n"); 1747 RELEASE_ASSERT_NOT_REACHED(); 1748 } 1749 if (oldState & stoppedBit) { 1750 dataLog("FATAL: Attempting to release access but the mutator is stopped.\n"); 1751 RELEASE_ASSERT_NOT_REACHED(); 1752 } 1484 1753 1485 1754 if (handleNeedFinalize(oldState)) 1486 1755 continue; 1487 1756 1488 if (oldState & shouldStopBit) { 1489 unsigned newState = (oldState & ~hasAccessBit) | stoppedBit; 1490 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1491 ParkingLot::unparkAll(&m_worldState); 1492 return; 1493 } 1494 continue; 1495 } 1496 1497 RELEASE_ASSERT(!(oldState & shouldStopBit)); 1498 1499 if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit)) 1757 unsigned newState = oldState & ~(hasAccessBit | mutatorHasConnBit); 1758 1759 if ((oldState & mutatorHasConnBit) 1760 && m_nextPhase != m_currentPhase) { 1761 // This means that the collector thread had given us the conn so that we would do something 1762 // for it. Stop ourselves as we release access. This ensures that acquireAccess blocks. In 1763 // the meantime, since we're handing the conn over, the collector will be awoken and it is 1764 // sure to have work to do. 
1765 newState |= stoppedBit; 1766 } 1767 1768 if (m_worldState.compareExchangeWeak(oldState, newState)) { 1769 if (oldState & mutatorHasConnBit) 1770 finishRelinquishingConn(); 1500 1771 return; 1501 } 1772 } 1773 } 1774 } 1775 1776 bool Heap::relinquishConn(unsigned oldState) 1777 { 1778 RELEASE_ASSERT(oldState & hasAccessBit); 1779 RELEASE_ASSERT(!(oldState & stoppedBit)); 1780 1781 if (!(oldState & mutatorHasConnBit)) 1782 return false; // Done. 1783 1784 if (m_threadShouldStop) 1785 return false; 1786 1787 if (!m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorHasConnBit)) 1788 return true; // Loop around. 1789 1790 finishRelinquishingConn(); 1791 return true; 1792 } 1793 1794 void Heap::finishRelinquishingConn() 1795 { 1796 if (false) 1797 dataLog("Relinquished the conn.\n"); 1798 1799 sanitizeStackForVM(m_vm); 1800 1801 auto locker = holdLock(*m_threadLock); 1802 if (!m_requests.isEmpty()) 1803 m_threadCondition->notifyOne(locker); 1804 ParkingLot::unparkAll(&m_worldState); 1805 } 1806 1807 void Heap::relinquishConn() 1808 { 1809 while (relinquishConn(m_worldState.load())) { } 1502 1810 } 1503 1811 … … 1514 1822 } 1515 1823 1516 bool Heap::handleNeedFinalize(unsigned oldState)1824 NEVER_INLINE bool Heap::handleNeedFinalize(unsigned oldState) 1517 1825 { 1518 1826 RELEASE_ASSERT(oldState & hasAccessBit); … … 1581 1889 } 1582 1890 1583 void Heap::notifyThreadStopping(const LockHolder&)1891 void Heap::notifyThreadStopping(const AbstractLocker&) 1584 1892 { 1585 1893 m_threadIsStopping = true; … … 1593 1901 if (Options::logGC()) { 1594 1902 before = MonotonicTime::now(); 1595 dataLog("[GC : finalize ");1903 dataLog("[GC<", RawPointer(this), ">: finalize "); 1596 1904 } 1597 1905 1598 1906 { 1599 HelpingGCScope helpingGCScope(*this);1907 SweepingScope helpingGCScope(*this); 1600 1908 deleteUnmarkedCompiledCode(); 1601 1909 deleteSourceProviderCaches(); … … 1605 1913 if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache()) 1606 1914 cache->clear(); 
1915 1916 { 1917 // This idiom allows callbacks to call addFinalizationCallback() if they want to be added back. 1918 Vector<RefPtr<GCFinalizationCallback>> myCallbacks; 1919 std::swap(myCallbacks, m_finalizationCallbacks); 1920 for (auto& callback : myCallbacks) 1921 callback->didFinalize(*this); 1922 } 1607 1923 1608 1924 if (Options::sweepSynchronously()) … … 1615 1931 } 1616 1932 1933 void Heap::addFinalizationCallback(RefPtr<GCFinalizationCallback> callback) 1934 { 1935 m_finalizationCallbacks.append(callback); 1936 } 1937 1617 1938 Heap::Ticket Heap::requestCollection(std::optional<CollectionScope> scope) 1618 1939 { … … 1623 1944 1624 1945 LockHolder locker(*m_threadLock); 1946 // We may be able to steal the conn. That only works if the collector is definitely not running 1947 // right now. This is an optimization that prevents the collector thread from ever starting in most 1948 // cases. 1949 ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 1950 if (m_lastServedTicket == m_lastGrantedTicket) { 1951 if (false) 1952 dataLog("Taking the conn.\n"); 1953 m_worldState.exchangeOr(mutatorHasConnBit); 1954 } 1955 1625 1956 m_requests.append(scope); 1626 1957 m_lastGrantedTicket++; 1627 m_threadCondition->notifyOne(locker); 1958 if (!(m_worldState.load() & mutatorHasConnBit)) 1959 m_threadCondition->notifyOne(locker); 1628 1960 return m_lastGrantedTicket; 1629 1961 } … … 1632 1964 { 1633 1965 waitForCollector( 1634 [&] (const LockHolder&) -> bool {1966 [&] (const AbstractLocker&) -> bool { 1635 1967 return m_lastServedTicket >= ticket; 1636 1968 }); … … 1771 2103 dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n"); 1772 2104 1773 if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())1774 HeapStatistics::exitWithFailure();1775 1776 2105 if (m_collectionScope == CollectionScope::Full) { 1777 2106 // To avoid pathological GC churn in very small and very large heaps, we set … … 1826 2155 } 1827 2156 
1828 void Heap::didFinishCollection( double gcStartTime)1829 { 1830 double gcEndTime = WTF::monotonicallyIncreasingTime();2157 void Heap::didFinishCollection() 2158 { 2159 m_afterGC = MonotonicTime::now(); 1831 2160 CollectionScope scope = *m_collectionScope; 1832 2161 if (scope == CollectionScope::Full) 1833 m_lastFullGCLength = gcEndTime - gcStartTime;2162 m_lastFullGCLength = m_afterGC - m_beforeGC; 1834 2163 else 1835 m_lastEdenGCLength = gcEndTime - gcStartTime;2164 m_lastEdenGCLength = m_afterGC - m_beforeGC; 1836 2165 1837 2166 #if ENABLE(RESOURCE_USAGE) 1838 2167 ASSERT(externalMemorySize() <= extraMemorySize()); 1839 2168 #endif 1840 1841 if (Options::recordGCPauseTimes())1842 HeapStatistics::recordGCPauseTime(gcStartTime, gcEndTime);1843 1844 if (Options::dumpObjectStatistics())1845 HeapStatistics::dumpObjectStatistics(this);1846 2169 1847 2170 if (HeapProfiler* heapProfiler = m_vm->heapProfiler()) { … … 2071 2394 if (!m_isSafeToCollect) 2072 2395 return; 2073 if (mutatorState() == MutatorState::HelpingGC) 2396 switch (mutatorState()) { 2397 case MutatorState::Running: 2398 case MutatorState::Allocating: 2399 break; 2400 case MutatorState::Sweeping: 2401 case MutatorState::Collecting: 2074 2402 return; 2403 } 2075 2404 if (!Options::useGC()) 2076 2405 return; … … 2081 2410 else if (isDeferred()) 2082 2411 m_didDeferGCWork = true; 2083 else {2412 else 2084 2413 stopIfNecessary(); 2085 // FIXME: Check if the scheduler wants us to stop.2086 // https://bugs.webkit.org/show_bug.cgi?id=1668272087 }2088 2414 } 2089 2415 … … 2100 2426 else if (isDeferred()) 2101 2427 m_didDeferGCWork = true; 2102 else 2428 else { 2103 2429 collectAsync(); 2430 stopIfNecessary(); // This will immediately start the collection if we have the conn. 
2431 } 2104 2432 } 2105 2433 … … 2304 2632 void Heap::notifyIsSafeToCollect() 2305 2633 { 2634 MonotonicTime before; 2635 if (Options::logGC()) { 2636 before = MonotonicTime::now(); 2637 dataLog("[GC<", RawPointer(this), ">: starting "); 2638 } 2639 2306 2640 addCoreConstraints(); 2307 2641 … … 2338 2672 }); 2339 2673 } 2674 2675 if (Options::logGC()) 2676 dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n"); 2340 2677 } 2341 2678 … … 2350 2687 // Wait for all collections to finish. 2351 2688 waitForCollector( 2352 [&] (const LockHolder&) -> bool {2689 [&] (const AbstractLocker&) -> bool { 2353 2690 ASSERT(m_lastServedTicket <= m_lastGrantedTicket); 2354 2691 return m_lastServedTicket == m_lastGrantedTicket; … … 2403 2740 targetBytes = std::min(targetBytes, Options::gcIncrementMaxBytes()); 2404 2741 2405 MonotonicTime before;2406 if (Options::logGC()) {2407 dataLog("[GC: increment t=", targetBytes / 1024, "kb ");2408 before = MonotonicTime::now();2409 }2410 2411 2742 SlotVisitor& slotVisitor = *m_mutatorSlotVisitor; 2412 2743 ParallelModeEnabler parallelModeEnabler(slotVisitor); … … 2414 2745 // incrementBalance may go negative here because it'll remember how many bytes we overshot. 2415 2746 m_incrementBalance -= bytesVisited; 2416 2417 if (Options::logGC()) {2418 MonotonicTime after = MonotonicTime::now();2419 dataLog("p=", (after - before).milliseconds(), "ms b=", m_incrementBalance / 1024, "kb]\n");2420 }2421 2747 } 2422 2748 -
trunk/Source/JavaScriptCore/heap/Heap.h
r212310 r212466 25 25 #include "CellState.h" 26 26 #include "CollectionScope.h" 27 #include "CollectorPhase.h" 27 28 #include "DeleteAllCodeEffort.h" 29 #include "GCConductor.h" 28 30 #include "GCIncomingRefCountedSet.h" 29 31 #include "HandleSet.h" … … 31 33 #include "HeapObserver.h" 32 34 #include "ListableHandler.h" 33 #include "MachineStackMarker.h"34 35 #include "MarkedBlock.h" 35 36 #include "MarkedBlockSet.h" … … 54 55 class CodeBlock; 55 56 class CodeBlockSet; 57 class CollectingScope; 58 class ConservativeRoots; 56 59 class GCDeferralContext; 57 60 class EdenGCActivityCallback; … … 60 63 class GCActivityCallback; 61 64 class GCAwareJITStubRoutine; 65 class GCFinalizationCallback; 62 66 class Heap; 63 67 class HeapProfiler; 64 68 class HeapVerifier; 65 class HelpingGCScope;66 69 class IncrementalSweeper; 67 70 class JITStubRoutine; … … 70 73 class JSValue; 71 74 class LLIntOffsetsExtractor; 75 class MachineThreads; 72 76 class MarkStackArray; 73 77 class MarkedAllocator; … … 76 80 class MarkingConstraintSet; 77 81 class MutatorScheduler; 82 class RunningScope; 78 83 class SlotVisitor; 79 84 class SpaceTimeMutatorScheduler; 80 85 class StopIfNecessaryTimer; 86 class SweepingScope; 81 87 class VM; 88 struct CurrentThreadState; 82 89 83 90 namespace DFG { … … 132 139 133 140 MarkedSpace& objectSpace() { return m_objectSpace; } 134 MachineThreads& machineThreads() { return m_machineThreads; }141 MachineThreads& machineThreads() { return *m_machineThreads; } 135 142 136 143 SlotVisitor& collectorSlotVisitor() { return *m_collectorSlotVisitor; } … … 148 155 std::optional<CollectionScope> collectionScope() const { return m_collectionScope; } 149 156 bool hasHeapAccess() const; 150 bool mutatorIsStopped() const;151 157 bool collectorBelievesThatTheWorldIsStopped() const; 152 158 … … 230 236 void didFinishIterating(); 231 237 232 doublelastFullGCLength() const { return m_lastFullGCLength; }233 doublelastEdenGCLength() const { return m_lastEdenGCLength; }234 void 
increaseLastFullGCLength( doubleamount) { m_lastFullGCLength += amount; }238 Seconds lastFullGCLength() const { return m_lastFullGCLength; } 239 Seconds lastEdenGCLength() const { return m_lastEdenGCLength; } 240 void increaseLastFullGCLength(Seconds amount) { m_lastFullGCLength += amount; } 235 241 236 242 size_t sizeBeforeLastEdenCollection() const { return m_sizeBeforeLastEdenCollect; } … … 320 326 void stopIfNecessary(); 321 327 328 // This gives the conn to the collector. 329 void relinquishConn(); 330 322 331 bool mayNeedToStop(); 323 332 … … 344 353 JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef); 345 354 #endif // USE(CF) 355 356 JS_EXPORT_PRIVATE void addFinalizationCallback(RefPtr<GCFinalizationCallback>); 346 357 347 358 private: 348 359 friend class AllocatingScope; 349 360 friend class CodeBlock; 361 friend class CollectingScope; 350 362 friend class DeferGC; 351 363 friend class DeferGCForAWhile; … … 356 368 friend class HeapUtil; 357 369 friend class HeapVerifier; 358 friend class HelpingGCScope;359 370 friend class JITStubRoutine; 360 371 friend class LLIntOffsetsExtractor; … … 362 373 friend class MarkedAllocator; 363 374 friend class MarkedBlock; 375 friend class RunningScope; 364 376 friend class SlotVisitor; 365 377 friend class SpaceTimeMutatorScheduler; 366 378 friend class StochasticSpaceTimeMutatorScheduler; 379 friend class SweepingScope; 367 380 friend class IncrementalSweeper; 368 381 friend class HeapStatistics; … … 383 396 JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t); 384 397 385 bool shouldCollectInThread(const LockHolder&); 386 void collectInThread(); 387 388 void stopTheWorld(); 389 void resumeTheWorld(); 390 391 void stopTheMutator(); 398 bool shouldCollectInCollectorThread(const AbstractLocker&); 399 void collectInCollectorThread(); 400 401 void checkConn(GCConductor); 402 403 enum class RunCurrentPhaseResult { 404 Finished, 405 Continue, 406 NeedCurrentThreadState 407 }; 408 RunCurrentPhaseResult 
runCurrentPhase(GCConductor, CurrentThreadState*); 409 410 // Returns true if we should keep doing things. 411 bool runNotRunningPhase(GCConductor); 412 bool runBeginPhase(GCConductor); 413 bool runFixpointPhase(GCConductor); 414 bool runConcurrentPhase(GCConductor); 415 bool runReloopPhase(GCConductor); 416 bool runEndPhase(GCConductor); 417 bool changePhase(GCConductor, CollectorPhase); 418 bool finishChangingPhase(GCConductor); 419 420 void collectInMutatorThread(); 421 422 void stopThePeriphery(GCConductor); 423 void resumeThePeriphery(); 424 425 // Returns true if the mutator is stopped, false if the mutator has the conn now. 426 bool stopTheMutator(); 392 427 void resumeTheMutator(); 393 428 … … 402 437 403 438 bool handleGCDidJIT(unsigned); 439 void handleGCDidJIT(); 440 404 441 bool handleNeedFinalize(unsigned); 405 void handleGCDidJIT();406 442 void handleNeedFinalize(); 443 444 bool relinquishConn(unsigned); 445 void finishRelinquishingConn(); 407 446 408 447 void setGCDidJIT(); … … 412 451 void setMutatorWaiting(); 413 452 void clearMutatorWaiting(); 414 void notifyThreadStopping(const LockHolder&);453 void notifyThreadStopping(const AbstractLocker&); 415 454 416 455 typedef uint64_t Ticket; … … 422 461 void prepareForMarking(); 423 462 424 void markToFixpoint(double gcStartTime);425 463 void gatherStackRoots(ConservativeRoots&); 426 464 void gatherJSStackRoots(ConservativeRoots&); … … 429 467 void visitCompilerWorklistWeakReferences(); 430 468 void removeDeadCompilerWorklistEntries(); 431 void updateObjectCounts( double gcStartTime);469 void updateObjectCounts(); 432 470 void endMarking(); 433 471 … … 444 482 JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*); 445 483 void updateAllocationLimits(); 446 void didFinishCollection( double gcStartTime);484 void didFinishCollection(); 447 485 void resumeCompilerThreads(); 448 486 void gatherExtraHeapSnapshotData(HeapProfiler&); … … 511 549 std::unique_ptr<HashSet<MarkedArgumentBuffer*>> m_markListSet; 
512 550 513 MachineThreadsm_machineThreads;551 std::unique_ptr<MachineThreads> m_machineThreads; 514 552 515 553 std::unique_ptr<SlotVisitor> m_collectorSlotVisitor; … … 545 583 546 584 VM* m_vm; 547 doublem_lastFullGCLength;548 doublem_lastEdenGCLength;585 Seconds m_lastFullGCLength; 586 Seconds m_lastEdenGCLength; 549 587 550 588 Vector<ExecutableBase*> m_executables; … … 602 640 std::unique_ptr<MutatorScheduler> m_scheduler; 603 641 604 static const unsigned shouldStopBit = 1u << 0u;605 static const unsigned stoppedBit = 1u << 1u; 642 static const unsigned mutatorHasConnBit = 1u << 0u; // Must also be protected by threadLock. 643 static const unsigned stoppedBit = 1u << 1u; // Only set when !hasAccessBit 606 644 static const unsigned hasAccessBit = 1u << 2u; 607 645 static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid. … … 610 648 Atomic<unsigned> m_worldState; 611 649 bool m_collectorBelievesThatTheWorldIsStopped { false }; 650 MonotonicTime m_beforeGC; 651 MonotonicTime m_afterGC; 612 652 MonotonicTime m_stopTime; 613 653 … … 615 655 Ticket m_lastServedTicket { 0 }; 616 656 Ticket m_lastGrantedTicket { 0 }; 657 CollectorPhase m_currentPhase { CollectorPhase::NotRunning }; 658 CollectorPhase m_nextPhase { CollectorPhase::NotRunning }; 617 659 bool m_threadShouldStop { false }; 618 660 bool m_threadIsStopping { false }; … … 633 675 634 676 uintptr_t m_barriersExecuted { 0 }; 677 678 CurrentThreadState* m_currentThreadState { nullptr }; 679 680 Vector<RefPtr<GCFinalizationCallback>> m_finalizationCallbacks; 635 681 }; 636 682 -
trunk/Source/JavaScriptCore/heap/HeapInlines.h
r211603 r212466 62 62 } 63 63 64 inline bool Heap::mutatorIsStopped() const65 {66 unsigned state = m_worldState.load();67 bool shouldStop = state & shouldStopBit;68 bool stopped = state & stoppedBit;69 // I only got it right when I considered all four configurations of shouldStop/stopped:70 // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we71 // should return false.72 // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested73 // that we resume, so we should return false.74 // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.75 // The mutator should be able to do whatever it wants in this state, as if we were not76 // stopped. So return false.77 // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is78 // stopped, so return true.79 return shouldStop & stopped;80 }81 82 64 inline bool Heap::collectorBelievesThatTheWorldIsStopped() const 83 65 { -
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
r208306 r212466 98 98 } 99 99 100 void IncrementalSweeper:: willFinishSweeping()100 void IncrementalSweeper::stopSweeping() 101 101 { 102 102 m_currentAllocator = nullptr; -
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.h
r208306 r212466 38 38 JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*); 39 39 40 void startSweeping();40 JS_EXPORT_PRIVATE void startSweeping(); 41 41 42 42 JS_EXPORT_PRIVATE void doWork() override; 43 43 bool sweepNextBlock(); 44 void willFinishSweeping();44 JS_EXPORT_PRIVATE void stopSweeping(); 45 45 46 46 private: -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp
r211678 r212466 1 1 /* 2 * Copyright (C) 2003-20 09, 2015-2016Apple Inc. All rights reserved.2 * Copyright (C) 2003-2017 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2007 Eric Seidel <eric@webkit.org> 4 4 * Copyright (C) 2009 Acision BV. All rights reserved. … … 312 312 delete t; 313 313 } 314 } 315 316 SUPPRESS_ASAN 317 void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState& currentThreadState) 318 { 319 if (currentThreadState.registerState) { 320 void* registersBegin = currentThreadState.registerState; 321 void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(currentThreadState.registerState + 1))); 322 conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks); 323 } 324 325 conservativeRoots.add(currentThreadState.stackTop, currentThreadState.stackOrigin, jitStubRoutines, codeBlocks); 314 326 } 315 327 … … 1021 1033 } 1022 1034 1023 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks) 1024 { 1035 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState) 1036 { 1037 if (currentThreadState) 1038 gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, *currentThreadState); 1039 1025 1040 size_t size; 1026 1041 size_t capacity = 0; … … 1037 1052 } 1038 1053 1054 NEVER_INLINE int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>& lambda) 1055 { 1056 DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(state); 1057 lambda(state); 1058 return 42; // Suppress tail call optimization. 1059 } 1060 1039 1061 } // namespace JSC -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.h
r208306 r212466 2 2 * Copyright (C) 1999-2000 Harri Porten (porten@kde.org) 3 3 * Copyright (C) 2001 Peter Kelly (pmk@post.com) 4 * Copyright (C) 2003-20 09, 2015-2016Apple Inc. All rights reserved.4 * Copyright (C) 2003-2017 Apple Inc. All rights reserved. 5 5 * 6 6 * This library is free software; you can redistribute it and/or … … 22 22 #pragma once 23 23 24 #include <setjmp.h>24 #include "RegisterState.h" 25 25 #include <wtf/Lock.h> 26 26 #include <wtf/Noncopyable.h> 27 #include <wtf/ScopedLambda.h> 27 28 #include <wtf/ThreadSpecific.h> 28 29 … … 58 59 class JITStubRoutineSet; 59 60 61 struct CurrentThreadState { 62 void* stackOrigin { nullptr }; 63 void* stackTop { nullptr }; 64 RegisterState* registerState { nullptr }; 65 }; 66 60 67 class MachineThreads { 61 68 WTF_MAKE_NONCOPYABLE(MachineThreads); 62 69 public: 63 typedef jmp_buf RegisterState;64 65 70 MachineThreads(Heap*); 66 71 ~MachineThreads(); 67 72 68 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet& );73 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*); 69 74 70 75 JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads. … … 146 151 147 152 private: 153 void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState&); 154 148 155 void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*); 149 156 bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*); … … 162 169 }; 163 170 171 #define DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(stateName) \ 172 CurrentThreadState stateName; \ 173 stateName.stackTop = &stateName; \ 174 stateName.stackOrigin = wtfThreadData().stack().origin(); \ 175 ALLOCATE_AND_GET_REGISTER_STATE(stateName ## _registerState); \ 176 stateName.registerState = &stateName ## _registerState 177 178 // The return value is meaningless. 
We just use it to suppress tail call optimization. 179 int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>&); 180 164 181 } // namespace JSC 165 182 166 #if COMPILER(GCC_OR_CLANG)167 #define REGISTER_BUFFER_ALIGNMENT __attribute__ ((aligned (sizeof(void*))))168 #else169 #define REGISTER_BUFFER_ALIGNMENT170 #endif171 172 // ALLOCATE_AND_GET_REGISTER_STATE() is a macro so that it is always "inlined" even in debug builds.173 #if COMPILER(MSVC)174 #pragma warning(push)175 #pragma warning(disable: 4611)176 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \177 MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \178 setjmp(registers)179 #pragma warning(pop)180 #else181 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \182 MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \183 setjmp(registers)184 #endif -
trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp
r210844 r212466 217 217 m_heap->collectIfNecessaryOrDefer(deferralContext); 218 218 219 // Goofy corner case: the GC called a callback and now this allocator has a currentBlock. This only 220 // happens when running WebKit tests, which inject a callback into the GC's finalization. 221 if (UNLIKELY(m_currentBlock)) { 222 if (crashOnFailure) 223 return allocate(deferralContext); 224 return tryAllocate(deferralContext); 225 } 226 219 227 void* result = tryAllocateWithoutCollecting(); 220 228 -
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
r210844 r212466 27 27 #include "MarkedBlock.h" 28 28 29 #include "HelpingGCScope.h"30 29 #include "JSCell.h" 31 30 #include "JSDestructibleObject.h" … … 33 32 #include "MarkedBlockInlines.h" 34 33 #include "SuperSampler.h" 34 #include "SweepingScope.h" 35 35 36 36 namespace JSC { … … 410 410 FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode) 411 411 { 412 // FIXME: Maybe HelpingGCScope should just be called SweepScope? 413 HelpingGCScope helpingGCScope(*heap()); 412 SweepingScope sweepingScope(*heap()); 414 413 415 414 m_allocator->setIsUnswept(NoLockingNecessary, this, false); -
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
r210844 r212466 226 226 void MarkedSpace::sweep() 227 227 { 228 m_heap->sweeper()-> willFinishSweeping();228 m_heap->sweeper()->stopSweeping(); 229 229 forEachAllocator( 230 230 [&] (MarkedAllocator& allocator) -> IterationStatus { -
trunk/Source/JavaScriptCore/heap/MutatorState.cpp
r207653 r212466 42 42 out.print("Allocating"); 43 43 return; 44 case MutatorState::HelpingGC: 45 out.print("HelpingGC"); 44 case MutatorState::Sweeping: 45 out.print("Sweeping"); 46 return; 47 case MutatorState::Collecting: 48 out.print("Collecting"); 46 49 return; 47 50 } -
trunk/Source/JavaScriptCore/heap/MutatorState.h
r207653 r212466 35 35 Allocating, 36 36 37 // The mutator was asked by the GC to do some work. 38 HelpingGC 37 // The mutator is sweeping. 38 Sweeping, 39 40 // The mutator is collecting. 41 Collecting 39 42 }; 40 43 -
trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp
r211448 r212466
…
 #include "JSCInlines.h"
 #include "SlotVisitorInlines.h"
+#include "StopIfNecessaryTimer.h"
 #include "SuperSampler.h"
 #include "VM.h"
…
 #endif
 
-SlotVisitor::SlotVisitor(Heap& heap)
+SlotVisitor::SlotVisitor(Heap& heap, CString codeName)
     : m_bytesVisited(0)
     , m_visitCount(0)
…
     , m_markingVersion(MarkedSpace::initialVersion)
     , m_heap(heap)
+    , m_codeName(codeName)
 #if !ASSERT_DISABLED
     , m_isCheckingForDefaultMarkViolation(false)
…
 }
 
-void SlotVisitor::drain(MonotonicTime timeout)
-{
-    RELEASE_ASSERT(m_isInParallelMode);
+NEVER_INLINE void SlotVisitor::drain(MonotonicTime timeout)
+{
+    if (!m_isInParallelMode) {
+        dataLog("FATAL: attempting to drain when not in parallel mode.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
 
     auto locker = holdLock(m_rightToRun);
…
 }
 
-SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
+NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
 {
     ASSERT(m_isInParallelMode);
…
         return SharedDrainResult::TimedOut;
 
-    if (didReachTermination(locker))
+    if (didReachTermination(locker)) {
         m_heap.m_markingConditionVariable.notifyAll();
+
+        // If we're in concurrent mode, then we know that the mutator will eventually do
+        // the right thing because:
+        // - It's possible that the collector has the conn. In that case, the collector will
+        //   wake up from the notification above. This will happen if the app released heap
+        //   access. Native apps can spend a lot of time with heap access released.
+        // - It's possible that the mutator will allocate soon. Then it will check if we
+        //   reached termination. This is the most likely outcome in programs that allocate
+        //   a lot.
+        // - WebCore never releases access. But WebCore has a runloop. The runloop will check
+        //   if we reached termination.
+        // So, this tells the runloop that it's got things to do.
+        m_heap.m_stopIfNecessaryTimer->scheduleSoon();
+    }
 
     auto isReady = [&] () -> bool {
…
     ASSERT(Options::numberOfGCMarkers());
 
-    if (!m_heap.hasHeapAccess()
+    if (Options::numberOfGCMarkers() == 1
+        || (m_heap.m_worldState.load() & Heap::mutatorWaitingBit)
+        || !m_heap.hasHeapAccess()
         || m_heap.collectorBelievesThatTheWorldIsStopped()) {
         // This is an optimization over drainInParallel() when we have a concurrent mutator but
…
 void SlotVisitor::donateAll()
 {
+    if (isEmpty())
+        return;
+
     donateAll(holdLock(m_heap.m_markingMutex));
 }
…
 void SlotVisitor::donate()
 {
-    ASSERT(m_isInParallelMode);
+    if (!m_isInParallelMode) {
+        dataLog("FATAL: Attempting to donate when not in parallel mode.\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
     if (Options::numberOfGCMarkers() == 1)
         return;
-
trunk/Source/JavaScriptCore/heap/SlotVisitor.h
r211448 r212466
…
 public:
-    SlotVisitor(Heap&);
+    SlotVisitor(Heap&, CString codeName);
     ~SlotVisitor();
…
     void donateAll();
+
+    const char* codeName() const { return m_codeName.data(); }
 
 private:
…
     bool m_canOptimizeForStoppedMutator { false };
     Lock m_rightToRun;
+
+    CString m_codeName;
 
 public:
-
trunk/Source/JavaScriptCore/heap/StochasticSpaceTimeMutatorScheduler.cpp
r211448 r212466
…
         Options::concurrentGCMaxHeadroom() *
         std::max<double>(m_bytesAllocatedThisCycleAtTheBeginning, m_heap.m_maxEdenSize);
+
+    if (Options::logGC())
+        dataLog("ca=", m_bytesAllocatedThisCycleAtTheBeginning / 1024, "kb h=", (m_bytesAllocatedThisCycleAtTheEnd - m_bytesAllocatedThisCycleAtTheBeginning) / 1024, "kb ");
+
     m_beforeConstraints = MonotonicTime::now();
 }
…
     double resumeProbability = mutatorUtilization(snapshot);
+    if (resumeProbability < Options::epsilonMutatorUtilization()) {
+        m_plannedResumeTime = MonotonicTime::infinity();
+        return;
+    }
+
     bool shouldResume = m_random.get() < resumeProbability;
…
         return MonotonicTime::now();
     case Resumed: {
-        // Once we're running, we keep going.
-        // FIXME: Maybe force stop when we run out of headroom?
+        // Once we're running, we keep going unless we run out of headroom.
+        Snapshot snapshot(*this);
+        if (mutatorUtilization(snapshot) < Options::epsilonMutatorUtilization())
+            return MonotonicTime::now();
         return MonotonicTime::infinity();
     } }
-
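The scheduler change above can be summarized as: resume the mutator with probability equal to its current utilization, but once utilization falls below `epsilonMutatorUtilization`, stop planning resumes entirely. A loose illustrative model of that decision (not the real C++ scheduler; `Infinity` stands in for `MonotonicTime::infinity()`, and the retry delay is a made-up placeholder):

```javascript
// Illustrative model of the stochastic scheduler's resume decision:
// - below epsilon utilization, never plan a resume (park the mutator);
// - otherwise resume now with probability equal to the utilization.
function planResume(utilization, epsilon, randomValue, now) {
    if (utilization < epsilon)
        return Infinity;   // park: no resume planned
    if (randomValue < utilization)
        return now;        // resume immediately
    return now + 1;        // illustrative short delay before re-deciding
}
```

For example, with utilization 0.005 and epsilon 0.01 the mutator stays parked regardless of the random draw, which mirrors the new early-return added in the diff.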
trunk/Source/JavaScriptCore/jit/JITWorklist.cpp
r211642 r212466
…
 class JITWorklist::Thread : public AutomaticThread {
 public:
-    Thread(const LockHolder& locker, JITWorklist& worklist)
+    Thread(const AbstractLocker& locker, JITWorklist& worklist)
         : AutomaticThread(locker, worklist.m_lock, worklist.m_condition)
         , m_worklist(worklist)
…
 protected:
-    PollResult poll(const LockHolder&) override
+    PollResult poll(const AbstractLocker&) override
     {
         RELEASE_ASSERT(m_worklist.m_numAvailableThreads);
-
trunk/Source/JavaScriptCore/jsc.cpp
r212464 r212466
…
 #include "HeapProfiler.h"
 #include "HeapSnapshotBuilder.h"
-#include "HeapStatistics.h"
 #include "InitializeThreading.h"
 #include "Interpreter.h"
…
 static EncodedJSValue JSC_HOST_CALL functionWaitForReport(ExecState*);
 static EncodedJSValue JSC_HOST_CALL functionHeapCapacity(ExecState*);
+static EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState*);
 
 struct Script {
…
     addFunction(vm, "heapCapacity", functionHeapCapacity, 0);
+    addFunction(vm, "flashHeapAccess", functionFlashHeapAccess, 0);
 }
…
 }
 
+EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState* exec)
+{
+    VM& vm = exec->vm();
+    auto scope = DECLARE_THROW_SCOPE(vm);
+
+    vm.heap.releaseAccess();
+    if (exec->argumentCount() >= 1) {
+        double ms = exec->argument(0).toNumber(exec);
+        RETURN_IF_EXCEPTION(scope, encodedJSValue());
+        sleep(Seconds::fromMilliseconds(ms));
+    }
+    vm.heap.acquireAccess();
+    return JSValue::encode(jsUndefined());
+}
+
 template<typename ValueType>
 typename std::enable_if<!std::is_fundamental<ValueType>::value>::type addOption(VM&, JSObject*, Identifier, ValueType) { }
-
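The new `flashHeapAccess()` shell function lets a stress test briefly release heap access (optionally sleeping for a given number of milliseconds) so the collector thread gets windows in which the mutator is away, as in the added splay-flash-access tests. A minimal sketch of how a test might drive it; `flashHeapAccess` exists only in the `jsc` shell, so this stubs it out elsewhere, and the loop bounds are illustrative:

```javascript
// Allocate in a loop while periodically "flashing" heap access, giving a
// concurrent collector a chance to start while the mutator has no access.
// flashHeapAccess() is a jsc-shell global; fall back to a no-op outside it.
var flash = (typeof flashHeapAccess === "function")
    ? flashHeapAccess
    : function() {};

function churn(iterations) {
    var result = [];
    for (var i = 0; i < iterations; ++i) {
        result.push({ index: i, payload: new Array(16).fill(i) });
        if (i % 100 === 0)
            flash(1); // release heap access for ~1ms, then reacquire
    }
    return result.length;
}
```

Run under `jsc`, `churn(100000)` keeps the heap busy while regularly handing the collector a no-heap-access window.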
trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp
r211666 r212466
…
 #include "ExecutableAllocator.h"
 #include "Heap.h"
-#include "HeapStatistics.h"
 #include "Identifier.h"
 #include "JSDateMath.h"
…
     WTF::initializeGCThreads();
     Options::initialize();
-    if (Options::recordGCPauseTimes())
-        HeapStatistics::initialize();
 #if ENABLE(WRITE_BARRIER_PROFILING)
     WriteBarrierCounters::initialize();
-
trunk/Source/JavaScriptCore/runtime/JSCellInlines.h
r211247 r212466
…
     // independent of whether the mutator thread is sweeping or not. Hence, we also check for ownerThread() !=
     // std::this_thread::get_id() to allow the GC thread or JIT threads to pass this assertion.
-    ASSERT(vm.heap.mutatorState() == MutatorState::Running || vm.apiLock().ownerThread() != std::this_thread::get_id());
+    ASSERT(vm.heap.mutatorState() != MutatorState::Sweeping || vm.apiLock().ownerThread() != std::this_thread::get_id());
     return structure(vm)->classInfo();
 }
-
trunk/Source/JavaScriptCore/runtime/Options.cpp
r212310 r212466
…
         Options::concurrentGCMaxHeadroom() = 1.4;
         Options::minimumGCPauseMS() = 1;
-        Options::gcIncrementScale() = 1;
+        Options::useStochasticMutatorScheduler() = false;
+        if (WTF::numberOfProcessorCores() <= 1)
+            Options::gcIncrementScale() = 1;
+        else
+            Options::gcIncrementScale() = 0;
     }
 }
-
trunk/Source/JavaScriptCore/runtime/Options.h
r212453 r212466
…
     v(double, minimumMutatorUtilization, 0, Normal, nullptr) \
     v(double, maximumMutatorUtilization, 0.7, Normal, nullptr) \
+    v(double, epsilonMutatorUtilization, 0.01, Normal, nullptr) \
     v(double, concurrentGCMaxHeadroom, 1.5, Normal, nullptr) \
     v(double, concurrentGCPeriodMS, 2, Normal, nullptr) \
…
     v(bool, useImmortalObjects, false, Normal, "debugging option to keep all objects alive forever") \
     v(bool, sweepSynchronously, false, Normal, "debugging option to sweep all dead objects synchronously at GC end before resuming mutator") \
-    v(bool, dumpObjectStatistics, false, Normal, nullptr) \
     v(unsigned, maxSingleAllocationSize, 0, Configurable, "debugging option to limit individual allocations to a max size (0 = limit not set, N = limit size in bytes)") \
     \
-
trunk/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
r211247 r212466
 /*
- * Copyright (C) 2013-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #include "CodeBlock.h"
 #include "FunctionCodeBlock.h"
-#include "HeapStatistics.h"
 #include "JSCInlines.h"
 #include "LLIntData.h"
…
 void finalizeStatsAtEndOfTesting()
 {
-    if (Options::logHeapStatisticsAtExit())
-        HeapStatistics::reportSuccess();
     if (Options::reportLLIntStats())
         LLInt::Data::finalizeStats();
-
trunk/Source/WTF/ChangeLog
r212461 r212466
+2017-02-10  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+
+        Extend the use of AbstractLocker so that we can use more locking idioms.
+
+        * wtf/AutomaticThread.cpp:
+        (WTF::AutomaticThreadCondition::notifyOne):
+        (WTF::AutomaticThreadCondition::notifyAll):
+        (WTF::AutomaticThreadCondition::add):
+        (WTF::AutomaticThreadCondition::remove):
+        (WTF::AutomaticThreadCondition::contains):
+        (WTF::AutomaticThread::AutomaticThread):
+        (WTF::AutomaticThread::tryStop):
+        (WTF::AutomaticThread::isWaiting):
+        (WTF::AutomaticThread::notify):
+        (WTF::AutomaticThread::start):
+        (WTF::AutomaticThread::threadIsStopping):
+        * wtf/AutomaticThread.h:
+        * wtf/NumberOfCores.cpp:
+        (WTF::numberOfProcessorCores): Allow this to be overridden for testing.
+        * wtf/ParallelHelperPool.cpp:
+        (WTF::ParallelHelperClient::finish):
+        (WTF::ParallelHelperClient::claimTask):
+        (WTF::ParallelHelperPool::Thread::Thread):
+        (WTF::ParallelHelperPool::didMakeWorkAvailable):
+        (WTF::ParallelHelperPool::hasClientWithTask):
+        (WTF::ParallelHelperPool::getClientWithTask):
+        * wtf/ParallelHelperPool.h:
+
 2017-02-16  Anders Carlsson  <andersca@apple.com>
 
-
trunk/Source/WTF/wtf/AutomaticThread.cpp
r210398 r212466
…
 }
 
-void AutomaticThreadCondition::notifyOne(const LockHolder& locker)
+void AutomaticThreadCondition::notifyOne(const AbstractLocker& locker)
 {
     for (AutomaticThread* thread : m_threads) {
…
 }
 
-void AutomaticThreadCondition::notifyAll(const LockHolder& locker)
+void AutomaticThreadCondition::notifyAll(const AbstractLocker& locker)
 {
     m_condition.notifyAll();
…
 }
 
-void AutomaticThreadCondition::add(const LockHolder&, AutomaticThread* thread)
+void AutomaticThreadCondition::add(const AbstractLocker&, AutomaticThread* thread)
 {
     ASSERT(!m_threads.contains(thread));
…
 }
 
-void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)
+void AutomaticThreadCondition::remove(const AbstractLocker&, AutomaticThread* thread)
 {
     m_threads.removeFirst(thread);
…
 }
 
-bool AutomaticThreadCondition::contains(const LockHolder&, AutomaticThread* thread)
+bool AutomaticThreadCondition::contains(const AbstractLocker&, AutomaticThread* thread)
 {
     return m_threads.contains(thread);
 }
 
-AutomaticThread::AutomaticThread(const LockHolder& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
+AutomaticThread::AutomaticThread(const AbstractLocker& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
     : m_lock(lock)
     , m_condition(condition)
…
 }
 
-bool AutomaticThread::tryStop(const LockHolder&)
+bool AutomaticThread::tryStop(const AbstractLocker&)
 {
     if (!m_isRunning)
…
 }
 
-bool AutomaticThread::isWaiting(const LockHolder& locker)
+bool AutomaticThread::isWaiting(const AbstractLocker& locker)
 {
     return hasUnderlyingThread(locker) && m_isWaiting;
 }
 
-bool AutomaticThread::notify(const LockHolder& locker)
+bool AutomaticThread::notify(const AbstractLocker& locker)
 {
     ASSERT_UNUSED(locker, hasUnderlyingThread(locker));
…
 }
 
-void AutomaticThread::start(const LockHolder&)
+void AutomaticThread::start(const AbstractLocker&)
 {
     RELEASE_ASSERT(m_isRunning);
…
             }
 
-            auto stopImpl = [&] (const LockHolder& locker) {
+            auto stopImpl = [&] (const AbstractLocker& locker) {
                 thread->threadIsStopping(locker);
                 thread->m_hasUnderlyingThread = false;
             };
 
-            auto stopPermanently = [&] (const LockHolder& locker) {
+            auto stopPermanently = [&] (const AbstractLocker& locker) {
                 m_isRunning = false;
                 m_isRunningCondition.notifyAll();
…
             };
 
-            auto stopForTimeout = [&] (const LockHolder& locker) {
+            auto stopForTimeout = [&] (const AbstractLocker& locker) {
                 stopImpl(locker);
             };
…
 }
 
-void AutomaticThread::threadIsStopping(const LockHolder&)
+void AutomaticThread::threadIsStopping(const AbstractLocker&)
 {
 }
-
trunk/Source/WTF/wtf/AutomaticThread.h
r210398 r212466
…
     WTF_EXPORT_PRIVATE ~AutomaticThreadCondition();
 
-    WTF_EXPORT_PRIVATE void notifyOne(const LockHolder&);
-    WTF_EXPORT_PRIVATE void notifyAll(const LockHolder&);
+    WTF_EXPORT_PRIVATE void notifyOne(const AbstractLocker&);
+    WTF_EXPORT_PRIVATE void notifyAll(const AbstractLocker&);
 
     // You can reuse this condition for other things, just as you would any other condition.
…
     WTF_EXPORT_PRIVATE AutomaticThreadCondition();
 
-    void add(const LockHolder&, AutomaticThread*);
-    void remove(const LockHolder&, AutomaticThread*);
-    bool contains(const LockHolder&, AutomaticThread*);
+    void add(const AbstractLocker&, AutomaticThread*);
+    void remove(const AbstractLocker&, AutomaticThread*);
+    bool contains(const AbstractLocker&, AutomaticThread*);
 
     Condition m_condition;
…
 
     // Sometimes it's possible to optimize for the case that there is no underlying thread.
-    bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }
+    bool hasUnderlyingThread(const AbstractLocker&) const { return m_hasUnderlyingThread; }
 
     // This attempts to quickly stop the thread. This will succeed if the thread happens to not be
…
     // thread is to first try this, and if that doesn't work, to tell the thread using your own
     // mechanism (set some flag and then notify the condition).
-    bool tryStop(const LockHolder&);
+    bool tryStop(const AbstractLocker&);
 
-    bool isWaiting(const LockHolder&);
+    bool isWaiting(const AbstractLocker&);
 
-    bool notify(const LockHolder&);
+    bool notify(const AbstractLocker&);
 
     void join();
…
     // This logically creates the thread, but in reality the thread won't be created until someone
     // calls AutomaticThreadCondition::notifyOne() or notifyAll().
-    AutomaticThread(const LockHolder&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
+    AutomaticThread(const AbstractLocker&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
 
     // To understand PollResult and WorkResult, imagine that poll() and work() are being called like
…
 
     enum class PollResult { Work, Stop, Wait };
-    virtual PollResult poll(const LockHolder&) = 0;
+    virtual PollResult poll(const AbstractLocker&) = 0;
 
     enum class WorkResult { Continue, Stop };
…
     // can be sure that the default ones don't do anything (so you don't need a super call).
     virtual void threadDidStart();
-    virtual void threadIsStopping(const LockHolder&);
+    virtual void threadIsStopping(const AbstractLocker&);
 
 private:
     friend class AutomaticThreadCondition;
 
-    void start(const LockHolder&);
+    void start(const AbstractLocker&);
 
     Box<Lock> m_lock;
-
trunk/Source/WTF/wtf/NumberOfCores.cpp
r168353 r212466
…
     if (s_numberOfCores > 0)
         return s_numberOfCores;
+
+    if (const char* coresEnv = getenv("WTF_numberOfProcessorCores")) {
+        unsigned numberOfCores;
+        if (sscanf(coresEnv, "%u", &numberOfCores) == 1) {
+            s_numberOfCores = numberOfCores;
+            return s_numberOfCores;
+        } else
+            fprintf(stderr, "WARNING: failed to parse WTF_numberOfProcessorCores=%s\n", coresEnv);
+    }
 
 #if OS(DARWIN)
-
trunk/Source/WTF/wtf/ParallelHelperPool.cpp
r207480 r212466
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 }
 
-void ParallelHelperClient::finish(const LockHolder&)
+void ParallelHelperClient::finish(const AbstractLocker&)
 {
     m_task = nullptr;
…
 }
 
-RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const LockHolder&)
+RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const AbstractLocker&)
 {
     if (!m_task)
…
 class ParallelHelperPool::Thread : public AutomaticThread {
 public:
-    Thread(const LockHolder& locker, ParallelHelperPool& pool)
+    Thread(const AbstractLocker& locker, ParallelHelperPool& pool)
         : AutomaticThread(locker, pool.m_lock, pool.m_workAvailableCondition)
         , m_pool(pool)
…
 protected:
-    PollResult poll(const LockHolder& locker) override
+    PollResult poll(const AbstractLocker& locker) override
     {
         if (m_pool.m_isDying)
…
 };
 
-void ParallelHelperPool::didMakeWorkAvailable(const LockHolder& locker)
+void ParallelHelperPool::didMakeWorkAvailable(const AbstractLocker& locker)
 {
     while (m_numThreads > m_threads.size())
…
 }
 
-bool ParallelHelperPool::hasClientWithTask(const LockHolder& locker)
+bool ParallelHelperPool::hasClientWithTask(const AbstractLocker& locker)
 {
     return !!getClientWithTask(locker);
 }
 
-ParallelHelperClient* ParallelHelperPool::getClientWithTask(const LockHolder&)
+ParallelHelperClient* ParallelHelperPool::getClientWithTask(const AbstractLocker&)
 {
     // We load-balance by being random.
-
trunk/Source/WTF/wtf/ParallelHelperPool.h
r207480 r212466
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
     friend class ParallelHelperPool;
 
-    void finish(const LockHolder&);
-    RefPtr<SharedTask<void ()>> claimTask(const LockHolder&);
+    void finish(const AbstractLocker&);
+    RefPtr<SharedTask<void ()>> claimTask(const AbstractLocker&);
     void runTask(RefPtr<SharedTask<void ()>>);
…
     friend class Thread;
 
-    void didMakeWorkAvailable(const LockHolder&);
-
-    bool hasClientWithTask(const LockHolder&);
-    ParallelHelperClient* getClientWithTask(const LockHolder&);
-    ParallelHelperClient* waitForClientWithTask(const LockHolder&);
+    void didMakeWorkAvailable(const AbstractLocker&);
+
+    bool hasClientWithTask(const AbstractLocker&);
+    ParallelHelperClient* getClientWithTask(const AbstractLocker&);
+    ParallelHelperClient* waitForClientWithTask(const AbstractLocker&);
 
     Box<Lock> m_lock; // AutomaticThread wants this in a box for safety.
-
trunk/Source/WebCore/ChangeLog
r212465 r212466
+2017-02-11  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+
+        Added new tests in JSTests and LayoutTests.
+
+        The WebCore changes involve:
+
+        - Refactoring around new header discipline.
+
+        - Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline.
+
+        * ForwardingHeaders/heap/GCFinalizationCallback.h: Added.
+        * ForwardingHeaders/heap/IncrementalSweeper.h: Added.
+        * ForwardingHeaders/heap/MachineStackMarker.h: Added.
+        * ForwardingHeaders/heap/RunningScope.h: Added.
+        * bindings/js/CommonVM.cpp:
+        * testing/Internals.cpp:
+        (WebCore::Internals::parserMetaData):
+        (WebCore::Internals::isReadableStreamDisturbed):
+        (WebCore::Internals::isGCRunning):
+        (WebCore::Internals::addGCFinalizationCallback):
+        (WebCore::Internals::stopSweeping):
+        (WebCore::Internals::startSweeping):
+        * testing/Internals.h:
+        * testing/Internals.idl:
+
 2017-02-16  Jiewen Tan  <jiewen_tan@apple.com>
 
-
trunk/Source/WebCore/bindings/js/CommonVM.cpp
r212343 r212466
…
 #include "WebCoreJSClientData.h"
 #include <heap/HeapInlines.h>
+#include "heap/MachineStackMarker.h"
 #include <runtime/VM.h>
 #include <wtf/MainThread.h>
-
trunk/Source/WebCore/testing/Internals.cpp
r212269 r212466
…
 #include "XMLHttpRequest.h"
 #include <bytecode/CodeBlock.h>
+#include <heap/GCFinalizationCallback.h>
+#include <heap/IncrementalSweeper.h>
+#include <heap/RunningScope.h>
 #include <inspector/InspectorAgentBase.h>
 #include <inspector/InspectorFrontendChannel.h>
…
 #endif
 
+using JSC::ArgList;
 using JSC::CallData;
 using JSC::CallType;
 using JSC::CodeBlock;
+using JSC::DeferGCForAWhile;
+using JSC::ExecState;
 using JSC::FunctionExecutable;
+using JSC::GCFinalizationCallback;
+using JSC::Heap;
 using JSC::Identifier;
 using JSC::JSFunction;
…
 using JSC::MarkedArgumentBuffer;
 using JSC::PropertySlot;
+using JSC::RunningScope;
 using JSC::ScriptExecutable;
 using JSC::StackVisitor;
+using JSC::Strong;
+using JSC::VM;
 
 using namespace Inspector;
…
 String Internals::parserMetaData(JSC::JSValue code)
 {
-    JSC::VM& vm = contextDocument()->vm();
-    JSC::ExecState* exec = vm.topCallFrame;
+    VM& vm = contextDocument()->vm();
+    ExecState* exec = vm.topCallFrame;
     ScriptExecutable* executable;
…
 #if ENABLE(READABLE_STREAM_API)
 
-bool Internals::isReadableStreamDisturbed(JSC::ExecState& state, JSValue stream)
+bool Internals::isReadableStreamDisturbed(ExecState& state, JSValue stream)
 {
     JSGlobalObject* globalObject = state.vmEntryGlobalObject();
…
 }
 
+bool Internals::isGCRunning(ExecState& exec)
+{
+    return !!exec.vm().heap.collectionScope();
+}
+
+void Internals::addGCFinalizationCallback(ExecState& exec, JSValue value)
+{
+    VM& vm = exec.vm();
+    Strong<JSC::Unknown> callback(vm, value);
+    Strong<JSGlobalObject> globalObject(vm, exec.lexicalGlobalObject());
+    exec.vm().heap.addFinalizationCallback(
+        JSC::createGCFinalizationCallback(
+            [=] (GCFinalizationCallback*, Heap&) mutable {
+                // This code is basically unsound. Everything in JSC assumes that some allocations, like
+                // string allocations, are pure: they aren't going to clobber the heap. So, any JS code
+                // you supply to this callback better be super careful that it doesn't violate that
+                // assumption in some scary way. This kind of hack is really only appropriate for test
+                // code.
+                ExecState* exec = globalObject->globalExec();
+                VM& vm = exec->vm();
+                DeferGCForAWhile deferGC(vm.heap);
+                RunningScope runningScope(vm.heap);
+                auto scope = DECLARE_CATCH_SCOPE(vm);
+                CallData callData;
+                CallType callType = JSC::getCallData(callback.get(), callData);
+                JSC::call(exec, callback.get(), callType, callData, JSC::jsNull(), ArgList());
+                if (scope.exception()) {
+                    dataLog("Warning: got exception.\n");
+                    scope.clearException();
+                }
+            }));
+}
+
+void Internals::stopSweeping(ExecState& exec)
+{
+    exec.vm().heap.sweeper()->stopSweeping();
+}
+
+void Internals::startSweeping(ExecState& exec)
+{
+    exec.vm().heap.sweeper()->startSweeping();
+}
+
 } // namespace WebCore
-
trunk/Source/WebCore/testing/Internals.h
r211949 r212466
…
     void setAsRunningUserScripts(Document&);
+
+    bool isGCRunning(JSC::ExecState&);
+    void addGCFinalizationCallback(JSC::ExecState&, JSC::JSValue);
+    void stopSweeping(JSC::ExecState&);
+    void startSweeping(JSC::ExecState&);
 
 private:
-
trunk/Source/WebCore/testing/Internals.idl
r211949 r212466
…
     void disableTileSizeUpdateDelay();
-};
+
+    [CallWith=ScriptState] boolean isGCRunning();
+    [CallWith=ScriptState] void addGCFinalizationCallback(any callback);
+    [CallWith=ScriptState] void stopSweeping();
+    [CallWith=ScriptState] void startSweeping();
+};
-
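A layout test could exercise these new internals hooks roughly as follows. This is a hypothetical sketch, not a test from the changeset: `window.internals` exists only under WebKit's test harness, so the helper degrades to an inert probe when it is absent, and all names other than the four IDL methods above are illustrative scaffolding.

```javascript
// Hypothetical layout-test helper: record whether a GC was in flight and
// arrange for a callback at GC finalization. isGCRunning() and
// addGCFinalizationCallback() come from the Internals.idl additions above.
function installGCProbe(internals, onFinalize) {
    if (!internals) {
        // Not running under the WebKit test harness; report an inert probe.
        return { supported: false, gcWasRunning: false };
    }
    var probe = { supported: true, gcWasRunning: internals.isGCRunning() };
    internals.addGCFinalizationCallback(function() {
        // Runs at GC finalization; keep the work minimal, since allocation
        // inside this callback must stay well-behaved.
        onFinalize(probe);
    });
    return probe;
}
```

In a real test one would pass `window.internals`, and could bracket the probe with `internals.stopSweeping()` / `internals.startSweeping()` to control when sweeping interleaves with the check.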
trunk/Tools/ChangeLog
r212459 r212466
+2017-02-10  Filip Pizlo  <fpizlo@apple.com>
+
+        The collector thread should only start when the mutator doesn't have heap access
+        https://bugs.webkit.org/show_bug.cgi?id=167737
+
+        Reviewed by Keith Miller.
+
+        Make more tests collect continuously.
+
+        * Scripts/run-jsc-stress-tests:
+
 2017-02-16  Tim Horton  <timothy_horton@apple.com>
 
-
trunk/Tools/Scripts/run-jsc-stress-tests
r210775 r212466
…
 def runNoisyTestNoCJIT
-    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS))
+    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
 end
 
 def runNoisyTestEagerNoCJIT
-    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS))
+    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
 end