Changeset 212778 in WebKit


Timestamp:
Feb 21, 2017 4:58:15 PM
Author:
fpizlo@apple.com
Message:

The collector thread should only start when the mutator doesn't have heap access
https://bugs.webkit.org/show_bug.cgi?id=167737

Reviewed by Keith Miller.
JSTests:


Add versions of splay that flash heap access, to simulate what might happen if a third-party app
was running concurrent GC. In this case, we might actually start the collector thread.

  • stress/splay-flash-access-1ms.js: Added.

(performance.now):
(this.Setup.setup.setup):
(this.TearDown.tearDown.tearDown):
(Benchmark):
(BenchmarkResult):
(BenchmarkResult.prototype.valueOf):
(BenchmarkSuite):
(alert):
(Math.random):
(BenchmarkSuite.ResetRNG):
(RunStep):
(BenchmarkSuite.RunSuites):
(BenchmarkSuite.CountBenchmarks):
(BenchmarkSuite.GeometricMean):
(BenchmarkSuite.GeometricMeanTime):
(BenchmarkSuite.AverageAbovePercentile):
(BenchmarkSuite.GeometricMeanLatency):
(BenchmarkSuite.FormatScore):
(BenchmarkSuite.prototype.NotifyStep):
(BenchmarkSuite.prototype.NotifyResult):
(BenchmarkSuite.prototype.NotifyError):
(BenchmarkSuite.prototype.RunSingleBenchmark):
(RunNextSetup):
(RunNextBenchmark):
(RunNextTearDown):
(BenchmarkSuite.prototype.RunStep):
(GeneratePayloadTree):
(GenerateKey):
(SplayUpdateStats):
(InsertNewNode):
(SplaySetup):
(SplayTearDown):
(SplayRun):
(SplayTree):
(SplayTree.prototype.isEmpty):
(SplayTree.prototype.insert):
(SplayTree.prototype.remove):
(SplayTree.prototype.find):
(SplayTree.prototype.findMax):
(SplayTree.prototype.findGreatestLessThan):
(SplayTree.prototype.exportKeys):
(SplayTree.prototype.splay_):
(SplayTree.Node):
(SplayTree.Node.prototype.traverse_):
(jscSetUp):
(jscTearDown):
(jscRun):
(averageAbovePercentile):
(printPercentile):

  • stress/splay-flash-access.js: Added.

(performance.now):
(this.Setup.setup.setup):
(this.TearDown.tearDown.tearDown):
(Benchmark):
(BenchmarkResult):
(BenchmarkResult.prototype.valueOf):
(BenchmarkSuite):
(alert):
(Math.random):
(BenchmarkSuite.ResetRNG):
(RunStep):
(BenchmarkSuite.RunSuites):
(BenchmarkSuite.CountBenchmarks):
(BenchmarkSuite.GeometricMean):
(BenchmarkSuite.GeometricMeanTime):
(BenchmarkSuite.AverageAbovePercentile):
(BenchmarkSuite.GeometricMeanLatency):
(BenchmarkSuite.FormatScore):
(BenchmarkSuite.prototype.NotifyStep):
(BenchmarkSuite.prototype.NotifyResult):
(BenchmarkSuite.prototype.NotifyError):
(BenchmarkSuite.prototype.RunSingleBenchmark):
(RunNextSetup):
(RunNextBenchmark):
(RunNextTearDown):
(BenchmarkSuite.prototype.RunStep):
(GeneratePayloadTree):
(GenerateKey):
(SplayUpdateStats):
(InsertNewNode):
(SplaySetup):
(SplayTearDown):
(SplayRun):
(SplayTree):
(SplayTree.prototype.isEmpty):
(SplayTree.prototype.insert):
(SplayTree.prototype.remove):
(SplayTree.prototype.find):
(SplayTree.prototype.findMax):
(SplayTree.prototype.findGreatestLessThan):
(SplayTree.prototype.exportKeys):
(SplayTree.prototype.splay_):
(SplayTree.Node):
(SplayTree.Node.prototype.traverse_):
(jscSetUp):
(jscTearDown):
(jscRun):
(averageAbovePercentile):
(printPercentile):

Source/JavaScriptCore:


This turns the collector thread's workflow into a state machine, so that the mutator thread can
run it directly. This reduces the amount of synchronization we do with the collector thread, and
means that most apps will never start the collector thread. The collector thread will still start
when we need to finish collecting and we don't have heap access.

In this new world, "stopping the world" means relinquishing control of collection to the mutator.
This means tracking who is conducting collection. I use the GCConductor enum to say who is
conducting. It's either GCConductor::Mutator or GCConductor::Collector. I use the term "conn" to
refer to the concept of conducting (having the conn, relinquishing the conn, taking the conn).
So, stopping the world means giving the mutator the conn. Releasing heap access means giving the
collector the conn.
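The conn hand-off can be sketched as a tiny state machine. This is an illustrative model only: the enum values, phase names, and methods below are simplified stand-ins for JSC's actual GCConductor and CollectorPhase types, not the real declarations.

```cpp
#include <cassert>

// Hypothetical sketch of the "conn" hand-off described above. The real
// types live in heap/GCConductor.h and heap/CollectorPhase.h and differ
// in detail; the point here is that whoever has the conn drives the
// collector's state machine directly, one increment at a time.
enum class GCConductor { Mutator, Collector };
enum class CollectorPhase { NotRunning, Begin, Fixpoint, End };

struct HeapSketch {
    GCConductor conn = GCConductor::Mutator;
    CollectorPhase phase = CollectorPhase::NotRunning;

    // "Stopping the world" now means giving the mutator the conn:
    // the mutator thread itself runs collector increments.
    void stopTheWorld() { conn = GCConductor::Mutator; }

    // Releasing heap access gives the collector the conn, so the
    // collector thread (if started) can run the same state machine.
    void releaseHeapAccess() { conn = GCConductor::Collector; }

    // One increment of the collector state machine; returns false once
    // the collection cycle has wound back to NotRunning.
    bool runCollectorIncrement() {
        switch (phase) {
        case CollectorPhase::NotRunning: phase = CollectorPhase::Begin; return true;
        case CollectorPhase::Begin:      phase = CollectorPhase::Fixpoint; return true;
        case CollectorPhase::Fixpoint:   phase = CollectorPhase::End; return true;
        case CollectorPhase::End:        phase = CollectorPhase::NotRunning; return false;
        }
        return false;
    }
};
```

Because each increment is an ordinary function call, no cross-thread synchronization is needed per step while the mutator holds the conn.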

This meant bringing back the conservative scan of the calling thread. It turns out that this
scan was too slow to be called on each GC increment because apparently setjmp() now does system
calls. So, I wrote our own callee-save register saving for the GC. Then I had doubts about
whether it was correct, so I also made it so that the GC only rarely asks for the register
state. I think we still want to use my register saving code instead of setjmp because setjmp
seems to save things we don't need, and that could make us overly conservative.
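For illustration, this is roughly what the setjmp()-based approach looks like: setjmp() forces the callee-saved registers into a memory buffer that can then be scanned conservatively along with the stack. The hand-rolled replacement (heap/RegisterState.h) writes the callee-saves out directly instead. The helper below is hypothetical and only counts pointer-sized slots rather than checking them against the heap.

```cpp
#include <cassert>
#include <csetjmp>
#include <cstddef>
#include <cstdint>

// Stand-in for a real conservative scanner: a real GC would test each
// pointer-sized slot against its heap; here we just count the slots so
// the sketch stays self-contained.
static size_t countPointerSizedSlots(const void*, size_t bytes)
{
    return bytes / sizeof(uintptr_t);
}

// The classic trick the text contrasts against: spill the callee-saved
// registers via setjmp(), then treat the jmp_buf as part of the
// conservative root set. We never longjmp(), so this is safe.
size_t conservativeScanOfCallingThread()
{
    jmp_buf registers;
    setjmp(registers);
    return countPointerSizedSlots(&registers, sizeof(jmp_buf));
}
```

The downside noted above is that setjmp() may do more work (and save more state) than a GC needs, which is both slower and more conservative than a hand-written register dump.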

It turns out that this new scheduling discipline makes the old space-time scheduler perform
better than the new stochastic space-time scheduler on systems with fewer than 4 cores. This is
because the mutator having the conn enables us to time the mutator<->collector context switches
by polling. The OS is never involved. So, we can use super precise timing. This allows the old
space-time scheduler to shine like it hadn't before.

The splay results imply that this is all a good thing. On 2-core systems, this reduces pause
times by 40% and increases throughput by about 5%. On 1-core systems, this reduces pause times by
half and reduces throughput by 8%. On 4-or-more-core systems, this doesn't seem to have much
effect.
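The polling idea can be sketched like this. The slice bookkeeping is a made-up simplification of the real space-time scheduler, and the period values in the usage are invented for illustration; the point is that the mutator decides when to hand an increment to the collector by comparing clocks itself, with no OS-level thread wakeup in the loop.

```cpp
#include <cassert>
#include <chrono>

using Clock = std::chrono::steady_clock;

// Hypothetical polling-based slice scheduler: the mutator calls
// shouldRunCollectorIncrement() from cheap poll points (e.g. allocation
// slow paths), so context "switches" are just function calls timed by
// the mutator's own clock reads.
struct SpaceTimeSchedulerSketch {
    Clock::time_point nextPause;
    Clock::duration mutatorSlice;
    Clock::duration collectorSlice;

    SpaceTimeSchedulerSketch(Clock::time_point start,
                             Clock::duration mutator,
                             Clock::duration collector)
        : nextPause(start + mutator)
        , mutatorSlice(mutator)
        , collectorSlice(collector)
    { }

    // Returns true when the mutator's slice has elapsed and it should
    // spend a slice running the collector's state machine.
    bool shouldRunCollectorIncrement(Clock::time_point now)
    {
        if (now < nextPause)
            return false;
        nextPause = now + collectorSlice + mutatorSlice;
        return true;
    }
};
```

Because both sides of the switch are driven by clock reads in user code, the timing can be far more precise than anything mediated by the OS scheduler.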

  • CMakeLists.txt:
  • JavaScriptCore.xcodeproj/project.pbxproj:
  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::visitChildren):

  • dfg/DFGWorklist.cpp:

(JSC::DFG::Worklist::ThreadBody::ThreadBody):
(JSC::DFG::Worklist::dump):
(JSC::DFG::numberOfWorklists):
(JSC::DFG::ensureWorklistForIndex):
(JSC::DFG::existingWorklistForIndexOrNull):
(JSC::DFG::existingWorklistForIndex):

  • dfg/DFGWorklist.h:

(JSC::DFG::numberOfWorklists): Deleted.
(JSC::DFG::ensureWorklistForIndex): Deleted.
(JSC::DFG::existingWorklistForIndexOrNull): Deleted.
(JSC::DFG::existingWorklistForIndex): Deleted.

  • heap/CollectingScope.h: Added.

(JSC::CollectingScope::CollectingScope):
(JSC::CollectingScope::~CollectingScope):

  • heap/CollectorPhase.cpp: Added.

(JSC::worldShouldBeSuspended):
(WTF::printInternal):

  • heap/CollectorPhase.h: Added.
  • heap/EdenGCActivityCallback.cpp:

(JSC::EdenGCActivityCallback::lastGCLength):

  • heap/FullGCActivityCallback.cpp:

(JSC::FullGCActivityCallback::doCollection):
(JSC::FullGCActivityCallback::lastGCLength):

  • heap/GCConductor.cpp: Added.

(JSC::gcConductorShortName):
(WTF::printInternal):

  • heap/GCConductor.h: Added.
  • heap/GCFinalizationCallback.cpp: Added.

(JSC::GCFinalizationCallback::GCFinalizationCallback):
(JSC::GCFinalizationCallback::~GCFinalizationCallback):

  • heap/GCFinalizationCallback.h: Added.

(JSC::GCFinalizationCallbackFuncAdaptor::GCFinalizationCallbackFuncAdaptor):
(JSC::createGCFinalizationCallback):

  • heap/Heap.cpp:

(JSC::Heap::Thread::Thread):
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::updateObjectCounts):
(JSC::Heap::sweepSynchronously):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectAsync):
(JSC::Heap::collectSync):
(JSC::Heap::shouldCollectInCollectorThread):
(JSC::Heap::collectInCollectorThread):
(JSC::Heap::checkConn):
(JSC::Heap::runNotRunningPhase):
(JSC::Heap::runBeginPhase):
(JSC::Heap::runFixpointPhase):
(JSC::Heap::runConcurrentPhase):
(JSC::Heap::runReloopPhase):
(JSC::Heap::runEndPhase):
(JSC::Heap::changePhase):
(JSC::Heap::finishChangingPhase):
(JSC::Heap::stopThePeriphery):
(JSC::Heap::resumeThePeriphery):
(JSC::Heap::stopTheMutator):
(JSC::Heap::resumeTheMutator):
(JSC::Heap::stopIfNecessarySlow):
(JSC::Heap::collectInMutatorThread):
(JSC::Heap::waitForCollector):
(JSC::Heap::acquireAccessSlow):
(JSC::Heap::releaseAccessSlow):
(JSC::Heap::relinquishConn):
(JSC::Heap::finishRelinquishingConn):
(JSC::Heap::handleNeedFinalize):
(JSC::Heap::notifyThreadStopping):
(JSC::Heap::finalize):
(JSC::Heap::addFinalizationCallback):
(JSC::Heap::requestCollection):
(JSC::Heap::waitForCollection):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::didFinishCollection):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::notifyIsSafeToCollect):
(JSC::Heap::preventCollection):
(JSC::Heap::performIncrement):
(JSC::Heap::markToFixpoint): Deleted.
(JSC::Heap::shouldCollectInThread): Deleted.
(JSC::Heap::collectInThread): Deleted.
(JSC::Heap::stopTheWorld): Deleted.
(JSC::Heap::resumeTheWorld): Deleted.

  • heap/Heap.h:

(JSC::Heap::machineThreads):
(JSC::Heap::lastFullGCLength):
(JSC::Heap::lastEdenGCLength):
(JSC::Heap::increaseLastFullGCLength):

  • heap/HeapInlines.h:

(JSC::Heap::mutatorIsStopped): Deleted.

  • heap/HeapStatistics.cpp: Removed.
  • heap/HeapStatistics.h: Removed.
  • heap/HelpingGCScope.h: Removed.
  • heap/IncrementalSweeper.cpp:

(JSC::IncrementalSweeper::stopSweeping):
(JSC::IncrementalSweeper::willFinishSweeping): Deleted.

  • heap/IncrementalSweeper.h:
  • heap/MachineStackMarker.cpp:

(JSC::MachineThreads::gatherFromCurrentThread):
(JSC::MachineThreads::gatherConservativeRoots):
(JSC::callWithCurrentThreadState):

  • heap/MachineStackMarker.h:
  • heap/MarkedAllocator.cpp:

(JSC::MarkedAllocator::allocateSlowCaseImpl):

  • heap/MarkedBlock.cpp:

(JSC::MarkedBlock::Handle::sweep):

  • heap/MarkedSpace.cpp:

(JSC::MarkedSpace::sweep):

  • heap/MutatorState.cpp:

(WTF::printInternal):

  • heap/MutatorState.h:
  • heap/RegisterState.h: Added.
  • heap/RunningScope.h: Added.

(JSC::RunningScope::RunningScope):
(JSC::RunningScope::~RunningScope):

  • heap/SlotVisitor.cpp:

(JSC::SlotVisitor::SlotVisitor):
(JSC::SlotVisitor::drain):
(JSC::SlotVisitor::drainFromShared):
(JSC::SlotVisitor::drainInParallelPassively):
(JSC::SlotVisitor::donateAll):
(JSC::SlotVisitor::donate):

  • heap/SlotVisitor.h:

(JSC::SlotVisitor::codeName):

  • heap/StochasticSpaceTimeMutatorScheduler.cpp:

(JSC::StochasticSpaceTimeMutatorScheduler::beginCollection):
(JSC::StochasticSpaceTimeMutatorScheduler::synchronousDrainingDidStall):
(JSC::StochasticSpaceTimeMutatorScheduler::timeToStop):

  • heap/SweepingScope.h: Added.

(JSC::SweepingScope::SweepingScope):
(JSC::SweepingScope::~SweepingScope):

  • jit/JITWorklist.cpp:

(JSC::JITWorklist::Thread::Thread):

  • jsc.cpp:

(GlobalObject::finishCreation):
(functionFlashHeapAccess):

  • runtime/InitializeThreading.cpp:

(JSC::initializeThreading):

  • runtime/JSCellInlines.h:

(JSC::JSCell::classInfo):

  • runtime/Options.cpp:

(JSC::overrideDefaults):

  • runtime/Options.h:
  • runtime/TestRunnerUtils.cpp:

(JSC::finalizeStatsAtEndOfTesting):

Source/WebCore:

Added new tests in JSTests.

The WebCore changes involve:

  • Refactoring around new header discipline.
  • Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline.
  • ForwardingHeaders/heap/GCFinalizationCallback.h: Added.
  • ForwardingHeaders/heap/IncrementalSweeper.h: Added.
  • ForwardingHeaders/heap/MachineStackMarker.h: Added.
  • ForwardingHeaders/heap/RunningScope.h: Added.
  • bindings/js/CommonVM.cpp:
  • testing/Internals.cpp:

(WebCore::Internals::parserMetaData):
(WebCore::Internals::isReadableStreamDisturbed):
(WebCore::Internals::isGCRunning):
(WebCore::Internals::addGCFinalizationCallback):
(WebCore::Internals::stopSweeping):
(WebCore::Internals::startSweeping):

  • testing/Internals.h:
  • testing/Internals.idl:

Source/WTF:


Extend the use of AbstractLocker so that we can use more locking idioms.
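The idiom, sketched here with std::mutex instead of WTF's lock types: a function that must only be called with a lock held takes a const AbstractLocker& as proof visible in its signature. The Counter example and its method names are hypothetical.

```cpp
#include <cassert>
#include <mutex>

// Simplified stand-ins for WTF's AbstractLocker/Locker. The base class
// carries no state; it exists so "...WhileLocked" functions can demand
// evidence of a held lock at compile time instead of in a comment.
class AbstractLocker {
protected:
    AbstractLocker() = default;
};

class Locker : public AbstractLocker {
public:
    explicit Locker(std::mutex& lock) : m_lock(lock) { m_lock.lock(); }
    ~Locker() { m_lock.unlock(); }
    Locker(const Locker&) = delete;
    Locker& operator=(const Locker&) = delete;
private:
    std::mutex& m_lock;
};

struct Counter {
    std::mutex lock;
    int value = 0;

    // Callable only by code that can present a locker, so the locking
    // requirement cannot be silently forgotten at a call site.
    void incrementWhileLocked(const AbstractLocker&) { ++value; }

    void increment()
    {
        Locker locker(lock);
        incrementWhileLocked(locker);
    }
};
```

Accepting the abstract base (rather than a concrete Locker) is what lets more locking idioms share the same "...WhileLocked" entry points.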

  • wtf/AutomaticThread.cpp:

(WTF::AutomaticThreadCondition::notifyOne):
(WTF::AutomaticThreadCondition::notifyAll):
(WTF::AutomaticThreadCondition::add):
(WTF::AutomaticThreadCondition::remove):
(WTF::AutomaticThreadCondition::contains):
(WTF::AutomaticThread::AutomaticThread):
(WTF::AutomaticThread::tryStop):
(WTF::AutomaticThread::isWaiting):
(WTF::AutomaticThread::notify):
(WTF::AutomaticThread::start):
(WTF::AutomaticThread::threadIsStopping):

  • wtf/AutomaticThread.h:
  • wtf/NumberOfCores.cpp:

(WTF::numberOfProcessorCores):

  • wtf/ParallelHelperPool.cpp:

(WTF::ParallelHelperClient::finish):
(WTF::ParallelHelperClient::claimTask):
(WTF::ParallelHelperPool::Thread::Thread):
(WTF::ParallelHelperPool::didMakeWorkAvailable):
(WTF::ParallelHelperPool::hasClientWithTask):
(WTF::ParallelHelperPool::getClientWithTask):

  • wtf/ParallelHelperPool.h:

Tools:


Make more tests collect continuously.

  • Scripts/run-jsc-stress-tests:
Location: trunk
Files: 14 added, 3 deleted, 41 edited

  • trunk/JSTests/ChangeLog

    r212717 → r212778
  • trunk/Source/JavaScriptCore/CMakeLists.txt

    r212775 → r212778

        heap/CodeBlockSet.cpp
        heap/CollectionScope.cpp
    +   heap/CollectorPhase.cpp
        heap/ConservativeRoots.cpp
        heap/DeferGC.cpp

        heap/FreeList.cpp
        heap/GCActivityCallback.cpp
    +   heap/GCConductor.cpp
        heap/GCLogging.cpp
        heap/HandleSet.cpp

        heap/HeapSnapshot.cpp
        heap/HeapSnapshotBuilder.cpp
    -   heap/HeapStatistics.cpp
        heap/HeapTimer.cpp
        heap/HeapVerifier.cpp
  • trunk/Source/JavaScriptCore/ChangeLog

    r212775 → r212778
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r212775 → r212778

        0F2BDC4F15228BF300CD8910 /* DFGValueSource.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC4E15228BE700CD8910 /* DFGValueSource.cpp */; };
        0F2BDC5115228FFD00CD8910 /* DFGVariableEvent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */; };
    +   0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C63A91E4FA42C00C13839 /* RunningScope.h */; settings = {ATTRIBUTES = (Private, ); }; };
        0F2D4DDD19832D34007D4B19 /* DebuggerScope.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */; };
        0F2D4DDE19832D34007D4B19 /* DebuggerScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */; settings = {ATTRIBUTES = (Private, ); }; };

        0FA762061DB9243100B7A2FD /* MutatorState.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA762021DB9242300B7A2FD /* MutatorState.cpp */; };
        0FA762071DB9243300B7A2FD /* MutatorState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762031DB9242300B7A2FD /* MutatorState.h */; settings = {ATTRIBUTES = (Private, ); }; };
    -   0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */; };
        0FA7620B1DB959F900B7A2FD /* AllocatingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */; };
        0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8E918B413C80052371D /* Reg.cpp */; };

        0FCEFADF180738C000472CE4 /* FTLLocation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */; };
        0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFADE180738C000472CE4 /* FTLLocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
    +   0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
    +   0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5E81E43D3470006AB08 /* GCConductor.h */; settings = {ATTRIBUTES = (Private, ); }; };
    +   0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */; };
    +   0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */; };
    +   0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5ED1E468A540006AB08 /* SweepingScope.h */; };
    +   0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5EF1E46BF230006AB08 /* RegisterState.h */; settings = {ATTRIBUTES = (Private, ); }; };
    +   0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */; };
        0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD2C92316D01EE900C7803F /* StructureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
        0FD3C82614115D4000FD81CB /* DFGDriver.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */; };

        14B8EC720A5652090062BE54 /* CoreFoundation.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 6560A4CF04B3B3E7008AE952 /* CoreFoundation.framework */; };
        14BA78F113AAB88F005B7C2C /* SlotVisitor.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA78F013AAB88F005B7C2C /* SlotVisitor.h */; settings = {ATTRIBUTES = (Private, ); }; };
    -   14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; };
    +   14BA7A9713AADFF8005B7C2C /* Heap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 14BA7A9513AADFF8005B7C2C /* Heap.cpp */; settings = {COMPILER_FLAGS = "-fno-optimize-sibling-calls"; }; };
        14BA7A9813AADFF8005B7C2C /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14BA7A9613AADFF8005B7C2C /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; };
        14BD59C50A3E8F9F00BAF59C /* JavaScriptCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 932F5BD90822A1C700736975 /* JavaScriptCore.framework */; };

        C225494315F7DBAA0065E898 /* SlotVisitor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C225494215F7DBAA0065E898 /* SlotVisitor.cpp */; };
        C22B31B9140577D700DB475A /* SamplingCounter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F77008E1402FDD60078EB39 /* SamplingCounter.h */; settings = {ATTRIBUTES = (Private, ); }; };
    -   C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */; };
    -   C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */ = {isa = PBXBuildFile; fileRef = C24D31E1161CD695002AA4DB /* HeapStatistics.h */; settings = {ATTRIBUTES = (Private, ); }; };
        C25D709B16DE99F400FCA6BC /* JSManagedValue.mm in Sources */ = {isa = PBXBuildFile; fileRef = C25D709916DE99F400FCA6BC /* JSManagedValue.mm */; };
    21852190                C25D709C16DE99F400FCA6BC /* JSManagedValue.h in Headers */ = {isa = PBXBuildFile; fileRef = C25D709A16DE99F400FCA6BC /* JSManagedValue.h */; settings = {ATTRIBUTES = (Public, ); }; };
     
    27482753                0F2BDC4E15228BE700CD8910 /* DFGValueSource.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGValueSource.cpp; path = dfg/DFGValueSource.cpp; sourceTree = "<group>"; };
    27492754                0F2BDC5015228FFA00CD8910 /* DFGVariableEvent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableEvent.cpp; path = dfg/DFGVariableEvent.cpp; sourceTree = "<group>"; };
     2755                0F2C63A91E4FA42C00C13839 /* RunningScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RunningScope.h; sourceTree = "<group>"; };
    27502756                0F2D4DDB19832D34007D4B19 /* DebuggerScope.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DebuggerScope.cpp; sourceTree = "<group>"; };
    27512757                0F2D4DDC19832D34007D4B19 /* DebuggerScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebuggerScope.h; sourceTree = "<group>"; };
     
    31073113                0FA762021DB9242300B7A2FD /* MutatorState.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MutatorState.cpp; sourceTree = "<group>"; };
    31083114                0FA762031DB9242300B7A2FD /* MutatorState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MutatorState.h; sourceTree = "<group>"; };
    3109                 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HelpingGCScope.h; sourceTree = "<group>"; };
    31103115                0FA7620A1DB959F600B7A2FD /* AllocatingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AllocatingScope.h; sourceTree = "<group>"; };
    31113116                0FA7A8E918B413C80052371D /* Reg.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Reg.cpp; sourceTree = "<group>"; };
     
    32203225                0FCEFADD180738C000472CE4 /* FTLLocation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLocation.cpp; path = ftl/FTLLocation.cpp; sourceTree = "<group>"; };
    32213226                0FCEFADE180738C000472CE4 /* FTLLocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLocation.h; path = ftl/FTLLocation.h; sourceTree = "<group>"; };
     3227                0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CollectorPhase.cpp; sourceTree = "<group>"; };
     3228                0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectorPhase.h; sourceTree = "<group>"; };
     3229                0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCConductor.cpp; sourceTree = "<group>"; };
     3230                0FD0E5E81E43D3470006AB08 /* GCConductor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCConductor.h; sourceTree = "<group>"; };
     3231                0FD0E5ED1E468A540006AB08 /* SweepingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SweepingScope.h; sourceTree = "<group>"; };
     3232                0FD0E5EF1E46BF230006AB08 /* RegisterState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterState.h; sourceTree = "<group>"; };
     3233                0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CollectingScope.h; sourceTree = "<group>"; };
    32223234                0FD2C92316D01EE900C7803F /* StructureInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureInlines.h; sourceTree = "<group>"; };
    32233235                0FD3C82014115CF800FD81CB /* DFGDriver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDriver.cpp; path = dfg/DFGDriver.cpp; sourceTree = "<group>"; };
     
    46954707                C2181FC118A948FB0025A235 /* JSExportTests.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = JSExportTests.mm; path = API/tests/JSExportTests.mm; sourceTree = "<group>"; };
    46964708                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SlotVisitor.cpp; sourceTree = "<group>"; };
    4697                 C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapStatistics.cpp; sourceTree = "<group>"; };
    4698                 C24D31E1161CD695002AA4DB /* HeapStatistics.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapStatistics.h; sourceTree = "<group>"; };
    46994709                C25D709916DE99F400FCA6BC /* JSManagedValue.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = JSManagedValue.mm; sourceTree = "<group>"; };
    47004710                C25D709A16DE99F400FCA6BC /* JSManagedValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSManagedValue.h; sourceTree = "<group>"; };
     
    57615771                                0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */,
    57625772                                0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */,
     5773                                0FD0E5F11E46C8AD0006AB08 /* CollectingScope.h */,
    57635774                                0FA762001DB9242300B7A2FD /* CollectionScope.cpp */,
    57645775                                0FA762011DB9242300B7A2FD /* CollectionScope.h */,
     5776                                0FD0E5E51E43D3470006AB08 /* CollectorPhase.cpp */,
     5777                                0FD0E5E61E43D3470006AB08 /* CollectorPhase.h */,
    57655778                                146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */,
    57665779                                149DAAF212EB559D0083B12B /* ConservativeRoots.h */,
     
    57805793                                2AACE63B18CA5A0300ED0191 /* GCActivityCallback.h */,
    57815794                                BCBE2CAD14E985AA000593AD /* GCAssertions.h */,
     5795                                0FD0E5E71E43D3470006AB08 /* GCConductor.cpp */,
     5796                                0FD0E5E81E43D3470006AB08 /* GCConductor.h */,
    57825797                                0FB4767C1D99AEA7008EA6CB /* GCDeferralContext.h */,
    57835798                                0FB4767D1D99AEA7008EA6CB /* GCDeferralContextInlines.h */,
     
    58155830                                A5311C341C77CEAC00E6B1B6 /* HeapSnapshotBuilder.cpp */,
    58165831                                A5311C351C77CEAC00E6B1B6 /* HeapSnapshotBuilder.h */,
    5817                                 C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */,
    5818                                 C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
    58195832                                C2E526BB1590EF000054E48D /* HeapTimer.cpp */,
    58205833                                C2E526BC1590EF000054E48D /* HeapTimer.h */,
     
    58225835                                FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */,
    58235836                                FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */,
    5824                                 0FA762081DB9283C00B7A2FD /* HelpingGCScope.h */,
    58255837                                C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */,
    58265838                                C25F8BCC157544A900245B71 /* IncrementalSweeper.h */,
     
    58605872                                ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
    58615873                                0FBB73B61DEF3AAC002C009E /* PreventCollectionScope.h */,
     5874                                0FD0E5EF1E46BF230006AB08 /* RegisterState.h */,
    58625875                                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
     5876                                0F2C63A91E4FA42C00C13839 /* RunningScope.h */,
    58635877                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
    58645878                                14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
     
    58755889                                0F7DF1321E2970D50095951B /* Subspace.h */,
    58765890                                0F7DF1331E2970D50095951B /* SubspaceInlines.h */,
     5891                                0FD0E5ED1E468A540006AB08 /* SweepingScope.h */,
    58775892                                0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */,
    58785893                                0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */,
     
    79968011                                DC9A0C201D2D9CB30085124E /* B3CaseCollection.h in Headers */,
    79978012                                DC9A0C1F1D2D9CB10085124E /* B3CaseCollectionInlines.h in Headers */,
     8013                                0FD0E5EE1E468A570006AB08 /* SweepingScope.h in Headers */,
    79988014                                0F338DFA1BE96AA80013C88F /* B3CCallValue.h in Headers */,
    79998015                                0F33FCFB1C1625BE00323F67 /* B3CFG.h in Headers */,
     
    82718287                                0F2017801DCADC3500EA5950 /* DFGFlowIndexing.h in Headers */,
    82728288                                0F2017821DCADD4200EA5950 /* DFGFlowMap.h in Headers */,
     8289                                0F2C63AA1E4FA42E00C13839 /* RunningScope.h in Headers */,
    82738290                                0F9D339717FFC4E60073C2BC /* DFGFlushedAt.h in Headers */,
    82748291                                A7D89CF817A0B8CC00773AD8 /* DFGFlushFormat.h in Headers */,
     
    85638580                                A54C2AB11C6544F200A18D78 /* HeapSnapshot.h in Headers */,
    85648581                                A5311C361C77CEC500E6B1B6 /* HeapSnapshotBuilder.h in Headers */,
    8565                                 C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */,
    85668582                                C2E526BE1590EF000054E48D /* HeapTimer.h in Headers */,
     8583                                0FD0E5EA1E43D34D0006AB08 /* GCConductor.h in Headers */,
    85678584                                0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */,
    85688585                                FE7BA6101A1A7CEC00F1F7B4 /* HeapVerifier.h in Headers */,
    8569                                 0FA762091DB9283E00B7A2FD /* HelpingGCScope.h in Headers */,
    85708586                                0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */,
    85718587                                DC2143071CA32E55000A8869 /* ICStats.h in Headers */,
     
    86288644                                A18193E41B4E0CDB00FC1029 /* IntlCollatorPrototype.lut.h in Headers */,
    86298645                                A1587D6E1B4DC14100D69849 /* IntlDateTimeFormat.h in Headers */,
     8646                                0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */,
    86308647                                A1587D701B4DC14100D69849 /* IntlDateTimeFormatConstructor.h in Headers */,
    86318648                                A1587D751B4DC1C600D69849 /* IntlDateTimeFormatConstructor.lut.h in Headers */,
     
    86378654                                A1D793011B43864B004516F5 /* IntlNumberFormatPrototype.h in Headers */,
    86388655                                A125846F1B45A36000CC7F6C /* IntlNumberFormatPrototype.lut.h in Headers */,
     8656                                0FD0E5F01E46BF250006AB08 /* RegisterState.h in Headers */,
    86398657                                A12BBFF21B044A8B00664B69 /* IntlObject.h in Headers */,
    86408658                                708EBE241CE8F35800453146 /* IntlObjectInlines.h in Headers */,
     
    92179235                                AD2FCC161DB59CB200B3E736 /* WebAssemblyCompileErrorConstructor.lut.h in Headers */,
    92189236                                AD2FCBEF1DB58DAD00B3E736 /* WebAssemblyCompileErrorPrototype.h in Headers */,
     9237                                0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */,
    92199238                                AD2FCC171DB59CB200B3E736 /* WebAssemblyCompileErrorPrototype.lut.h in Headers */,
    92209239                                AD4937D41DDD27DE0077C807 /* WebAssemblyFunction.h in Headers */,
     
    1019110210                                A54C2AB01C6544EE00A18D78 /* HeapSnapshot.cpp in Sources */,
    1019210211                                A5311C371C77CECA00E6B1B6 /* HeapSnapshotBuilder.cpp in Sources */,
    10193                                 C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */,
    1019410212                                C2E526BD1590EF000054E48D /* HeapTimer.cpp in Sources */,
    1019510213                                FE7BA60F1A1A7CEC00F1F7B4 /* HeapVerifier.cpp in Sources */,
     
    1025610274                                FE187A0E1C030D640038BBCA /* JITDivGenerator.cpp in Sources */,
    1025710275                                0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */,
     10276                                0FD0E5EB1E43D3500006AB08 /* CollectorPhase.cpp in Sources */,
    1025810277                                0FB14E1E18124ACE009B6B4D /* JITInlineCacheGenerator.cpp in Sources */,
    1025910278                                FE3A06BD1C11040D00390FDD /* JITLeftShiftGenerator.cpp in Sources */,
     
    1050410523                                6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */,
    1050510524                                0FC3141518146D7000033232 /* RegisterSet.cpp in Sources */,
     10525                                0FD0E5EC1E43D3530006AB08 /* GCConductor.cpp in Sources */,
    1050610526                                A57D23ED1891B5540031C7FA /* RegularExpression.cpp in Sources */,
    1050710527                                992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */,
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r212616 r212778  
    25332533    if (m_instructions.size()) {
    25342534        unsigned refCount = m_instructions.refCount();
    2535         RELEASE_ASSERT(refCount);
     2535        if (!refCount) {
     2536            dataLog("CodeBlock: ", RawPointer(this), "\n");
     2537            dataLog("m_instructions.data(): ", RawPointer(m_instructions.data()), "\n");
     2538            dataLog("refCount: ", refCount, "\n");
     2539            RELEASE_ASSERT_NOT_REACHED();
     2540        }
    25362541        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
    25372542    }
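The CodeBlock.cpp hunk above replaces a bare `RELEASE_ASSERT(refCount)` with `dataLog()` diagnostics followed by `RELEASE_ASSERT_NOT_REACHED()`, so a zero ref count crashes with enough context to debug. A minimal sketch of that pattern in plain C++ (the function and parameter names here are illustrative stand-ins, not the real CodeBlock API):

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Report extra memory amortized over every CodeBlock sharing the same
// instruction stream. A zero refCount would divide by zero, so instead of a
// bare assert we dump the relevant state first, then crash -- mirroring the
// dataLog() + RELEASE_ASSERT_NOT_REACHED() pattern in the hunk above.
unsigned extraMemoryPerBlock(unsigned instructionCount, unsigned instructionSize, unsigned refCount)
{
    if (!refCount) {
        std::fprintf(stderr, "instructionCount: %u\n", instructionCount);
        std::fprintf(stderr, "refCount: %u\n", refCount);
        std::abort(); // stands in for RELEASE_ASSERT_NOT_REACHED()
    }
    return instructionCount * instructionSize / refCount;
}
```

The crash-log output from the diagnostics is the point: a plain release assert only records the assertion site, not the values that violated it.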
  • trunk/Source/JavaScriptCore/dfg/DFGWorklist.cpp

    r212616 r212778  
    4141class Worklist::ThreadBody : public AutomaticThread {
    4242public:
    43     ThreadBody(const LockHolder& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
     43    ThreadBody(const AbstractLocker& locker, Worklist& worklist, ThreadData& data, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition, int relativePriority)
    4444        : AutomaticThread(locker, lock, condition)
    4545        , m_worklist(worklist)
     
    5050   
    5151protected:
    52     PollResult poll(const LockHolder& locker) override
     52    PollResult poll(const AbstractLocker& locker) override
    5353    {
    5454        if (m_worklist.m_queue.isEmpty())
     
    151151    }
    152152   
    153     void threadIsStopping(const LockHolder&) override
     153    void threadIsStopping(const AbstractLocker&) override
    154154    {
    155155        // We're holding the Worklist::m_lock, so we should be careful not to deadlock.
     
    480480}
    481481
    482 void Worklist::dump(const LockHolder&, PrintStream& out) const
     482void Worklist::dump(const AbstractLocker&, PrintStream& out) const
    483483{
    484484    out.print(
     
    536536}
    537537
     538unsigned numberOfWorklists() { return 2; }
     539
     540Worklist& ensureWorklistForIndex(unsigned index)
     541{
     542    switch (index) {
     543    case 0:
     544        return ensureGlobalDFGWorklist();
     545    case 1:
     546        return ensureGlobalFTLWorklist();
     547    default:
     548        RELEASE_ASSERT_NOT_REACHED();
     549        return ensureGlobalDFGWorklist();
     550    }
     551}
     552
     553Worklist* existingWorklistForIndexOrNull(unsigned index)
     554{
     555    switch (index) {
     556    case 0:
     557        return existingGlobalDFGWorklistOrNull();
     558    case 1:
     559        return existingGlobalFTLWorklistOrNull();
     560    default:
     561        RELEASE_ASSERT_NOT_REACHED();
     562        return 0;
     563    }
     564}
     565
     566Worklist& existingWorklistForIndex(unsigned index)
     567{
     568    Worklist* result = existingWorklistForIndexOrNull(index);
     569    RELEASE_ASSERT(result);
     570    return *result;
     571}
     572
    538573void completeAllPlansForVM(VM& vm)
    539574{
  • trunk/Source/JavaScriptCore/dfg/DFGWorklist.h

    r212616 r212778  
    9494    void removeAllReadyPlansForVM(VM&, Vector<RefPtr<Plan>, 8>&);
    9595
    96     void dump(const LockHolder&, PrintStream&) const;
     96    void dump(const AbstractLocker&, PrintStream&) const;
    9797   
    9898    CString m_threadName;
     
    133133
    134134// Simplify doing things for all worklists.
    135 inline unsigned numberOfWorklists() { return 2; }
    136 inline Worklist& ensureWorklistForIndex(unsigned index)
    137 {
    138     switch (index) {
    139     case 0:
    140         return ensureGlobalDFGWorklist();
    141     case 1:
    142         return ensureGlobalFTLWorklist();
    143     default:
    144         RELEASE_ASSERT_NOT_REACHED();
    145         return ensureGlobalDFGWorklist();
    146     }
    147 }
    148 inline Worklist* existingWorklistForIndexOrNull(unsigned index)
    149 {
    150     switch (index) {
    151     case 0:
    152         return existingGlobalDFGWorklistOrNull();
    153     case 1:
    154         return existingGlobalFTLWorklistOrNull();
    155     default:
    156         RELEASE_ASSERT_NOT_REACHED();
    157         return 0;
    158     }
    159 }
    160 inline Worklist& existingWorklistForIndex(unsigned index)
    161 {
    162     Worklist* result = existingWorklistForIndexOrNull(index);
    163     RELEASE_ASSERT(result);
    164     return *result;
    165 }
     135unsigned numberOfWorklists();
     136Worklist& ensureWorklistForIndex(unsigned index);
     137Worklist* existingWorklistForIndexOrNull(unsigned index);
     138Worklist& existingWorklistForIndex(unsigned index);
    166139
    167140#endif // ENABLE(DFG_JIT)
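The DFGWorklist.h and DFGWorklist.cpp hunks above move the worklist-index helpers from `inline` definitions in the header to ordinary out-of-line definitions, leaving only declarations in the header. A reduced, self-contained sketch of the same index dispatch (the `Worklist` struct and the global pointers are stand-ins for the real DFG types):

```cpp
#include <cassert>
#include <cstdlib>

// Stand-in for DFG::Worklist. Index 0 maps to the DFG tier and index 1 to
// the FTL tier, matching numberOfWorklists() == 2 in the patch.
struct Worklist { const char* name; };

static Worklist* dfgWorklist = nullptr;
static Worklist* ftlWorklist = nullptr;

Worklist& ensureGlobalDFGWorklist()
{
    if (!dfgWorklist)
        dfgWorklist = new Worklist { "DFG" };
    return *dfgWorklist;
}

Worklist& ensureGlobalFTLWorklist()
{
    if (!ftlWorklist)
        ftlWorklist = new Worklist { "FTL" };
    return *ftlWorklist;
}

unsigned numberOfWorklists() { return 2; }

Worklist& ensureWorklistForIndex(unsigned index)
{
    switch (index) {
    case 0: return ensureGlobalDFGWorklist();
    case 1: return ensureGlobalFTLWorklist();
    default: std::abort(); // stands in for RELEASE_ASSERT_NOT_REACHED()
    }
}
```

With the definitions out-of-line, callers that iterate worklists by index no longer need the `ensureGlobal*Worklist()` machinery visible at every include site.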
  • trunk/Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp

    r212616 r212778  
    11/*
    2  * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4646double EdenGCActivityCallback::lastGCLength()
    4747{
    48     return m_vm->heap.lastEdenGCLength();
     48    return m_vm->heap.lastEdenGCLength().seconds();
    4949}
    5050
  • trunk/Source/JavaScriptCore/heap/FullGCActivityCallback.cpp

    r212616 r212778  
    11/*
    2  * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    5151    if (heap.isPagedOut(startTime + pagingTimeOut)) {
    5252        cancel();
    53         heap.increaseLastFullGCLength(pagingTimeOut);
     53        heap.increaseLastFullGCLength(Seconds(pagingTimeOut));
    5454        return;
    5555    }
     
    6161double FullGCActivityCallback::lastGCLength()
    6262{
    63     return m_vm->heap.lastFullGCLength();
     63    return m_vm->heap.lastFullGCLength().seconds();
    6464}
    6565
  • trunk/Source/JavaScriptCore/heap/Heap.cpp

    r212624 r212778  
    2424#include "CodeBlock.h"
    2525#include "CodeBlockSetInlines.h"
     26#include "CollectingScope.h"
    2627#include "ConservativeRoots.h"
    2728#include "DFGWorklistInlines.h"
     
    3839#include "HeapProfiler.h"
    3940#include "HeapSnapshot.h"
    40 #include "HeapStatistics.h"
    4141#include "HeapVerifier.h"
    42 #include "HelpingGCScope.h"
    4342#include "IncrementalSweeper.h"
    4443#include "Interpreter.h"
     
    4948#include "JSLock.h"
    5049#include "JSVirtualMachineInternal.h"
     50#include "MachineStackMarker.h"
    5151#include "MarkedSpaceInlines.h"
    5252#include "MarkingConstraintSet.h"
     
    5858#include "StochasticSpaceTimeMutatorScheduler.h"
    5959#include "StopIfNecessaryTimer.h"
     60#include "SweepingScope.h"
    6061#include "SynchronousStopTheWorldMutatorScheduler.h"
    6162#include "TypeProfilerLog.h"
     
    207208class Heap::Thread : public AutomaticThread {
    208209public:
    209     Thread(const LockHolder& locker, Heap& heap)
     210    Thread(const AbstractLocker& locker, Heap& heap)
    210211        : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition)
    211212        , m_heap(heap)
     
    214215   
    215216protected:
    216     PollResult poll(const LockHolder& locker) override
     217    PollResult poll(const AbstractLocker& locker) override
    217218    {
    218219        if (m_heap.m_threadShouldStop) {
     
    220221            return PollResult::Stop;
    221222        }
    222         if (m_heap.shouldCollectInThread(locker))
     223        if (m_heap.shouldCollectInCollectorThread(locker))
    223224            return PollResult::Work;
    224225        return PollResult::Wait;
     
    227228    WorkResult work() override
    228229    {
    229         m_heap.collectInThread();
     230        m_heap.collectInCollectorThread();
    230231        return WorkResult::Continue;
    231232    }
     
    258259    , m_extraMemorySize(0)
    259260    , m_deprecatedExtraMemorySize(0)
    260     , m_machineThreads(this)
    261     , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this))
    262     , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this))
     261    , m_machineThreads(std::make_unique<MachineThreads>(this))
     262    , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this, "C"))
     263    , m_mutatorSlotVisitor(std::make_unique<SlotVisitor>(*this, "M"))
    263264    , m_mutatorMarkStack(std::make_unique<MarkStackArray>())
    264265    , m_raceMarkStack(std::make_unique<MarkStackArray>())
     
    334335void Heap::lastChanceToFinalize()
    335336{
     337    MonotonicTime before;
     338    if (Options::logGC()) {
     339        before = MonotonicTime::now();
     340        dataLog("[GC<", RawPointer(this), ">: shutdown ");
     341    }
     342   
    336343    RELEASE_ASSERT(!m_vm->entryScope);
    337344    RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
     
    346353    }
    347354   
    348     // Carefully bring the thread down. We need to use waitForCollector() until we know that there
    349     // won't be any other collections.
     355    if (Options::logGC())
     356        dataLog("1");
     357   
     358    // Prevent new collections from being started. This is probably not even necessary, since we're not
     359    // going to call into anything that starts collections. Still, this makes the algorithm more
     360    // obviously sound.
     361    m_isSafeToCollect = false;
     362   
     363    if (Options::logGC())
     364        dataLog("2");
     365
     366    bool isCollecting;
     367    {
     368        auto locker = holdLock(*m_threadLock);
     369        RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
     370        isCollecting = m_lastServedTicket < m_lastGrantedTicket;
     371    }
     372    if (isCollecting) {
     373        if (Options::logGC())
     374            dataLog("...]\n");
     375       
     376        // Wait for the current collection to finish.
     377        waitForCollector(
     378            [&] (const AbstractLocker&) -> bool {
     379                RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
     380                return m_lastServedTicket == m_lastGrantedTicket;
     381            });
     382       
     383        if (Options::logGC())
     384            dataLog("[GC<", RawPointer(this), ">: shutdown ");
     385    }
     386    if (Options::logGC())
     387        dataLog("3");
     388
     389    RELEASE_ASSERT(m_requests.isEmpty());
     390    RELEASE_ASSERT(m_lastServedTicket == m_lastGrantedTicket);
     391   
     392    // Carefully bring the thread down.
    350393    bool stopped = false;
    351394    {
    352395        LockHolder locker(*m_threadLock);
    353396        stopped = m_thread->tryStop(locker);
    354         if (!stopped) {
    355             m_threadShouldStop = true;
     397        m_threadShouldStop = true;
     398        if (!stopped)
    356399            m_threadCondition->notifyOne(locker);
    357         }
    358     }
    359     if (!stopped) {
    360         waitForCollector(
    361             [&] (const LockHolder&) -> bool {
    362                 return m_threadIsStopping;
    363             });
    364         // It's now safe to join the thread, since we know that there will not be any more collections.
     400    }
     401
     402    if (Options::logGC())
     403        dataLog("4");
     404   
     405    if (!stopped)
    365406        m_thread->join();
    366     }
     407   
     408    if (Options::logGC())
     409        dataLog("5 ");
    367410   
    368411    m_arrayBuffers.lastChanceToFinalize();
     
    373416
    374417    sweepAllLogicallyEmptyWeakBlocks();
     418   
     419    if (Options::logGC())
     420        dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");
    375421}
    376422
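The reworked `lastChanceToFinalize()` above waits on a granted/served ticket pair before bringing the collector thread down: every requested collection bumps the granted ticket, the collector bumps the served ticket as it finishes, and shutdown may proceed once the two are equal. A minimal single-threaded sketch of that bookkeeping (`TicketClock` is a hypothetical name; the real Heap guards these counters with `m_threadLock` and blocks via `waitForCollector()`):

```cpp
#include <cassert>
#include <cstdint>

// Ticket bookkeeping in the spirit of m_lastGrantedTicket /
// m_lastServedTicket above. Collections in flight == granted - served.
struct TicketClock {
    uint64_t lastGranted = 0;
    uint64_t lastServed = 0;

    // A collection was requested; returns its ticket.
    uint64_t grant() { return ++lastGranted; }

    // The collector finished one collection.
    void serve()
    {
        assert(lastServed < lastGranted); // must never serve more than granted
        ++lastServed;
    }

    // Shutdown is safe only once every granted ticket has been served.
    bool isCollecting() const { return lastServed < lastGranted; }
};
```

In the multithreaded original, `isCollecting()` corresponds to the check taken under `holdLock(*m_threadLock)`, and the equality condition is the predicate passed to `waitForCollector()`.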
     
    526572}
    527573
    528 void Heap::markToFixpoint(double gcStartTime)
    529 {
    530     TimingScope markToFixpointTimingScope(*this, "Heap::markToFixpoint");
    531    
    532     if (m_collectionScope == CollectionScope::Full) {
    533         m_opaqueRoots.clear();
    534         m_collectorSlotVisitor->clearMarkStacks();
    535         m_mutatorMarkStack->clear();
    536     }
    537 
    538     RELEASE_ASSERT(m_raceMarkStack->isEmpty());
    539 
    540     beginMarking();
    541 
    542     forEachSlotVisitor(
    543         [&] (SlotVisitor& visitor) {
    544             visitor.didStartMarking();
    545         });
    546 
    547     m_parallelMarkersShouldExit = false;
    548 
    549     m_helperClient.setFunction(
    550         [this] () {
    551             SlotVisitor* slotVisitor;
    552             {
    553                 LockHolder locker(m_parallelSlotVisitorLock);
    554                 if (m_availableParallelSlotVisitors.isEmpty()) {
    555                     std::unique_ptr<SlotVisitor> newVisitor =
    556                         std::make_unique<SlotVisitor>(*this);
    557                    
    558                     if (Options::optimizeParallelSlotVisitorsForStoppedMutator())
    559                         newVisitor->optimizeForStoppedMutator();
    560                    
    561                     newVisitor->didStartMarking();
    562                    
    563                     slotVisitor = newVisitor.get();
    564                     m_parallelSlotVisitors.append(WTFMove(newVisitor));
    565                 } else
    566                     slotVisitor = m_availableParallelSlotVisitors.takeLast();
    567             }
    568 
    569             WTF::registerGCThread(GCThreadType::Helper);
    570 
    571             {
    572                 ParallelModeEnabler parallelModeEnabler(*slotVisitor);
    573                 slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
    574             }
    575 
    576             {
    577                 LockHolder locker(m_parallelSlotVisitorLock);
    578                 m_availableParallelSlotVisitors.append(slotVisitor);
    579             }
    580         });
    581 
    582     SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
    583 
    584     m_constraintSet->didStartMarking();
    585    
    586     m_scheduler->beginCollection();
    587     if (Options::logGC())
    588         m_scheduler->log();
    589    
    590     // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"
    591     // checks because bootstrap would have put things into the visitor. So, we should fall
    592     // through to draining.
    593    
    594     if (!slotVisitor.didReachTermination()) {
    595         dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");
    596         dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");
    597         dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");
    598         dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");
    599         dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");
    600         dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");
    601         dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");
    602         dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");
    603         RELEASE_ASSERT_NOT_REACHED();
    604     }
    605    
    606     for (;;) {
    607         if (Options::logGC())
    608             dataLog("v=", bytesVisited() / 1024, "kb o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");
    609        
    610         if (slotVisitor.didReachTermination()) {
    611             m_scheduler->didReachTermination();
    612            
    613             assertSharedMarkStacksEmpty();
    614            
    615             slotVisitor.mergeIfNecessary();
    616             for (auto& parallelVisitor : m_parallelSlotVisitors)
    617                 parallelVisitor->mergeIfNecessary();
    618            
    619             // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
    620             // we don't have to execute root constraints again unless the mutator did run. At a
    621             // minimum, we could use this for work estimates - but it's probably more than just an
    622             // estimate.
    623             // https://bugs.webkit.org/show_bug.cgi?id=166828
    624            
    625             // FIXME: We should take advantage of the fact that we could timeout. This only comes
    626             // into play if we're executing constraints for the first time. But that will matter
    627             // when we have deep stacks or a lot of DOM stuff.
    628             // https://bugs.webkit.org/show_bug.cgi?id=166831
    629            
    630             // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also
    631             // add their own using Heap::addMarkingConstraint().
    632             bool converged =
    633                 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
    634             if (converged && slotVisitor.isEmpty()) {
    635                 assertSharedMarkStacksEmpty();
    636                 break;
    637             }
    638            
    639             m_scheduler->didExecuteConstraints();
    640         }
    641        
    642         if (Options::logGC())
    643             dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");
    644        
    645         {
    646             ParallelModeEnabler enabler(slotVisitor);
    647             slotVisitor.drainInParallel(m_scheduler->timeToResume());
    648         }
    649        
    650         m_scheduler->synchronousDrainingDidStall();
    651 
    652         if (slotVisitor.didReachTermination())
    653             continue;
    654        
    655         if (!m_scheduler->shouldResume())
    656             continue;
    657        
    658         m_scheduler->willResume();
    659        
    660         if (Options::logGC()) {
    661             double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();
    662             dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
    663         }
    664 
    665         // Forgive the mutator for its past failures to keep up.
    666         // FIXME: Figure out if moving this to different places results in perf changes.
    667         m_incrementBalance = 0;
    668        
    669         resumeTheWorld();
    670        
    671         {
    672             ParallelModeEnabler enabler(slotVisitor);
    673             slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());
    674         }
    675 
    676         stopTheWorld();
    677        
    678         if (Options::logGC())
    679             dataLog("[GC: ");
    680        
    681         m_scheduler->didStop();
    682        
    683         if (Options::logGC())
    684             m_scheduler->log();
    685     }
    686    
    687     m_scheduler->endCollection();
    688 
    689     {
    690         std::lock_guard<Lock> lock(m_markingMutex);
    691         m_parallelMarkersShouldExit = true;
    692         m_markingConditionVariable.notifyAll();
    693     }
    694     m_helperClient.finish();
    695 
    696     iterateExecutingAndCompilingCodeBlocks(
    697         [&] (CodeBlock* codeBlock) {
    698             writeBarrier(codeBlock);
    699         });
    700        
    701     updateObjectCounts(gcStartTime);
    702     endMarking();
    703 }
    704 
    705574void Heap::gatherStackRoots(ConservativeRoots& roots)
    706575{
    707     m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
     576    m_machineThreads->gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, m_currentThreadState);
    708577}
    709578
     
    805674}
    806675
    807 void Heap::updateObjectCounts(double gcStartTime)
    808 {
    809     if (Options::logGC() == GCLogging::Verbose) {
    810         dataLogF("\nNumber of live Objects after GC %lu, took %.6f secs\n", static_cast<unsigned long>(visitCount()), WTF::monotonicallyIncreasingTime() - gcStartTime);
    811     }
    812    
     676void Heap::updateObjectCounts()
     677{
    813678    if (m_collectionScope == CollectionScope::Full)
    814679        m_totalBytesVisited = 0;
     
    1034899    double before = 0;
    1035900    if (Options::logGC()) {
    1036         dataLog("[Full sweep: ", capacity() / 1024, "kb ");
     901        dataLog("Full sweep: ", capacity() / 1024, "kb ");
    1037902        before = currentTimeMS();
    1038903    }
     
    1041906    if (Options::logGC()) {
    1042907        double after = currentTimeMS();
    1043         dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms] ");
     908        dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms");
    1044909    }
    1045910}
     
    1054919    DeferGCForAWhile deferGC(*this);
    1055920    if (UNLIKELY(Options::useImmortalObjects()))
    1056         sweeper()->willFinishSweeping();
     921        sweeper()->stopSweeping();
    1057922
    1058923    bool alreadySweptInCollectSync = Options::sweepSynchronously();
    1059924    if (!alreadySweptInCollectSync) {
     925        if (Options::logGC())
     926            dataLog("[GC<", RawPointer(this), ">: ");
    1060927        sweepSynchronously();
    1061928        if (Options::logGC())
    1062             dataLog("\n");
     929            dataLog("]\n");
    1063930    }
    1064931    m_objectSpace.assertNoUnswept();
     
    1109976}
    1110977
    1111 bool Heap::shouldCollectInThread(const LockHolder&)
     978bool Heap::shouldCollectInCollectorThread(const AbstractLocker&)
    1112979{
    1113980    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
    1114981    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
    1115982   
    1116     return !m_requests.isEmpty();
    1117 }
    1118 
    1119 void Heap::collectInThread()
     983    if (false)
     984        dataLog("Mutator has the conn = ", !!(m_worldState.load() & mutatorHasConnBit), "\n");
     985   
     986    return !m_requests.isEmpty() && !(m_worldState.load() & mutatorHasConnBit);
     987}
     988
     989void Heap::collectInCollectorThread()
     990{
     991    for (;;) {
     992        RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Collector, nullptr);
     993        switch (result) {
     994        case RunCurrentPhaseResult::Finished:
     995            return;
     996        case RunCurrentPhaseResult::Continue:
     997            break;
     998        case RunCurrentPhaseResult::NeedCurrentThreadState:
     999            RELEASE_ASSERT_NOT_REACHED();
     1000            break;
     1001        }
     1002    }
     1003}
     1004
     1005void Heap::checkConn(GCConductor conn)
     1006{
     1007    switch (conn) {
     1008    case GCConductor::Mutator:
     1009        RELEASE_ASSERT(m_worldState.load() & mutatorHasConnBit);
     1010        return;
     1011    case GCConductor::Collector:
     1012        RELEASE_ASSERT(!(m_worldState.load() & mutatorHasConnBit));
     1013        return;
     1014    }
     1015    RELEASE_ASSERT_NOT_REACHED();
     1016}
     1017
     1018auto Heap::runCurrentPhase(GCConductor conn, CurrentThreadState* currentThreadState) -> RunCurrentPhaseResult
     1019{
     1020    checkConn(conn);
     1021    m_currentThreadState = currentThreadState;
     1022   
     1023    // If the collector transfers the conn to the mutator, it leaves us in between phases.
     1024    if (!finishChangingPhase(conn)) {
     1025        // A mischievous mutator could repeatedly relinquish the conn back to us. We try to avoid doing
     1026        // this, but it's probably not the end of the world if it did happen.
     1027        if (false)
     1028            dataLog("Conn bounce-back.\n");
     1029        return RunCurrentPhaseResult::Finished;
     1030    }
     1031   
     1032    bool result = false;
     1033    switch (m_currentPhase) {
     1034    case CollectorPhase::NotRunning:
     1035        result = runNotRunningPhase(conn);
     1036        break;
     1037       
     1038    case CollectorPhase::Begin:
     1039        result = runBeginPhase(conn);
     1040        break;
     1041       
     1042    case CollectorPhase::Fixpoint:
     1043        if (!currentThreadState && conn == GCConductor::Mutator)
     1044            return RunCurrentPhaseResult::NeedCurrentThreadState;
     1045       
     1046        result = runFixpointPhase(conn);
     1047        break;
     1048       
     1049    case CollectorPhase::Concurrent:
     1050        result = runConcurrentPhase(conn);
     1051        break;
     1052       
     1053    case CollectorPhase::Reloop:
     1054        result = runReloopPhase(conn);
     1055        break;
     1056       
     1057    case CollectorPhase::End:
     1058        result = runEndPhase(conn);
     1059        break;
     1060    }
     1061
     1062    return result ? RunCurrentPhaseResult::Continue : RunCurrentPhaseResult::Finished;
     1063}
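The dispatch above turns the old monolithic collectInThread() loop into a resumable state machine: each phase handler returns true to keep driving and false to yield, and runCurrentPhase() maps that onto Continue/Finished for whichever thread holds the conn. The following is a minimal standalone sketch of that shape; the phase names mirror the patch, but the bodies (and the three-iteration "convergence") are invented for illustration.

```cpp
#include <cassert>

// Toy resumable collector: each call to runCurrentPhase() advances one
// phase and reports whether the driver should keep going.
enum class Phase { NotRunning, Begin, Fixpoint, Concurrent, Reloop, End };
enum class StepResult { Continue, Finished };

struct MiniCollector {
    Phase phase = Phase::NotRunning;
    int fixpointIterations = 0;

    StepResult runCurrentPhase() {
        switch (phase) {
        case Phase::NotRunning:
            phase = Phase::Begin;
            return StepResult::Continue;
        case Phase::Begin:
            phase = Phase::Fixpoint;
            return StepResult::Continue;
        case Phase::Fixpoint:
            // Pretend convergence takes three fixpoint visits.
            if (++fixpointIterations < 3) {
                phase = Phase::Concurrent;
                return StepResult::Continue;
            }
            phase = Phase::End;
            return StepResult::Continue;
        case Phase::Concurrent:
            phase = Phase::Reloop;
            return StepResult::Continue;
        case Phase::Reloop:
            phase = Phase::Fixpoint;
            return StepResult::Continue;
        case Phase::End:
            phase = Phase::NotRunning;
            return StepResult::Finished;
        }
        return StepResult::Finished;
    }

    // Mirrors Heap::collectInCollectorThread(): spin until Finished.
    int collect() {
        int steps = 0;
        while (runCurrentPhase() == StepResult::Continue)
            steps++;
        return steps;
    }
};
```

The payoff of this structure is that the driver loop can live on either thread: the collector thread spins it to completion, while the mutator can poll it one step at a time from its allocation slow paths.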
     1064
     1065NEVER_INLINE bool Heap::runNotRunningPhase(GCConductor conn)
     1066{
     1067    // Check m_requests since the mutator calls this to poll what's going on.
     1068    {
     1069        auto locker = holdLock(*m_threadLock);
     1070        if (m_requests.isEmpty())
     1071            return false;
     1072    }
     1073   
     1074    return changePhase(conn, CollectorPhase::Begin);
     1075}
     1076
     1077NEVER_INLINE bool Heap::runBeginPhase(GCConductor conn)
    11201078{
    11211079    m_currentGCStartTime = MonotonicTime::now();
    1122    
     1080       
    11231081    std::optional<CollectionScope> scope;
    11241082    {
     
    11271085        scope = m_requests.first();
    11281086    }
    1129    
    1130     SuperSamplerScope superSamplerScope(false);
    1131     TimingScope collectImplTimingScope(scope, "Heap::collectInThread");
    1132    
    1133 #if ENABLE(ALLOCATION_LOGGING)
    1134     dataLogF("JSC GC starting collection.\n");
    1135 #endif
    1136    
    1137     stopTheWorld();
    1138    
    1139     if (false)
    1140         dataLog("GC START!\n");
    1141 
    1142     MonotonicTime before;
    1143     if (Options::logGC()) {
    1144         dataLog("[GC: START ", capacity() / 1024, "kb ");
    1145         before = MonotonicTime::now();
    1146     }
    1147    
    1148     double gcStartTime;
    1149    
    1150     ASSERT(m_isSafeToCollect);
     1087       
     1088    if (Options::logGC())
     1089        dataLog("[GC<", RawPointer(this), ">: START ", gcConductorShortName(conn), " ", capacity() / 1024, "kb ");
     1090
     1091    m_beforeGC = MonotonicTime::now();
     1092
    11511093    if (m_collectionScope) {
    11521094        dataLog("Collection scope already set during GC: ", *m_collectionScope, "\n");
    11531095        RELEASE_ASSERT_NOT_REACHED();
    11541096    }
    1155    
     1097       
    11561098    willStartCollection(scope);
    1157     collectImplTimingScope.setScope(*this);
    1158    
    1159     gcStartTime = WTF::monotonicallyIncreasingTime();
     1099       
    11601100    if (m_verifier) {
    11611101        // Verify that live objects from the last GC cycle haven't been corrupted by
     
    11661106        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
    11671107    }
    1168    
     1108       
    11691109    prepareForMarking();
    1170    
    1171     markToFixpoint(gcStartTime);
    1172    
     1110       
     1111    if (m_collectionScope == CollectionScope::Full) {
     1112        m_opaqueRoots.clear();
     1113        m_collectorSlotVisitor->clearMarkStacks();
     1114        m_mutatorMarkStack->clear();
     1115    }
     1116
     1117    RELEASE_ASSERT(m_raceMarkStack->isEmpty());
     1118
     1119    beginMarking();
     1120
     1121    forEachSlotVisitor(
     1122        [&] (SlotVisitor& visitor) {
     1123            visitor.didStartMarking();
     1124        });
     1125
     1126    m_parallelMarkersShouldExit = false;
     1127
     1128    m_helperClient.setFunction(
     1129        [this] () {
     1130            SlotVisitor* slotVisitor;
     1131            {
     1132                LockHolder locker(m_parallelSlotVisitorLock);
     1133                if (m_availableParallelSlotVisitors.isEmpty()) {
     1134                    std::unique_ptr<SlotVisitor> newVisitor = std::make_unique<SlotVisitor>(
     1135                        *this, toCString("P", m_parallelSlotVisitors.size() + 1));
     1136                   
     1137                    if (Options::optimizeParallelSlotVisitorsForStoppedMutator())
     1138                        newVisitor->optimizeForStoppedMutator();
     1139                   
     1140                    newVisitor->didStartMarking();
     1141                   
     1142                    slotVisitor = newVisitor.get();
     1143                    m_parallelSlotVisitors.append(WTFMove(newVisitor));
     1144                } else
     1145                    slotVisitor = m_availableParallelSlotVisitors.takeLast();
     1146            }
     1147
     1148            WTF::registerGCThread(GCThreadType::Helper);
     1149
     1150            {
     1151                ParallelModeEnabler parallelModeEnabler(*slotVisitor);
     1152                slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
     1153            }
     1154
     1155            {
     1156                LockHolder locker(m_parallelSlotVisitorLock);
     1157                m_availableParallelSlotVisitors.append(slotVisitor);
     1158            }
     1159        });
     1160
     1161    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
     1162
     1163    m_constraintSet->didStartMarking();
     1164   
     1165    m_scheduler->beginCollection();
     1166    if (Options::logGC())
     1167        m_scheduler->log();
     1168   
     1169    // After this, we will almost certainly fall through all of the "slotVisitor.isEmpty()"
     1170    // checks because bootstrap would have put things into the visitor. So, we should fall
     1171    // through to draining.
     1172   
     1173    if (!slotVisitor.didReachTermination()) {
     1174        dataLog("Fatal: SlotVisitor should think that GC should terminate before constraint solving, but it does not think this.\n");
     1175        dataLog("slotVisitor.isEmpty(): ", slotVisitor.isEmpty(), "\n");
     1176        dataLog("slotVisitor.collectorMarkStack().isEmpty(): ", slotVisitor.collectorMarkStack().isEmpty(), "\n");
     1177        dataLog("slotVisitor.mutatorMarkStack().isEmpty(): ", slotVisitor.mutatorMarkStack().isEmpty(), "\n");
     1178        dataLog("m_numberOfActiveParallelMarkers: ", m_numberOfActiveParallelMarkers, "\n");
     1179        dataLog("m_sharedCollectorMarkStack->isEmpty(): ", m_sharedCollectorMarkStack->isEmpty(), "\n");
     1180        dataLog("m_sharedMutatorMarkStack->isEmpty(): ", m_sharedMutatorMarkStack->isEmpty(), "\n");
     1181        dataLog("slotVisitor.didReachTermination(): ", slotVisitor.didReachTermination(), "\n");
     1182        RELEASE_ASSERT_NOT_REACHED();
     1183    }
     1184       
     1185    return changePhase(conn, CollectorPhase::Fixpoint);
     1186}
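The m_helperClient function in runBeginPhase() hands each marking helper a SlotVisitor from a lock-protected free list, allocating a fresh one only when the pool is empty and returning it when the helper drains out. This standalone sketch shows just that pooling pattern; `Visitor` and `VisitorPool` are invented stand-ins, not WebKit types.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <vector>

struct Visitor { int id; };

// Lock-protected pool: 'all' owns every visitor ever created, 'available'
// is the free list that helper threads take from and give back to.
struct VisitorPool {
    std::mutex lock;
    std::vector<std::unique_ptr<Visitor>> all;
    std::vector<Visitor*> available;

    Visitor* take() {
        std::lock_guard<std::mutex> locker(lock);
        if (available.empty()) {
            // Pool exhausted: create a new visitor, like the
            // m_availableParallelSlotVisitors.isEmpty() branch above.
            all.push_back(std::make_unique<Visitor>(Visitor{static_cast<int>(all.size()) + 1}));
            return all.back().get();
        }
        Visitor* visitor = available.back();
        available.pop_back();
        return visitor;
    }

    void give(Visitor* visitor) {
        std::lock_guard<std::mutex> locker(lock);
        available.push_back(visitor);
    }
};
```

Keeping ownership in `all` while circulating raw pointers through `available` matches the patch's split between m_parallelSlotVisitors (owning) and m_availableParallelSlotVisitors (free list), and means visitors survive across collections instead of being rebuilt each cycle.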
     1187
     1188NEVER_INLINE bool Heap::runFixpointPhase(GCConductor conn)
     1189{
     1190    RELEASE_ASSERT(conn == GCConductor::Collector || m_currentThreadState);
     1191   
     1192    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
     1193   
     1194    if (Options::logGC()) {
     1195        HashMap<const char*, size_t> visitMap;
     1196        forEachSlotVisitor(
     1197            [&] (SlotVisitor& slotVisitor) {
     1198                visitMap.add(slotVisitor.codeName(), slotVisitor.bytesVisited() / 1024);
     1199            });
     1200       
     1201        auto perVisitorDump = sortedMapDump(
     1202            visitMap,
     1203            [] (const char* a, const char* b) -> bool {
     1204                return strcmp(a, b) < 0;
     1205            },
     1206            ":", " ");
     1207       
     1208        dataLog("v=", bytesVisited() / 1024, "kb (", perVisitorDump, ") o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");
     1209    }
     1210       
     1211    if (slotVisitor.didReachTermination()) {
     1212        m_scheduler->didReachTermination();
     1213           
     1214        assertSharedMarkStacksEmpty();
     1215           
     1216        slotVisitor.mergeIfNecessary();
     1217        for (auto& parallelVisitor : m_parallelSlotVisitors)
     1218            parallelVisitor->mergeIfNecessary();
     1219           
     1220        // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
     1221        // we don't have to execute root constraints again unless the mutator did run. At a
     1222        // minimum, we could use this for work estimates - but it's probably more than just an
     1223        // estimate.
     1224        // https://bugs.webkit.org/show_bug.cgi?id=166828
     1225           
     1226        // FIXME: We should take advantage of the fact that we could timeout. This only comes
     1227        // into play if we're executing constraints for the first time. But that will matter
     1228        // when we have deep stacks or a lot of DOM stuff.
     1229        // https://bugs.webkit.org/show_bug.cgi?id=166831
     1230           
     1231        // Wondering what this does? Look at Heap::addCoreConstraints(). The DOM and others can also
     1232        // add their own using Heap::addMarkingConstraint().
     1233        bool converged =
     1234            m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
     1235        if (converged && slotVisitor.isEmpty()) {
     1236            assertSharedMarkStacksEmpty();
     1237            return changePhase(conn, CollectorPhase::End);
     1238        }
     1239           
     1240        m_scheduler->didExecuteConstraints();
     1241    }
     1242       
     1243    if (Options::logGC())
     1244        dataLog(slotVisitor.collectorMarkStack().size(), "+", m_mutatorMarkStack->size() + slotVisitor.mutatorMarkStack().size(), " ");
     1245       
     1246    {
     1247        ParallelModeEnabler enabler(slotVisitor);
     1248        slotVisitor.drainInParallel(m_scheduler->timeToResume());
     1249    }
     1250       
     1251    m_scheduler->synchronousDrainingDidStall();
     1252
     1253    if (slotVisitor.didReachTermination())
      1254        return true; // This is like relooping to the top of runFixpointPhase().
     1255       
     1256    if (!m_scheduler->shouldResume())
     1257        return true;
     1258
     1259    m_scheduler->willResume();
     1260       
     1261    if (Options::logGC()) {
     1262        double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();
     1263        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
     1264    }
     1265
     1266    // Forgive the mutator for its past failures to keep up.
     1267    // FIXME: Figure out if moving this to different places results in perf changes.
     1268    m_incrementBalance = 0;
     1269       
     1270    return changePhase(conn, CollectorPhase::Concurrent);
     1271}
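The termination condition in runFixpointPhase() is two-sided: the constraint set must report convergence (no constraint produced new work) and the mark stacks must be empty, because a constraint can push fresh references that reopen marking. Here is a toy model of that loop; the work items and the single budget-limited constraint are fabricated for illustration.

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <vector>

// Minimal fixpoint: constraints may push new entries onto the mark stack,
// and the loop only ends once constraints converge AND the stack is empty.
struct MiniFixpoint {
    std::deque<int> markStack;
    int visited = 0;
    std::vector<std::function<bool(MiniFixpoint&)>> constraints;

    // Returns true when no constraint produced new work (converged).
    bool executeConvergence() {
        bool converged = true;
        for (auto& constraint : constraints) {
            if (constraint(*this))
                converged = false;
        }
        return converged;
    }

    // Analogue of drainInParallel(): consume everything on the stack.
    void drain() {
        while (!markStack.empty()) {
            markStack.pop_front();
            visited++;
        }
    }

    int run() {
        int rounds = 0;
        for (;;) {
            rounds++;
            bool converged = executeConvergence();
            if (converged && markStack.empty())
                return rounds;
            drain();
        }
    }
};
```

In the real collector the drain step can also time out and hand control to the Concurrent phase, which is why runFixpointPhase() returns to the driver instead of looping internally.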
     1272
     1273NEVER_INLINE bool Heap::runConcurrentPhase(GCConductor conn)
     1274{
     1275    SlotVisitor& slotVisitor = *m_collectorSlotVisitor;
     1276
     1277    switch (conn) {
     1278    case GCConductor::Mutator: {
      1279        // When the mutator has the conn, we poll runConcurrentPhase() every time someone calls
      1280        // stopIfNecessary(), so on every allocation slow path. When that happens we check whether
      1281        // it's time to stop and do some work.

     1282        if (slotVisitor.didReachTermination()
     1283            || m_scheduler->shouldStop())
     1284            return changePhase(conn, CollectorPhase::Reloop);
     1285       
     1286        // We could be coming from a collector phase that stuffed our SlotVisitor, so make sure we donate
     1287        // everything. This is super cheap if the SlotVisitor is already empty.
     1288        slotVisitor.donateAll();
     1289        return false;
     1290    }
     1291    case GCConductor::Collector: {
     1292        {
     1293            ParallelModeEnabler enabler(slotVisitor);
     1294            slotVisitor.drainInParallelPassively(m_scheduler->timeToStop());
     1295        }
     1296        return changePhase(conn, CollectorPhase::Reloop);
     1297    } }
     1298   
     1299    RELEASE_ASSERT_NOT_REACHED();
     1300    return false;
     1301}
     1302
     1303NEVER_INLINE bool Heap::runReloopPhase(GCConductor conn)
     1304{
     1305    if (Options::logGC())
     1306        dataLog("[GC<", RawPointer(this), ">: ", gcConductorShortName(conn), " ");
     1307   
     1308    m_scheduler->didStop();
     1309   
     1310    if (Options::logGC())
     1311        m_scheduler->log();
     1312   
     1313    return changePhase(conn, CollectorPhase::Fixpoint);
     1314}
     1315
     1316NEVER_INLINE bool Heap::runEndPhase(GCConductor conn)
     1317{
     1318    m_scheduler->endCollection();
     1319       
     1320    {
     1321        auto locker = holdLock(m_markingMutex);
     1322        m_parallelMarkersShouldExit = true;
     1323        m_markingConditionVariable.notifyAll();
     1324    }
     1325    m_helperClient.finish();
     1326   
     1327    iterateExecutingAndCompilingCodeBlocks(
     1328        [&] (CodeBlock* codeBlock) {
     1329            writeBarrier(codeBlock);
     1330        });
     1331       
     1332    updateObjectCounts();
     1333    endMarking();
     1334       
    11731335    if (m_verifier) {
    11741336        m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
     
    11961358    updateAllocationLimits();
    11971359
    1198     didFinishCollection(gcStartTime);
     1360    didFinishCollection();
    11991361   
    12001362    if (m_verifier) {
     
    12091371   
    12101372    if (Options::logGC()) {
    1211         MonotonicTime after = MonotonicTime::now();
    1212         double thisPauseMS = (after - m_stopTime).milliseconds();
    1213         dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), "ms END]\n");
     1373        double thisPauseMS = (m_afterGC - m_stopTime).milliseconds();
     1374        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (m_afterGC - m_beforeGC).milliseconds(), "ms END]\n");
    12141375    }
    12151376   
    12161377    {
    1217         LockHolder locker(*m_threadLock);
     1378        auto locker = holdLock(*m_threadLock);
    12181379        m_requests.removeFirst();
    12191380        m_lastServedTicket++;
     
    12261387
    12271388    setNeedFinalize();
    1228     resumeTheWorld();
    1229    
     1389
    12301390    m_lastGCStartTime = m_currentGCStartTime;
    12311391    m_lastGCEndTime = MonotonicTime::now();
    1232 }
    1233 
    1234 void Heap::stopTheWorld()
    1235 {
    1236     RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
    1237     waitWhileNeedFinalize();
    1238     stopTheMutator();
     1392       
     1393    return changePhase(conn, CollectorPhase::NotRunning);
     1394}
     1395
     1396bool Heap::changePhase(GCConductor conn, CollectorPhase nextPhase)
     1397{
     1398    checkConn(conn);
     1399
     1400    m_nextPhase = nextPhase;
     1401
     1402    return finishChangingPhase(conn);
     1403}
     1404
     1405NEVER_INLINE bool Heap::finishChangingPhase(GCConductor conn)
     1406{
     1407    checkConn(conn);
     1408   
     1409    if (m_nextPhase == m_currentPhase)
     1410        return true;
     1411
     1412    if (false)
     1413        dataLog(conn, ": Going to phase: ", m_nextPhase, " (from ", m_currentPhase, ")\n");
     1414   
     1415    bool suspendedBefore = worldShouldBeSuspended(m_currentPhase);
     1416    bool suspendedAfter = worldShouldBeSuspended(m_nextPhase);
     1417   
     1418    if (suspendedBefore != suspendedAfter) {
     1419        if (suspendedBefore) {
     1420            RELEASE_ASSERT(!suspendedAfter);
     1421           
     1422            resumeThePeriphery();
     1423            if (conn == GCConductor::Collector)
     1424                resumeTheMutator();
     1425            else
     1426                handleNeedFinalize();
     1427        } else {
     1428            RELEASE_ASSERT(!suspendedBefore);
     1429            RELEASE_ASSERT(suspendedAfter);
     1430           
     1431            if (conn == GCConductor::Collector) {
     1432                waitWhileNeedFinalize();
     1433                if (!stopTheMutator()) {
     1434                    if (false)
     1435                        dataLog("Returning false.\n");
     1436                    return false;
     1437                }
     1438            } else {
     1439                sanitizeStackForVM(m_vm);
     1440                handleNeedFinalize();
     1441            }
     1442            stopThePeriphery(conn);
     1443        }
     1444    }
     1445   
     1446    m_currentPhase = m_nextPhase;
     1447    return true;
     1448}
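finishChangingPhase() only touches the world when the "should be suspended" property actually differs between the current and next phase; otherwise the transition is free. A small sketch of that decision follows. The phase-to-suspension mapping here is an assumption (a plausible reading of worldShouldBeSuspended(): everything suspended except NotRunning and Concurrent), not a quotation of the patch.

```cpp
#include <cassert>

enum class Phase { NotRunning, Begin, Fixpoint, Concurrent, Reloop, End };
enum class Transition { None, Suspend, Resume };

// Assumed mapping: the world runs freely only while the collector is idle
// or in the Concurrent phase.
bool worldShouldBeSuspended(Phase phase) {
    return phase != Phase::NotRunning && phase != Phase::Concurrent;
}

// What finishChangingPhase() must do to the world for this phase change.
Transition transitionFor(Phase current, Phase next) {
    bool suspendedBefore = worldShouldBeSuspended(current);
    bool suspendedAfter = worldShouldBeSuspended(next);
    if (suspendedBefore == suspendedAfter)
        return Transition::None;   // No world-state change needed.
    return suspendedBefore ? Transition::Resume : Transition::Suspend;
}
```

Centralizing this in one place is what lets both conductors (mutator and collector) share the same phase machine while doing different resume/stop bookkeeping on either side of the transition.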
     1449
     1450void Heap::stopThePeriphery(GCConductor conn)
     1451{
     1452    if (m_collectorBelievesThatTheWorldIsStopped) {
     1453        dataLog("FATAL: world already stopped.\n");
     1454        RELEASE_ASSERT_NOT_REACHED();
     1455    }
    12391456   
    12401457    if (m_mutatorDidRun)
     
    12421459   
    12431460    m_mutatorDidRun = false;
    1244    
     1461
    12451462    suspendCompilerThreads();
    12461463    m_collectorBelievesThatTheWorldIsStopped = true;
     
    12541471    {
    12551472        DeferGCForAWhile awhile(*this);
    1256         if (JITWorklist::instance()->completeAllForVM(*m_vm))
     1473        if (JITWorklist::instance()->completeAllForVM(*m_vm)
     1474            && conn == GCConductor::Collector)
    12571475            setGCDidJIT();
    12581476    }
     
    12671485}
    12681486
    1269 void Heap::resumeTheWorld()
     1487NEVER_INLINE void Heap::resumeThePeriphery()
    12701488{
    12711489    // Calling resumeAllocating does the Right Thing depending on whether this is the end of a
     
    12781496    m_barriersExecuted = 0;
    12791497   
    1280     RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped);
     1498    if (!m_collectorBelievesThatTheWorldIsStopped) {
     1499        dataLog("Fatal: collector does not believe that the world is stopped.\n");
     1500        RELEASE_ASSERT_NOT_REACHED();
     1501    }
    12811502    m_collectorBelievesThatTheWorldIsStopped = false;
    12821503   
     
    13161537   
    13171538    resumeCompilerThreads();
    1318     resumeTheMutator();
    1319 }
    1320 
    1321 void Heap::stopTheMutator()
     1539}
     1540
     1541bool Heap::stopTheMutator()
    13221542{
    13231543    for (;;) {
    13241544        unsigned oldState = m_worldState.load();
    1325         if ((oldState & stoppedBit)
    1326             && (oldState & shouldStopBit))
    1327             return;
    1328        
    1329         // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could
    1330         // switch to that if it turned out to be less confusing, but then it would not give the
    1331         // mutator the opportunity to react to the world being stopped.
    1332         if (oldState & mutatorWaitingBit) {
    1333             if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
    1334                 ParkingLot::unparkAll(&m_worldState);
     1545        if (oldState & stoppedBit) {
     1546            RELEASE_ASSERT(!(oldState & hasAccessBit));
     1547            RELEASE_ASSERT(!(oldState & mutatorWaitingBit));
     1548            RELEASE_ASSERT(!(oldState & mutatorHasConnBit));
     1549            return true;
     1550        }
     1551       
     1552        if (oldState & mutatorHasConnBit) {
     1553            RELEASE_ASSERT(!(oldState & hasAccessBit));
     1554            RELEASE_ASSERT(!(oldState & stoppedBit));
     1555            return false;
     1556        }
     1557
     1558        if (!(oldState & hasAccessBit)) {
     1559            RELEASE_ASSERT(!(oldState & mutatorHasConnBit));
     1560            RELEASE_ASSERT(!(oldState & mutatorWaitingBit));
     1561            // We can stop the world instantly.
     1562            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit))
     1563                return true;
    13351564            continue;
    13361565        }
    13371566       
    1338         if (!(oldState & hasAccessBit)
    1339             || (oldState & stoppedBit)) {
    1340             // We can stop the world instantly.
    1341             if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit))
    1342                 return;
    1343             continue;
    1344         }
    1345        
     1567        // Transfer the conn to the mutator and bail.
    13461568        RELEASE_ASSERT(oldState & hasAccessBit);
    13471569        RELEASE_ASSERT(!(oldState & stoppedBit));
    1348         m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit);
    1349         m_stopIfNecessaryTimer->scheduleSoon();
    1350         ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit);
    1351     }
    1352 }
    1353 
    1354 void Heap::resumeTheMutator()
    1355 {
     1570        unsigned newState = (oldState | mutatorHasConnBit) & ~mutatorWaitingBit;
     1571        if (m_worldState.compareExchangeWeak(oldState, newState)) {
     1572            if (false)
     1573                dataLog("Handed off the conn.\n");
     1574            m_stopIfNecessaryTimer->scheduleSoon();
     1575            ParkingLot::unparkAll(&m_worldState);
     1576            return false;
     1577        }
     1578    }
     1579}
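The new `stopTheMutator()` above either stops the world outright or hands the conn to the mutator, using compare-exchange retry loops over a packed bit word. The following is a minimal model of that state machine — the names mirror the patch, but the class and function are illustrative sketches, not WebKit's actual API:

```cpp
#include <atomic>
#include <cassert>

// Illustrative model of the packed m_worldState word: each property of the
// world is one bit, and all transitions go through CAS retry loops.
static constexpr unsigned mutatorHasConnBit = 1u << 0;
static constexpr unsigned stoppedBit        = 1u << 1;
static constexpr unsigned hasAccessBit      = 1u << 2;

// Returns true if the mutator is stopped, false if the mutator now has the
// conn -- the same contract as Heap::stopTheMutator() after this change.
inline bool stopTheMutatorModel(std::atomic<unsigned>& worldState)
{
    for (;;) {
        unsigned oldState = worldState.load();
        if (oldState & stoppedBit)
            return true; // Already stopped.
        if (oldState & mutatorHasConnBit)
            return false; // Mutator is already driving the collection.
        if (!(oldState & hasAccessBit)) {
            // No mutator around: we can stop the world instantly.
            if (worldState.compare_exchange_weak(oldState, oldState | stoppedBit))
                return true;
            continue; // CAS failed (stale snapshot or spurious); retry.
        }
        // Mutator has access: transfer the conn instead of stopping it.
        if (worldState.compare_exchange_weak(oldState, oldState | mutatorHasConnBit))
            return false;
    }
}
```

Note the shape shared by every loop in this changeset: load a snapshot, decide from the snapshot alone, then publish the decision with a single CAS; any failure simply restarts the loop.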
     1580
     1581NEVER_INLINE void Heap::resumeTheMutator()
     1582{
     1583    if (false)
     1584        dataLog("Resuming the mutator.\n");
    13561585    for (;;) {
    13571586        unsigned oldState = m_worldState.load();
    1358         RELEASE_ASSERT(oldState & shouldStopBit);
    1359        
    1360         if (!(oldState & hasAccessBit)) {
    1361             // We can resume the world instantly.
    1362             if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) {
    1363                 ParkingLot::unparkAll(&m_worldState);
    1364                 return;
    1365             }
    1366             continue;
    1367         }
    1368        
    1369         // We can tell the world to resume.
    1370         if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) {
     1587        if (!!(oldState & hasAccessBit) != !(oldState & stoppedBit)) {
     1588            dataLog("Fatal: hasAccess = ", !!(oldState & hasAccessBit), ", stopped = ", !!(oldState & stoppedBit), "\n");
     1589            RELEASE_ASSERT_NOT_REACHED();
     1590        }
     1591        if (oldState & mutatorHasConnBit) {
     1592            dataLog("Fatal: mutator has the conn.\n");
     1593            RELEASE_ASSERT_NOT_REACHED();
     1594        }
     1595       
     1596        if (!(oldState & stoppedBit)) {
     1597            if (false)
     1598                dataLog("Returning because not stopped.\n");
     1599            return;
     1600        }
     1601       
     1602        if (m_worldState.compareExchangeWeak(oldState, oldState & ~stoppedBit)) {
     1603            if (false)
     1604                dataLog("CASing and returning.\n");
    13711605            ParkingLot::unparkAll(&m_worldState);
    13721606            return;
     
    13901624{
    13911625    RELEASE_ASSERT(oldState & hasAccessBit);
     1626    RELEASE_ASSERT(!(oldState & stoppedBit));
    13921627   
    13931628    // It's possible for us to wake up with finalization already requested but the world not yet
    13941629    // resumed. If that happens, we can't run finalization yet.
    1395     if (!(oldState & stoppedBit)
    1396         && handleNeedFinalize(oldState))
     1630    if (handleNeedFinalize(oldState))
    13971631        return true;
    1398    
    1399     if (!(oldState & shouldStopBit) && !m_scheduler->shouldStop()) {
    1400         if (!(oldState & stoppedBit))
    1401             return false;
    1402         m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit);
    1403         return true;
    1404     }
    1405    
    1406     sanitizeStackForVM(m_vm);
    1407 
    1408     if (verboseStop) {
    1409         dataLog("Stopping!\n");
    1410         WTFReportBacktrace();
    1411     }
    1412     m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit);
    1413     ParkingLot::unparkAll(&m_worldState);
    1414     ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit);
    1415     return true;
     1632
     1633    // FIXME: When entering the concurrent phase, we could arrange for this branch not to fire, and then
     1634    // have the SlotVisitor do things to the m_worldState to make this branch fire again. That would
     1635    // prevent us from polling this so much. Ideally, stopIfNecessary would ignore the mutatorHasConnBit
     1636    // and there would be some other bit indicating whether we were in some GC phase other than the
     1637    // NotRunning or Concurrent ones.
     1638    if (oldState & mutatorHasConnBit)
     1639        collectInMutatorThread();
     1640   
     1641    return false;
     1642}
     1643
     1644NEVER_INLINE void Heap::collectInMutatorThread()
     1645{
     1646    CollectingScope collectingScope(*this);
     1647    for (;;) {
     1648        RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, nullptr);
     1649        switch (result) {
     1650        case RunCurrentPhaseResult::Finished:
     1651            return;
     1652        case RunCurrentPhaseResult::Continue:
     1653            break;
     1654        case RunCurrentPhaseResult::NeedCurrentThreadState:
     1655            sanitizeStackForVM(m_vm);
     1656            auto lambda = [&] (CurrentThreadState& state) {
     1657                for (;;) {
     1658                    RunCurrentPhaseResult result = runCurrentPhase(GCConductor::Mutator, &state);
     1659                    switch (result) {
     1660                    case RunCurrentPhaseResult::Finished:
     1661                        return;
     1662                    case RunCurrentPhaseResult::Continue:
     1663                        break;
     1664                    case RunCurrentPhaseResult::NeedCurrentThreadState:
     1665                        RELEASE_ASSERT_NOT_REACHED();
     1666                        break;
     1667                    }
     1668                }
     1669            };
     1670            callWithCurrentThreadState(scopedLambda<void(CurrentThreadState&)>(WTFMove(lambda)));
     1671            return;
     1672        }
     1673    }
    14161674}
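`collectInMutatorThread()` above drives GC phases through a three-way result enum, falling back to `callWithCurrentThreadState()` only when a phase needs the mutator's registers and stack bounds. A rough sketch of that driver-loop shape (the enum name matches the patch; `drivePhases` and the `bool haveState` parameter are invented for illustration):

```cpp
#include <cassert>
#include <functional>

// The three outcomes a phase can report, as in the patch.
enum class RunCurrentPhaseResult { Finished, Continue, NeedCurrentThreadState };

// Runs `phase` until Finished; returns how many times it was invoked.
// NeedCurrentThreadState is handled by re-entering the loop with `haveState`
// set, standing in for the callWithCurrentThreadState() fallback.
inline int drivePhases(const std::function<RunCurrentPhaseResult(bool haveState)>& phase)
{
    int invocations = 0;
    bool haveState = false;
    for (;;) {
        RunCurrentPhaseResult result = phase(haveState);
        ++invocations;
        switch (result) {
        case RunCurrentPhaseResult::Finished:
            return invocations;
        case RunCurrentPhaseResult::Continue:
            break; // Keep pumping phases.
        case RunCurrentPhaseResult::NeedCurrentThreadState:
            // In the real code: capture registers/stack first, then an inner
            // loop runs with the state and must never ask for it again.
            haveState = true;
            break;
        }
    }
}
```

This is why the inner lambda in the patch hits `RELEASE_ASSERT_NOT_REACHED()` on `NeedCurrentThreadState`: once the state is captured, requesting it a second time would be a logic error.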
    14171675
     
    14261684            if (!done) {
    14271685                setMutatorWaiting();
     1686               
    14281687                // At this point, the collector knows that we intend to wait, and he will clear the
    14291688                // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit
     
    14321691            }
    14331692        }
    1434 
     1693       
    14351694        // If we're in a stop-the-world scenario, we need to wait for that even if done is true.
    14361695        unsigned oldState = m_worldState.load();
     
    14381697            continue;
    14391698       
     1699        // FIXME: We wouldn't need this if stopIfNecessarySlow() had a mode where it knew to just
     1700        // do the collection.
     1701        relinquishConn();
     1702       
    14401703        if (done) {
    14411704            clearMutatorWaiting(); // Clean up just in case.
     
    14541717        RELEASE_ASSERT(!(oldState & hasAccessBit));
    14551718       
    1456         if (oldState & shouldStopBit) {
    1457             RELEASE_ASSERT(oldState & stoppedBit);
     1719        if (oldState & stoppedBit) {
    14581720            if (verboseStop) {
    14591721                dataLog("Stopping in acquireAccess!\n");
     
    14711733            handleNeedFinalize();
    14721734            m_mutatorDidRun = true;
     1735            stopIfNecessary();
    14731736            return;
    14741737        }
     
    14801743    for (;;) {
    14811744        unsigned oldState = m_worldState.load();
    1482         RELEASE_ASSERT(oldState & hasAccessBit);
    1483         RELEASE_ASSERT(!(oldState & stoppedBit));
     1745        if (!(oldState & hasAccessBit)) {
     1746            dataLog("FATAL: Attempting to release access but the mutator does not have access.\n");
     1747            RELEASE_ASSERT_NOT_REACHED();
     1748        }
     1749        if (oldState & stoppedBit) {
     1750            dataLog("FATAL: Attempting to release access but the mutator is stopped.\n");
     1751            RELEASE_ASSERT_NOT_REACHED();
     1752        }
    14841753       
    14851754        if (handleNeedFinalize(oldState))
    14861755            continue;
    14871756       
    1488         if (oldState & shouldStopBit) {
    1489             unsigned newState = (oldState & ~hasAccessBit) | stoppedBit;
    1490             if (m_worldState.compareExchangeWeak(oldState, newState)) {
    1491                 ParkingLot::unparkAll(&m_worldState);
    1492                 return;
    1493             }
    1494             continue;
    1495         }
    1496        
    1497         RELEASE_ASSERT(!(oldState & shouldStopBit));
    1498        
    1499         if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit))
     1757        unsigned newState = oldState & ~(hasAccessBit | mutatorHasConnBit);
     1758       
     1759        if ((oldState & mutatorHasConnBit)
     1760            && m_nextPhase != m_currentPhase) {
     1761            // This means that the collector thread had given us the conn so that we would do something
     1762            // for it. Stop ourselves as we release access. This ensures that acquireAccess blocks. In
     1763            // the meantime, since we're handing the conn over, the collector will be awoken and it is
     1764            // sure to have work to do.
     1765            newState |= stoppedBit;
     1766        }
     1767
     1768        if (m_worldState.compareExchangeWeak(oldState, newState)) {
     1769            if (oldState & mutatorHasConnBit)
     1770                finishRelinquishingConn();
    15001771            return;
    1501     }
     1772        }
     1773    }
     1774}
     1775
     1776bool Heap::relinquishConn(unsigned oldState)
     1777{
     1778    RELEASE_ASSERT(oldState & hasAccessBit);
     1779    RELEASE_ASSERT(!(oldState & stoppedBit));
     1780   
     1781    if (!(oldState & mutatorHasConnBit))
     1782        return false; // Done.
     1783   
     1784    if (m_threadShouldStop)
     1785        return false;
     1786   
     1787    if (!m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorHasConnBit))
     1788        return true; // Loop around.
     1789   
     1790    finishRelinquishingConn();
     1791    return true;
     1792}
     1793
     1794void Heap::finishRelinquishingConn()
     1795{
     1796    if (false)
     1797        dataLog("Relinquished the conn.\n");
     1798   
     1799    sanitizeStackForVM(m_vm);
     1800   
     1801    auto locker = holdLock(*m_threadLock);
     1802    if (!m_requests.isEmpty())
     1803        m_threadCondition->notifyOne(locker);
     1804    ParkingLot::unparkAll(&m_worldState);
     1805}
     1806
     1807void Heap::relinquishConn()
     1808{
     1809    while (relinquishConn(m_worldState.load())) { }
    15021810}
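The `while (relinquishConn(m_worldState.load())) { }` idiom above is a helper that returns true both when the CAS lost a race and after a successful hand-off, so the caller keeps looping until a snapshot shows the conn is gone. A self-contained sketch of the idiom (names are illustrative; `handOffs` stands in for `finishRelinquishingConn()`):

```cpp
#include <atomic>
#include <cassert>

static constexpr unsigned connBit = 1u << 0;

// Returns true when the caller should loop around, false when the conn is
// definitely given up. Both a failed CAS and a successful hand-off return
// true; the next iteration then observes the cleared bit and stops.
inline bool relinquishConnOnce(std::atomic<unsigned>& state, unsigned oldState, int& handOffs)
{
    if (!(oldState & connBit))
        return false; // Done: we no longer hold the conn.
    if (!state.compare_exchange_weak(oldState, oldState & ~connBit))
        return true; // Snapshot was stale; loop around.
    ++handOffs; // Stand-in for finishRelinquishingConn(): wake the collector.
    return true;
}

// Mirrors the shape of Heap::relinquishConn(): loop while the helper says so.
inline int relinquishConnModel(std::atomic<unsigned>& state)
{
    int handOffs = 0;
    while (relinquishConnOnce(state, state.load(), handOffs)) { }
    return handOffs;
}
```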
    15031811
     
    15141822}
    15151823
    1516 bool Heap::handleNeedFinalize(unsigned oldState)
     1824NEVER_INLINE bool Heap::handleNeedFinalize(unsigned oldState)
    15171825{
    15181826    RELEASE_ASSERT(oldState & hasAccessBit);
     
    15811889}
    15821890
    1583 void Heap::notifyThreadStopping(const LockHolder&)
     1891void Heap::notifyThreadStopping(const AbstractLocker&)
    15841892{
    15851893    m_threadIsStopping = true;
     
    15931901    if (Options::logGC()) {
    15941902        before = MonotonicTime::now();
    1595         dataLog("[GC: finalize ");
     1903        dataLog("[GC<", RawPointer(this), ">: finalize ");
    15961904    }
    15971905   
    15981906    {
    1599         HelpingGCScope helpingGCScope(*this);
     1907        SweepingScope sweepingScope(*this);
    16001908        deleteUnmarkedCompiledCode();
    16011909        deleteSourceProviderCaches();
     
    16051913    if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
    16061914        cache->clear();
    1607 
     1915   
    16081916    if (Options::sweepSynchronously())
    16091917        sweepSynchronously();
     
    16231931   
    16241932    LockHolder locker(*m_threadLock);
     1933    // We may be able to steal the conn. That only works if the collector is definitely not running
     1934    // right now. This is an optimization that prevents the collector thread from ever starting in most
     1935    // cases.
     1936    ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
     1937    if (m_lastServedTicket == m_lastGrantedTicket) {
     1938        if (false)
     1939            dataLog("Taking the conn.\n");
     1940        m_worldState.exchangeOr(mutatorHasConnBit);
     1941    }
     1942   
    16251943    m_requests.append(scope);
    16261944    m_lastGrantedTicket++;
    1627     m_threadCondition->notifyOne(locker);
     1945    if (!(m_worldState.load() & mutatorHasConnBit))
     1946        m_threadCondition->notifyOne(locker);
    16281947    return m_lastGrantedTicket;
    16291948}
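The conn-stealing logic added to `requestCollection()` above hinges on the ticket counters: if every granted ticket has already been served, the collector thread is provably idle, so the mutator can take the conn and skip waking it. A simplified single-threaded model of that decision (the struct and function names are invented for illustration; the real code holds `m_threadLock` and uses an atomic bit):

```cpp
#include <cassert>
#include <cstdint>

struct TicketState {
    uint64_t lastServed { 0 };
    uint64_t lastGranted { 0 };
    bool mutatorHasConn { false };
    int collectorWakeups { 0 };
};

// Grants a new collection ticket, stealing the conn when the collector is
// definitely not running; otherwise signals the collector, as in the patch's
// conditional notifyOne().
inline uint64_t requestCollectionModel(TicketState& s)
{
    if (s.lastServed == s.lastGranted)
        s.mutatorHasConn = true; // Collector idle: mutator will run the GC.
    ++s.lastGranted;
    if (!s.mutatorHasConn)
        ++s.collectorWakeups; // notifyOne(): collector must serve this request.
    return s.lastGranted;
}
```

This is the core of the changeset's title: in the common case the collector thread never starts, because the mutator already holds heap access and simply takes the conn.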
     
    16321951{
    16331952    waitForCollector(
    1634         [&] (const LockHolder&) -> bool {
     1953        [&] (const AbstractLocker&) -> bool {
    16351954            return m_lastServedTicket >= ticket;
    16361955        });
     
    17712090        dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n");
    17722091   
    1773     if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
    1774         HeapStatistics::exitWithFailure();
    1775 
    17762092    if (m_collectionScope == CollectionScope::Full) {
    17772093        // To avoid pathological GC churn in very small and very large heaps, we set
     
    18262142}
    18272143
    1828 void Heap::didFinishCollection(double gcStartTime)
    1829 {
    1830     double gcEndTime = WTF::monotonicallyIncreasingTime();
     2144void Heap::didFinishCollection()
     2145{
     2146    m_afterGC = MonotonicTime::now();
    18312147    CollectionScope scope = *m_collectionScope;
    18322148    if (scope == CollectionScope::Full)
    1833         m_lastFullGCLength = gcEndTime - gcStartTime;
     2149        m_lastFullGCLength = m_afterGC - m_beforeGC;
    18342150    else
    1835         m_lastEdenGCLength = gcEndTime - gcStartTime;
     2151        m_lastEdenGCLength = m_afterGC - m_beforeGC;
    18362152
    18372153#if ENABLE(RESOURCE_USAGE)
    18382154    ASSERT(externalMemorySize() <= extraMemorySize());
    18392155#endif
    1840 
    1841     if (Options::recordGCPauseTimes())
    1842         HeapStatistics::recordGCPauseTime(gcStartTime, gcEndTime);
    1843 
    1844     if (Options::dumpObjectStatistics())
    1845         HeapStatistics::dumpObjectStatistics(this);
    18462156
    18472157    if (HeapProfiler* heapProfiler = m_vm->heapProfiler()) {
     
    20712381    if (!m_isSafeToCollect)
    20722382        return;
    2073     if (mutatorState() == MutatorState::HelpingGC)
     2383    switch (mutatorState()) {
     2384    case MutatorState::Running:
     2385    case MutatorState::Allocating:
     2386        break;
     2387    case MutatorState::Sweeping:
     2388    case MutatorState::Collecting:
    20742389        return;
     2390    }
    20752391    if (!Options::useGC())
    20762392        return;
     
    20812397        else if (isDeferred())
    20822398            m_didDeferGCWork = true;
    2083         else {
     2399        else
    20842400            stopIfNecessary();
    2085             // FIXME: Check if the scheduler wants us to stop.
    2086             // https://bugs.webkit.org/show_bug.cgi?id=166827
    2087         }
    20882401    }
    20892402   
     
    21002413    else if (isDeferred())
    21012414        m_didDeferGCWork = true;
    2102     else
     2415    else {
    21032416        collectAsync();
     2417        stopIfNecessary(); // This will immediately start the collection if we have the conn.
     2418    }
    21042419}
    21052420
     
    23042619void Heap::notifyIsSafeToCollect()
    23052620{
     2621    MonotonicTime before;
     2622    if (Options::logGC()) {
     2623        before = MonotonicTime::now();
     2624        dataLog("[GC<", RawPointer(this), ">: starting ");
     2625    }
     2626   
    23062627    addCoreConstraints();
    23072628   
     
    23382659            });
    23392660    }
     2661   
     2662    if (Options::logGC())
     2663        dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");
    23402664}
    23412665
     
    23502674    // Wait for all collections to finish.
    23512675    waitForCollector(
    2352         [&] (const LockHolder&) -> bool {
     2676        [&] (const AbstractLocker&) -> bool {
    23532677            ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
    23542678            return m_lastServedTicket == m_lastGrantedTicket;
     
    24032727    targetBytes = std::min(targetBytes, Options::gcIncrementMaxBytes());
    24042728
    2405     MonotonicTime before;
    2406     if (Options::logGC()) {
    2407         dataLog("[GC: increment t=", targetBytes / 1024, "kb ");
    2408         before = MonotonicTime::now();
    2409     }
    2410 
    24112729    SlotVisitor& slotVisitor = *m_mutatorSlotVisitor;
    24122730    ParallelModeEnabler parallelModeEnabler(slotVisitor);
     
    24142732    // incrementBalance may go negative here because it'll remember how many bytes we overshot.
    24152733    m_incrementBalance -= bytesVisited;
    2416 
    2417     if (Options::logGC()) {
    2418         MonotonicTime after = MonotonicTime::now();
    2419         dataLog("p=", (after - before).milliseconds(), "ms b=", m_incrementBalance / 1024, "kb]\n");
    2420     }
    24212734}
    24222735
  • trunk/Source/JavaScriptCore/heap/Heap.h

    r212616 r212778  
    2525#include "CellState.h"
    2626#include "CollectionScope.h"
     27#include "CollectorPhase.h"
    2728#include "DeleteAllCodeEffort.h"
     29#include "GCConductor.h"
    2830#include "GCIncomingRefCountedSet.h"
    2931#include "HandleSet.h"
     
    3133#include "HeapObserver.h"
    3234#include "ListableHandler.h"
    33 #include "MachineStackMarker.h"
    3435#include "MarkedBlock.h"
    3536#include "MarkedBlockSet.h"
     
    5455class CodeBlock;
    5556class CodeBlockSet;
     57class CollectingScope;
     58class ConservativeRoots;
    5659class GCDeferralContext;
    5760class EdenGCActivityCallback;
     
    6366class HeapProfiler;
    6467class HeapVerifier;
    65 class HelpingGCScope;
    6668class IncrementalSweeper;
    6769class JITStubRoutine;
     
    7072class JSValue;
    7173class LLIntOffsetsExtractor;
     74class MachineThreads;
    7275class MarkStackArray;
    7376class MarkedAllocator;
     
    7679class MarkingConstraintSet;
    7780class MutatorScheduler;
     81class RunningScope;
    7882class SlotVisitor;
    7983class SpaceTimeMutatorScheduler;
    8084class StopIfNecessaryTimer;
     85class SweepingScope;
    8186class VM;
     87struct CurrentThreadState;
    8288
    8389namespace DFG {
     
    132138
    133139    MarkedSpace& objectSpace() { return m_objectSpace; }
    134     MachineThreads& machineThreads() { return m_machineThreads; }
     140    MachineThreads& machineThreads() { return *m_machineThreads; }
    135141
    136142    SlotVisitor& collectorSlotVisitor() { return *m_collectorSlotVisitor; }
     
    148154    std::optional<CollectionScope> collectionScope() const { return m_collectionScope; }
    149155    bool hasHeapAccess() const;
    150     bool mutatorIsStopped() const;
    151156    bool collectorBelievesThatTheWorldIsStopped() const;
    152157
     
    230235    void didFinishIterating();
    231236
    232     double lastFullGCLength() const { return m_lastFullGCLength; }
    233     double lastEdenGCLength() const { return m_lastEdenGCLength; }
    234     void increaseLastFullGCLength(double amount) { m_lastFullGCLength += amount; }
     237    Seconds lastFullGCLength() const { return m_lastFullGCLength; }
     238    Seconds lastEdenGCLength() const { return m_lastEdenGCLength; }
     239    void increaseLastFullGCLength(Seconds amount) { m_lastFullGCLength += amount; }
    235240
    236241    size_t sizeBeforeLastEdenCollection() const { return m_sizeBeforeLastEdenCollect; }
     
    320325    void stopIfNecessary();
    321326   
     327    // This gives the conn to the collector.
     328    void relinquishConn();
     329   
    322330    bool mayNeedToStop();
    323331
     
    344352    JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
    345353#endif // USE(CF)
    346 
     354   
    347355private:
    348356    friend class AllocatingScope;
    349357    friend class CodeBlock;
     358    friend class CollectingScope;
    350359    friend class DeferGC;
    351360    friend class DeferGCForAWhile;
     
    356365    friend class HeapUtil;
    357366    friend class HeapVerifier;
    358     friend class HelpingGCScope;
    359367    friend class JITStubRoutine;
    360368    friend class LLIntOffsetsExtractor;
     
    362370    friend class MarkedAllocator;
    363371    friend class MarkedBlock;
     372    friend class RunningScope;
    364373    friend class SlotVisitor;
    365374    friend class SpaceTimeMutatorScheduler;
    366375    friend class StochasticSpaceTimeMutatorScheduler;
     376    friend class SweepingScope;
    367377    friend class IncrementalSweeper;
    368378    friend class HeapStatistics;
     
    383393    JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t);
    384394   
    385     bool shouldCollectInThread(const LockHolder&);
    386     void collectInThread();
    387    
    388     void stopTheWorld();
    389     void resumeTheWorld();
    390    
    391     void stopTheMutator();
     395    bool shouldCollectInCollectorThread(const AbstractLocker&);
     396    void collectInCollectorThread();
     397   
     398    void checkConn(GCConductor);
     399
     400    enum class RunCurrentPhaseResult {
     401        Finished,
     402        Continue,
     403        NeedCurrentThreadState
     404    };
     405    RunCurrentPhaseResult runCurrentPhase(GCConductor, CurrentThreadState*);
     406   
     407    // Returns true if we should keep doing things.
     408    bool runNotRunningPhase(GCConductor);
     409    bool runBeginPhase(GCConductor);
     410    bool runFixpointPhase(GCConductor);
     411    bool runConcurrentPhase(GCConductor);
     412    bool runReloopPhase(GCConductor);
     413    bool runEndPhase(GCConductor);
     414    bool changePhase(GCConductor, CollectorPhase);
     415    bool finishChangingPhase(GCConductor);
     416   
     417    void collectInMutatorThread();
     418   
     419    void stopThePeriphery(GCConductor);
     420    void resumeThePeriphery();
     421   
     422    // Returns true if the mutator is stopped, false if the mutator has the conn now.
     423    bool stopTheMutator();
    392424    void resumeTheMutator();
    393425   
     
    402434   
    403435    bool handleGCDidJIT(unsigned);
     436    void handleGCDidJIT();
     437   
    404438    bool handleNeedFinalize(unsigned);
    405     void handleGCDidJIT();
    406439    void handleNeedFinalize();
     440   
     441    bool relinquishConn(unsigned);
     442    void finishRelinquishingConn();
    407443   
    408444    void setGCDidJIT();
     
    412448    void setMutatorWaiting();
    413449    void clearMutatorWaiting();
    414     void notifyThreadStopping(const LockHolder&);
     450    void notifyThreadStopping(const AbstractLocker&);
    415451   
    416452    typedef uint64_t Ticket;
     
    422458    void prepareForMarking();
    423459   
    424     void markToFixpoint(double gcStartTime);
    425460    void gatherStackRoots(ConservativeRoots&);
    426461    void gatherJSStackRoots(ConservativeRoots&);
     
    429464    void visitCompilerWorklistWeakReferences();
    430465    void removeDeadCompilerWorklistEntries();
    431     void updateObjectCounts(double gcStartTime);
     466    void updateObjectCounts();
    432467    void endMarking();
    433468
     
    444479    JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*);
    445480    void updateAllocationLimits();
    446     void didFinishCollection(double gcStartTime);
     481    void didFinishCollection();
    447482    void resumeCompilerThreads();
    448483    void gatherExtraHeapSnapshotData(HeapProfiler&);
     
    511546    std::unique_ptr<HashSet<MarkedArgumentBuffer*>> m_markListSet;
    512547
    513     MachineThreads m_machineThreads;
     548    std::unique_ptr<MachineThreads> m_machineThreads;
    514549   
    515550    std::unique_ptr<SlotVisitor> m_collectorSlotVisitor;
     
    545580
    546581    VM* m_vm;
    547     double m_lastFullGCLength;
    548     double m_lastEdenGCLength;
     582    Seconds m_lastFullGCLength;
     583    Seconds m_lastEdenGCLength;
    549584
    550585    Vector<ExecutableBase*> m_executables;
     
    602637    std::unique_ptr<MutatorScheduler> m_scheduler;
    603638   
    604     static const unsigned shouldStopBit = 1u << 0u;
    605     static const unsigned stoppedBit = 1u << 1u;
     639    static const unsigned mutatorHasConnBit = 1u << 0u; // Must also be protected by threadLock.
     640    static const unsigned stoppedBit = 1u << 1u; // Only set when !hasAccessBit
    606641    static const unsigned hasAccessBit = 1u << 2u;
    607642    static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid.
     
    610645    Atomic<unsigned> m_worldState;
    611646    bool m_collectorBelievesThatTheWorldIsStopped { false };
     647    MonotonicTime m_beforeGC;
     648    MonotonicTime m_afterGC;
    612649    MonotonicTime m_stopTime;
    613650   
     
    615652    Ticket m_lastServedTicket { 0 };
    616653    Ticket m_lastGrantedTicket { 0 };
     654    CollectorPhase m_currentPhase { CollectorPhase::NotRunning };
     655    CollectorPhase m_nextPhase { CollectorPhase::NotRunning };
    617656    bool m_threadShouldStop { false };
    618657    bool m_threadIsStopping { false };
     
    633672   
    634673    uintptr_t m_barriersExecuted { 0 };
     674   
     675    CurrentThreadState* m_currentThreadState { nullptr };
    635676};
    636677
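The Heap.h changes above replace `double` GC-length fields with typed `Seconds` and `MonotonicTime` (`m_beforeGC`/`m_afterGC`). Using `std::chrono` as a stand-in for WTF's time types, this sketch shows the benefit: mixing up units or subtracting the wrong things becomes a compile-time error rather than a silent scaling bug, and subtracting two time points naturally yields a duration:

```cpp
#include <cassert>
#include <chrono>

using Clock = std::chrono::steady_clock;
using Seconds = std::chrono::duration<double>; // Analogue of WTF::Seconds.

// Analogue of the m_beforeGC/m_afterGC pair: didFinishCollection() computes
// the pause length as a typed difference of monotonic time points.
struct GCTiming {
    Clock::time_point beforeGC;
    Clock::time_point afterGC;

    Seconds lastGCLength() const
    {
        return std::chrono::duration_cast<Seconds>(afterGC - beforeGC);
    }
};
```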
  • trunk/Source/JavaScriptCore/heap/HeapInlines.h

    r212616 r212778  
    6262}
    6363
    64 inline bool Heap::mutatorIsStopped() const
    65 {
    66     unsigned state = m_worldState.load();
    67     bool shouldStop = state & shouldStopBit;
    68     bool stopped = state & stoppedBit;
    69     // I only got it right when I considered all four configurations of shouldStop/stopped:
    70     // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we
    71     //     should return false.
    72     // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested
    73     //     that we resume, so we should return false.
    74     // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.
    75     //     The mutator should be able to do whatever it wants in this state, as if we were not
    76     //     stopped. So return false.
    77     // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is
    78     //     stopped, so return true.
    79     return shouldStop & stopped;
    80 }
    81 
    8264inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
    8365{
  • trunk/Source/JavaScriptCore/heap/IncrementalSweeper.cpp

    r212616 r212778  
    9898}
    9999
    100 void IncrementalSweeper::willFinishSweeping()
     100void IncrementalSweeper::stopSweeping()
    101101{
    102102    m_currentAllocator = nullptr;
  • trunk/Source/JavaScriptCore/heap/IncrementalSweeper.h

    r212616 r212778  
    3838    JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*);
    3939
    40     void startSweeping();
     40    JS_EXPORT_PRIVATE void startSweeping();
    4141
    4242    JS_EXPORT_PRIVATE void doWork() override;
    4343    bool sweepNextBlock();
    44     void willFinishSweeping();
     44    JS_EXPORT_PRIVATE void stopSweeping();
    4545
    4646private:
  • trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp

    r212616 r212778  
    11/*
    2  *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
     2 *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
    33 *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
    44 *  Copyright (C) 2009 Acision BV. All rights reserved.
     
    312312        delete t;
    313313    }
     314}
     315
     316SUPPRESS_ASAN
     317void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState& currentThreadState)
     318{
     319    if (currentThreadState.registerState) {
     320        void* registersBegin = currentThreadState.registerState;
     321        void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(currentThreadState.registerState + 1)));
     322        conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks);
     323    }
     324
     325    conservativeRoots.add(currentThreadState.stackTop, currentThreadState.stackOrigin, jitStubRoutines, codeBlocks);
    314326}
    315327
     
    10211033}
    10221034
    1023 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
    1024 {
     1035void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, CurrentThreadState* currentThreadState)
     1036{
     1037    if (currentThreadState)
     1038        gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, *currentThreadState);
     1039
    10251040    size_t size;
    10261041    size_t capacity = 0;
     
    10371052}
    10381053
     1054NEVER_INLINE int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>& lambda)
     1055{
     1056    DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(state);
     1057    lambda(state);
     1058    return 42; // Suppress tail call optimization.
     1059}
     1060
    10391061} // namespace JSC
  • trunk/Source/JavaScriptCore/heap/MachineStackMarker.h

    r212616 r212778  
    22 *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
    33 *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
    4  *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
     4 *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
    55 *
    66 *  This library is free software; you can redistribute it and/or
     
    2222#pragma once
    2323
    24 #include <setjmp.h>
     24#include "RegisterState.h"
    2525#include <wtf/Lock.h>
    2626#include <wtf/Noncopyable.h>
     27#include <wtf/ScopedLambda.h>
    2728#include <wtf/ThreadSpecific.h>
    2829
     
    5859class JITStubRoutineSet;
    5960
     61struct CurrentThreadState {
     62    void* stackOrigin { nullptr };
     63    void* stackTop { nullptr };
     64    RegisterState* registerState { nullptr };
     65};
     66   
    6067class MachineThreads {
    6168    WTF_MAKE_NONCOPYABLE(MachineThreads);
    6269public:
    63     typedef jmp_buf RegisterState;
    64 
    6570    MachineThreads(Heap*);
    6671    ~MachineThreads();
    6772
    68     void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&);
     73    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState*);
    6974
    7075    JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads.
     
    146151
    147152private:
     153    void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, CurrentThreadState&);
     154
    148155    void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*);
    149156    bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*);
     
    162169};
    163170
     171#define DECLARE_AND_COMPUTE_CURRENT_THREAD_STATE(stateName) \
     172    CurrentThreadState stateName; \
     173    stateName.stackTop = &stateName; \
     174    stateName.stackOrigin = wtfThreadData().stack().origin(); \
     175    ALLOCATE_AND_GET_REGISTER_STATE(stateName ## _registerState); \
     176    stateName.registerState = &stateName ## _registerState
     177
     178// The return value is meaningless. We just use it to suppress tail call optimization.
     179int callWithCurrentThreadState(const ScopedLambda<void(CurrentThreadState&)>&);
     180
    164181} // namespace JSC
    165182
    166 #if COMPILER(GCC_OR_CLANG)
    167 #define REGISTER_BUFFER_ALIGNMENT __attribute__ ((aligned (sizeof(void*))))
    168 #else
    169 #define REGISTER_BUFFER_ALIGNMENT
    170 #endif
    171 
    172 // ALLOCATE_AND_GET_REGISTER_STATE() is a macro so that it is always "inlined" even in debug builds.
    173 #if COMPILER(MSVC)
    174 #pragma warning(push)
    175 #pragma warning(disable: 4611)
    176 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
    177     MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
    178     setjmp(registers)
    179 #pragma warning(pop)
    180 #else
    181 #define ALLOCATE_AND_GET_REGISTER_STATE(registers) \
    182     MachineThreads::RegisterState registers REGISTER_BUFFER_ALIGNMENT; \
    183     setjmp(registers)
    184 #endif
  • trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp

    r212616 r212778  
    217217    m_heap->collectIfNecessaryOrDefer(deferralContext);
    218218   
     219    // Goofy corner case: the GC called a callback and now this allocator has a currentBlock. This only
     220    // happens when running WebKit tests, which inject a callback into the GC's finalization.
     221    if (UNLIKELY(m_currentBlock)) {
     222        if (crashOnFailure)
     223            return allocate(deferralContext);
     224        return tryAllocate(deferralContext);
     225    }
     226   
    219227    void* result = tryAllocateWithoutCollecting();
    220228   
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp

    r212616 r212778  
    2727#include "MarkedBlock.h"
    2828
    29 #include "HelpingGCScope.h"
    3029#include "JSCell.h"
    3130#include "JSDestructibleObject.h"
     
    3332#include "MarkedBlockInlines.h"
    3433#include "SuperSampler.h"
     34#include "SweepingScope.h"
    3535
    3636namespace JSC {
     
    410410FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
    411411{
    412     // FIXME: Maybe HelpingGCScope should just be called SweepScope?
    413     HelpingGCScope helpingGCScope(*heap());
     412    SweepingScope sweepingScope(*heap());
    414413   
    415414    m_allocator->setIsUnswept(NoLockingNecessary, this, false);
  • trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp

    r212616 r212778  
    226226void MarkedSpace::sweep()
    227227{
    228     m_heap->sweeper()->willFinishSweeping();
     228    m_heap->sweeper()->stopSweeping();
    229229    forEachAllocator(
    230230        [&] (MarkedAllocator& allocator) -> IterationStatus {
  • trunk/Source/JavaScriptCore/heap/MutatorState.cpp

    r212616 r212778  
    4242        out.print("Allocating");
    4343        return;
    44     case MutatorState::HelpingGC:
    45         out.print("HelpingGC");
     44    case MutatorState::Sweeping:
     45        out.print("Sweeping");
     46        return;
     47    case MutatorState::Collecting:
     48        out.print("Collecting");
    4649        return;
    4750    }
  • trunk/Source/JavaScriptCore/heap/MutatorState.h

    r212616 r212778  
    3535    Allocating,
    3636   
    37     // The mutator was asked by the GC to do some work.
    38     HelpingGC
     37    // The mutator is sweeping.
     38    Sweeping,
     39   
     40    // The mutator is collecting.
     41    Collecting
    3942};
    4043
  • trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp

    r212616 r212778  
    3939#include "JSCInlines.h"
    4040#include "SlotVisitorInlines.h"
     41#include "StopIfNecessaryTimer.h"
    4142#include "SuperSampler.h"
    4243#include "VM.h"
     
    7677#endif
    7778
    78 SlotVisitor::SlotVisitor(Heap& heap)
     79SlotVisitor::SlotVisitor(Heap& heap, CString codeName)
    7980    : m_bytesVisited(0)
    8081    , m_visitCount(0)
     
    8283    , m_markingVersion(MarkedSpace::initialVersion)
    8384    , m_heap(heap)
     85    , m_codeName(codeName)
    8486#if !ASSERT_DISABLED
    8587    , m_isCheckingForDefaultMarkViolation(false)
     
    470472}
    471473
    472 void SlotVisitor::drain(MonotonicTime timeout)
    473 {
    474     RELEASE_ASSERT(m_isInParallelMode);
     474NEVER_INLINE void SlotVisitor::drain(MonotonicTime timeout)
     475{
     476    if (!m_isInParallelMode) {
     477        dataLog("FATAL: attempting to drain when not in parallel mode.\n");
     478        RELEASE_ASSERT_NOT_REACHED();
     479    }
    475480   
    476481    auto locker = holdLock(m_rightToRun);
     
    582587}
    583588
    584 SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
     589NEVER_INLINE SlotVisitor::SharedDrainResult SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode, MonotonicTime timeout)
    585590{
    586591    ASSERT(m_isInParallelMode);
     
    617622                    return SharedDrainResult::TimedOut;
    618623               
    619                 if (didReachTermination(locker))
     624                if (didReachTermination(locker)) {
    620625                    m_heap.m_markingConditionVariable.notifyAll();
     626                   
     627                    // If we're in concurrent mode, then we know that the mutator will eventually do
     628                    // the right thing because:
     629                    // - It's possible that the collector has the conn. In that case, the collector will
     630                    //   wake up from the notification above. This will happen if the app released heap
     631                    //   access. Native apps can spend a lot of time with heap access released.
     632                    // - It's possible that the mutator will allocate soon. Then it will check if we
     633                    //   reached termination. This is the most likely outcome in programs that allocate
     634                    //   a lot.
     635                    // - WebCore never releases access. But WebCore has a runloop. The runloop will check
     636                    //   if we reached termination.
     637                    // So, this tells the runloop that it's got things to do.
     638                    m_heap.m_stopIfNecessaryTimer->scheduleSoon();
     639                }
    621640
    622641                auto isReady = [&] () -> bool {
     
    660679    ASSERT(Options::numberOfGCMarkers());
    661680   
    662     if (!m_heap.hasHeapAccess()
     681    if (Options::numberOfGCMarkers() == 1
     682        || (m_heap.m_worldState.load() & Heap::mutatorWaitingBit)
     683        || !m_heap.hasHeapAccess()
    663684        || m_heap.collectorBelievesThatTheWorldIsStopped()) {
    664685        // This is an optimization over drainInParallel() when we have a concurrent mutator but
     
    685706void SlotVisitor::donateAll()
    686707{
     708    if (isEmpty())
     709        return;
     710   
    687711    donateAll(holdLock(m_heap.m_markingMutex));
    688712}
     
    756780void SlotVisitor::donate()
    757781{
    758     ASSERT(m_isInParallelMode);
     782    if (!m_isInParallelMode) {
     783        dataLog("FATAL: Attempting to donate when not in parallel mode.\n");
     784        RELEASE_ASSERT_NOT_REACHED();
     785    }
     786   
    759787    if (Options::numberOfGCMarkers() == 1)
    760788        return;
  • trunk/Source/JavaScriptCore/heap/SlotVisitor.h

    r212616 r212778  
    5757
    5858public:
    59     SlotVisitor(Heap&);
     59    SlotVisitor(Heap&, CString codeName);
    6060    ~SlotVisitor();
    6161
     
    168168
    169169    void donateAll();
     170   
     171    const char* codeName() const { return m_codeName.data(); }
    170172
    171173private:
     
    228230    bool m_canOptimizeForStoppedMutator { false };
    229231    Lock m_rightToRun;
     232   
     233    CString m_codeName;
    230234   
    231235public:
  • trunk/Source/JavaScriptCore/heap/StochasticSpaceTimeMutatorScheduler.cpp

    r212616 r212778  
    7777        Options::concurrentGCMaxHeadroom() *
    7878        std::max<double>(m_bytesAllocatedThisCycleAtTheBeginning, m_heap.m_maxEdenSize);
     79   
     80    if (Options::logGC())
     81        dataLog("ca=", m_bytesAllocatedThisCycleAtTheBeginning / 1024, "kb h=", (m_bytesAllocatedThisCycleAtTheEnd - m_bytesAllocatedThisCycleAtTheBeginning) / 1024, "kb ");
     82   
    7983    m_beforeConstraints = MonotonicTime::now();
    8084}
     
    118122   
    119123    double resumeProbability = mutatorUtilization(snapshot);
     124    if (resumeProbability < Options::epsilonMutatorUtilization()) {
     125        m_plannedResumeTime = MonotonicTime::infinity();
     126        return;
     127    }
     128   
    120129    bool shouldResume = m_random.get() < resumeProbability;
    121130   
     
    136145        return MonotonicTime::now();
    137146    case Resumed: {
    138         // Once we're running, we keep going.
    139         // FIXME: Maybe force stop when we run out of headroom?
     147        // Once we're running, we keep going unless we run out of headroom.
     148        Snapshot snapshot(*this);
     149        if (mutatorUtilization(snapshot) < Options::epsilonMutatorUtilization())
     150            return MonotonicTime::now();
    140151        return MonotonicTime::infinity();
    141152    } }
  • trunk/Source/JavaScriptCore/jit/JITWorklist.cpp

    r212616 r212778  
    100100class JITWorklist::Thread : public AutomaticThread {
    101101public:
    102     Thread(const LockHolder& locker, JITWorklist& worklist)
     102    Thread(const AbstractLocker& locker, JITWorklist& worklist)
    103103        : AutomaticThread(locker, worklist.m_lock, worklist.m_condition)
    104104        , m_worklist(worklist)
     
    108108   
    109109protected:
    110     PollResult poll(const LockHolder&) override
     110    PollResult poll(const AbstractLocker&) override
    111111    {
    112112        RELEASE_ASSERT(m_worklist.m_numAvailableThreads);
  • trunk/Source/JavaScriptCore/jsc.cpp

    r212620 r212778  
    3939#include "HeapProfiler.h"
    4040#include "HeapSnapshotBuilder.h"
    41 #include "HeapStatistics.h"
    4241#include "InitializeThreading.h"
    4342#include "Interpreter.h"
     
    10851084static EncodedJSValue JSC_HOST_CALL functionWaitForReport(ExecState*);
    10861085static EncodedJSValue JSC_HOST_CALL functionHeapCapacity(ExecState*);
     1086static EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState*);
    10871087
    10881088struct Script {
     
    13671367
    13681368        addFunction(vm, "heapCapacity", functionHeapCapacity, 0);
     1369        addFunction(vm, "flashHeapAccess", functionFlashHeapAccess, 0);
    13691370    }
    13701371   
     
    26492650}
    26502651
     2652EncodedJSValue JSC_HOST_CALL functionFlashHeapAccess(ExecState* exec)
     2653{
     2654    VM& vm = exec->vm();
     2655    auto scope = DECLARE_THROW_SCOPE(vm);
     2656   
     2657    vm.heap.releaseAccess();
     2658    if (exec->argumentCount() >= 1) {
     2659        double ms = exec->argument(0).toNumber(exec);
     2660        RETURN_IF_EXCEPTION(scope, encodedJSValue());
     2661        sleep(Seconds::fromMilliseconds(ms));
     2662    }
     2663    vm.heap.acquireAccess();
     2664    return JSValue::encode(jsUndefined());
     2665}
     2666
    26512667template<typename ValueType>
    26522668typename std::enable_if<!std::is_fundamental<ValueType>::value>::type addOption(VM&, JSObject*, Identifier, ValueType) { }
  • trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp

    r212616 r212778  
    3232#include "ExecutableAllocator.h"
    3333#include "Heap.h"
    34 #include "HeapStatistics.h"
    3534#include "Identifier.h"
    3635#include "JSDateMath.h"
     
    6160        WTF::initializeGCThreads();
    6261        Options::initialize();
    63         if (Options::recordGCPauseTimes())
    64             HeapStatistics::initialize();
    6562#if ENABLE(WRITE_BARRIER_PROFILING)
    6663        WriteBarrierCounters::initialize();
  • trunk/Source/JavaScriptCore/runtime/JSCellInlines.h

    r212616 r212778  
    281281    // independent of whether the mutator thread is sweeping or not. Hence, we also check for ownerThread() !=
    282282    // std::this_thread::get_id() to allow the GC thread or JIT threads to pass this assertion.
    283     ASSERT(vm.heap.mutatorState() == MutatorState::Running || vm.apiLock().ownerThread() != std::this_thread::get_id());
     283    ASSERT(vm.heap.mutatorState() != MutatorState::Sweeping || vm.apiLock().ownerThread() != std::this_thread::get_id());
    284284    return structure(vm)->classInfo();
    285285}
  • trunk/Source/JavaScriptCore/runtime/Options.cpp

    r212616 r212778  
    318318        Options::concurrentGCMaxHeadroom() = 1.4;
    319319        Options::minimumGCPauseMS() = 1;
    320         Options::gcIncrementScale() = 1;
     320        Options::useStochasticMutatorScheduler() = false;
     321        if (WTF::numberOfProcessorCores() <= 1)
     322            Options::gcIncrementScale() = 1;
     323        else
     324            Options::gcIncrementScale() = 0;
    321325    }
    322326}
  • trunk/Source/JavaScriptCore/runtime/Options.h

    r212616 r212778  
    201201    v(double, minimumMutatorUtilization, 0, Normal, nullptr) \
    202202    v(double, maximumMutatorUtilization, 0.7, Normal, nullptr) \
     203    v(double, epsilonMutatorUtilization, 0.01, Normal, nullptr) \
    203204    v(double, concurrentGCMaxHeadroom, 1.5, Normal, nullptr) \
    204205    v(double, concurrentGCPeriodMS, 2, Normal, nullptr) \
     
    345346    v(bool, useImmortalObjects, false, Normal, "debugging option to keep all objects alive forever") \
    346347    v(bool, sweepSynchronously, false, Normal, "debugging option to sweep all dead objects synchronously at GC end before resuming mutator") \
    347     v(bool, dumpObjectStatistics, false, Normal, nullptr) \
    348348    v(unsigned, maxSingleAllocationSize, 0, Configurable, "debugging option to limit individual allocations to a max size (0 = limit not set, N = limit size in bytes)") \
    349349    \
  • trunk/Source/JavaScriptCore/runtime/TestRunnerUtils.cpp

    r212616 r212778  
    11/*
    2  * Copyright (C) 2013-2014, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2929#include "CodeBlock.h"
    3030#include "FunctionCodeBlock.h"
    31 #include "HeapStatistics.h"
    3231#include "JSCInlines.h"
    3332#include "LLIntData.h"
     
    161160void finalizeStatsAtEndOfTesting()
    162161{
    163     if (Options::logHeapStatisticsAtExit())
    164         HeapStatistics::reportSuccess();
    165162    if (Options::reportLLIntStats())
    166163        LLInt::Data::finalizeStats();
  • trunk/Source/WTF/ChangeLog

    r212757 r212778  
     12017-02-20  Filip Pizlo  <fpizlo@apple.com>
     2
     3        The collector thread should only start when the mutator doesn't have heap access
     4        https://bugs.webkit.org/show_bug.cgi?id=167737
     5
     6        Reviewed by Keith Miller.
     7       
     8        Extend the use of AbstractLocker so that we can use more locking idioms.
     9
     10        * wtf/AutomaticThread.cpp:
     11        (WTF::AutomaticThreadCondition::notifyOne):
     12        (WTF::AutomaticThreadCondition::notifyAll):
     13        (WTF::AutomaticThreadCondition::add):
     14        (WTF::AutomaticThreadCondition::remove):
     15        (WTF::AutomaticThreadCondition::contains):
     16        (WTF::AutomaticThread::AutomaticThread):
     17        (WTF::AutomaticThread::tryStop):
     18        (WTF::AutomaticThread::isWaiting):
     19        (WTF::AutomaticThread::notify):
     20        (WTF::AutomaticThread::start):
     21        (WTF::AutomaticThread::threadIsStopping):
     22        * wtf/AutomaticThread.h:
     23        * wtf/NumberOfCores.cpp:
     24        (WTF::numberOfProcessorCores):
     25        * wtf/ParallelHelperPool.cpp:
     26        (WTF::ParallelHelperClient::finish):
     27        (WTF::ParallelHelperClient::claimTask):
     28        (WTF::ParallelHelperPool::Thread::Thread):
     29        (WTF::ParallelHelperPool::didMakeWorkAvailable):
     30        (WTF::ParallelHelperPool::hasClientWithTask):
     31        (WTF::ParallelHelperPool::getClientWithTask):
     32        * wtf/ParallelHelperPool.h:
     33
    1342017-02-21  John Wilander  <wilander@apple.com>
    235
  • trunk/Source/WTF/wtf/AutomaticThread.cpp

    r212616 r212778  
    4646}
    4747
    48 void AutomaticThreadCondition::notifyOne(const LockHolder& locker)
     48void AutomaticThreadCondition::notifyOne(const AbstractLocker& locker)
    4949{
    5050    for (AutomaticThread* thread : m_threads) {
     
    6565}
    6666
    67 void AutomaticThreadCondition::notifyAll(const LockHolder& locker)
     67void AutomaticThreadCondition::notifyAll(const AbstractLocker& locker)
    6868{
    6969    m_condition.notifyAll();
     
    8282}
    8383
    84 void AutomaticThreadCondition::add(const LockHolder&, AutomaticThread* thread)
     84void AutomaticThreadCondition::add(const AbstractLocker&, AutomaticThread* thread)
    8585{
    8686    ASSERT(!m_threads.contains(thread));
     
    8888}
    8989
    90 void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)
     90void AutomaticThreadCondition::remove(const AbstractLocker&, AutomaticThread* thread)
    9191{
    9292    m_threads.removeFirst(thread);
     
    9494}
    9595
    96 bool AutomaticThreadCondition::contains(const LockHolder&, AutomaticThread* thread)
     96bool AutomaticThreadCondition::contains(const AbstractLocker&, AutomaticThread* thread)
    9797{
    9898    return m_threads.contains(thread);
    9999}
    100100
    101 AutomaticThread::AutomaticThread(const LockHolder& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
     101AutomaticThread::AutomaticThread(const AbstractLocker& locker, Box<Lock> lock, RefPtr<AutomaticThreadCondition> condition)
    102102    : m_lock(lock)
    103103    , m_condition(condition)
     
    119119}
    120120
    121 bool AutomaticThread::tryStop(const LockHolder&)
     121bool AutomaticThread::tryStop(const AbstractLocker&)
    122122{
    123123    if (!m_isRunning)
     
    129129}
    130130
    131 bool AutomaticThread::isWaiting(const LockHolder& locker)
     131bool AutomaticThread::isWaiting(const AbstractLocker& locker)
    132132{
    133133    return hasUnderlyingThread(locker) && m_isWaiting;
    134134}
    135135
    136 bool AutomaticThread::notify(const LockHolder& locker)
     136bool AutomaticThread::notify(const AbstractLocker& locker)
    137137{
    138138    ASSERT_UNUSED(locker, hasUnderlyingThread(locker));
     
    148148}
    149149
    150 void AutomaticThread::start(const LockHolder&)
     150void AutomaticThread::start(const AbstractLocker&)
    151151{
    152152    RELEASE_ASSERT(m_isRunning);
     
    170170            }
    171171           
    172             auto stopImpl = [&] (const LockHolder& locker) {
     172            auto stopImpl = [&] (const AbstractLocker& locker) {
    173173                thread->threadIsStopping(locker);
    174174                thread->m_hasUnderlyingThread = false;
    175175            };
    176176           
    177             auto stopPermanently = [&] (const LockHolder& locker) {
     177            auto stopPermanently = [&] (const AbstractLocker& locker) {
    178178                m_isRunning = false;
    179179                m_isRunningCondition.notifyAll();
     
    181181            };
    182182           
    183             auto stopForTimeout = [&] (const LockHolder& locker) {
     183            auto stopForTimeout = [&] (const AbstractLocker& locker) {
    184184                stopImpl(locker);
    185185            };
     
    228228}
    229229
    230 void AutomaticThread::threadIsStopping(const LockHolder&)
     230void AutomaticThread::threadIsStopping(const AbstractLocker&)
    231231{
    232232}
  • trunk/Source/WTF/wtf/AutomaticThread.h

    r212616 r212778  
    7676    WTF_EXPORT_PRIVATE ~AutomaticThreadCondition();
    7777   
    78     WTF_EXPORT_PRIVATE void notifyOne(const LockHolder&);
    79     WTF_EXPORT_PRIVATE void notifyAll(const LockHolder&);
     78    WTF_EXPORT_PRIVATE void notifyOne(const AbstractLocker&);
     79    WTF_EXPORT_PRIVATE void notifyAll(const AbstractLocker&);
    8080   
    8181    // You can reuse this condition for other things, just as you would any other condition.
     
    9191    WTF_EXPORT_PRIVATE AutomaticThreadCondition();
    9292
    93     void add(const LockHolder&, AutomaticThread*);
    94     void remove(const LockHolder&, AutomaticThread*);
    95     bool contains(const LockHolder&, AutomaticThread*);
     93    void add(const AbstractLocker&, AutomaticThread*);
     94    void remove(const AbstractLocker&, AutomaticThread*);
     95    bool contains(const AbstractLocker&, AutomaticThread*);
    9696   
    9797    Condition m_condition;
     
    114114   
    115115    // Sometimes it's possible to optimize for the case that there is no underlying thread.
    116     bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }
     116    bool hasUnderlyingThread(const AbstractLocker&) const { return m_hasUnderlyingThread; }
    117117   
    118118    // This attempts to quickly stop the thread. This will succeed if the thread happens to not be
     
    120120    // thread is to first try this, and if that doesn't work, to tell the thread using your own
    121121    // mechanism (set some flag and then notify the condition).
    122     bool tryStop(const LockHolder&);
     122    bool tryStop(const AbstractLocker&);
    123123
    124     bool isWaiting(const LockHolder&);
     124    bool isWaiting(const AbstractLocker&);
    125125
    126     bool notify(const LockHolder&);
     126    bool notify(const AbstractLocker&);
    127127
    128128    void join();
     
    131131    // This logically creates the thread, but in reality the thread won't be created until someone
    132132    // calls AutomaticThreadCondition::notifyOne() or notifyAll().
    133     AutomaticThread(const LockHolder&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
     133    AutomaticThread(const AbstractLocker&, Box<Lock>, RefPtr<AutomaticThreadCondition>);
    134134   
    135135    // To understand PollResult and WorkResult, imagine that poll() and work() are being called like
     
    160160   
    161161    enum class PollResult { Work, Stop, Wait };
    162     virtual PollResult poll(const LockHolder&) = 0;
     162    virtual PollResult poll(const AbstractLocker&) = 0;
    163163   
    164164    enum class WorkResult { Continue, Stop };
     
    169169    // can be sure that the default ones don't do anything (so you don't need a super call).
    170170    virtual void threadDidStart();
    171     virtual void threadIsStopping(const LockHolder&);
     171    virtual void threadIsStopping(const AbstractLocker&);
    172172   
    173173private:
    174174    friend class AutomaticThreadCondition;
    175175   
    176     void start(const LockHolder&);
     176    void start(const AbstractLocker&);
    177177   
    178178    Box<Lock> m_lock;
  • trunk/Source/WTF/wtf/NumberOfCores.cpp

    r212616 r212778  
    4848    if (s_numberOfCores > 0)
    4949        return s_numberOfCores;
     50   
     51    if (const char* coresEnv = getenv("WTF_numberOfProcessorCores")) {
     52        unsigned numberOfCores;
     53        if (sscanf(coresEnv, "%u", &numberOfCores) == 1) {
     54            s_numberOfCores = numberOfCores;
     55            return s_numberOfCores;
     56        } else
     57            fprintf(stderr, "WARNING: failed to parse WTF_numberOfProcessorCores=%s\n", coresEnv);
     58    }
    5059
    5160#if OS(DARWIN)
  • trunk/Source/WTF/wtf/ParallelHelperPool.cpp

    r212616 r212778  
    11/*
    2  * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    8989}
    9090
    91 void ParallelHelperClient::finish(const LockHolder&)
     91void ParallelHelperClient::finish(const AbstractLocker&)
    9292{
    9393    m_task = nullptr;
     
    9696}
    9797
    98 RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const LockHolder&)
     98RefPtr<SharedTask<void ()>> ParallelHelperClient::claimTask(const AbstractLocker&)
    9999{
    100100    if (!m_task)
     
    171171class ParallelHelperPool::Thread : public AutomaticThread {
    172172public:
    173     Thread(const LockHolder& locker, ParallelHelperPool& pool)
     173    Thread(const AbstractLocker& locker, ParallelHelperPool& pool)
    174174        : AutomaticThread(locker, pool.m_lock, pool.m_workAvailableCondition)
    175175        , m_pool(pool)
     
    178178   
    179179protected:
    180     PollResult poll(const LockHolder& locker) override
     180    PollResult poll(const AbstractLocker& locker) override
    181181    {
    182182        if (m_pool.m_isDying)
     
    204204};
    205205
    206 void ParallelHelperPool::didMakeWorkAvailable(const LockHolder& locker)
     206void ParallelHelperPool::didMakeWorkAvailable(const AbstractLocker& locker)
    207207{
    208208    while (m_numThreads > m_threads.size())
     
    211211}
    212212
    213 bool ParallelHelperPool::hasClientWithTask(const LockHolder& locker)
     213bool ParallelHelperPool::hasClientWithTask(const AbstractLocker& locker)
    214214{
    215215    return !!getClientWithTask(locker);
    216216}
    217217
    218 ParallelHelperClient* ParallelHelperPool::getClientWithTask(const LockHolder&)
     218ParallelHelperClient* ParallelHelperPool::getClientWithTask(const AbstractLocker&)
    219219{
    220220    // We load-balance by being random.
  • trunk/Source/WTF/wtf/ParallelHelperPool.h

    r212616 r212778  
    11/*
    2  * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    169169    friend class ParallelHelperPool;
    170170
    171     void finish(const LockHolder&);
    172     RefPtr<SharedTask<void ()>> claimTask(const LockHolder&);
     171    void finish(const AbstractLocker&);
     172    RefPtr<SharedTask<void ()>> claimTask(const AbstractLocker&);
    173173    void runTask(RefPtr<SharedTask<void ()>>);
    174174   
     
    194194    friend class Thread;
    195195
    196     void didMakeWorkAvailable(const LockHolder&);
    197 
    198     bool hasClientWithTask(const LockHolder&);
    199     ParallelHelperClient* getClientWithTask(const LockHolder&);
    200     ParallelHelperClient* waitForClientWithTask(const LockHolder&);
     196    void didMakeWorkAvailable(const AbstractLocker&);
     197
     198    bool hasClientWithTask(const AbstractLocker&);
     199    ParallelHelperClient* getClientWithTask(const AbstractLocker&);
     200    ParallelHelperClient* waitForClientWithTask(const AbstractLocker&);
    201201   
    202202    Box<Lock> m_lock; // AutomaticThread wants this in a box for safety.
  • trunk/Source/WebCore/ChangeLog

    r212776 r212778  
     12017-02-20  Filip Pizlo  <fpizlo@apple.com>
     2
     3        The collector thread should only start when the mutator doesn't have heap access
     4        https://bugs.webkit.org/show_bug.cgi?id=167737
     5
     6        Reviewed by Keith Miller.
     7
     8        Added new tests in JSTests.
     9       
     10        The WebCore changes involve:
     11       
     12        - Refactoring around new header discipline.
     13       
     14        - Adding crazy GC APIs to window.internals to enable us to test the GC's runloop discipline.
     15
     16        * ForwardingHeaders/heap/GCFinalizationCallback.h: Added.
     17        * ForwardingHeaders/heap/IncrementalSweeper.h: Added.
     18        * ForwardingHeaders/heap/MachineStackMarker.h: Added.
     19        * ForwardingHeaders/heap/RunningScope.h: Added.
     20        * bindings/js/CommonVM.cpp:
     21        * testing/Internals.cpp:
     22        (WebCore::Internals::parserMetaData):
     23        (WebCore::Internals::isReadableStreamDisturbed):
     24        (WebCore::Internals::isGCRunning):
     25        (WebCore::Internals::addGCFinalizationCallback):
     26        (WebCore::Internals::stopSweeping):
     27        (WebCore::Internals::startSweeping):
     28        * testing/Internals.h:
     29        * testing/Internals.idl:
     30
    1312017-02-20  Simon Fraser  <simon.fraser@apple.com>
    232
  • trunk/Source/WebCore/bindings/js/CommonVM.cpp

    r212616 r212778  
    3131#include "WebCoreJSClientData.h"
    3232#include <heap/HeapInlines.h>
     33#include "heap/MachineStackMarker.h"
    3334#include <runtime/VM.h>
    3435#include <wtf/MainThread.h>
  • trunk/Tools/ChangeLog

    r212776 r212778  
     12017-02-20  Filip Pizlo  <fpizlo@apple.com>
     2
     3        The collector thread should only start when the mutator doesn't have heap access
     4        https://bugs.webkit.org/show_bug.cgi?id=167737
     5
     6        Reviewed by Keith Miller.
     7       
     8        Make more tests collect continuously.
     9
     10        * Scripts/run-jsc-stress-tests:
     11
    1122017-02-20  Simon Fraser  <simon.fraser@apple.com>
    213
  • trunk/Tools/Scripts/run-jsc-stress-tests

    r212616 r212778  
    14691469
    14701470def runNoisyTestNoCJIT
    1471     runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS))
     1471    runNoisyTest("ftl-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
    14721472end
    14731473
    14741474def runNoisyTestEagerNoCJIT
    1475     runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS))
     1475    runNoisyTest("ftl-eager-no-cjit", "--validateBytecode=true", "--validateGraphAtEachPhase=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS))
    14761476end
    14771477