Changeset 208306 in webkit


Timestamp: Nov 2, 2016, 3:01:04 PM (9 years ago)
Author: fpizlo@apple.com
Message:

The GC should be in a thread
https://bugs.webkit.org/show_bug.cgi?id=163562

Reviewed by Geoffrey Garen and Andreas Kling.
Source/JavaScriptCore:


In a concurrent GC, the work of collecting happens on a separate thread. This patch
implements this, and schedules the thread the way that a concurrent GC thread would be
scheduled. But the GC isn't actually concurrent yet because it calls stopTheWorld() before
doing anything and calls resumeTheWorld() after it's done with everything. The next step will
be to make it really concurrent by basically calling stopTheWorld()/resumeTheWorld() around
bounded snippets of work while making most of the work happen with the world running. Our GC
will probably always have stop-the-world phases because the semantics of JSC weak references
call for it.

This implements concurrent GC scheduling. This means that there is no longer a
Heap::collect() API. Instead, you can call collectAsync() which makes sure that a GC is
scheduled (it will do nothing if one is scheduled or ongoing) or you can call collectSync()
to schedule a GC and wait for it to happen. I made our debugging stuff call collectSync().
It should be a goal to never call collectSync() except for debugging or benchmark harness
hacks.
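
For illustration, call sites look roughly like this (a hedged sketch: CollectionScope and the
two Heap methods are from this patch, while the wrapper functions and include path are
hypothetical):

    #include "Heap.h"

    // Hypothetical timer callback: fire-and-forget scheduling. This is a
    // no-op if a collection is already scheduled or ongoing.
    void edenTimerDidFire(JSC::Heap& heap)
    {
        heap.collectAsync(JSC::CollectionScope::Eden);
    }

    // Hypothetical debugging hook: schedules a full collection and blocks
    // until it has been served. Only debugging/harness code should do this.
    void debugForceFullGC(JSC::Heap& heap)
    {
        heap.collectSync(JSC::CollectionScope::Full);
    }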

The collector thread is an AutomaticThread, so it won't linger when not in use. It works on
a ticket-based system, like you would see at the DMV. A ticket is a 64-bit integer. There are
two ticket counters: last granted and last served. When you request a collection, last
granted is incremented and its new value given to you. When a collection completes, last
served is incremented. collectSync() waits until last served catches up to what last granted
had been at the time you requested a GC. This means that if you request a sync GC in the
middle of an async GC, you will wait for that async GC to finish and then you will request
and wait for your sync GC.
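
As a minimal sketch of that ticket scheme, assuming plain std::mutex and
std::condition_variable in place of the AutomaticThread and ParkingLot machinery the patch
actually uses (the method names mirror the patch; everything else is simplified):

    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    class GCTickets {
    public:
        using Ticket = uint64_t;

        // Mutator side: take a ticket; in the real patch this also wakes
        // the collector thread. collectAsync() stops here.
        Ticket requestCollection()
        {
            std::lock_guard<std::mutex> lock(m_lock);
            return ++m_lastGranted;
        }

        // Mutator side: collectSync() waits until "last served" catches up
        // to the ticket it was granted. A sync request made in the middle
        // of an async GC therefore waits out that GC first.
        void waitForCollection(Ticket ticket)
        {
            std::unique_lock<std::mutex> lock(m_lock);
            m_served.wait(lock, [&] { return m_lastServed >= ticket; });
        }

        // Collector side: called once per completed collection.
        void didFinishCollection()
        {
            std::lock_guard<std::mutex> lock(m_lock);
            ++m_lastServed;
            m_served.notify_all();
        }

    private:
        std::mutex m_lock;
        std::condition_variable m_served;
        Ticket m_lastGranted { 0 };
        Ticket m_lastServed { 0 };
    };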

The synchronization between the collector thread and the main threads is complex. The
collector thread needs to be able to ask the main thread to stop. It needs to be able to do
some post-GC clean-up, like the synchronous CodeBlock and LargeAllocation sweeps, on the main
thread. The collector needs to be able to ask the main thread to execute a cross-modifying
code fence before running any JIT code, since the GC might aid the JIT worklist and run JIT
finalization. It's possible for the GC to want the main thread to run something at the same
time that the main thread wants to wait for the GC. The main thread needs to be able to run
non-JSC stuff without causing the GC to completely stall. The main thread needs to be able
to query its own state (is there a request to stop?) and change it (running JSC versus not)
quickly, since this may happen on hot paths. This kind of intertwined system of requests,
notifications, and state changes requires a combination of lock-free algorithms and waiting.
So, this is all implemented using an Atomic<unsigned> Heap::m_worldState, which has bits to
represent things being requested by the collector and the heap access state of the mutator. I
am borrowing a lot of terms that I've seen in other VMs that I've worked on. Here's what they
mean:

  • Stop the world: make sure that either the mutator is not running, or that it's not running code that could mess with the heap.

  • Heap access: the mutator is said to have heap access if it could mess with the heap.

If you stop the world and the mutator doesn't have heap access, all you're doing is making
sure that it will block when it tries to acquire heap access. This means that our GC is
already fully concurrent in cases where the GC is requested while the mutator has no heap
access. This probably won't happen, but if it did then it should just work. Usually, stopping
the world means that we set our shouldStop request in m_worldState, and a future call
to Heap::stopIfNecessary() will take the slow path and stop. The act of stopping or waiting to
acquire heap access is managed by using ParkingLot API directly on m_worldState. This works
out great because it would be very awkward to get the same functionality using locks and
condition variables, since we want stopIfNecessary/acquireAccess/requestAccess fast paths
that are single atomic instructions (load/CAS/CAS, respectively). The mutator will call these
things frequently. Currently we have Heap::stopIfNecessary() polling on every allocator slow
path, but we may want to make it even more frequent than that.
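
Roughly, the fast paths look like this, as a standalone sketch with std::atomic standing in
for WTF::Atomic (the bit names follow the patch; the bit layout is illustrative, and the slow
paths, which park on the word via ParkingLot, are omitted):

    #include <atomic>

    // Illustrative bit layout; the real bits live in heap/Heap.h.
    static constexpr unsigned hasAccessBit  = 1u << 0;
    static constexpr unsigned shouldStopBit = 1u << 1;

    struct WorldState {
        std::atomic<unsigned> bits { 0 };

        // stopIfNecessary() fast path: a single load on the hot path; only
        // if the collector has requested a stop do we take the slow path.
        bool stopRequested() { return bits.load() & shouldStopBit; }

        // acquireAccess() fast path: one CAS from "no access, no stop".
        // On failure the caller falls back to acquireAccessSlow().
        bool tryAcquireAccess()
        {
            unsigned expected = 0;
            return bits.compare_exchange_weak(expected, hasAccessBit);
        }

        // releaseAccess() fast path: one CAS back to "no access", valid
        // only while no stop or finalize request is pending.
        bool tryReleaseAccess()
        {
            unsigned expected = hasAccessBit;
            return bits.compare_exchange_weak(expected, 0);
        }
    };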

Currently only JSC API clients benefit from the heap access optimization. The DOM forces us
to assume that heap access is permanently on, since DOM manipulation doesn't always hold the
JSLock. We could still allow the GC to proceed when the runloop is idle by having the GC put
a task on the runloop that just calls stopIfNecessary().

This is perf neutral. The only behavior change that clients ought to observe is that marking
and the weak fixpoint happen on a separate thread. Marking was already parallel so it already
handled multiple threads, but now it _never_ runs on the main thread. The weak fixpoint
needed some help to be able to run on another thread - mostly because there was some code in
IndexedDB that was using thread specifics in the weak fixpoint.

  • API/JSBase.cpp:

(JSSynchronousEdenCollectForDebugging):

  • API/JSManagedValue.mm:

(-[JSManagedValue initWithValue:]):

  • heap/EdenGCActivityCallback.cpp:

(JSC::EdenGCActivityCallback::doCollection):

  • heap/FullGCActivityCallback.cpp:

(JSC::FullGCActivityCallback::doCollection):

  • heap/Heap.cpp:

(JSC::Heap::Thread::Thread):
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::markRoots):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::deleteUnmarkedCompiledCode):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectAsync):
(JSC::Heap::collectSync):
(JSC::Heap::shouldCollectInThread):
(JSC::Heap::collectInThread):
(JSC::Heap::stopTheWorld):
(JSC::Heap::resumeTheWorld):
(JSC::Heap::stopIfNecessarySlow):
(JSC::Heap::acquireAccessSlow):
(JSC::Heap::releaseAccessSlow):
(JSC::Heap::handleDidJIT):
(JSC::Heap::handleNeedFinalize):
(JSC::Heap::setDidJIT):
(JSC::Heap::setNeedFinalize):
(JSC::Heap::waitWhileNeedFinalize):
(JSC::Heap::finalize):
(JSC::Heap::requestCollection):
(JSC::Heap::waitForCollection):
(JSC::Heap::didFinishCollection):
(JSC::Heap::canCollect):
(JSC::Heap::shouldCollectHeuristic):
(JSC::Heap::shouldCollect):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::collectAccordingToDeferGCProbability):
(JSC::Heap::collect): Deleted.
(JSC::Heap::collectWithoutAnySweep): Deleted.
(JSC::Heap::collectImpl): Deleted.

  • heap/Heap.h:

(JSC::Heap::ReleaseAccessScope::ReleaseAccessScope):
(JSC::Heap::ReleaseAccessScope::~ReleaseAccessScope):

  • heap/HeapInlines.h:

(JSC::Heap::acquireAccess):
(JSC::Heap::releaseAccess):
(JSC::Heap::stopIfNecessary):

  • heap/MachineStackMarker.cpp:

(JSC::MachineThreads::gatherConservativeRoots):
(JSC::MachineThreads::gatherFromCurrentThread): Deleted.

  • heap/MachineStackMarker.h:
  • jit/JITWorklist.cpp:

(JSC::JITWorklist::completeAllForVM):

  • jit/JITWorklist.h:
  • jsc.cpp:

(functionFullGC):
(functionEdenGC):

  • runtime/InitializeThreading.cpp:

(JSC::initializeThreading):

  • runtime/JSLock.cpp:

(JSC::JSLock::didAcquireLock):
(JSC::JSLock::unlock):
(JSC::JSLock::willReleaseLock):

  • tools/JSDollarVMPrototype.cpp:

(JSC::JSDollarVMPrototype::edenGC):

Source/WebCore:

No new tests because existing tests cover this.

We now need to be more careful about using JSLock. This fixes some places that were not
holding it. New assertions in the GC are more likely to catch this than before.

  • bindings/js/WorkerScriptController.cpp:

(WebCore::WorkerScriptController::WorkerScriptController):

Source/WTF:


This fixes some bugs and adds a few features.

  • wtf/Atomics.h: The GC may do work on behalf of the JIT. If it does, the main thread needs to execute a cross-modifying code fence. This is cpuid on x86; on ARM I believe the equivalent is isb (it plays the role isync plays on PPC). A standalone sketch follows the function list below.

(WTF::arm_isb):
(WTF::crossModifyingCodeFence):
(WTF::x86_ortop):
(WTF::x86_cpuid):
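
As a sketch of what such a fence boils down to, assuming GCC-style inline assembly (the real
helpers are the WTF functions listed above; this standalone approximation is not the patch's
exact code):

    // A cross-modifying code fence: ensures the CPU will not execute stale
    // instruction bytes after another thread (here, the GC finalizing JIT
    // work) has written new machine code.
    inline void crossModifyingCodeFenceSketch()
    {
    #if defined(__x86_64__) || defined(__i386__)
        // cpuid is a serializing instruction on x86.
        unsigned eax = 0, ebx, ecx = 0, edx;
        asm volatile("cpuid"
            : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
            : "a"(eax), "c"(ecx)
            : "memory");
    #elif defined(__arm__) || defined(__aarch64__)
        // isb flushes the pipeline so freshly written instructions are
        // re-fetched.
        asm volatile("isb" ::: "memory");
    #else
        __sync_synchronize(); // Fallback: a full memory barrier only.
    #endif
    }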

  • wtf/AutomaticThread.cpp: I accidentally had AutomaticThreadCondition inherit from ThreadSafeRefCounted<AutomaticThread> [sic]. This never crashed before because all of our prior AutomaticThreadConditions were immortal.

(WTF::AutomaticThread::AutomaticThread):
(WTF::AutomaticThread::~AutomaticThread):
(WTF::AutomaticThread::start):

  • wtf/AutomaticThread.h:
  • wtf/MainThread.cpp: Need to allow initializeGCThreads() to be called separately because it's now more than just a debugging thing.

(WTF::initializeGCThreads):

Location: trunk/Source
Files: 3 added, 2 deleted, 58 edited

  • trunk/Source/JavaScriptCore/API/JSBase.cpp

    r207653 r208306  
    166166    ExecState* exec = toJS(ctx);
    167167    JSLockHolder locker(exec);
    168     exec->vm().heap.collect(CollectionScope::Eden);
     168    exec->vm().heap.collectSync(CollectionScope::Eden);
    169169}
    170170
  • trunk/Source/JavaScriptCore/API/JSManagedValue.mm

    r208227 r208306  
    11/*
    2  * Copyright (C) 2013 Apple Inc. All rights reserved.
     2 * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
  • trunk/Source/JavaScriptCore/CMakeLists.txt

    r208238 r208306  
    487487    heap/MutatorState.cpp
    488488    heap/SlotVisitor.cpp
     489    heap/StopIfNecessaryTimer.cpp
    489490    heap/Weak.cpp
    490491    heap/WeakBlock.cpp
  • trunk/Source/JavaScriptCore/ChangeLog

    r208299 r208306  
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r208238 r208306  
    506506                0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; };
    507507                0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */; };
     508                0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */; };
     509                0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */; };
     510                0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */; };
    508511                0F7CF9561DC1258D0098CC12 /* AtomicsObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */; };
    509512                0F7CF9571DC125900098CC12 /* AtomicsObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */; };
     
    28652868                0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedBlockInlines.h; sourceTree = "<group>"; };
    28662869                0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
     2870                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ReleaseHeapAccessScope.h; sourceTree = "<group>"; };
     2871                0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StopIfNecessaryTimer.cpp; sourceTree = "<group>"; };
     2872                0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StopIfNecessaryTimer.h; sourceTree = "<group>"; };
    28672873                0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AtomicsObject.cpp; sourceTree = "<group>"; };
    28682874                0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AtomicsObject.h; sourceTree = "<group>"; };
     
    56205626                                0FA762031DB9242300B7A2FD /* MutatorState.h */,
    56215627                                ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
     5628                                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
    56225629                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
    56235630                                14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
    56245631                                0FCB408515C0A3C30048932B /* SlotVisitorInlines.h */,
     5632                                0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */,
     5633                                0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */,
    56255634                                142E3132134FF0A600AFADB5 /* Strong.h */,
    56265635                                145722851437E140005FDE26 /* StrongInlines.h */,
     
    78337842                                E3FFC8531DAD7D1500DEA53E /* DOMJITValue.h in Headers */,
    78347843                                0F9D36951AE9CC33000D4DFB /* DFGCleanUpPhase.h in Headers */,
     7844                                0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */,
    78357845                                A77A424017A0BBFD00A8DB81 /* DFGClobberize.h in Headers */,
    78367846                                0F37308D1C0BD29100052BFA /* B3PhiChildren.h in Headers */,
     
    86728682                                BC18C4640E16F5CD00B34460 /* SourceCode.h in Headers */,
    86738683                                0F7C39FD1C8F659500480151 /* RegExpObjectInlines.h in Headers */,
     8684                                0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */,
    86748685                                BC18C4630E16F5CD00B34460 /* SourceProvider.h in Headers */,
    86758686                                E49DC16C12EF294E00184A1F /* SourceProviderCache.h in Headers */,
     
    99079918                                7C184E2217BEE240007CB63A /* JSPromiseConstructor.cpp in Sources */,
    99089919                                7C008CDA187124BB00955C24 /* JSPromiseDeferred.cpp in Sources */,
     9920                                0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */,
    99099921                                7C184E1E17BEE22E007CB63A /* JSPromisePrototype.cpp in Sources */,
    99109922                                2A05ABD51961DF2400341750 /* JSPropertyNameEnumerator.cpp in Sources */,
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r208235 r208306  
    26082608    if (m_jitCode)
    26092609        visitor.reportExtraMemoryVisited(m_jitCode->size());
    2610     if (m_instructions.size())
    2611         visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
     2610    if (m_instructions.size()) {
     2611        unsigned refCount = m_instructions.refCount();
     2612        RELEASE_ASSERT(refCount);
     2613        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
     2614    }
    26122615
    26132616    stronglyVisitStrongReferences(visitor);
  • trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp

    r204393 r208306  
    11/*
    2  * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    102102    plan->callback = callback;
    103103    if (Options::useConcurrentJIT()) {
    104         Worklist* worklist = ensureGlobalWorklistFor(mode);
     104        Worklist& worklist = ensureGlobalWorklistFor(mode);
    105105        if (logCompilationChanges(mode))
    106             dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist->queueLength(), ".\n");
    107         worklist->enqueue(plan);
     106            dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist.queueLength(), ".\n");
     107        worklist.enqueue(plan);
    108108        return CompilationDeferred;
    109109    }
  • trunk/Source/JavaScriptCore/dfg/DFGWorklist.cpp

    r207653 r208306  
    3434#include "DeferGC.h"
    3535#include "JSCInlines.h"
     36#include "ReleaseHeapAccessScope.h"
    3637#include <mutex>
    3738
     
    105106       
    106107        // There's no way for the GC to be safepointing since we own rightToRun.
    107         RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
     108        if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
     109            dataLog("Heap is stopped but here we are! (1)\n");
     110            RELEASE_ASSERT_NOT_REACHED();
     111        }
    108112        m_plan->compileInThread(*m_longLivedState, &m_data);
    109         RELEASE_ASSERT(m_plan->stage == Plan::Cancelled || m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
     113        if (m_plan->stage != Plan::Cancelled) {
     114            if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
     115                dataLog("Heap is stopped but here we are! (2)\n");
     116                RELEASE_ASSERT_NOT_REACHED();
     117            }
     118        }
    110119       
    111120        {
     
    125134            m_worklist.m_planCompiled.notifyAll();
    126135        }
    127         RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
     136        RELEASE_ASSERT(!m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped());
    128137       
    129138        return WorkResult::Continue;
     
    239248{
    240249    DeferGC deferGC(vm.heap);
     250   
     251    // While we are waiting for the compiler to finish, the collector might have already suspended
     252    // the compiler and then it will be waiting for us to stop. That's a deadlock. We avoid that
     253    // deadlock by relinquishing our heap access, so that the collector pretends that we are stopped
     254    // even if we aren't.
     255    ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
     256   
    241257    // Wait for all of the plans for the given VM to complete. The idea here
    242258    // is that we want all of the caller VM's plans to be done. We don't care
     
    484500static Worklist* theGlobalDFGWorklist;
    485501
    486 Worklist* ensureGlobalDFGWorklist()
     502Worklist& ensureGlobalDFGWorklist()
    487503{
    488504    static std::once_flag initializeGlobalWorklistOnceFlag;
     
    490506        theGlobalDFGWorklist = &Worklist::create("DFG Worklist", Options::numberOfDFGCompilerThreads(), Options::priorityDeltaOfDFGCompilerThreads()).leakRef();
    491507    });
     508    return *theGlobalDFGWorklist;
     509}
     510
     511Worklist* existingGlobalDFGWorklistOrNull()
     512{
    492513    return theGlobalDFGWorklist;
    493514}
    494515
    495 Worklist* existingGlobalDFGWorklistOrNull()
    496 {
    497     return theGlobalDFGWorklist;
    498 }
    499 
    500516static Worklist* theGlobalFTLWorklist;
    501517
    502 Worklist* ensureGlobalFTLWorklist()
     518Worklist& ensureGlobalFTLWorklist()
    503519{
    504520    static std::once_flag initializeGlobalWorklistOnceFlag;
     
    506522        theGlobalFTLWorklist = &Worklist::create("FTL Worklist", Options::numberOfFTLCompilerThreads(), Options::priorityDeltaOfFTLCompilerThreads()).leakRef();
    507523    });
     524    return *theGlobalFTLWorklist;
     525}
     526
     527Worklist* existingGlobalFTLWorklistOrNull()
     528{
    508529    return theGlobalFTLWorklist;
    509530}
    510531
    511 Worklist* existingGlobalFTLWorklistOrNull()
    512 {
    513     return theGlobalFTLWorklist;
    514 }
    515 
    516 Worklist* ensureGlobalWorklistFor(CompilationMode mode)
     532Worklist& ensureGlobalWorklistFor(CompilationMode mode)
    517533{
    518534    switch (mode) {
    519535    case InvalidCompilationMode:
    520536        RELEASE_ASSERT_NOT_REACHED();
    521         return 0;
     537        return ensureGlobalDFGWorklist();
    522538    case DFGMode:
    523539        return ensureGlobalDFGWorklist();
     
    527543    }
    528544    RELEASE_ASSERT_NOT_REACHED();
    529     return 0;
     545    return ensureGlobalDFGWorklist();
    530546}
    531547
     
    533549{
    534550    for (unsigned i = DFG::numberOfWorklists(); i--;) {
    535         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
     551        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
    536552            worklist->completeAllPlansForVM(vm);
    537553    }
     
    541557{
    542558    for (unsigned i = DFG::numberOfWorklists(); i--;) {
    543         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
     559        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
    544560            worklist->rememberCodeBlocks(vm);
    545561    }
  • trunk/Source/JavaScriptCore/dfg/DFGWorklist.h

    r207545 r208306  
    122122
    123123// For DFGMode compilations.
    124 Worklist* ensureGlobalDFGWorklist();
     124Worklist& ensureGlobalDFGWorklist();
    125125Worklist* existingGlobalDFGWorklistOrNull();
    126126
    127127// For FTLMode and FTLForOSREntryMode compilations.
    128 Worklist* ensureGlobalFTLWorklist();
     128Worklist& ensureGlobalFTLWorklist();
    129129Worklist* existingGlobalFTLWorklistOrNull();
    130130
    131 Worklist* ensureGlobalWorklistFor(CompilationMode);
     131Worklist& ensureGlobalWorklistFor(CompilationMode);
    132132
    133133// Simplify doing things for all worklists.
    134134inline unsigned numberOfWorklists() { return 2; }
    135 inline Worklist* worklistForIndexOrNull(unsigned index)
     135inline Worklist& ensureWorklistForIndex(unsigned index)
     136{
     137    switch (index) {
     138    case 0:
     139        return ensureGlobalDFGWorklist();
     140    case 1:
     141        return ensureGlobalFTLWorklist();
     142    default:
     143        RELEASE_ASSERT_NOT_REACHED();
     144        return ensureGlobalDFGWorklist();
     145    }
     146}
     147inline Worklist* existingWorklistForIndexOrNull(unsigned index)
    136148{
    137149    switch (index) {
     
    145157    }
    146158}
     159inline Worklist& existingWorklistForIndex(unsigned index)
     160{
     161    Worklist* result = existingWorklistForIndexOrNull(index);
     162    RELEASE_ASSERT(result);
     163    return *result;
     164}
    147165
    148166void completeAllPlansForVM(VM&);
  • trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp

    r207653 r208306  
    6666    if (safepointResult.didGetCancelled())
    6767        return;
    68     RELEASE_ASSERT(state.graph.m_vm.heap.mutatorState() != MutatorState::HelpingGC);
     68    RELEASE_ASSERT(!state.graph.m_vm.heap.collectorBelievesThatTheWorldIsStopped());
    6969   
    7070    if (state.allocationFailed)
  • trunk/Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp

    r207855 r208306  
    4040void EdenGCActivityCallback::doCollection()
    4141{
    42     m_vm->heap.collect(CollectionScope::Eden);
     42    m_vm->heap.collectAsync(CollectionScope::Eden);
    4343}
    4444
  • trunk/Source/JavaScriptCore/heap/FullGCActivityCallback.cpp

    r207855 r208306  
    5656#endif
    5757
    58     heap.collect(CollectionScope::Full);
     58    heap.collectAsync(CollectionScope::Full);
    5959}
    6060
  • trunk/Source/JavaScriptCore/heap/GCActivityCallback.h

    r207855 r208306  
    11/*
    2  * Copyright (C) 2010 Apple Inc. All rights reserved.
     2 * Copyright (C) 2010, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4141class Heap;
    4242
    43 class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer, public ThreadSafeRefCounted<GCActivityCallback> {
    44     WTF_MAKE_FAST_ALLOCATED;
     43class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer {
    4544public:
    4645    static RefPtr<FullGCActivityCallback> createFullTimer(Heap*);
  • trunk/Source/JavaScriptCore/heap/Heap.cpp

    r208209 r208306  
    5353#include "ShadowChicken.h"
    5454#include "SuperSampler.h"
     55#include "StopIfNecessaryTimer.h"
    5556#include "TypeProfilerLog.h"
    5657#include "UnlinkedCodeBlock.h"
     
    189190
    190191} // anonymous namespace
     192
     193class Heap::Thread : public AutomaticThread {
     194public:
     195    Thread(const LockHolder& locker, Heap& heap)
     196        : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition)
     197        , m_heap(heap)
     198    {
     199    }
     200   
     201protected:
     202    PollResult poll(const LockHolder& locker) override
     203    {
     204        if (m_heap.m_threadShouldStop) {
     205            m_heap.notifyThreadStopping(locker);
     206            return PollResult::Stop;
     207        }
     208        if (m_heap.shouldCollectInThread(locker))
     209            return PollResult::Work;
     210        return PollResult::Wait;
     211    }
     212   
     213    WorkResult work() override
     214    {
     215        m_heap.collectInThread();
     216        return WorkResult::Continue;
     217    }
     218   
     219    void threadDidStart() override
     220    {
     221        WTF::registerGCThread(GCThreadType::Main);
     222    }
     223
     224private:
     225    Heap& m_heap;
     226};
    191227
    192228Heap::Heap(VM* vm, HeapType heapType)
     
    225261    , m_fullActivityCallback(GCActivityCallback::createFullTimer(this))
    226262    , m_edenActivityCallback(GCActivityCallback::createEdenTimer(this))
    227     , m_sweeper(std::make_unique<IncrementalSweeper>(this))
     263    , m_sweeper(adoptRef(new IncrementalSweeper(this)))
     264    , m_stopIfNecessaryTimer(adoptRef(new StopIfNecessaryTimer(vm)))
    228265    , m_deferralDepth(0)
    229266#if USE(FOUNDATION)
     
    231268#endif
    232269    , m_helperClient(&heapHelperPool())
    233 {
     270    , m_threadLock(Box<Lock>::create())
     271    , m_threadCondition(AutomaticThreadCondition::create())
     272{
     273    m_worldState.store(0);
     274   
    234275    if (Options::verifyHeap())
    235276        m_verifier = std::make_unique<HeapVerifier>(this, Options::numberOfGCCyclesToRecordForVerification());
     277   
     278    LockHolder locker(*m_threadLock);
     279    m_thread = adoptRef(new Thread(locker, *this));
    236280}
    237281
     
    252296{
    253297    RELEASE_ASSERT(!m_vm->entryScope);
    254     RELEASE_ASSERT(!m_collectionScope);
    255298    RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
    256 
     299   
     300    // Carefully bring the thread down. We need to use waitForCollector() until we know that there
     301    // won't be any other collections.
     302    bool stopped = false;
     303    {
     304        LockHolder locker(*m_threadLock);
     305        stopped = m_thread->tryStop(locker);
     306        if (!stopped) {
     307            m_threadShouldStop = true;
     308            m_threadCondition->notifyOne(locker);
     309        }
     310    }
     311    if (!stopped) {
     312        waitForCollector(
     313            [&] (const LockHolder&) -> bool {
     314                return m_threadIsStopping;
     315            });
     316        // It's now safe to join the thread, since we know that there will not be any more collections.
     317        m_thread->join();
     318    }
     319   
    257320    m_arrayBuffers.lastChanceToFinalize();
    258321    m_codeBlocks->lastChanceToFinalize();
     
    382445}
    383446
    384 void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
     447void Heap::markRoots(double gcStartTime)
    385448{
    386449    TimingScope markRootsTimingScope(*this, "Heap::markRoots");
    387450   
    388     ASSERT(isValidThreadState(m_vm));
    389 
    390451    HeapRootVisitor heapRootVisitor(m_slotVisitor);
    391452   
     
    460521            ConservativeRoots conservativeRoots(*this);
    461522            SuperSamplerScope superSamplerScope(false);
    462             gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
     523            gatherStackRoots(conservativeRoots);
    463524            gatherJSStackRoots(conservativeRoots);
    464525            gatherScratchBufferRoots(conservativeRoots);
     
    500561}
    501562
    502 void Heap::gatherStackRoots(ConservativeRoots& roots, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
     563void Heap::gatherStackRoots(ConservativeRoots& roots)
    503564{
    504565    m_jitStubRoutines->clearMarks();
    505     m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
     566    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
    506567}
    507568
     
    567628{
    568629#if ENABLE(DFG_JIT)
    569     for (auto worklist : m_suspendedCompilerWorklists)
    570         worklist->visitWeakReferences(m_slotVisitor);
     630    for (unsigned i = DFG::numberOfWorklists(); i--;)
     631        DFG::existingWorklistForIndex(i).visitWeakReferences(m_slotVisitor);
    571632
    572633    if (Options::logGC() == GCLogging::Verbose)
     
    578639{
    579640#if ENABLE(DFG_JIT)
    580     for (auto worklist : m_suspendedCompilerWorklists)
    581         worklist->removeDeadPlans(*m_vm);
     641    for (unsigned i = DFG::numberOfWorklists(); i--;)
     642        DFG::existingWorklistForIndex(i).removeDeadPlans(*m_vm);
    582643#endif
    583644}
     
    909970{
    910971    clearUnmarkedExecutables();
    911     m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_collectionScope);
     972    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_lastCollectionScope);
    912973    m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
    913974}
     
    929990void Heap::collectAllGarbage()
    930991{
    931     SuperSamplerScope superSamplerScope(false);
    932992    if (!m_isSafeToCollect)
    933993        return;
    934 
    935     collectWithoutAnySweep(CollectionScope::Full);
     994   
     995    collectSync(CollectionScope::Full);
    936996
    937997    DeferGCForAWhile deferGC(*this);
     
    9561016}
    9571017
    958 void Heap::collect(Optional<CollectionScope> scope)
    959 {
    960     SuperSamplerScope superSamplerScope(false);
     1018void Heap::collectAsync(Optional<CollectionScope> scope)
     1019{
    9611020    if (!m_isSafeToCollect)
    9621021        return;
    963    
    964     collectWithoutAnySweep(scope);
    965 }
    966 
    967 NEVER_INLINE void Heap::collectWithoutAnySweep(Optional<CollectionScope> scope)
    968 {
    969     void* stackTop;
    970     ALLOCATE_AND_GET_REGISTER_STATE(registers);
    971 
    972     collectImpl(scope, wtfThreadData().stack().origin(), &stackTop, registers);
    973 
    974     sanitizeStackForVM(m_vm);
    975 }
    976 
    977 NEVER_INLINE void Heap::collectImpl(Optional<CollectionScope> scope, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
    978 {
     1022
     1023    bool alreadyRequested = false;
     1024    {
     1025        LockHolder locker(*m_threadLock);
     1026        for (Optional<CollectionScope> request : m_requests) {
     1027            if (scope) {
     1028                if (scope == CollectionScope::Eden) {
     1029                    alreadyRequested = true;
     1030                    break;
     1031                } else {
     1032                    RELEASE_ASSERT(scope == CollectionScope::Full);
     1033                    if (request == CollectionScope::Full) {
     1034                        alreadyRequested = true;
     1035                        break;
     1036                    }
     1037                }
     1038            } else {
     1039                if (!request || request == CollectionScope::Full) {
     1040                    alreadyRequested = true;
     1041                    break;
     1042                }
     1043            }
     1044        }
     1045    }
     1046    if (alreadyRequested)
     1047        return;
     1048
     1049    requestCollection(scope);
     1050}
     1051
     1052void Heap::collectSync(Optional<CollectionScope> scope)
     1053{
     1054    if (!m_isSafeToCollect)
     1055        return;
     1056   
     1057    waitForCollection(requestCollection(scope));
     1058}
     1059
     1060bool Heap::shouldCollectInThread(const LockHolder&)
     1061{
     1062    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
     1063    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
     1064   
     1065    return !m_requests.isEmpty();
     1066}
     1067
     1068void Heap::collectInThread()
     1069{
     1070    Optional<CollectionScope> scope;
     1071    {
     1072        LockHolder locker(*m_threadLock);
     1073        RELEASE_ASSERT(!m_requests.isEmpty());
     1074        scope = m_requests.first();
     1075    }
     1076   
    9791077    SuperSamplerScope superSamplerScope(false);
    980     TimingScope collectImplTimingScope(scope, "Heap::collectImpl");
     1078    TimingScope collectImplTimingScope(scope, "Heap::collectInThread");
    9811079   
    9821080#if ENABLE(ALLOCATION_LOGGING)
     
    9841082#endif
    9851083   
     1084    stopTheWorld();
     1085
    9861086    double before = 0;
    9871087    if (Options::logGC()) {
     
    10001100    {
    10011101        DeferGCForAWhile awhile(*this);
    1002         JITWorklist::instance()->completeAllForVM(*m_vm);
     1102        if (JITWorklist::instance()->completeAllForVM(*m_vm))
     1103            setGCDidJIT();
    10031104    }
    10041105#endif // ENABLE(JIT)
     
    10061107    vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
    10071108   
    1008     RELEASE_ASSERT(!m_deferralDepth);
    1009     ASSERT(vm()->currentThreadIsHoldingAPILock());
    1010     RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
    10111109    ASSERT(m_isSafeToCollect);
    1012     RELEASE_ASSERT(!m_collectionScope);
    1013    
    1014     suspendCompilerThreads();
     1110    if (m_collectionScope) {
     1111        dataLog("Collection scope already set during GC: ", m_collectionScope, "\n");
     1112        RELEASE_ASSERT_NOT_REACHED();
     1113    }
     1114   
    10151115    willStartCollection(scope);
    1016     {
    1017         HelpingGCScope helpingHeapScope(*this);
    1018        
    1019         collectImplTimingScope.setScope(*this);
    1020        
    1021         gcStartTime = WTF::monotonicallyIncreasingTime();
    1022         if (m_verifier) {
    1023             // Verify that live objects from the last GC cycle haven't been corrupted by
    1024             // mutators before we begin this new GC cycle.
    1025             m_verifier->verify(HeapVerifier::Phase::BeforeGC);
     1116    collectImplTimingScope.setScope(*this);
     1117       
     1118    gcStartTime = WTF::monotonicallyIncreasingTime();
     1119    if (m_verifier) {
     1120        // Verify that live objects from the last GC cycle haven't been corrupted by
     1121        // mutators before we begin this new GC cycle.
     1122        m_verifier->verify(HeapVerifier::Phase::BeforeGC);
    10261123           
    1027             m_verifier->initializeGCCycle();
    1028             m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
    1029         }
    1030        
    1031         flushOldStructureIDTables();
    1032         stopAllocation();
    1033         prepareForMarking();
    1034         flushWriteBarrierBuffer();
    1035        
    1036         if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
    1037             cache->clear();
    1038        
    1039         markRoots(gcStartTime, stackOrigin, stackTop, calleeSavedRegisters);
    1040        
    1041         if (m_verifier) {
    1042             m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
    1043             m_verifier->verify(HeapVerifier::Phase::AfterMarking);
    1044         }
    1045        
    1046         if (vm()->typeProfiler())
    1047             vm()->typeProfiler()->invalidateTypeSetCache();
    1048        
    1049         reapWeakHandles();
    1050         pruneStaleEntriesFromWeakGCMaps();
    1051         sweepArrayBuffers();
    1052         snapshotUnswept();
    1053         finalizeUnconditionalFinalizers();
    1054         removeDeadCompilerWorklistEntries();
    1055         deleteUnmarkedCompiledCode();
    1056         deleteSourceProviderCaches();
    1057        
    1058         notifyIncrementalSweeper();
    1059         m_codeBlocks->writeBarrierCurrentlyExecuting(this);
    1060         m_codeBlocks->clearCurrentlyExecuting();
    1061        
    1062         prepareForAllocation();
    1063         updateAllocationLimits();
    1064     }
     1124        m_verifier->initializeGCCycle();
     1125        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
     1126    }
     1127       
     1128    flushOldStructureIDTables();
     1129    stopAllocation();
     1130    prepareForMarking();
     1131    flushWriteBarrierBuffer();
     1132       
     1133    if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
     1134        cache->clear();
     1135       
     1136    markRoots(gcStartTime);
     1137       
     1138    if (m_verifier) {
     1139        m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
     1140        m_verifier->verify(HeapVerifier::Phase::AfterMarking);
     1141    }
     1142       
     1143    if (vm()->typeProfiler())
     1144        vm()->typeProfiler()->invalidateTypeSetCache();
     1145       
     1146    reapWeakHandles();
     1147    pruneStaleEntriesFromWeakGCMaps();
     1148    sweepArrayBuffers();
     1149    snapshotUnswept();
     1150    finalizeUnconditionalFinalizers();
     1151    removeDeadCompilerWorklistEntries();
     1152    notifyIncrementalSweeper();
     1153       
     1154    m_codeBlocks->writeBarrierCurrentlyExecuting(this);
     1155    m_codeBlocks->clearCurrentlyExecuting();
     1156       
     1157    prepareForAllocation();
     1158    updateAllocationLimits();
     1159
    10651160    didFinishCollection(gcStartTime);
    1066     resumeCompilerThreads();
    1067     sweepLargeAllocations();
    10681161   
    10691162    if (m_verifier) {
     
    10721165    }
    10731166
     1167    if (false) {
     1168        dataLog("Heap state after GC:\n");
     1169        m_objectSpace.dumpBits();
     1170    }
     1171   
    10741172    if (Options::logGC()) {
    10751173        double after = currentTimeMS();
     
    10771175    }
    10781176   
    1079     if (false) {
    1080         dataLog("Heap state after GC:\n");
    1081         m_objectSpace.dumpBits();
    1082     }
     1177    {
     1178        LockHolder locker(*m_threadLock);
     1179        m_requests.removeFirst();
     1180        m_lastServedTicket++;
     1181        clearMutatorWaiting();
     1182    }
     1183    ParkingLot::unparkAll(&m_worldState);
     1184
     1185    setNeedFinalize();
     1186    resumeTheWorld();
     1187}
     1188
     1189void Heap::stopTheWorld()
     1190{
     1191    RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
     1192    waitWhileNeedFinalize();
     1193    stopTheMutator();
     1194    suspendCompilerThreads();
     1195    m_collectorBelievesThatTheWorldIsStopped = true;
     1196}
     1197
     1198void Heap::resumeTheWorld()
     1199{
     1200    RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped);
     1201    m_collectorBelievesThatTheWorldIsStopped = false;
     1202    resumeCompilerThreads();
     1203    resumeTheMutator();
     1204}
     1205
     1206void Heap::stopTheMutator()
     1207{
     1208    for (;;) {
     1209        unsigned oldState = m_worldState.load();
     1210        if ((oldState & stoppedBit)
     1211            && (oldState & shouldStopBit))
     1212            return;
     1213       
     1214        // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could
     1215        // switch to that if it turned out to be less confusing, but then it would not give the
     1216        // mutator the opportunity to react to the world being stopped.
     1217        if (oldState & mutatorWaitingBit) {
     1218            if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
     1219                ParkingLot::unparkAll(&m_worldState);
     1220            continue;
     1221        }
     1222       
     1223        if (!(oldState & hasAccessBit)
     1224            || (oldState & stoppedBit)) {
     1225            // We can stop the world instantly.
     1226            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit))
     1227                return;
     1228            continue;
     1229        }
     1230       
     1231        RELEASE_ASSERT(oldState & hasAccessBit);
     1232        RELEASE_ASSERT(!(oldState & stoppedBit));
     1233        m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit);
     1234        m_stopIfNecessaryTimer->scheduleSoon();
     1235        ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit);
     1236    }
     1237}
     1238
     1239void Heap::resumeTheMutator()
     1240{
     1241    for (;;) {
     1242        unsigned oldState = m_worldState.load();
     1243        RELEASE_ASSERT(oldState & shouldStopBit);
     1244       
     1245        if (!(oldState & hasAccessBit)) {
     1246            // We can resume the world instantly.
     1247            if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) {
     1248                ParkingLot::unparkAll(&m_worldState);
     1249                return;
     1250            }
     1251            continue;
     1252        }
     1253       
     1254        // We can tell the world to resume.
     1255        if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) {
     1256            ParkingLot::unparkAll(&m_worldState);
     1257            return;
     1258        }
     1259    }
     1260}
     1261
     1262void Heap::stopIfNecessarySlow()
     1263{
     1264    while (stopIfNecessarySlow(m_worldState.load())) { }
     1265    handleGCDidJIT();
     1266}
     1267
     1268bool Heap::stopIfNecessarySlow(unsigned oldState)
     1269{
     1270    RELEASE_ASSERT(oldState & hasAccessBit);
     1271   
     1272    if (handleNeedFinalize(oldState))
     1273        return true;
     1274   
     1275    if (!(oldState & shouldStopBit)) {
     1276        if (!(oldState & stoppedBit))
     1277            return false;
     1278        m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit);
     1279        return true;
     1280    }
     1281   
     1282    m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit);
     1283    ParkingLot::unparkAll(&m_worldState);
     1284    ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit);
     1285    return true;
     1286}
     1287
     1288template<typename Func>
     1289void Heap::waitForCollector(const Func& func)
     1290{
     1291    for (;;) {
     1292        bool done;
     1293        {
     1294            LockHolder locker(*m_threadLock);
     1295            done = func(locker);
     1296            if (!done) {
     1297                setMutatorWaiting();
     1298                // At this point, the collector knows that we intend to wait, and he will clear the
     1299                // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit
     1300                // prevents us from parking except if there is also stop-the-world. Unparking after
     1301                // clearing means that if the clearing happens after we park, then we will unpark.
     1302            }
     1303        }
     1304
     1305        // If we're in a stop-the-world scenario, we need to wait for that even if done is true.
     1306        unsigned oldState = m_worldState.load();
     1307        if (stopIfNecessarySlow(oldState))
     1308            continue;
     1309       
     1310        if (done) {
     1311            clearMutatorWaiting(); // Clean up just in case.
     1312            return;
     1313        }
     1314       
     1315        // If mutatorWaitingBit is still set then we want to wait.
     1316        ParkingLot::compareAndPark(&m_worldState, oldState | mutatorWaitingBit);
     1317    }
     1318}
     1319
     1320void Heap::acquireAccessSlow()
     1321{
     1322    for (;;) {
     1323        unsigned oldState = m_worldState.load();
     1324        RELEASE_ASSERT(!(oldState & hasAccessBit));
     1325       
     1326        if (oldState & shouldStopBit) {
     1327            RELEASE_ASSERT(oldState & stoppedBit);
     1328            // Wait until we're not stopped anymore.
     1329            ParkingLot::compareAndPark(&m_worldState, oldState);
     1330            continue;
     1331        }
     1332       
     1333        RELEASE_ASSERT(!(oldState & stoppedBit));
     1334        unsigned newState = oldState | hasAccessBit;
     1335        if (m_worldState.compareExchangeWeak(oldState, newState)) {
     1336            handleGCDidJIT();
     1337            handleNeedFinalize();
     1338            return;
     1339        }
     1340    }
     1341}
     1342
     1343void Heap::releaseAccessSlow()
     1344{
     1345    for (;;) {
     1346        unsigned oldState = m_worldState.load();
     1347        RELEASE_ASSERT(oldState & hasAccessBit);
     1348        RELEASE_ASSERT(!(oldState & stoppedBit));
     1349       
     1350        if (handleNeedFinalize(oldState))
     1351            continue;
     1352       
     1353        if (oldState & shouldStopBit) {
     1354            unsigned newState = (oldState & ~hasAccessBit) | stoppedBit;
     1355            if (m_worldState.compareExchangeWeak(oldState, newState)) {
     1356                ParkingLot::unparkAll(&m_worldState);
     1357                return;
     1358            }
     1359            continue;
     1360        }
     1361       
     1362        RELEASE_ASSERT(!(oldState & shouldStopBit));
     1363       
     1364        if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit))
     1365            return;
     1366    }
     1367}
     1368
     1369bool Heap::handleGCDidJIT(unsigned oldState)
     1370{
     1371    RELEASE_ASSERT(oldState & hasAccessBit);
     1372    if (!(oldState & gcDidJITBit))
     1373        return false;
     1374    if (m_worldState.compareExchangeWeak(oldState, oldState & ~gcDidJITBit)) {
     1375        WTF::crossModifyingCodeFence();
     1376        return true;
     1377    }
     1378    return true;
     1379}
     1380
     1381bool Heap::handleNeedFinalize(unsigned oldState)
     1382{
     1383    RELEASE_ASSERT(oldState & hasAccessBit);
     1384    if (!(oldState & needFinalizeBit))
     1385        return false;
     1386    if (m_worldState.compareExchangeWeak(oldState, oldState & ~needFinalizeBit)) {
     1387        finalize();
     1388        // Wake up anyone waiting for us to finalize. Note that they may have woken up already, in
     1389        // which case they would be waiting for us to release heap access.
     1390        ParkingLot::unparkAll(&m_worldState);
     1391        return true;
     1392    }
     1393    return true;
     1394}
     1395
     1396void Heap::handleGCDidJIT()
     1397{
     1398    while (handleGCDidJIT(m_worldState.load())) { }
     1399}
     1400
     1401void Heap::handleNeedFinalize()
     1402{
     1403    while (handleNeedFinalize(m_worldState.load())) { }
     1404}
     1405
     1406void Heap::setGCDidJIT()
     1407{
     1408    for (;;) {
     1409        unsigned oldState = m_worldState.load();
     1410        RELEASE_ASSERT(oldState & stoppedBit);
     1411        if (m_worldState.compareExchangeWeak(oldState, oldState | gcDidJITBit))
     1412            return;
     1413    }
     1414}
     1415
     1416void Heap::setNeedFinalize()
     1417{
     1418    for (;;) {
     1419        unsigned oldState = m_worldState.load();
     1420        if (m_worldState.compareExchangeWeak(oldState, oldState | needFinalizeBit)) {
     1421            m_stopIfNecessaryTimer->scheduleSoon();
     1422            return;
     1423        }
     1424    }
     1425}
     1426
     1427void Heap::waitWhileNeedFinalize()
     1428{
     1429    for (;;) {
     1430        unsigned oldState = m_worldState.load();
     1431        if (!(oldState & needFinalizeBit)) {
     1432            // This means that either there was no finalize request or the main thread will finalize
     1433            // with heap access, so a subsequent call to stopTheWorld() will return only when
     1434            // finalize finishes.
     1435            return;
     1436        }
     1437        ParkingLot::compareAndPark(&m_worldState, oldState);
     1438    }
     1439}
     1440
     1441unsigned Heap::setMutatorWaiting()
     1442{
     1443    for (;;) {
     1444        unsigned oldState = m_worldState.load();
     1445        unsigned newState = oldState | mutatorWaitingBit;
     1446        if (m_worldState.compareExchangeWeak(oldState, newState))
     1447            return newState;
     1448    }
     1449}
     1450
     1451void Heap::clearMutatorWaiting()
     1452{
     1453    for (;;) {
     1454        unsigned oldState = m_worldState.load();
     1455        if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
     1456            return;
     1457    }
     1458}
     1459
     1460void Heap::notifyThreadStopping(const LockHolder&)
     1461{
     1462    m_threadIsStopping = true;
     1463    clearMutatorWaiting();
     1464    ParkingLot::unparkAll(&m_worldState);
     1465}
     1466
     1467void Heap::finalize()
     1468{
     1469    HelpingGCScope helpingGCScope(*this);
     1470    deleteUnmarkedCompiledCode();
     1471    deleteSourceProviderCaches();
     1472    sweepLargeAllocations();
     1473}
     1474
     1475Heap::Ticket Heap::requestCollection(Optional<CollectionScope> scope)
     1476{
     1477    stopIfNecessary();
     1478   
     1479    ASSERT(vm()->currentThreadIsHoldingAPILock());
     1480    RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
     1481   
     1482    sanitizeStackForVM(m_vm);
     1483
     1484    LockHolder locker(*m_threadLock);
     1485    m_requests.append(scope);
     1486    m_lastGrantedTicket++;
     1487    m_threadCondition->notifyOne(locker);
     1488    return m_lastGrantedTicket;
     1489}
     1490
     1491void Heap::waitForCollection(Ticket ticket)
     1492{
     1493    waitForCollector(
     1494        [&] (const LockHolder&) -> bool {
     1495            return m_lastServedTicket >= ticket;
     1496        });
    10831497}
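
The ticket discipline behind requestCollection() and waitForCollection() stands on its own. A minimal standalone sketch, with std::mutex/std::condition_variable in place of the AutomaticThread machinery and all names hypothetical:

    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    class TicketScheduler {
    public:
        using Ticket = uint64_t;

        Ticket request() // like Heap::requestCollection(): grant a ticket
        {
            std::lock_guard<std::mutex> locker(m_lock);
            return ++m_lastGranted;
        }

        void serveOne() // the collector calls this as each cycle completes
        {
            {
                std::lock_guard<std::mutex> locker(m_lock);
                ++m_lastServed;
            }
            m_condition.notify_all();
        }

        void waitFor(Ticket ticket) // like Heap::waitForCollection()
        {
            std::unique_lock<std::mutex> locker(m_lock);
            m_condition.wait(locker, [&] { return m_lastServed >= ticket; });
        }

    private:
        std::mutex m_lock;
        std::condition_variable m_condition;
        Ticket m_lastGranted { 0 };
        Ticket m_lastServed { 0 };
    };

A caller that requests a ticket while a cycle is in flight waits for that cycle (which bumps lastServed past older tickets) and then for its own.
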
    10841498
     
    10911505{
    10921506#if ENABLE(DFG_JIT)
    1093     ASSERT(m_suspendedCompilerWorklists.isEmpty());
    1094     for (unsigned i = DFG::numberOfWorklists(); i--;) {
    1095         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
    1096             m_suspendedCompilerWorklists.append(worklist);
    1097             worklist->suspendAllThreads();
    1098         }
    1099     }
      1507    // We ensure the worklists exist so that it's not possible for the mutator to start a new
      1508    // worklist after we have suspended the ones it had started before. That's not very expensive since
     1509    // the worklists use AutomaticThreads anyway.
     1510    for (unsigned i = DFG::numberOfWorklists(); i--;)
     1511        DFG::ensureWorklistForIndex(i).suspendAllThreads();
    11001512#endif
    11011513}
     
    13211733
    13221734    RELEASE_ASSERT(m_collectionScope);
     1735    m_lastCollectionScope = m_collectionScope;
    13231736    m_collectionScope = Nullopt;
    13241737
     
    13301743{
    13311744#if ENABLE(DFG_JIT)
    1332     for (auto worklist : m_suspendedCompilerWorklists)
    1333         worklist->resumeAllThreads();
    1334     m_suspendedCompilerWorklists.clear();
     1745    for (unsigned i = DFG::numberOfWorklists(); i--;)
     1746        DFG::existingWorklistForIndex(i).resumeAllThreads();
    13351747#endif
    13361748}
    13371749
    1338 void Heap::setFullActivityCallback(PassRefPtr<FullGCActivityCallback> activityCallback)
    1339 {
    1340     m_fullActivityCallback = activityCallback;
    1341 }
    1342 
    1343 void Heap::setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback> activityCallback)
    1344 {
    1345     m_edenActivityCallback = activityCallback;
    1346 }
    1347 
    13481750GCActivityCallback* Heap::fullActivityCallback()
    13491751{
     
    13541756{
    13551757    return m_edenActivityCallback.get();
    1356 }
    1357 
    1358 void Heap::setIncrementalSweeper(std::unique_ptr<IncrementalSweeper> sweeper)
    1359 {
    1360     m_sweeper = WTFMove(sweeper);
    13611758}
    13621759
     
    15471944}
    15481945
    1549 bool Heap::shouldCollect()
     1946bool Heap::canCollect()
    15501947{
    15511948    if (isDeferred())
     
    15551952    if (collectionScope() || mutatorState() == MutatorState::HelpingGC)
    15561953        return false;
     1954    return true;
     1955}
     1956
     1957bool Heap::shouldCollectHeuristic()
     1958{
    15571959    if (Options::gcMaxHeapSize())
    15581960        return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize();
    15591961    return m_bytesAllocatedThisCycle > m_maxEdenSize;
     1962}
     1963
     1964bool Heap::shouldCollect()
     1965{
     1966    return canCollect() && shouldCollectHeuristic();
    15601967}
    15611968
     
    15992006bool Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
    16002007{
    1601     if (!shouldCollect())
     2008    if (!canCollect())
     2009        return false;
     2010   
     2011    if (deferralContext) {
     2012        deferralContext->m_shouldGC |=
     2013            !!(m_worldState.load() & (shouldStopBit | needFinalizeBit | gcDidJITBit));
     2014    } else
     2015        stopIfNecessary();
     2016   
     2017    if (!shouldCollectHeuristic())
    16022018        return false;
    16032019
     
    16052021        deferralContext->m_shouldGC = true;
    16062022    else
    1607         collect();
     2023        collectAsync();
    16082024    return true;
    16092025}
     
    16152031
    16162032    if (randomNumber() < Options::deferGCProbability()) {
    1617         collect();
     2033        collectAsync();
    16182034        return;
    16192035    }
     
    16612077}
    16622078
     2079#if USE(CF)
     2080void Heap::setRunLoop(CFRunLoopRef runLoop)
     2081{
     2082    m_runLoop = runLoop;
     2083    m_fullActivityCallback->setRunLoop(runLoop);
     2084    m_edenActivityCallback->setRunLoop(runLoop);
     2085    m_sweeper->setRunLoop(runLoop);
     2086}
     2087#endif // USE(CF)
     2088
    16632089} // namespace JSC
  • trunk/Source/JavaScriptCore/heap/Heap.h

    r207855 r208306  
    4444#include "WriteBarrierBuffer.h"
    4545#include "WriteBarrierSupport.h"
     46#include <wtf/AutomaticThread.h>
     47#include <wtf/Deque.h>
    4648#include <wtf/HashCountedSet.h>
    4749#include <wtf/HashSet.h>
     
    7072class LLIntOffsetsExtractor;
    7173class MarkedArgumentBuffer;
     74class StopIfNecessaryTimer;
    7275class VM;
    7376
     
    131134    JS_EXPORT_PRIVATE GCActivityCallback* fullActivityCallback();
    132135    JS_EXPORT_PRIVATE GCActivityCallback* edenActivityCallback();
    133     JS_EXPORT_PRIVATE void setFullActivityCallback(PassRefPtr<FullGCActivityCallback>);
    134     JS_EXPORT_PRIVATE void setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback>);
    135136    JS_EXPORT_PRIVATE void setGarbageCollectionTimerEnabled(bool);
    136137
    137138    JS_EXPORT_PRIVATE IncrementalSweeper* sweeper();
    138     JS_EXPORT_PRIVATE void setIncrementalSweeper(std::unique_ptr<IncrementalSweeper>);
    139139
    140140    void addObserver(HeapObserver* observer) { m_observers.append(observer); }
     
    143143    MutatorState mutatorState() const { return m_mutatorState; }
    144144    Optional<CollectionScope> collectionScope() const { return m_collectionScope; }
     145    bool hasHeapAccess() const;
     146    bool mutatorIsStopped() const;
     147    bool collectorBelievesThatTheWorldIsStopped() const;
    145148
    146149    // We're always busy on the collection threads. On the main thread, this returns true if we're
     
    174177    JS_EXPORT_PRIVATE void collectAllGarbage();
    175178
     179    bool canCollect();
     180    bool shouldCollectHeuristic();
    176181    bool shouldCollect();
    177     JS_EXPORT_PRIVATE void collect(Optional<CollectionScope> = Nullopt);
     182   
     183    // Queue up a collection. Returns immediately. This will not queue a collection if a collection
     184    // of equal or greater strength exists. Full collections are stronger than Nullopt collections
     185    // and Nullopt collections are stronger than Eden collections. Nullopt means that the GC can
     186    // choose Eden or Full. This implies that if you request a GC while that GC is ongoing, nothing
     187    // will happen.
     188    JS_EXPORT_PRIVATE void collectAsync(Optional<CollectionScope> = Nullopt);
     189   
     190    // Queue up a collection and wait for it to complete. This won't return until you get your own
     191    // complete collection. For example, if there was an ongoing asynchronous collection at the time
     192    // you called this, then this would wait for that one to complete and then trigger your
     193    // collection and then return. In weird cases, there could be multiple GC requests in the backlog
     194    // and this will wait for that backlog before running its GC and returning.
     195    JS_EXPORT_PRIVATE void collectSync(Optional<CollectionScope> = Nullopt);
     196   
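
A brief usage sketch of the two entry points declared above (the call sites this patch adds are debugging hooks, such as the ones in jsc.cpp further down):

    vm.heap.collectAsync(Nullopt);              // schedule and return immediately; a no-op if an
                                                // equal-or-stronger collection is already queued or running
    vm.heap.collectSync(CollectionScope::Full); // wait out any backlog, then wait for our own full collection
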
    178197    bool collectIfNecessaryOrDefer(GCDeferralContext* = nullptr); // Returns true if it did collect.
    179198    void collectAccordingToDeferGCProbability();
     
    271290    const unsigned* addressOfBarrierThreshold() const { return &m_barrierThreshold; }
    272291
     292    // If true, the GC believes that the mutator is currently messing with the heap. We call this
     293    // "having heap access". The GC may block if the mutator is in this state. If false, the GC may
     294    // currently be doing things to the heap that make the heap unsafe to access for the mutator.
     295    bool hasAccess() const;
     296   
     297    // If the mutator does not currently have heap access, this function will acquire it. If the GC
     298    // is currently using the lack of heap access to do dangerous things to the heap then this
     299    // function will block, waiting for the GC to finish. It's not valid to call this if the mutator
     300    // already has heap access. The mutator is required to precisely track whether or not it has
     301    // heap access.
     302    //
     303    // It's totally fine to acquireAccess() upon VM instantiation and keep it that way. This is how
     304    // WebCore uses us. For most other clients, JSLock does acquireAccess()/releaseAccess() for you.
     305    void acquireAccess();
     306   
     307    // Releases heap access. If the GC is blocking waiting to do bad things to the heap, it will be
     308    // allowed to run now.
     309    //
     310    // Ordinarily, you should use the ReleaseHeapAccessScope to release and then reacquire heap
     311    // access. You should do this anytime you're about do perform a blocking operation, like waiting
     312    // on the ParkingLot.
     313    void releaseAccess();
     314   
     315    // This is like a super optimized way of saying:
     316    //
     317    //     releaseAccess()
     318    //     acquireAccess()
     319    //
     320    // The fast path is an inlined relaxed load and branch. The slow path will block the mutator if
     321    // the GC wants to do bad things to the heap.
     322    //
     323    // All allocations logically call this. As an optimization to improve GC progress, you can call
     324    // this anywhere that you can afford a load-branch and where an object allocation would have been
     325    // safe.
     326    //
     327    // The GC will also push a stopIfNecessary() event onto the runloop of the thread that
     328    // instantiated the VM whenever it wants the mutator to stop. This means that if you never block
     329    // but instead use the runloop to wait for events, then you could safely run in a mode where the
     330    // mutator has permanent heap access (like the DOM does). If you have good event handling
     331    // discipline (i.e. you don't block the runloop) then you can be sure that stopIfNecessary() will
     332    // already be called for you at the right times.
     333    void stopIfNecessary();
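
Putting stopIfNecessary(), acquireAccess(), and releaseAccess() together, here is a hedged sketch of how a client brackets a blocking operation. ReleaseHeapAccessScope is the RAII helper this patch adds (see the AtomicsObject.cpp hunk further down for the real call site); waitOnSomethingSlow() is a hypothetical stand-in for any blocking call:

    void blockingClientSketch(VM& vm)
    {
        vm.heap.stopIfNecessary(); // cheap load-and-branch; safepoint eagerly
        {
            // Drop heap access so the collector can run while we sleep.
            ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
            waitOnSomethingSlow(); // hypothetical blocking call
        }
        // Heap access has been reacquired here; we may have blocked briefly
        // if the GC was in the middle of doing dangerous things to the heap.
    }
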
     334   
    273335#if USE(CF)
    274336    CFRunLoopRef runLoop() const { return m_runLoop.get(); }
     337    JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
    275338#endif // USE(CF)
    276339
     
    297360    friend class VM;
    298361    friend class WeakSet;
     362
     363    class Thread;
     364    friend class Thread;
     365
    299366    template<typename T> friend void* allocateCell(Heap&);
    300367    template<typename T> friend void* allocateCell(Heap&, size_t);
    301368    template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*);
    302369    template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*, size_t);
    303 
    304     void collectWithoutAnySweep(Optional<CollectionScope> = Nullopt);
    305370
    306371    void* allocateWithDestructor(size_t); // For use with objects with destructors.
     
    320385    JS_EXPORT_PRIVATE void reportExtraMemoryAllocatedSlowCase(size_t);
    321386    JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t);
    322 
    323     void collectImpl(Optional<CollectionScope>, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
    324 
     387   
     388    bool shouldCollectInThread(const LockHolder&);
     389    void collectInThread();
     390   
     391    void stopTheWorld();
     392    void resumeTheWorld();
     393   
     394    void stopTheMutator();
     395    void resumeTheMutator();
     396   
     397    void stopIfNecessarySlow();
     398    bool stopIfNecessarySlow(unsigned extraStateBits);
     399   
     400    template<typename Func>
     401    void waitForCollector(const Func&);
     402   
     403    JS_EXPORT_PRIVATE void acquireAccessSlow();
     404    JS_EXPORT_PRIVATE void releaseAccessSlow();
     405   
     406    bool handleGCDidJIT(unsigned);
     407    bool handleNeedFinalize(unsigned);
     408    void handleGCDidJIT();
     409    void handleNeedFinalize();
     410   
     411    void setGCDidJIT();
     412    void setNeedFinalize();
     413    void waitWhileNeedFinalize();
     414   
     415    unsigned setMutatorWaiting();
     416    void clearMutatorWaiting();
     417    void notifyThreadStopping(const LockHolder&);
     418   
     419    typedef uint64_t Ticket;
     420    Ticket requestCollection(Optional<CollectionScope>);
     421    void waitForCollection(Ticket);
     422   
    325423    void suspendCompilerThreads();
    326424    void willStartCollection(Optional<CollectionScope>);
     
    330428    void prepareForMarking();
    331429   
    332     void markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
    333     void gatherStackRoots(ConservativeRoots&, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
     430    void markRoots(double gcStartTime);
     431    void gatherStackRoots(ConservativeRoots&);
    334432    void gatherJSStackRoots(ConservativeRoots&);
    335433    void gatherScratchBufferRoots(ConservativeRoots&);
     
    370468    void gatherExtraHeapSnapshotData(HeapProfiler&);
    371469    void removeDeadHeapSnapshotNodes(HeapProfiler&);
     470    void finalize();
    372471    void sweepLargeAllocations();
    373472   
     
    404503   
    405504    Optional<CollectionScope> m_collectionScope;
     505    Optional<CollectionScope> m_lastCollectionScope;
    406506    MutatorState m_mutatorState { MutatorState::Running };
    407507    StructureIDTable m_structureIDTable;
     
    454554    RefPtr<FullGCActivityCallback> m_fullActivityCallback;
    455555    RefPtr<GCActivityCallback> m_edenActivityCallback;
    456     std::unique_ptr<IncrementalSweeper> m_sweeper;
     556    RefPtr<IncrementalSweeper> m_sweeper;
     557    RefPtr<StopIfNecessaryTimer> m_stopIfNecessaryTimer;
    457558
    458559    Vector<HeapObserver*> m_observers;
    459560
    460561    unsigned m_deferralDepth;
    461     Vector<DFG::Worklist*> m_suspendedCompilerWorklists;
    462562
    463563    std::unique_ptr<HeapVerifier> m_verifier;
     
    491591    size_t m_externalMemorySize { 0 };
    492592#endif
     593   
     594    static const unsigned shouldStopBit = 1u << 0u;
     595    static const unsigned stoppedBit = 1u << 1u;
     596    static const unsigned hasAccessBit = 1u << 2u;
      597    static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need a cross-modifying code fence (cpuid on x86, isb on ARM).
     598    static const unsigned needFinalizeBit = 1u << 4u;
     599    static const unsigned mutatorWaitingBit = 1u << 5u; // Allows the mutator to use this as a condition variable.
     600    Atomic<unsigned> m_worldState;
     601    bool m_collectorBelievesThatTheWorldIsStopped { false };
     602   
     603    Deque<Optional<CollectionScope>> m_requests;
     604    Ticket m_lastServedTicket { 0 };
     605    Ticket m_lastGrantedTicket { 0 };
     606    bool m_threadShouldStop { false };
     607    bool m_threadIsStopping { false };
     608    Box<Lock> m_threadLock;
     609    RefPtr<AutomaticThreadCondition> m_threadCondition; // The mutator must not wait on this. It would cause a deadlock.
     610    RefPtr<AutomaticThread> m_thread;
    493611};
    494612
  • trunk/Source/JavaScriptCore/heap/HeapInlines.h

    r207714 r208306  
    5252}
    5353
     54inline bool Heap::hasHeapAccess() const
     55{
     56    return m_worldState.load() & hasAccessBit;
     57}
     58
     59inline bool Heap::mutatorIsStopped() const
     60{
     61    unsigned state = m_worldState.load();
     62    bool shouldStop = state & shouldStopBit;
     63    bool stopped = state & stoppedBit;
     64    // I only got it right when I considered all four configurations of shouldStop/stopped:
     65    // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we
     66    //     should return false.
     67    // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested
     68    //     that we resume, so we should return false.
     69    // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.
     70    //     The mutator should be able to do whatever it wants in this state, as if we were not
     71    //     stopped. So return false.
     72    // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is
     73    //     stopped, so return true.
     74    return shouldStop & stopped;
     75}
     76
     77inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
     78{
     79    return m_collectorBelievesThatTheWorldIsStopped;
     80}
     81
    5482ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
    5583{
     
    311339}
    312340
     341inline void Heap::acquireAccess()
     342{
     343    if (m_worldState.compareExchangeWeak(0, hasAccessBit))
     344        return;
     345    acquireAccessSlow();
     346}
     347
     348inline bool Heap::hasAccess() const
     349{
     350    return m_worldState.loadRelaxed() & hasAccessBit;
     351}
     352
     353inline void Heap::releaseAccess()
     354{
     355    if (m_worldState.compareExchangeWeak(hasAccessBit, 0))
     356        return;
     357    releaseAccessSlow();
     358}
     359
     360inline void Heap::stopIfNecessary()
     361{
     362    if (m_worldState.loadRelaxed() == hasAccessBit)
     363        return;
     364    stopIfNecessarySlow();
     365}
     366
    313367} // namespace JSC
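
Note that both fast paths above compare the whole word against an exact value rather than testing a single bit: any pending request (stop, finalize, gcDidJIT) must divert to the slow path. An illustration, with the bit value copied from Heap.h:

    constexpr unsigned hasAccessBit = 1u << 2;

    bool canAcquireAccessFast(unsigned state) { return !state; }                 // nothing held, nothing pending
    bool canSkipStopCheckFast(unsigned state) { return state == hasAccessBit; }  // access held, nothing pending
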
  • trunk/Source/JavaScriptCore/heap/HeapTimer.cpp

    r207855 r208306  
    100100{
    101101    CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + intervalInSeconds);
     102    m_isScheduled = true;
    102103}
    103104
     
    105106{
    106107    CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + s_decade);
     108    m_isScheduled = false;
    107109}
    108110
     
    153155    double targetTime = currentTime() + intervalInSeconds;
    154156    ecore_timer_interval_set(m_timer, targetTime);
     157    m_isScheduled = true;
    155158}
    156159
     
    158161{
    159162    ecore_timer_freeze(m_timer);
     163    m_isScheduled = false;
    160164}
    161165#elif USE(GLIB)
     
    220224    ASSERT(targetTime >= currentTime);
    221225    g_source_set_ready_time(m_timer.get(), targetTime);
     226    m_isScheduled = true;
    222227}
    223228
     
    225230{
    226231    g_source_set_ready_time(m_timer.get(), -1);
     232    m_isScheduled = false;
    227233}
    228234#else
  • trunk/Source/JavaScriptCore/heap/HeapTimer.h

    r207855 r208306  
    4545class VM;
    4646
    47 class HeapTimer {
     47class HeapTimer : public ThreadSafeRefCounted<HeapTimer> {
    4848public:
    4949    HeapTimer(VM*);
     
    5757    void scheduleTimer(double intervalInSeconds);
    5858    void cancelTimer();
     59    bool isScheduled() const { return m_isScheduled; }
    5960
    6061#if USE(CF)
     
    6667
    6768    RefPtr<JSLock> m_apiLock;
     69    bool m_isScheduled { false };
    6870#if USE(CF)
    6971    static const CFTimeInterval s_decade;
  • trunk/Source/JavaScriptCore/heap/IncrementalSweeper.cpp

    r207855 r208306  
    7272bool IncrementalSweeper::sweepNextBlock()
    7373{
     74    m_vm->heap.stopIfNecessary();
     75
    7476    MarkedBlock::Handle* block = nullptr;
    7577   
  • trunk/Source/JavaScriptCore/heap/IncrementalSweeper.h

    r207855 r208306  
    3535
    3636class IncrementalSweeper : public HeapTimer {
    37     WTF_MAKE_FAST_ALLOCATED;
    3837public:
    3938    JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*);
  • trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp

    r199762 r208306  
    11/*
    2  *  Copyright (C) 2003-2009, 2015 Apple Inc. All rights reserved.
     2 *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
    33 *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
    44 *  Copyright (C) 2009 Acision BV. All rights reserved.
     
    300300        delete t;
    301301    }
    302 }
    303 
    304 SUPPRESS_ASAN
    305 void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters)
    306 {
    307     void* registersBegin = &calleeSavedRegisters;
    308     void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(&calleeSavedRegisters + 1)));
    309     conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks);
    310 
    311     conservativeRoots.add(stackTop, stackOrigin, jitStubRoutines, codeBlocks);
    312302}
    313303
     
    10191009}
    10201010
    1021 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters)
    1022 {
    1023     gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
    1024 
     1011void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
     1012{
    10251013    size_t size;
    10261014    size_t capacity = 0;
  • trunk/Source/JavaScriptCore/heap/MachineStackMarker.h

    r206525 r208306  
    22 *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
    33 *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
    4  *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2015 Apple Inc. All rights reserved.
     4 *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
    55 *
    66 *  This library is free software; you can redistribute it and/or
     
    6666    ~MachineThreads();
    6767
    68     void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);
     68    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&);
    6969
    7070    JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads.
     
    146146
    147147private:
    148     void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);
    149 
    150148    void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*);
    151149    bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*);
  • trunk/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp

    r207444 r208306  
    456456void InspectorDebuggerAgent::setBreakpoint(JSC::Breakpoint& breakpoint, bool& existing)
    457457{
     458    JSC::JSLockHolder locker(m_scriptDebugServer.vm());
    458459    m_scriptDebugServer.setBreakpoint(breakpoint, existing);
    459460}
     
    470471            m_injectedScriptManager.releaseObjectGroup(objectGroupForBreakpointAction(action));
    471472
     473        JSC::JSLockHolder locker(m_scriptDebugServer.vm());
    472474        m_scriptDebugServer.removeBreakpointActions(breakpointID);
    473475        m_scriptDebugServer.removeBreakpoint(breakpointID);
     
    561563    m_breakReason = breakReason;
    562564    m_breakAuxData = WTFMove(data);
     565    JSC::JSLockHolder locker(m_scriptDebugServer.vm());
    563566    m_scriptDebugServer.setPauseOnNextStatement(true);
    564567}
     
    882885void InspectorDebuggerAgent::clearDebuggerBreakpointState()
    883886{
    884     m_scriptDebugServer.clearBreakpointActions();
    885     m_scriptDebugServer.clearBreakpoints();
    886     m_scriptDebugServer.clearBlacklist();
     887    {
     888        JSC::JSLockHolder holder(m_scriptDebugServer.vm());
     889        m_scriptDebugServer.clearBreakpointActions();
     890        m_scriptDebugServer.clearBreakpoints();
     891        m_scriptDebugServer.clearBlacklist();
     892    }
    887893
    888894    m_pausedScriptState = nullptr;
  • trunk/Source/JavaScriptCore/jit/JITWorklist.cpp

    r207566 r208306  
    159159}
    160160
    161 void JITWorklist::completeAllForVM(VM& vm)
    162 {
     161bool JITWorklist::completeAllForVM(VM& vm)
     162{
     163    bool result = false;
    163164    DeferGC deferGC(vm.heap);
    164165    for (;;) {
     
    187188                // whether we found some unfinished plans.
    188189                if (!didFindUnfinishedPlan)
    189                     return;
     190                    return result;
    190191               
    191192                m_condition->wait(*m_lock);
     
    193194        }
    194195       
     196        RELEASE_ASSERT(!myPlans.isEmpty());
     197        result = true;
    195198        finalizePlans(myPlans);
    196199    }
  • trunk/Source/JavaScriptCore/jit/JITWorklist.h

    r207566 r208306  
    5151    ~JITWorklist();
    5252   
    53     void completeAllForVM(VM&);
     53    bool completeAllForVM(VM&); // Return true if any JIT work happened.
    5454    void poll(VM&);
    5555   
  • trunk/Source/JavaScriptCore/jsc.cpp

    r208209 r208306  
    16781678{
    16791679    JSLockHolder lock(exec);
    1680     exec->heap()->collect(CollectionScope::Full);
     1680    exec->heap()->collectSync(CollectionScope::Full);
    16811681    return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastFullCollection()));
    16821682}
     
    16851685{
    16861686    JSLockHolder lock(exec);
    1687     exec->heap()->collect(CollectionScope::Eden);
     1687    exec->heap()->collectSync(CollectionScope::Eden);
    16881688    return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastEdenCollection()));
    16891689}
  • trunk/Source/JavaScriptCore/runtime/AtomicsObject.cpp

    r208209 r208306  
    3030#include "JSTypedArrays.h"
    3131#include "ObjectPrototype.h"
     32#include "ReleaseHeapAccessScope.h"
    3233#include "TypedArrayController.h"
    3334
     
    341342   
    342343    bool didPassValidation = false;
    343     ParkingLot::ParkResult result = ParkingLot::parkConditionally(
    344         ptr,
    345         [&] () -> bool {
    346             didPassValidation = WTF::atomicLoad(ptr) == expectedValue;
    347             return didPassValidation;
    348         },
    349         [] () { },
    350         timeout);
     344    ParkingLot::ParkResult result;
     345    {
     346        ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
     347        result = ParkingLot::parkConditionally(
     348            ptr,
     349            [&] () -> bool {
     350                didPassValidation = WTF::atomicLoad(ptr) == expectedValue;
     351                return didPassValidation;
     352            },
     353            [] () { },
     354            timeout);
     355    }
    351356    const char* resultString;
    352357    if (!didPassValidation)
  • trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp

    r198364 r208306  
    11/*
    2  * Copyright (C) 2008, 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2008, 2015-2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4343#include "WriteBarrier.h"
    4444#include <mutex>
     45#include <wtf/MainThread.h>
     46#include <wtf/Threading.h>
    4547#include <wtf/dtoa.h>
    46 #include <wtf/Threading.h>
    4748#include <wtf/dtoa/cached-powers.h>
    4849
     
    5859        WTF::double_conversion::initialize();
    5960        WTF::initializeThreading();
     61        WTF::initializeGCThreads();
    6062        Options::initialize();
    6163        if (Options::recordGCPauseTimes())
  • trunk/Source/JavaScriptCore/runtime/JSLock.cpp

    r207653 r208306  
    11/*
    2  * Copyright (C) 2005, 2008, 2012, 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2005, 2008, 2012, 2014, 2016 Apple Inc. All rights reserved.
    33 *
    44 * This library is free software; you can redistribute it and/or
     
    129129    if (!m_vm)
    130130        return;
     131   
     132    WTFThreadData& threadData = wtfThreadData();
     133    ASSERT(!m_entryAtomicStringTable);
     134    m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable());
     135    ASSERT(m_entryAtomicStringTable);
     136
     137    if (m_vm->heap.hasAccess())
     138        m_shouldReleaseHeapAccess = false;
     139    else {
     140        m_vm->heap.acquireAccess();
     141        m_shouldReleaseHeapAccess = true;
     142    }
    131143
    132144    RELEASE_ASSERT(!m_vm->stackPointerAtVMEntry());
     
    134146    m_vm->setStackPointerAtVMEntry(p);
    135147
    136     WTFThreadData& threadData = wtfThreadData();
    137148    m_vm->setLastStackTop(threadData.savedLastStackTop());
    138 
    139     ASSERT(!m_entryAtomicStringTable);
    140     m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable());
    141     ASSERT(m_entryAtomicStringTable);
    142149
    143150    m_vm->heap.machineThreads().addCurrentThread();
     
    168175
    169176    if (!m_lockCount) {
    170 
     177       
    171178        if (!m_hasExclusiveThread) {
    172179            m_ownerThreadID = std::thread::id();
     
    184191        vm->heap.releaseDelayedReleasedObjects();
    185192        vm->setStackPointerAtVMEntry(nullptr);
     193       
     194        if (m_shouldReleaseHeapAccess)
     195            vm->heap.releaseAccess();
    186196    }
    187197
  • trunk/Source/JavaScriptCore/runtime/JSLock.h

    r206525 r208306  
    11/*
    2  * Copyright (C) 2005, 2008, 2009, 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2005, 2008, 2009, 2014, 2016 Apple Inc. All rights reserved.
    33 *
    44 * This library is free software; you can redistribute it and/or
     
    137137    unsigned m_lockDropDepth;
    138138    bool m_hasExclusiveThread;
     139    bool m_shouldReleaseHeapAccess;
    139140    VM* m_vm;
    140141    AtomicStringTable* m_entryAtomicStringTable;
  • trunk/Source/JavaScriptCore/runtime/VM.cpp

    r206658 r208306  
    355355    // no point to doing so.
    356356    for (unsigned i = DFG::numberOfWorklists(); i--;) {
    357         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
     357        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i)) {
    358358            worklist->removeNonCompilingPlansForVM(*this);
    359359            worklist->waitUntilAllPlansForVMAreReady(*this);
  • trunk/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp

    r208189 r208306  
    131131    if (!ensureCurrentThreadOwnsJSLock(exec))
    132132        return;
    133     exec->heap()->collect(CollectionScope::Eden);
     133    exec->heap()->collectSync(CollectionScope::Eden);
    134134}
    135135
  • trunk/Source/WTF/ChangeLog

    r208305 r208306  
     12016-11-02  Filip Pizlo  <fpizlo@apple.com>
     2
     3        The GC should be in a thread
     4        https://bugs.webkit.org/show_bug.cgi?id=163562
     5
     6        Reviewed by Geoffrey Garen and Andreas Kling.
     7       
     8        This fixes some bugs and adds a few features.
     9
     10        * wtf/Atomics.h: The GC may do work on behalf of the JIT. If it does, the main thread needs to execute a cross-modifying code fence. This is cpuid on x86 and I believe it's isb on ARM. It would have been an isync on PPC and I think that isb is the ARM equivalent.
     11        (WTF::arm_isb):
     12        (WTF::crossModifyingCodeFence):
     13        (WTF::x86_ortop):
     14        (WTF::x86_cpuid):
     15        * wtf/AutomaticThread.cpp: I accidentally had AutomaticThreadCondition inherit from ThreadSafeRefCounted<AutomaticThread> [sic]. This never crashed before because all of our prior AutomaticThreadConditions were immortal.
     16        (WTF::AutomaticThread::AutomaticThread):
     17        (WTF::AutomaticThread::~AutomaticThread):
     18        (WTF::AutomaticThread::start):
     19        * wtf/AutomaticThread.h:
     20        * wtf/MainThread.cpp: Need to allow initializeGCThreads() to be called separately because it's now more than just a debugging thing.
     21        (WTF::initializeGCThreads):
     22
    1232016-11-02  Carlos Alberto Lopez Perez  <clopez@igalia.com>
    224
  • trunk/Source/WTF/wtf/Atomics.h

    r208209 r208306  
    3636#endif
    3737#include <windows.h>
     38#include <intrin.h>
    3839#endif
    3940
     
    5455
    5556    ALWAYS_INLINE T load(std::memory_order order = std::memory_order_seq_cst) const { return value.load(order); }
     57   
     58    ALWAYS_INLINE T loadRelaxed() const { return load(std::memory_order_relaxed); }
    5659
    5760    ALWAYS_INLINE void store(T desired, std::memory_order order = std::memory_order_seq_cst) { value.store(desired, order); }
     
    199202{
    200203    asm volatile("dmb ishst" ::: "memory");
     204}
     205
     206inline void arm_isb()
     207{
     208    asm volatile("isb" ::: "memory");
    201209}
    202210
     
    207215inline void memoryBarrierAfterLock() { arm_dmb(); }
    208216inline void memoryBarrierBeforeUnlock() { arm_dmb(); }
     217inline void crossModifyingCodeFence() { arm_isb(); }
    209218
    210219#elif CPU(X86) || CPU(X86_64)
     
    213222{
    214223#if OS(WINDOWS)
    215     // I think that this does the equivalent of a dummy interlocked instruction,
    216     // instead of using the 'mfence' instruction, at least according to MSDN. I
    217     // know that it is equivalent for our purposes, but it would be good to
    218     // investigate if that is actually better.
    219224    MemoryBarrier();
    220225#elif CPU(X86_64)
     
    224229#else
    225230    asm volatile("lock; orl $0, (%%esp)" ::: "memory");
     231#endif
     232}
     233
     234inline void x86_cpuid()
     235{
     236#if OS(WINDOWS)
     237    int info[4];
     238    __cpuid(info, 0);
     239#else
     240    intptr_t a = 0, b, c, d;
     241    asm volatile(
     242        "cpuid"
     243        : "+a"(a), "=b"(b), "=c"(c), "=d"(d)
     244        :
     245        : "memory");
    226246#endif
    227247}
     
    233253inline void memoryBarrierAfterLock() { compilerFence(); }
    234254inline void memoryBarrierBeforeUnlock() { compilerFence(); }
     255inline void crossModifyingCodeFence() { x86_cpuid(); }
    235256
    236257#else
     
    242263inline void memoryBarrierAfterLock() { std::atomic_thread_fence(std::memory_order_seq_cst); }
    243264inline void memoryBarrierBeforeUnlock() { std::atomic_thread_fence(std::memory_order_seq_cst); }
     265inline void crossModifyingCodeFence() { std::atomic_thread_fence(std::memory_order_seq_cst); } // Probably not strong enough.
    244266
    245267#endif
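
The fence matters because, as the ChangeLog above notes, the GC may do work on behalf of the JIT: one thread writes machine code that another core will then execute, so x86 needs a serializing instruction such as cpuid, and ARM an isb, before the new code runs. A hedged sketch of the consumer side, mirroring Heap::handleGCDidJIT() with the bit value copied from Heap.h:

    #include <atomic>
    #include <wtf/Atomics.h> // for WTF::crossModifyingCodeFence()

    constexpr unsigned gcDidJITBit = 1u << 3;
    std::atomic<unsigned> worldState { 0 };

    void beforeRunningJITCode()
    {
        if (worldState.load() & gcDidJITBit) {
            worldState.fetch_and(~gcDidJITBit);
            WTF::crossModifyingCodeFence(); // cpuid on x86, isb on ARM
        }
    }
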
  • trunk/Source/WTF/wtf/AutomaticThread.cpp

    r207566 r208306  
    7979void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)
    8080{
    81     ASSERT(m_threads.contains(thread));
    8281    m_threads.removeFirst(thread);
    8382    ASSERT(!m_threads.contains(thread));
     
    9392    , m_condition(condition)
    9493{
     94    if (verbose)
     95        dataLog(RawPointer(this), ": Allocated AutomaticThread.\n");
    9596    m_condition->add(locker, this);
    9697}
     
    9899AutomaticThread::~AutomaticThread()
    99100{
     101    if (verbose)
     102        dataLog(RawPointer(this), ": Deleting AutomaticThread.\n");
    100103    LockHolder locker(*m_lock);
    101104   
     
    105108}
    106109
     110bool AutomaticThread::tryStop(const LockHolder&)
     111{
     112    if (!m_isRunning)
     113        return true;
     114    if (m_hasUnderlyingThread)
     115        return false;
     116    m_isRunning = false;
     117    return true;
     118}
     119
    107120void AutomaticThread::join()
    108121{
     
    114127class AutomaticThread::ThreadScope {
    115128public:
    116     ThreadScope(AutomaticThread& thread)
     129    ThreadScope(RefPtr<AutomaticThread> thread)
    117130        : m_thread(thread)
    118131    {
    119         m_thread.threadDidStart();
     132        m_thread->threadDidStart();
    120133    }
    121134   
    122135    ~ThreadScope()
    123136    {
    124         m_thread.threadWillStop();
     137        m_thread->threadWillStop();
     138       
     139        LockHolder locker(*m_thread->m_lock);
     140        m_thread->m_hasUnderlyingThread = false;
    125141    }
    126142
    127143private:
    128     AutomaticThread& m_thread;
     144    RefPtr<AutomaticThread> m_thread;
    129145};
    130146
    131147void AutomaticThread::start(const LockHolder&)
    132148{
     149    RELEASE_ASSERT(m_isRunning);
     150   
    133151    RefPtr<AutomaticThread> preserveThisForThread = this;
     152   
     153    m_hasUnderlyingThread = true;
    134154   
    135155    ThreadIdentifier thread = createThread(
     
    137157        [=] () {
    138158            if (verbose)
    139                 dataLog("Running automatic thread!\n");
    140             RefPtr<AutomaticThread> preserveThisInThread = preserveThisForThread;
     159                dataLog(RawPointer(this), ": Running automatic thread!\n");
     160            ThreadScope threadScope(preserveThisForThread);
    141161           
    142             {
     162            if (!ASSERT_DISABLED) {
    143163                LockHolder locker(*m_lock);
    144164                ASSERT(!m_condition->contains(locker, this));
    145165            }
    146            
    147             ThreadScope threadScope(*this);
    148166           
    149167            auto stop = [&] (const LockHolder&) {
     
    168186                        if (!awokenByNotify) {
    169187                            if (verbose)
    170                                 dataLog("Going to sleep!\n");
     188                                dataLog(RawPointer(this), ": Going to sleep!\n");
    171189                            m_condition->add(locker, this);
    172190                            return;
  • trunk/Source/WTF/wtf/AutomaticThread.h

    r207566 r208306  
    7070class AutomaticThread;
    7171
    72 class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThread> {
     72class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThreadCondition> {
    7373public:
    7474    static WTF_EXPORT_PRIVATE RefPtr<AutomaticThreadCondition> create();
     
    113113    virtual ~AutomaticThread();
    114114   
     115    // Sometimes it's possible to optimize for the case that there is no underlying thread.
     116    bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }
     117   
     118    // This attempts to quickly stop the thread. This will succeed if the thread happens to not be
     119    // running. Returns true if the thread has been stopped. A good idiom for stopping your automatic
     120    // thread is to first try this, and if that doesn't work, to tell the thread using your own
     121    // mechanism (set some flag and then notify the condition).
     122    bool tryStop(const LockHolder&);
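
A hedged sketch of the recommended stopping idiom (m_shouldStop is the owner's own flag, checked inside work(); the lock and condition are the ones the AutomaticThread was constructed with):

    void stopMyAutomaticThreadSketch()
    {
        LockHolder locker(*m_lock);
        if (m_thread->tryStop(locker))
            return;                     // no underlying thread; stopped already
        m_shouldStop = true;            // our own mechanism, observed by work()
        m_condition->notifyOne(locker); // wake the thread so it sees the flag
    }
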
     123   
    115124    void join();
    116125   
     
    152161    virtual WorkResult work() = 0;
    153162   
    154     class ThreadScope;
    155     friend class ThreadScope;
    156    
    157163    // It's sometimes useful to allocate resources while the thread is running, and to destroy them
    158164    // when the thread dies. These methods let you do this. You can override these methods, and you
     
    164170    friend class AutomaticThreadCondition;
    165171   
     172    class ThreadScope;
     173    friend class ThreadScope;
     174   
    166175    void start(const LockHolder&);
    167176   
     
    169178    RefPtr<AutomaticThreadCondition> m_condition;
    170179    bool m_isRunning { true };
     180    bool m_hasUnderlyingThread { false };
    171181    Condition m_isRunningCondition;
    172182};
  • trunk/Source/WTF/wtf/CompilationThread.cpp

    r161146 r208306  
    11/*
    2  * Copyright (C) 2013 Apple Inc. All rights reserved.
     2 * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3434namespace WTF {
    3535
    36 static ThreadSpecific<bool>* s_isCompilationThread;
     36static ThreadSpecific<bool, CanBeGCThread::True>* s_isCompilationThread;
    3737
    3838static void initializeCompilationThreads()
     
    4040    static std::once_flag initializeCompilationThreadsOnceFlag;
    4141    std::call_once(initializeCompilationThreadsOnceFlag, []{
    42         s_isCompilationThread = new ThreadSpecific<bool>();
     42        s_isCompilationThread = new ThreadSpecific<bool, CanBeGCThread::True>();
    4343    });
    4444}
  • trunk/Source/WTF/wtf/MainThread.cpp

    r207653 r208306  
    191191#endif
    192192
    193 static ThreadSpecific<Optional<GCThreadType>>* isGCThread;
     193static ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>* isGCThread;
    194194
    195195void initializeGCThreads()
    196196{
    197     isGCThread = new ThreadSpecific<Optional<GCThreadType>>();
     197    static std::once_flag flag;
     198    std::call_once(
     199        flag,
     200        [] {
     201            isGCThread = new ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>();
     202        });
    198203}
    199204
  • trunk/Source/WTF/wtf/MainThread.h

    r207653 r208306  
    6969#endif // USE(WEB_THREAD)
    7070
    71 void initializeGCThreads();
     71WTF_EXPORT_PRIVATE void initializeGCThreads();
    7272
    7373enum class GCThreadType {
  • trunk/Source/WTF/wtf/Optional.h

    r207237 r208306  
    11/*
    2  * Copyright (C) 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2929#include <type_traits>
    3030#include <wtf/Assertions.h>
     31#include <wtf/PrintStream.h>
    3132#include <wtf/StdLibExtras.h>
    3233
     
    264265}
    265266
     267template<typename T>
     268void printInternal(PrintStream& out, const Optional<T>& optional)
     269{
     270    if (optional)
     271        out.print(*optional);
     272    else
     273        out.print("Nullopt");
     274}
     275
    266276} // namespace WTF
    267277
  • trunk/Source/WTF/wtf/ParkingLot.cpp

    r208209 r208306  
    448448ThreadData* myThreadData()
    449449{
    450     static ThreadSpecific<RefPtr<ThreadData>>* threadData;
     450    static ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>* threadData;
    451451    static std::once_flag initializeOnce;
    452452    std::call_once(
    453453        initializeOnce,
    454454        [] {
    455             threadData = new ThreadSpecific<RefPtr<ThreadData>>();
     455            threadData = new ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>();
    456456        });
    457457   
  • trunk/Source/WTF/wtf/ThreadSpecific.h

    r199762 r208306  
    11/*
    2  * Copyright (C) 2008 Apple Inc. All rights reserved.
     2 * Copyright (C) 2008, 2016 Apple Inc. All rights reserved.
    33 * Copyright (C) 2009 Jian Li <jianli@chromium.org>
    44 * Copyright (C) 2012 Patrick Gansterer <paroga@paroga.com>
     
    4343#define WTF_ThreadSpecific_h
    4444
     45#include <wtf/MainThread.h>
    4546#include <wtf/Noncopyable.h>
    4647#include <wtf/StdLibExtras.h>
     
    6061#endif
    6162
    62 template<typename T> class ThreadSpecific {
     63enum class CanBeGCThread {
     64    False,
     65    True
     66};
     67
     68template<typename T, CanBeGCThread canBeGCThread = CanBeGCThread::False> class ThreadSpecific {
    6369    WTF_MAKE_NONCOPYABLE(ThreadSpecific);
    6470public:
     
    8793        WTF_MAKE_NONCOPYABLE(Data);
    8894    public:
    89         Data(T* value, ThreadSpecific<T>* owner) : value(value), owner(owner) {}
     95        Data(T* value, ThreadSpecific<T, canBeGCThread>* owner) : value(value), owner(owner) {}
    9096
    9197        T* value;
    92         ThreadSpecific<T>* owner;
     98        ThreadSpecific<T, canBeGCThread>* owner;
    9399    };
    94100
     
    128134}
    129135
    130 template<typename T>
    131 inline ThreadSpecific<T>::ThreadSpecific()
     136template<typename T, CanBeGCThread canBeGCThread>
     137inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific()
    132138{
    133139    int error = pthread_key_create(&m_key, destroy);
     
    136142}
    137143
    138 template<typename T>
    139 inline T* ThreadSpecific<T>::get()
     144template<typename T, CanBeGCThread canBeGCThread>
     145inline T* ThreadSpecific<T, canBeGCThread>::get()
    140146{
    141147    Data* data = static_cast<Data*>(pthread_getspecific(m_key));
    142     return data ? data->value : 0;
    143 }
    144 
    145 template<typename T>
    146 inline void ThreadSpecific<T>::set(T* ptr)
    147 {
     148    if (data)
     149        return data->value;
     150    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
     151    return nullptr;
     152}
     153
     154template<typename T, CanBeGCThread canBeGCThread>
     155inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr)
     156{
     157    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
    148158    ASSERT(!get());
    149159    pthread_setspecific(m_key, new Data(ptr, this));
     
    186196}
    187197
    188 template<typename T>
    189 inline ThreadSpecific<T>::ThreadSpecific()
     198template<typename T, CanBeGCThread canBeGCThread>
     199inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific()
    190200    : m_index(-1)
    191201{
     
    200210}
    201211
    202 template<typename T>
    203 inline ThreadSpecific<T>::~ThreadSpecific()
     212template<typename T, CanBeGCThread canBeGCThread>
     213inline ThreadSpecific<T, canBeGCThread>::~ThreadSpecific()
    204214{
    205215    FlsFree(flsKeys()[m_index]);
    206216}
    207217
    208 template<typename T>
    209 inline T* ThreadSpecific<T>::get()
     218template<typename T, CanBeGCThread canBeGCThread>
     219inline T* ThreadSpecific<T, canBeGCThread>::get()
    210220{
    211221    Data* data = static_cast<Data*>(FlsGetValue(flsKeys()[m_index]));
    212     return data ? data->value : 0;
    213 }
    214 
    215 template<typename T>
    216 inline void ThreadSpecific<T>::set(T* ptr)
    217 {
     222    if (data)
     223        return data->value;
     224    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
     225    return nullptr;
     226}
     227
     228template<typename T, CanBeGCThread canBeGCThread>
     229inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr)
     230{
     231    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
    218232    ASSERT(!get());
    219233    Data* data = new Data(ptr, this);
     
    225239#endif
    226240
    227 template<typename T>
    228 inline void THREAD_SPECIFIC_CALL ThreadSpecific<T>::destroy(void* ptr)
     241template<typename T, CanBeGCThread canBeGCThread>
     242inline void THREAD_SPECIFIC_CALL ThreadSpecific<T, canBeGCThread>::destroy(void* ptr)
    229243{
    230244    Data* data = static_cast<Data*>(ptr);
     
    250264}
    251265
    252 template<typename T>
    253 inline bool ThreadSpecific<T>::isSet()
     266template<typename T, CanBeGCThread canBeGCThread>
     267inline bool ThreadSpecific<T, canBeGCThread>::isSet()
    254268{
    255269    return !!get();
    256270}
    257271
    258 template<typename T>
    259 inline ThreadSpecific<T>::operator T*()
     272template<typename T, CanBeGCThread canBeGCThread>
     273inline ThreadSpecific<T, canBeGCThread>::operator T*()
    260274{
    261275    T* ptr = static_cast<T*>(get());
     
    270284}
    271285
    272 template<typename T>
    273 inline T* ThreadSpecific<T>::operator->()
     286template<typename T, CanBeGCThread canBeGCThread>
     287inline T* ThreadSpecific<T, canBeGCThread>::operator->()
    274288{
    275289    return operator T*();
    276290}
    277291
    278 template<typename T>
    279 inline T& ThreadSpecific<T>::operator*()
     292template<typename T, CanBeGCThread canBeGCThread>
     293inline T& ThreadSpecific<T, canBeGCThread>::operator*()
    280294{
    281295    return *operator T*();
     
    283297
    284298#if USE(WEB_THREAD)
    285 template<typename T>
    286 inline void ThreadSpecific<T>::replace(T* newPtr)
     299template<typename T, CanBeGCThread canBeGCThread>
     300inline void ThreadSpecific<T, canBeGCThread>::replace(T* newPtr)
    287301{
    288302    ASSERT(newPtr);
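
Declaring a slot that GC or compilation threads may touch now requires opting in; the default stays forbidden. A hedged example (MyData is hypothetical):

    // Opt-in: a GC thread may lazily construct this slot's value.
    static ThreadSpecific<MyData, CanBeGCThread::True>* s_gcVisibleData;

    // Default: if a GC thread reaches get() or set() here before the value
    // exists, the RELEASE_ASSERT added above fires.
    static ThreadSpecific<MyData>* s_mutatorOnlyData;
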
  • trunk/Source/WTF/wtf/WordLock.cpp

    r198345 r208306  
    6262};
    6363
    64 ThreadSpecific<ThreadData>* threadData;
     64ThreadSpecific<ThreadData, CanBeGCThread::True>* threadData;
    6565
    6666ThreadData* myThreadData()
     
    7070        initializeOnce,
    7171        [] {
    72             threadData = new ThreadSpecific<ThreadData>();
     72            threadData = new ThreadSpecific<ThreadData, CanBeGCThread::True>();
    7373        });
    7474
  • trunk/Source/WTF/wtf/text/AtomicStringImpl.cpp

    r201782 r208306  
    2626
    2727#include "AtomicStringTable.h"
     28#include "CommaPrinter.h"
     29#include "DataLog.h"
    2830#include "HashSet.h"
    2931#include "IntegerToStringConversion.h"
    3032#include "StringHash.h"
     33#include "StringPrintStream.h"
    3134#include "Threading.h"
    3235#include "WTFThreadData.h"
     
    7679    AtomicStringTableLocker locker;
    7780
    78     HashSet<StringImpl*>::AddResult addResult = stringTable().add<HashTranslator>(value);
     81    HashSet<StringImpl*>& atomicStringTable = stringTable();
     82    HashSet<StringImpl*>::AddResult addResult = atomicStringTable.add<HashTranslator>(value);
    7983
    8084    // If the string is newly-translated, then we need to adopt it.
     
    452456    HashSet<StringImpl*>::iterator iterator = atomicStringTable.find(string);
    453457    ASSERT_WITH_MESSAGE(iterator != atomicStringTable.end(), "The string being removed is atomic in the string table of an other thread!");
     458    ASSERT(string == *iterator);
    454459    atomicStringTable.remove(iterator);
    455460}
  • trunk/Source/WebCore/ChangeLog

    r208304 r208306  
     12016-11-02  Filip Pizlo  <fpizlo@apple.com>
     2
     3        The GC should be in a thread
     4        https://bugs.webkit.org/show_bug.cgi?id=163562
     5
     6        Reviewed by Geoffrey Garen and Andreas Kling.
     7
     8        No new tests because existing tests cover this.
     9       
     10        We now need to be more careful about using JSLock. This fixes some places that were not
     11        holding it. New assertions in the GC are more likely to catch this than before.
     12
     13        * bindings/js/WorkerScriptController.cpp:
     14        (WebCore::WorkerScriptController::WorkerScriptController):
     15
    1162016-11-02  Joseph Pecoraro  <pecoraro@apple.com>
    217
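
The JSLock discipline the ChangeLog describes is the usual RAII pattern: any path that can allocate or mutate GC-managed objects must hold the VM's API lock, and the new GC assertions catch omissions earlier than before. A minimal sketch, assuming the JSLock.h header is reachable from the calling code:

    #include <JavaScriptCore/JSLock.h>

    void touchGCManagedObjects(JSC::VM& vm)
    {
        // Hold the API lock for the duration of any heap-affecting work; the
        // concurrent collector's assertions fire if a mutator path omits it.
        JSC::JSLockHolder locker(vm);
        // ... allocate or mutate JSC cells here ...
    }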
  • trunk/Source/WebCore/Modules/indexeddb/IDBDatabase.cpp

    r207937 r208306  
    11/*
    2  * Copyright (C) 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    5656    , m_info(resultData.databaseInfo())
    5757    , m_databaseConnectionIdentifier(resultData.databaseConnectionIdentifier())
     58    , m_eventNames(eventNames())
    5859{
    5960    LOG(IndexedDB, "IDBDatabase::IDBDatabase - Creating database %s with version %" PRIu64 " connection %" PRIu64 " (%p)", m_info.name().utf8().data(), m_info.version(), m_databaseConnectionIdentifier, this);
     
    7475bool IDBDatabase::hasPendingActivity() const
    7576{
    76     ASSERT(currentThread() == originThreadID());
     77    ASSERT(currentThread() == originThreadID() || mayBeGCThread());
    7778
    7879    if (m_closedInServer)
     
    8283        return true;
    8384
    84     return hasEventListeners(eventNames().abortEvent) || hasEventListeners(eventNames().errorEvent) || hasEventListeners(eventNames().versionchangeEvent);
     85    return hasEventListeners(m_eventNames.abortEvent) || hasEventListeners(m_eventNames.errorEvent) || hasEventListeners(m_eventNames.versionchangeEvent);
    8586}
    8687
     
    253254        transaction->connectionClosedFromServer(error);
    254255
    255     Ref<Event> event = Event::create(eventNames().errorEvent, true, false);
     256    Ref<Event> event = Event::create(m_eventNames.errorEvent, true, false);
    256257    event->setTarget(this);
    257258
     
    447448        return;
    448449    }
    449 
    450     Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, eventNames().versionchangeEvent);
     450   
     451    Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, m_eventNames.versionchangeEvent);
    451452    event->setTarget(this);
    452453    scriptExecutionContext()->eventQueue().enqueueEvent(WTFMove(event));
     
    460461    bool result = EventTargetWithInlineData::dispatchEvent(event);
    461462
    462     if (event.isVersionChangeEvent() && event.type() == eventNames().versionchangeEvent)
     463    if (event.isVersionChangeEvent() && event.type() == m_eventNames.versionchangeEvent)
    463464        connectionProxy().didFireVersionChangeEvent(m_databaseConnectionIdentifier, downcast<IDBVersionChangeEvent>(event).requestIdentifier());
    464465
  • trunk/Source/WebCore/Modules/indexeddb/IDBDatabase.h

    r207937 r208306  
    11/*
    2  * Copyright (C) 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4646class IDBTransaction;
    4747class IDBTransactionInfo;
     48struct EventNames;
    4849
    4950class IDBDatabase : public ThreadSafeRefCounted<IDBDatabase>, public EventTargetWithInlineData, public IDBActiveDOMObject {
     
    130131    HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_committingTransactions;
    131132    HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_abortingTransactions;
     133   
     134    const EventNames& m_eventNames; // Need to cache this so we can use it from GC threads.
    132135};
    133136
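
The m_eventNames member exists because eventNames() resolves through thread-specific data that a GC thread must not touch, while hasPendingActivity() can now run on the collector (hence the mayBeGCThread() escape hatch in the assertions above). A simplified sketch of the caching pattern, with stub types standing in for WebCore's EventNames machinery:

    #include <string>

    struct EventNames { std::string abortEvent, errorEvent, versionchangeEvent; };

    // Stub: WebCore's eventNames() goes through thread-specific data and must
    // only be called on the owning thread.
    static const EventNames& eventNames()
    {
        static EventNames names { "abort", "error", "versionchange" };
        return names;
    }

    class DatabaseSketch {
    public:
        DatabaseSketch()
            : m_eventNames(eventNames()) // resolved once, on the owning thread
        {
        }

        bool hasPendingActivity() const
        {
            // May run on the collector thread while it decides whether the
            // wrapper is reachable, so it reads only the cached reference.
            return hasEventListeners(m_eventNames.abortEvent)
                || hasEventListeners(m_eventNames.errorEvent)
                || hasEventListeners(m_eventNames.versionchangeEvent);
        }

    private:
        bool hasEventListeners(const std::string&) const { return false; } // stub
        const EventNames& m_eventNames; // cached for GC-thread use
    };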
  • trunk/Source/WebCore/Modules/indexeddb/IDBRequest.cpp

    r208261 r208306  
    229229bool IDBRequest::hasPendingActivity() const
    230230{
    231     ASSERT(currentThread() == originThreadID());
     231    ASSERT(currentThread() == originThreadID() || mayBeGCThread());
    232232    return m_hasPendingActivity;
    233233}
  • trunk/Source/WebCore/Modules/indexeddb/IDBTransaction.cpp

    r208261 r208306  
    11/*
    2  * Copyright (C) 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    265265bool IDBTransaction::hasPendingActivity() const
    266266{
    267     ASSERT(currentThread() == m_database->originThreadID());
     267    ASSERT(currentThread() == m_database->originThreadID() || mayBeGCThread());
    268268    return !m_contextStopped && m_state != IndexedDB::TransactionState::Finished;
    269269}
  • trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj

    r208304 r208306  
    38853885                A456FA2711AD4A830020B420 /* LabelsNodeList.h in Headers */ = {isa = PBXBuildFile; fileRef = A456FA2511AD4A830020B420 /* LabelsNodeList.h */; };
    38863886                A501920E132EBF2E008BFE55 /* Autocapitalize.h in Headers */ = {isa = PBXBuildFile; fileRef = A501920C132EBF2E008BFE55 /* Autocapitalize.h */; settings = {ATTRIBUTES = (Private, ); }; };
    3887                 A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */; };
    38883887                A5071E801C506B66009951BE /* InspectorMemoryAgent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */; };
    38893888                A5071E811C506B69009951BE /* InspectorMemoryAgent.h in Headers */ = {isa = PBXBuildFile; fileRef = A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */; };
     
    59025901                CE7B2DB61586ABAD0098B3FA /* TextAlternativeWithRange.mm in Sources */ = {isa = PBXBuildFile; fileRef = CE7B2DB21586ABAD0098B3FA /* TextAlternativeWithRange.mm */; };
    59035902                CE7E17831C83A49100AD06AF /* ContentSecurityPolicyHash.h in Headers */ = {isa = PBXBuildFile; fileRef = CE7E17821C83A49100AD06AF /* ContentSecurityPolicyHash.h */; };
    5904                 CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */; };
    59055903                CEC337AD1A46071F009B8523 /* ServersSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AC1A46071F009B8523 /* ServersSPI.h */; settings = {ATTRIBUTES = (Private, ); }; };
    59065904                CEC337AF1A46086D009B8523 /* GraphicsServicesSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AE1A46086D009B8523 /* GraphicsServicesSPI.h */; settings = {ATTRIBUTES = (Private, ); }; };
     
    1140911407                A456FA2511AD4A830020B420 /* LabelsNodeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LabelsNodeList.h; sourceTree = "<group>"; };
    1141011408                A501920C132EBF2E008BFE55 /* Autocapitalize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Autocapitalize.h; sourceTree = "<group>"; };
    11411                 A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeGCActivityCallbackIOS.h; sourceTree = "<group>"; };
    1141211409                A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InspectorMemoryAgent.cpp; sourceTree = "<group>"; };
    1141311410                A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InspectorMemoryAgent.h; sourceTree = "<group>"; };
     
    1335213349                C280833E1C6DC22C001451B6 /* JSFontFace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSFontFace.h; sourceTree = "<group>"; };
    1335313350                C28083411C6DC96A001451B6 /* JSFontFaceCustom.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSFontFaceCustom.cpp; sourceTree = "<group>"; };
    13354                 C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeIncrementalSweeperIOS.h; sourceTree = "<group>"; };
    1335513351                C330A22113EC196B0000B45B /* ColorChooser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ColorChooser.h; sourceTree = "<group>"; };
    1335613352                C33EE5C214FB49610002095A /* BaseClickableWithKeyInputType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BaseClickableWithKeyInputType.cpp; sourceTree = "<group>"; };
     
    1935019346                                CDA29A2E1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.h */,
    1935119347                                CDA29A2F1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.mm */,
    19352                                 A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */,
    19353                                 C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */,
    1935419348                                3F42B31B1881191B00278AAC /* WebVideoFullscreenControllerAVKit.h */,
    1935519349                                3F42B31C1881191B00278AAC /* WebVideoFullscreenControllerAVKit.mm */,
     
    2786627860                                CDA29A0F1CBD9CFE00901CCF /* WebPlaybackSessionModelMediaElement.h in Headers */,
    2786727861                                99CC0B6B18BEA1FF006CEBCC /* WebReplayInputs.h in Headers */,
    27868                                 A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */,
    27869                                 CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */,
    2787027862                                1CAF34810A6C405200ABE06E /* WebScriptObject.h in Headers */,
    2787127863                                1CAF34830A6C405200ABE06E /* WebScriptObjectPrivate.h in Headers */,
  • trunk/Source/WebCore/bindings/js/JSDOMWindowBase.cpp

    r205670 r208306  
    22 *  Copyright (C) 2000 Harri Porten (porten@kde.org)
    33 *  Copyright (C) 2006 Jon Shier (jshier@iastate.edu)
    4  *  Copyright (C) 2003-2009, 2014 Apple Inc. All rights reserved.
     4 *  Copyright (C) 2003-2009, 2014, 2016 Apple Inc. All rights reserved.
    55 *  Copyright (C) 2006 Alexey Proskuryakov (ap@webkit.org)
    66 *  Copyright (c) 2015 Canon Inc. All rights reserved.
     
    5050#if PLATFORM(IOS)
    5151#include "ChromeClient.h"
    52 #include "WebSafeGCActivityCallbackIOS.h"
    53 #include "WebSafeIncrementalSweeperIOS.h"
    5452#endif
    5553
     
    245243        ScriptController::initializeThreading();
    246244        vm = &VM::createLeaked(LargeHeap).leakRef();
     245        vm->heap.acquireAccess(); // At any time, we may do things that affect the GC.
    247246#if !PLATFORM(IOS)
    248247        vm->setExclusiveThread(std::this_thread::get_id());
    249248#else
    250         vm->heap.setFullActivityCallback(WebSafeFullGCActivityCallback::create(&vm->heap));
    251         vm->heap.setEdenActivityCallback(WebSafeEdenGCActivityCallback::create(&vm->heap));
    252 
    253         vm->heap.setIncrementalSweeper(std::make_unique<WebSafeIncrementalSweeper>(&vm->heap));
     249        vm->heap.setRunLoop(WebThreadRunLoop());
    254250        vm->heap.machineThreads().addCurrentThread();
    255251#endif
  • trunk/Source/WebCore/bindings/js/WorkerScriptController.cpp

    r208008 r208306  
    5252    , m_workerGlobalScopeWrapper(*m_vm)
    5353{
     54    m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently.
    5455    m_vm->ensureWatchdog();
    5556    initNormalWorldClientData(m_vm.get());
     
    189190}
    190191
     192void WorkerScriptController::releaseHeapAccess()
     193{
     194    m_vm->heap.releaseAccess();
     195}
     196
     197void WorkerScriptController::acquireHeapAccess()
     198{
     199    m_vm->heap.acquireAccess();
     200}
     201
    191202void WorkerScriptController::attachDebugger(JSC::Debugger* debugger)
    192203{
  • trunk/Source/WebCore/bindings/js/WorkerScriptController.h

    r208008 r208306  
    11/*
    2  * Copyright (C) 2008, 2015 Apple Inc. All Rights Reserved.
     2 * Copyright (C) 2008, 2015, 2016 Apple Inc. All Rights Reserved.
    33 * Copyright (C) 2012 Google Inc. All Rights Reserved.
    44 *
     
    7777
    7878        JSC::VM& vm() { return *m_vm; }
     79       
     80        void releaseHeapAccess();
     81        void acquireHeapAccess();
    7982
    8083        void attachDebugger(JSC::Debugger*);
  • trunk/Source/WebCore/dom/EventTarget.cpp

    r207734 r208306  
    132132        return true;
    133133    }
    134     return addEventListener(eventType, listener.releaseNonNull());}
     134    return addEventListener(eventType, listener.releaseNonNull());
     135}
    135136
    136137EventListener* EventTarget::getAttributeEventListener(const AtomicString& eventType)
  • trunk/Source/WebCore/testing/Internals.cpp

    r208300 r208306  
    11/*
    22 * Copyright (C) 2012 Google Inc. All rights reserved.
    3  * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
     3 * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
    44 *
    55 * Redistribution and use in source and binary forms, with or without
     
    32943294#endif
    32953295
     3296void Internals::reportBacktrace()
     3297{
     3298    WTFReportBacktrace();
     3299}
     3300
    32963301} // namespace WebCore
  • trunk/Source/WebCore/testing/Internals.h

    r208300 r208306  
    497497
    498498    bool userPrefersReducedMotion() const;
     499   
     500    void reportBacktrace();
    499501
    500502private:
  • trunk/Source/WebCore/testing/Internals.idl

    r208300 r208306  
    11/*
    22 * Copyright (C) 2012 Google Inc. All rights reserved.
    3  * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
     3 * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
    44 *
    55 * Redistribution and use in source and binary forms, with or without
     
    469469
    470470    boolean userPrefersReducedMotion();
    471 };
     471   
     472    void reportBacktrace();
     473};
  • trunk/Source/WebCore/workers/WorkerRunLoop.cpp

    r208010 r208306  
    174174    }
    175175    MessageQueueWaitResult result;
     176    if (WorkerScriptController* script = context->script())
     177        script->releaseHeapAccess();
    176178    auto task = m_messageQueue.waitForMessageFilteredWithTimeout(result, predicate, absoluteTime);
     179    if (WorkerScriptController* script = context->script())
     180        script->acquireHeapAccess();
    177181
    178182    // If the context is closing, don't execute any further JavaScript tasks (per section 4.1.1 of the Web Workers spec).  However, there may be implementation cleanup tasks in the queue, so keep running through it.
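
The WorkerRunLoop hunk above shows the general rule for blocking on anything other than the GC itself: drop heap access before the wait so the collector can run (and even complete) without stopping the worker, then reacquire it before touching JavaScript state again. A hedged sketch of the same pattern as a reusable helper; Controller and waitForWork are hypothetical stand-ins for WorkerScriptController and the message-queue wait:

    // Controller must expose the releaseHeapAccess()/acquireHeapAccess()
    // methods added in this changeset.
    template<typename Controller, typename WaitFunc>
    auto waitWithoutHeapAccess(Controller* script, WaitFunc&& waitForWork)
    {
        if (script)
            script->releaseHeapAccess(); // the collector may run while we block
        auto result = waitForWork();     // e.g. a MessageQueue wait with timeout
        if (script)
            script->acquireHeapAccess(); // safe to touch the heap again
        return result;
    }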