Changeset 208306 in webkit
Timestamp: Nov 2, 2016, 3:01:04 PM (9 years ago)
Location: trunk/Source
Files: 3 added, 2 deleted, 58 edited
trunk/Source/JavaScriptCore/API/JSBase.cpp
r207653 → r208306
     ExecState* exec = toJS(ctx);
     JSLockHolder locker(exec);
-    exec->vm().heap.collect(CollectionScope::Eden);
+    exec->vm().heap.collectSync(CollectionScope::Eden);
 }
trunk/Source/JavaScriptCore/API/JSManagedValue.mm
r208227 → r208306
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
trunk/Source/JavaScriptCore/CMakeLists.txt
r208238 → r208306
     heap/MutatorState.cpp
     heap/SlotVisitor.cpp
+    heap/StopIfNecessaryTimer.cpp
     heap/Weak.cpp
     heap/WeakBlock.cpp
trunk/Source/JavaScriptCore/ChangeLog
r208299 r208306 1 2016-11-02 Filip Pizlo <fpizlo@apple.com> 2 3 The GC should be in a thread 4 https://bugs.webkit.org/show_bug.cgi?id=163562 5 6 Reviewed by Geoffrey Garen and Andreas Kling. 7 8 In a concurrent GC, the work of collecting happens on a separate thread. This patch 9 implements this, and schedules the thread the way that a concurrent GC thread would be 10 scheduled. But, the GC isn't actually concurrent yet because it calls stopTheWorld() before 11 doing anything and calls resumeTheWorld() after it's done with everything. The next step will 12 be to make it really concurrent by basically calling stopTheWorld()/resumeTheWorld() around 13 bounded snippets of work while making most of the work happen with the world running. Our GC 14 will probably always have stop-the-world phases because the semantics of JSC weak references 15 call for it. 16 17 This implements concurrent GC scheduling. This means that there is no longer a 18 Heap::collect() API. Instead, you can call collectAsync() which makes sure that a GC is 19 scheduled (it will do nothing if one is scheduled or ongoing) or you can call collectSync() 20 to schedule a GC and wait for it to happen. I made our debugging stuff call collectSync(). 21 It should be a goal to never call collectSync() except for debugging or benchmark harness 22 hacks. 23 24 The collector thread is an AutomaticThread, so it won't linger when not in use. It works on 25 a ticket-based system, like you would see at the DMV. A ticket is a 64-bit integer. There are 26 two ticket counters: last granted and last served. When you request a collection, last 27 granted is incremented and its new value given to you. When a collection completes, last 28 served is incremented. collectSync() waits until last served catches up to what last granted 29 had been at the time you requested a GC. This means that if you request a sync GC in the 30 middle of an async GC, you will wait for that async GC to finish and then you will request 31 and wait for your sync GC. 32 33 The synchronization between the collector thread and the main threads is complex. The 34 collector thread needs to be able to ask the main thread to stop. It needs to be able to do 35 some post-GC clean-up, like the synchronous CodeBlock and LargeAllocation sweeps, on the main 36 thread. The collector needs to be able to ask the main thread to execute a cross-modifying 37 code fence before running any JIT code, since the GC might aid the JIT worklist and run JIT 38 finalization. It's possible for the GC to want the main thread to run something at the same 39 time that the main thread wants to wait for the GC. The main thread needs to be able to run 40 non-JSC stuff without causing the GC to completely stall. The main thread needs to be able 41 to query its own state (is there a request to stop?) and change it (running JSC versus not) 42 quickly, since this may happen on hot paths. This kind of intertwined system of requests, 43 notifications, and state changes requires a combination of lock-free algorithms and waiting. 44 So, this is all implemented using a Atomic<unsigned> Heap::m_worldState, which has bits to 45 represent things being requested by the collector and the heap access state of the mutator. I 46 am borrowing a lot of terms that I've seen in other VMs that I've worked on. Here's what they 47 mean: 48 49 - Stop the world: make sure that either the mutator is not running, or that it's not running 50 code that could mess with the heap. 
51 52 - Heap access: the mutator is said to have heap access if it could mess with the heap. 53 54 If you stop the world and the mutator doesn't have heap access, all you're doing is making 55 sure that it will block when it tries to acquire heap access. This means that our GC is 56 already fully concurrent in cases where the GC is requested while the mutator has no heap 57 access. This probably won't happen, but if it did then it should just work. Usually, stopping 58 the world means that we state our shouldStop request with m_worldState, and a future call 59 to Heap::stopIfNecessary() will go to slow path and stop. The act of stopping or waiting to 60 acquire heap access is managed by using ParkingLot API directly on m_worldState. This works 61 out great because it would be very awkward to get the same functionality using locks and 62 condition variables, since we want stopIfNecessary/acquireAccess/requestAccess fast paths 63 that are single atomic instructions (load/CAS/CAS, respectively). The mutator will call these 64 things frequently. Currently we have Heap::stopIfNecessary() polling on every allocator slow 65 path, but we may want to make it even more frequent than that. 66 67 Currently only JSC API clients benefit from the heap access optimization. The DOM forces us 68 to assume that heap access is permanently on, since DOM manipulation doesn't always hold the 69 JSLock. We could still allow the GC to proceed when the runloop is idle by having the GC put 70 a task on the runloop that just calls stopIfNecessary(). 71 72 This is perf neutral. The only behavior change that clients ought to observe is that marking 73 and the weak fixpoint happen on a separate thread. Marking was already parallel so it already 74 handled multiple threads, but now it _never_ runs on the main thread. The weak fixpoint 75 needed some help to be able to run on another thread - mostly because there was some code in 76 IndexedDB that was using thread specifics in the weak fixpoint. 77 78 * API/JSBase.cpp: 79 (JSSynchronousEdenCollectForDebugging): 80 * API/JSManagedValue.mm: 81 (-[JSManagedValue initWithValue:]): 82 * heap/EdenGCActivityCallback.cpp: 83 (JSC::EdenGCActivityCallback::doCollection): 84 * heap/FullGCActivityCallback.cpp: 85 (JSC::FullGCActivityCallback::doCollection): 86 * heap/Heap.cpp: 87 (JSC::Heap::Thread::Thread): 88 (JSC::Heap::Heap): 89 (JSC::Heap::lastChanceToFinalize): 90 (JSC::Heap::markRoots): 91 (JSC::Heap::gatherStackRoots): 92 (JSC::Heap::deleteUnmarkedCompiledCode): 93 (JSC::Heap::collectAllGarbage): 94 (JSC::Heap::collectAsync): 95 (JSC::Heap::collectSync): 96 (JSC::Heap::shouldCollectInThread): 97 (JSC::Heap::collectInThread): 98 (JSC::Heap::stopTheWorld): 99 (JSC::Heap::resumeTheWorld): 100 (JSC::Heap::stopIfNecessarySlow): 101 (JSC::Heap::acquireAccessSlow): 102 (JSC::Heap::releaseAccessSlow): 103 (JSC::Heap::handleDidJIT): 104 (JSC::Heap::handleNeedFinalize): 105 (JSC::Heap::setDidJIT): 106 (JSC::Heap::setNeedFinalize): 107 (JSC::Heap::waitWhileNeedFinalize): 108 (JSC::Heap::finalize): 109 (JSC::Heap::requestCollection): 110 (JSC::Heap::waitForCollection): 111 (JSC::Heap::didFinishCollection): 112 (JSC::Heap::canCollect): 113 (JSC::Heap::shouldCollectHeuristic): 114 (JSC::Heap::shouldCollect): 115 (JSC::Heap::collectIfNecessaryOrDefer): 116 (JSC::Heap::collectAccordingToDeferGCProbability): 117 (JSC::Heap::collect): Deleted. 118 (JSC::Heap::collectWithoutAnySweep): Deleted. 119 (JSC::Heap::collectImpl): Deleted. 
120 * heap/Heap.h: 121 (JSC::Heap::ReleaseAccessScope::ReleaseAccessScope): 122 (JSC::Heap::ReleaseAccessScope::~ReleaseAccessScope): 123 * heap/HeapInlines.h: 124 (JSC::Heap::acquireAccess): 125 (JSC::Heap::releaseAccess): 126 (JSC::Heap::stopIfNecessary): 127 * heap/MachineStackMarker.cpp: 128 (JSC::MachineThreads::gatherConservativeRoots): 129 (JSC::MachineThreads::gatherFromCurrentThread): Deleted. 130 * heap/MachineStackMarker.h: 131 * jit/JITWorklist.cpp: 132 (JSC::JITWorklist::completeAllForVM): 133 * jit/JITWorklist.h: 134 * jsc.cpp: 135 (functionFullGC): 136 (functionEdenGC): 137 * runtime/InitializeThreading.cpp: 138 (JSC::initializeThreading): 139 * runtime/JSLock.cpp: 140 (JSC::JSLock::didAcquireLock): 141 (JSC::JSLock::unlock): 142 (JSC::JSLock::willReleaseLock): 143 * tools/JSDollarVMPrototype.cpp: 144 (JSC::JSDollarVMPrototype::edenGC): 145 1 146 2016-11-02 Michael Saboff <msaboff@apple.com> 2 147 -
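The scheduling change is easiest to see in isolation. Below is a minimal, self-contained model of the DMV-style ticket scheme the ChangeLog describes; it is not part of the patch. std::mutex and std::condition_variable stand in for JSC's AutomaticThread and ParkingLot, and a deque of booleans stands in for the Deque<Optional<CollectionScope>> request queue.

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>

// Simplified model of the ticket scheme from this patch.
class TicketedCollector {
public:
    using Ticket = uint64_t;

    // Mutator side: queue a request and take a ticket (the collectAsync() path).
    Ticket requestCollection()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        m_requests.push_back(true);
        return ++m_lastGranted; // last granted is incremented; its new value is your ticket
    }

    // Mutator side: block until the collector has served your ticket (the collectSync() path).
    void waitForCollection(Ticket ticket)
    {
        std::unique_lock<std::mutex> lock(m_lock);
        m_served.wait(lock, [&] { return m_lastServed >= ticket; });
    }

    // Collector thread: serve one queued request.
    void serveOne()
    {
        // ... stopTheWorld(), mark, sweep, resumeTheWorld() would happen here ...
        std::lock_guard<std::mutex> lock(m_lock);
        m_requests.pop_front();
        ++m_lastServed; // when a collection completes, last served is incremented
        m_served.notify_all();
    }

private:
    std::mutex m_lock;
    std::condition_variable m_served;
    std::deque<bool> m_requests;
    Ticket m_lastGranted { 0 };
    Ticket m_lastServed { 0 };
};
```

A sync request made in the middle of an async collection takes a ticket greater than the one in flight, so it naturally waits out the in-flight collection before its own is served, exactly as the ChangeLog describes.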
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r208238 → r208306
     0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; };
     0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */; };
+    0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */; };
+    0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */; };
+    0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */; };
     0F7CF9561DC1258D0098CC12 /* AtomicsObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */; };
     0F7CF9571DC125900098CC12 /* AtomicsObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */; };
…
     0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedBlockInlines.h; sourceTree = "<group>"; };
     0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
+    0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ReleaseHeapAccessScope.h; sourceTree = "<group>"; };
+    0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StopIfNecessaryTimer.cpp; sourceTree = "<group>"; };
+    0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StopIfNecessaryTimer.h; sourceTree = "<group>"; };
     0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AtomicsObject.cpp; sourceTree = "<group>"; };
     0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AtomicsObject.h; sourceTree = "<group>"; };
…
     0FA762031DB9242300B7A2FD /* MutatorState.h */,
     ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
+    0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
     C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
     14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
     0FCB408515C0A3C30048932B /* SlotVisitorInlines.h */,
+    0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */,
+    0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */,
     142E3132134FF0A600AFADB5 /* Strong.h */,
     145722851437E140005FDE26 /* StrongInlines.h */,
…
     E3FFC8531DAD7D1500DEA53E /* DOMJITValue.h in Headers */,
     0F9D36951AE9CC33000D4DFB /* DFGCleanUpPhase.h in Headers */,
+    0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */,
     A77A424017A0BBFD00A8DB81 /* DFGClobberize.h in Headers */,
     0F37308D1C0BD29100052BFA /* B3PhiChildren.h in Headers */,
…
     BC18C4640E16F5CD00B34460 /* SourceCode.h in Headers */,
     0F7C39FD1C8F659500480151 /* RegExpObjectInlines.h in Headers */,
+    0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */,
     BC18C4630E16F5CD00B34460 /* SourceProvider.h in Headers */,
     E49DC16C12EF294E00184A1F /* SourceProviderCache.h in Headers */,
…
     7C184E2217BEE240007CB63A /* JSPromiseConstructor.cpp in Sources */,
     7C008CDA187124BB00955C24 /* JSPromiseDeferred.cpp in Sources */,
+    0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */,
     7C184E1E17BEE22E007CB63A /* JSPromisePrototype.cpp in Sources */,
     2A05ABD51961DF2400341750 /* JSPropertyNameEnumerator.cpp in Sources */,
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r208235 → r208306
     if (m_jitCode)
         visitor.reportExtraMemoryVisited(m_jitCode->size());
-    if (m_instructions.size())
-        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+    if (m_instructions.size()) {
+        unsigned refCount = m_instructions.refCount();
+        RELEASE_ASSERT(refCount);
+        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
+    }
 
     stronglyVisitStrongReferences(visitor);
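The guard above matters because the shared instruction buffer's cost is split evenly across the CodeBlocks referencing it; if the buffer were non-empty while its refCount read zero, the division would be undefined behavior. A freestanding illustration (assert stands in for RELEASE_ASSERT; the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Split a shared buffer's byte size across its referents, crashing cleanly
// instead of dividing by zero if the reference count is unexpectedly 0.
size_t perReferentCost(size_t byteSize, unsigned refCount)
{
    assert(refCount); // mirrors the patch's RELEASE_ASSERT(refCount)
    return byteSize / refCount;
}
```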
trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp
r204393 → r208306
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
     plan->callback = callback;
     if (Options::useConcurrentJIT()) {
-        Worklist* worklist = ensureGlobalWorklistFor(mode);
+        Worklist& worklist = ensureGlobalWorklistFor(mode);
         if (logCompilationChanges(mode))
-            dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist->queueLength(), ".\n");
-        worklist->enqueue(plan);
+            dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist.queueLength(), ".\n");
+        worklist.enqueue(plan);
         return CompilationDeferred;
     }
trunk/Source/JavaScriptCore/dfg/DFGWorklist.cpp
r207653 → r208306
 #include "DeferGC.h"
 #include "JSCInlines.h"
+#include "ReleaseHeapAccessScope.h"
 #include <mutex>
…
     // There's no way for the GC to be safepointing since we own rightToRun.
-    RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+    if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
+        dataLog("Heap is stoped but here we are! (1)\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     m_plan->compileInThread(*m_longLivedState, &m_data);
-    RELEASE_ASSERT(m_plan->stage == Plan::Cancelled || m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+    if (m_plan->stage != Plan::Cancelled) {
+        if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
+            dataLog("Heap is stopped but here we are! (2)\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+    }
 
     {
…
         m_worklist.m_planCompiled.notifyAll();
     }
-    RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+    RELEASE_ASSERT(!m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped());
 
     return WorkResult::Continue;
…
 {
     DeferGC deferGC(vm.heap);
+
+    // While we are waiting for the compiler to finish, the collector might have already suspended
+    // the compiler and then it will be waiting for us to stop. That's a deadlock. We avoid that
+    // deadlock by relinquishing our heap access, so that the collector pretends that we are stopped
+    // even if we aren't.
+    ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
+
     // Wait for all of the plans for the given VM to complete. The idea here
     // is that we want all of the caller VM's plans to be done. We don't care
…
 static Worklist* theGlobalDFGWorklist;
 
-Worklist* ensureGlobalDFGWorklist()
+Worklist& ensureGlobalDFGWorklist()
 {
     static std::once_flag initializeGlobalWorklistOnceFlag;
…
         theGlobalDFGWorklist = &Worklist::create("DFG Worklist", Options::numberOfDFGCompilerThreads(), Options::priorityDeltaOfDFGCompilerThreads()).leakRef();
     });
+    return *theGlobalDFGWorklist;
+}
+
+Worklist* existingGlobalDFGWorklistOrNull()
+{
     return theGlobalDFGWorklist;
 }
 
-Worklist* existingGlobalDFGWorklistOrNull()
-{
-    return theGlobalDFGWorklist;
-}
-
 static Worklist* theGlobalFTLWorklist;
 
-Worklist* ensureGlobalFTLWorklist()
+Worklist& ensureGlobalFTLWorklist()
 {
     static std::once_flag initializeGlobalWorklistOnceFlag;
…
         theGlobalFTLWorklist = &Worklist::create("FTL Worklist", Options::numberOfFTLCompilerThreads(), Options::priorityDeltaOfFTLCompilerThreads()).leakRef();
     });
+    return *theGlobalFTLWorklist;
+}
+
+Worklist* existingGlobalFTLWorklistOrNull()
+{
     return theGlobalFTLWorklist;
 }
 
-Worklist* existingGlobalFTLWorklistOrNull()
-{
-    return theGlobalFTLWorklist;
-}
-
-Worklist* ensureGlobalWorklistFor(CompilationMode mode)
+Worklist& ensureGlobalWorklistFor(CompilationMode mode)
 {
     switch (mode) {
     case InvalidCompilationMode:
         RELEASE_ASSERT_NOT_REACHED();
-        return 0;
+        return ensureGlobalDFGWorklist();
     case DFGMode:
         return ensureGlobalDFGWorklist();
…
     }
     RELEASE_ASSERT_NOT_REACHED();
-    return 0;
+    return ensureGlobalDFGWorklist();
 }
…
 {
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
+        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
             worklist->completeAllPlansForVM(vm);
     }
…
 {
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
+        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
             worklist->rememberCodeBlocks(vm);
     }
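The new ReleaseHeapAccessScope.h added by this changeset is not excerpted here, but the comment above implies its shape. A plausible RAII sketch, assuming only the acquireAccess()/releaseAccess() declarations from Heap.h (the class name here is a stand-in, not the committed header):

```cpp
// Drop heap access around a blocking wait so the collector can treat this
// thread as stopped, then reacquire it on the way out.
class ReleaseHeapAccessScopeSketch {
public:
    explicit ReleaseHeapAccessScopeSketch(JSC::Heap& heap)
        : m_heap(heap)
    {
        m_heap.releaseAccess(); // the collector may now run as if we were stopped
    }

    ~ReleaseHeapAccessScopeSketch()
    {
        m_heap.acquireAccess(); // blocks here if a collection is still in progress
    }

private:
    JSC::Heap& m_heap;
};
```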
trunk/Source/JavaScriptCore/dfg/DFGWorklist.h
r207545 → r208306
 
 // For DFGMode compilations.
-Worklist* ensureGlobalDFGWorklist();
+Worklist& ensureGlobalDFGWorklist();
 Worklist* existingGlobalDFGWorklistOrNull();
 
 // For FTLMode and FTLForOSREntryMode compilations.
-Worklist* ensureGlobalFTLWorklist();
+Worklist& ensureGlobalFTLWorklist();
 Worklist* existingGlobalFTLWorklistOrNull();
 
-Worklist* ensureGlobalWorklistFor(CompilationMode);
+Worklist& ensureGlobalWorklistFor(CompilationMode);
 
 // Simplify doing things for all worklists.
 inline unsigned numberOfWorklists() { return 2; }
-inline Worklist* worklistForIndexOrNull(unsigned index)
+inline Worklist& ensureWorklistForIndex(unsigned index)
+{
+    switch (index) {
+    case 0:
+        return ensureGlobalDFGWorklist();
+    case 1:
+        return ensureGlobalFTLWorklist();
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        return ensureGlobalDFGWorklist();
+    }
+}
+inline Worklist* existingWorklistForIndexOrNull(unsigned index)
 {
     switch (index) {
…
     }
 }
+inline Worklist& existingWorklistForIndex(unsigned index)
+{
+    Worklist* result = existingWorklistForIndexOrNull(index);
+    RELEASE_ASSERT(result);
+    return *result;
+}
 
 void completeAllPlansForVM(VM&);
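Callers now pick between two flavors: the ensure* accessors force creation and return a reference (so no null checks), while existing*OrNull makes absence explicit. An illustrative fragment, with `mode`, `plan`, and `vm` standing for the caller's values as in DFGDriver.cpp and completeAllPlansForVM above:

```cpp
// Enqueue path: creation is forced, so the result cannot be null.
JSC::DFG::Worklist& worklist = JSC::DFG::ensureGlobalWorklistFor(mode);
worklist.enqueue(plan);

// Iterate-all path: worklists that were never created are simply skipped.
for (unsigned i = JSC::DFG::numberOfWorklists(); i--;) {
    if (JSC::DFG::Worklist* existing = JSC::DFG::existingWorklistForIndexOrNull(i))
        existing->completeAllPlansForVM(vm);
}
```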
trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp
r207653 → r208306
     if (safepointResult.didGetCancelled())
         return;
-    RELEASE_ASSERT(state.graph.m_vm.heap.mutatorState() != MutatorState::HelpingGC);
+    RELEASE_ASSERT(!state.graph.m_vm.heap.collectorBelievesThatTheWorldIsStopped());
 
     if (state.allocationFailed)
trunk/Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp
r207855 → r208306
 void EdenGCActivityCallback::doCollection()
 {
-    m_vm->heap.collect(CollectionScope::Eden);
+    m_vm->heap.collectAsync(CollectionScope::Eden);
 }
trunk/Source/JavaScriptCore/heap/FullGCActivityCallback.cpp
r207855 → r208306
 #endif
 
-    heap.collect(CollectionScope::Full);
+    heap.collectAsync(CollectionScope::Full);
 }
trunk/Source/JavaScriptCore/heap/GCActivityCallback.h
r207855 → r208306
 /*
- * Copyright (C) 2010 Apple Inc. All rights reserved.
+ * Copyright (C) 2010, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 class Heap;
 
-class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer, public ThreadSafeRefCounted<GCActivityCallback> {
-    WTF_MAKE_FAST_ALLOCATED;
+class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer {
 public:
     static RefPtr<FullGCActivityCallback> createFullTimer(Heap*);
trunk/Source/JavaScriptCore/heap/Heap.cpp
r208209 → r208306
 #include "ShadowChicken.h"
 #include "SuperSampler.h"
+#include "StopIfNecessaryTimer.h"
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
…
 
 } // anonymous namespace
+
+class Heap::Thread : public AutomaticThread {
+public:
+    Thread(const LockHolder& locker, Heap& heap)
+        : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition)
+        , m_heap(heap)
+    {
+    }
+
+protected:
+    PollResult poll(const LockHolder& locker) override
+    {
+        if (m_heap.m_threadShouldStop) {
+            m_heap.notifyThreadStopping(locker);
+            return PollResult::Stop;
+        }
+        if (m_heap.shouldCollectInThread(locker))
+            return PollResult::Work;
+        return PollResult::Wait;
+    }
+
+    WorkResult work() override
+    {
+        m_heap.collectInThread();
+        return WorkResult::Continue;
+    }
+
+    void threadDidStart() override
+    {
+        WTF::registerGCThread(GCThreadType::Main);
+    }
+
+private:
+    Heap& m_heap;
+};
 
 Heap::Heap(VM* vm, HeapType heapType)
…
     , m_fullActivityCallback(GCActivityCallback::createFullTimer(this))
     , m_edenActivityCallback(GCActivityCallback::createEdenTimer(this))
-    , m_sweeper(std::make_unique<IncrementalSweeper>(this))
+    , m_sweeper(adoptRef(new IncrementalSweeper(this)))
+    , m_stopIfNecessaryTimer(adoptRef(new StopIfNecessaryTimer(vm)))
     , m_deferralDepth(0)
 #if USE(FOUNDATION)
…
 #endif
     , m_helperClient(&heapHelperPool())
-{
+    , m_threadLock(Box<Lock>::create())
+    , m_threadCondition(AutomaticThreadCondition::create())
+{
+    m_worldState.store(0);
+
     if (Options::verifyHeap())
         m_verifier = std::make_unique<HeapVerifier>(this, Options::numberOfGCCyclesToRecordForVerification());
+
+    LockHolder locker(*m_threadLock);
+    m_thread = adoptRef(new Thread(locker, *this));
 }
…
 {
     RELEASE_ASSERT(!m_vm->entryScope);
-    RELEASE_ASSERT(!m_collectionScope);
     RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
 
+    // Carefully bring the thread down. We need to use waitForCollector() until we know that there
+    // won't be any other collections.
+    bool stopped = false;
+    {
+        LockHolder locker(*m_threadLock);
+        stopped = m_thread->tryStop(locker);
+        if (!stopped) {
+            m_threadShouldStop = true;
+            m_threadCondition->notifyOne(locker);
+        }
+    }
+    if (!stopped) {
+        waitForCollector(
+            [&] (const LockHolder&) -> bool {
+                return m_threadIsStopping;
+            });
+        // It's now safe to join the thread, since we know that there will not be any more collections.
+        m_thread->join();
+    }
+
     m_arrayBuffers.lastChanceToFinalize();
     m_codeBlocks->lastChanceToFinalize();
…
 }
 
-void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
+void Heap::markRoots(double gcStartTime)
 {
     TimingScope markRootsTimingScope(*this, "Heap::markRoots");
 
-    ASSERT(isValidThreadState(m_vm));
-
     HeapRootVisitor heapRootVisitor(m_slotVisitor);
 
…
     ConservativeRoots conservativeRoots(*this);
     SuperSamplerScope superSamplerScope(false);
-    gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
+    gatherStackRoots(conservativeRoots);
     gatherJSStackRoots(conservativeRoots);
     gatherScratchBufferRoots(conservativeRoots);
…
 }
 
-void Heap::gatherStackRoots(ConservativeRoots& roots, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
+void Heap::gatherStackRoots(ConservativeRoots& roots)
 {
     m_jitStubRoutines->clearMarks();
-    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
+    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
 }
…
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->visitWeakReferences(m_slotVisitor);
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).visitWeakReferences(m_slotVisitor);
 
     if (Options::logGC() == GCLogging::Verbose)
…
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->removeDeadPlans(*m_vm);
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).removeDeadPlans(*m_vm);
 #endif
 }
…
 {
     clearUnmarkedExecutables();
-    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_collectionScope);
+    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_lastCollectionScope);
     m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
 }
…
 void Heap::collectAllGarbage()
 {
-    SuperSamplerScope superSamplerScope(false);
     if (!m_isSafeToCollect)
         return;
-
-    collectWithoutAnySweep(CollectionScope::Full);
+
+    collectSync(CollectionScope::Full);
 
     DeferGCForAWhile deferGC(*this);
…
 }
 
-void Heap::collect(Optional<CollectionScope> scope)
-{
-    SuperSamplerScope superSamplerScope(false);
+void Heap::collectAsync(Optional<CollectionScope> scope)
+{
     if (!m_isSafeToCollect)
         return;
-
-    collectWithoutAnySweep(scope);
-}
-
-NEVER_INLINE void Heap::collectWithoutAnySweep(Optional<CollectionScope> scope)
-{
-    void* stackTop;
-    ALLOCATE_AND_GET_REGISTER_STATE(registers);
-
-    collectImpl(scope, wtfThreadData().stack().origin(), &stackTop, registers);
-
-    sanitizeStackForVM(m_vm);
-}
-
-NEVER_INLINE void Heap::collectImpl(Optional<CollectionScope> scope, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
-{
+
+    bool alreadyRequested = false;
+    {
+        LockHolder locker(*m_threadLock);
+        for (Optional<CollectionScope> request : m_requests) {
+            if (scope) {
+                if (scope == CollectionScope::Eden) {
+                    alreadyRequested = true;
+                    break;
+                } else {
+                    RELEASE_ASSERT(scope == CollectionScope::Full);
+                    if (request == CollectionScope::Full) {
+                        alreadyRequested = true;
+                        break;
+                    }
+                }
+            } else {
+                if (!request || request == CollectionScope::Full) {
+                    alreadyRequested = true;
+                    break;
+                }
+            }
+        }
+    }
+    if (alreadyRequested)
+        return;
+
+    requestCollection(scope);
+}
+
+void Heap::collectSync(Optional<CollectionScope> scope)
+{
+    if (!m_isSafeToCollect)
+        return;
+
+    waitForCollection(requestCollection(scope));
+}
+
+bool Heap::shouldCollectInThread(const LockHolder&)
+{
+    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
+    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+
+    return !m_requests.isEmpty();
+}
+
+void Heap::collectInThread()
+{
+    Optional<CollectionScope> scope;
+    {
+        LockHolder locker(*m_threadLock);
+        RELEASE_ASSERT(!m_requests.isEmpty());
+        scope = m_requests.first();
+    }
+
     SuperSamplerScope superSamplerScope(false);
-    TimingScope collectImplTimingScope(scope, "Heap::collectImpl");
+    TimingScope collectImplTimingScope(scope, "Heap::collectInThread");
 
 #if ENABLE(ALLOCATION_LOGGING)
…
 #endif
 
+    stopTheWorld();
+
     double before = 0;
     if (Options::logGC()) {
…
     {
         DeferGCForAWhile awhile(*this);
-        JITWorklist::instance()->completeAllForVM(*m_vm);
+        if (JITWorklist::instance()->completeAllForVM(*m_vm))
+            setGCDidJIT();
     }
 #endif // ENABLE(JIT)
…
     vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
 
-    RELEASE_ASSERT(!m_deferralDepth);
-    ASSERT(vm()->currentThreadIsHoldingAPILock());
-    RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
     ASSERT(m_isSafeToCollect);
-    RELEASE_ASSERT(!m_collectionScope);
-
-    suspendCompilerThreads();
+    if (m_collectionScope) {
+        dataLog("Collection scope already set during GC: ", m_collectionScope, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
     willStartCollection(scope);
-    {
-        HelpingGCScope helpingHeapScope(*this);
-
-        collectImplTimingScope.setScope(*this);
-
-        gcStartTime = WTF::monotonicallyIncreasingTime();
-        if (m_verifier) {
-            // Verify that live objects from the last GC cycle haven't been corrupted by
-            // mutators before we begin this new GC cycle.
-            m_verifier->verify(HeapVerifier::Phase::BeforeGC);
+    collectImplTimingScope.setScope(*this);
+
+    gcStartTime = WTF::monotonicallyIncreasingTime();
+    if (m_verifier) {
+        // Verify that live objects from the last GC cycle haven't been corrupted by
+        // mutators before we begin this new GC cycle.
+        m_verifier->verify(HeapVerifier::Phase::BeforeGC);
 
-            m_verifier->initializeGCCycle();
-            m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
-        }
-
-        flushOldStructureIDTables();
-        stopAllocation();
-        prepareForMarking();
-        flushWriteBarrierBuffer();
-
-        if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
-            cache->clear();
-
-        markRoots(gcStartTime, stackOrigin, stackTop, calleeSavedRegisters);
-
-        if (m_verifier) {
-            m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
-            m_verifier->verify(HeapVerifier::Phase::AfterMarking);
-        }
-
-        if (vm()->typeProfiler())
-            vm()->typeProfiler()->invalidateTypeSetCache();
-
-        reapWeakHandles();
-        pruneStaleEntriesFromWeakGCMaps();
-        sweepArrayBuffers();
-        snapshotUnswept();
-        finalizeUnconditionalFinalizers();
-        removeDeadCompilerWorklistEntries();
-        deleteUnmarkedCompiledCode();
-        deleteSourceProviderCaches();
-
-        notifyIncrementalSweeper();
-        m_codeBlocks->writeBarrierCurrentlyExecuting(this);
-        m_codeBlocks->clearCurrentlyExecuting();
-
-        prepareForAllocation();
-        updateAllocationLimits();
-    }
+        m_verifier->initializeGCCycle();
+        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+    }
+
+    flushOldStructureIDTables();
+    stopAllocation();
+    prepareForMarking();
+    flushWriteBarrierBuffer();
+
+    if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
+        cache->clear();
+
+    markRoots(gcStartTime);
+
+    if (m_verifier) {
+        m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
+        m_verifier->verify(HeapVerifier::Phase::AfterMarking);
+    }
+
+    if (vm()->typeProfiler())
+        vm()->typeProfiler()->invalidateTypeSetCache();
+
+    reapWeakHandles();
+    pruneStaleEntriesFromWeakGCMaps();
+    sweepArrayBuffers();
+    snapshotUnswept();
+    finalizeUnconditionalFinalizers();
+    removeDeadCompilerWorklistEntries();
+    notifyIncrementalSweeper();
+
+    m_codeBlocks->writeBarrierCurrentlyExecuting(this);
+    m_codeBlocks->clearCurrentlyExecuting();
+
+    prepareForAllocation();
+    updateAllocationLimits();
+
     didFinishCollection(gcStartTime);
-    resumeCompilerThreads();
-    sweepLargeAllocations();
 
     if (m_verifier) {
…
     }
 
+    if (false) {
+        dataLog("Heap state after GC:\n");
+        m_objectSpace.dumpBits();
+    }
+
     if (Options::logGC()) {
         double after = currentTimeMS();
…
     }
 
-    if (false) {
-        dataLog("Heap state after GC:\n");
-        m_objectSpace.dumpBits();
-    }
+    {
+        LockHolder locker(*m_threadLock);
+        m_requests.removeFirst();
+        m_lastServedTicket++;
+        clearMutatorWaiting();
+    }
+    ParkingLot::unparkAll(&m_worldState);
+
+    setNeedFinalize();
+    resumeTheWorld();
+}
+
+void Heap::stopTheWorld()
+{
+    RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
+    waitWhileNeedFinalize();
+    stopTheMutator();
+    suspendCompilerThreads();
+    m_collectorBelievesThatTheWorldIsStopped = true;
+}
+
+void Heap::resumeTheWorld()
+{
+    RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped);
+    m_collectorBelievesThatTheWorldIsStopped = false;
+    resumeCompilerThreads();
+    resumeTheMutator();
+}
+
+void Heap::stopTheMutator()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if ((oldState & stoppedBit)
+            && (oldState & shouldStopBit))
+            return;
+
+        // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could
+        // switch to that if it turned out to be less confusing, but then it would not give the
+        // mutator the opportunity to react to the world being stopped.
+        if (oldState & mutatorWaitingBit) {
+            if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
+                ParkingLot::unparkAll(&m_worldState);
+            continue;
+        }
+
+        if (!(oldState & hasAccessBit)
+            || (oldState & stoppedBit)) {
+            // We can stop the world instantly.
+            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit))
+                return;
+            continue;
+        }
+
+        RELEASE_ASSERT(oldState & hasAccessBit);
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+        m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit);
+        m_stopIfNecessaryTimer->scheduleSoon();
+        ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit);
+    }
+}
+
+void Heap::resumeTheMutator()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & shouldStopBit);
+
+        if (!(oldState & hasAccessBit)) {
+            // We can resume the world instantly.
+            if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) {
+                ParkingLot::unparkAll(&m_worldState);
+                return;
+            }
+            continue;
+        }
+
+        // We can tell the world to resume.
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) {
+            ParkingLot::unparkAll(&m_worldState);
+            return;
+        }
+    }
+}
+
+void Heap::stopIfNecessarySlow()
+{
+    while (stopIfNecessarySlow(m_worldState.load())) { }
+    handleGCDidJIT();
+}
+
+bool Heap::stopIfNecessarySlow(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+
+    if (handleNeedFinalize(oldState))
+        return true;
+
+    if (!(oldState & shouldStopBit)) {
+        if (!(oldState & stoppedBit))
+            return false;
+        m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit);
+        return true;
+    }
+
+    m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit);
+    ParkingLot::unparkAll(&m_worldState);
+    ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit);
+    return true;
+}
+
+template<typename Func>
+void Heap::waitForCollector(const Func& func)
+{
+    for (;;) {
+        bool done;
+        {
+            LockHolder locker(*m_threadLock);
+            done = func(locker);
+            if (!done) {
+                setMutatorWaiting();
+                // At this point, the collector knows that we intend to wait, and he will clear the
+                // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit
+                // prevents us from parking except if there is also stop-the-world. Unparking after
+                // clearing means that if the clearing happens after we park, then we will unpark.
+            }
+        }
+
+        // If we're in a stop-the-world scenario, we need to wait for that even if done is true.
+        unsigned oldState = m_worldState.load();
+        if (stopIfNecessarySlow(oldState))
+            continue;
+
+        if (done) {
+            clearMutatorWaiting(); // Clean up just in case.
+            return;
+        }
+
+        // If mutatorWaitingBit is still set then we want to wait.
+        ParkingLot::compareAndPark(&m_worldState, oldState | mutatorWaitingBit);
+    }
+}
+
+void Heap::acquireAccessSlow()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(!(oldState & hasAccessBit));
+
+        if (oldState & shouldStopBit) {
+            RELEASE_ASSERT(oldState & stoppedBit);
+            // Wait until we're not stopped anymore.
+            ParkingLot::compareAndPark(&m_worldState, oldState);
+            continue;
+        }
+
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+        unsigned newState = oldState | hasAccessBit;
+        if (m_worldState.compareExchangeWeak(oldState, newState)) {
+            handleGCDidJIT();
+            handleNeedFinalize();
+            return;
+        }
+    }
+}
+
+void Heap::releaseAccessSlow()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & hasAccessBit);
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+
+        if (handleNeedFinalize(oldState))
+            continue;
+
+        if (oldState & shouldStopBit) {
+            unsigned newState = (oldState & ~hasAccessBit) | stoppedBit;
+            if (m_worldState.compareExchangeWeak(oldState, newState)) {
+                ParkingLot::unparkAll(&m_worldState);
+                return;
+            }
+            continue;
+        }
+
+        RELEASE_ASSERT(!(oldState & shouldStopBit));
+
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit))
+            return;
+    }
+}
+
+bool Heap::handleGCDidJIT(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    if (!(oldState & gcDidJITBit))
+        return false;
+    if (m_worldState.compareExchangeWeak(oldState, oldState & ~gcDidJITBit)) {
+        WTF::crossModifyingCodeFence();
+        return true;
+    }
+    return true;
+}
+
+bool Heap::handleNeedFinalize(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    if (!(oldState & needFinalizeBit))
+        return false;
+    if (m_worldState.compareExchangeWeak(oldState, oldState & ~needFinalizeBit)) {
+        finalize();
+        // Wake up anyone waiting for us to finalize. Note that they may have woken up already, in
+        // which case they would be waiting for us to release heap access.
+        ParkingLot::unparkAll(&m_worldState);
+        return true;
+    }
+    return true;
+}
+
+void Heap::handleGCDidJIT()
+{
+    while (handleGCDidJIT(m_worldState.load())) { }
+}
+
+void Heap::handleNeedFinalize()
+{
+    while (handleNeedFinalize(m_worldState.load())) { }
+}
+
+void Heap::setGCDidJIT()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & stoppedBit);
+        if (m_worldState.compareExchangeWeak(oldState, oldState | gcDidJITBit))
+            return;
+    }
+}
+
+void Heap::setNeedFinalize()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (m_worldState.compareExchangeWeak(oldState, oldState | needFinalizeBit)) {
+            m_stopIfNecessaryTimer->scheduleSoon();
+            return;
+        }
+    }
+}
+
+void Heap::waitWhileNeedFinalize()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (!(oldState & needFinalizeBit)) {
+            // This means that either there was no finalize request or the main thread will finalize
+            // with heap access, so a subsequent call to stopTheWorld() will return only when
+            // finalize finishes.
+            return;
+        }
+        ParkingLot::compareAndPark(&m_worldState, oldState);
+    }
+}
+
+unsigned Heap::setMutatorWaiting()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        unsigned newState = oldState | mutatorWaitingBit;
+        if (m_worldState.compareExchangeWeak(oldState, newState))
+            return newState;
+    }
+}
+
+void Heap::clearMutatorWaiting()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
+            return;
+    }
+}
+
+void Heap::notifyThreadStopping(const LockHolder&)
+{
+    m_threadIsStopping = true;
+    clearMutatorWaiting();
+    ParkingLot::unparkAll(&m_worldState);
+}
+
+void Heap::finalize()
+{
+    HelpingGCScope helpingGCScope(*this);
+    deleteUnmarkedCompiledCode();
+    deleteSourceProviderCaches();
+    sweepLargeAllocations();
+}
+
+Heap::Ticket Heap::requestCollection(Optional<CollectionScope> scope)
+{
+    stopIfNecessary();
+
+    ASSERT(vm()->currentThreadIsHoldingAPILock());
+    RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
+
+    sanitizeStackForVM(m_vm);
+
+    LockHolder locker(*m_threadLock);
+    m_requests.append(scope);
+    m_lastGrantedTicket++;
+    m_threadCondition->notifyOne(locker);
+    return m_lastGrantedTicket;
+}
+
+void Heap::waitForCollection(Ticket ticket)
+{
+    waitForCollector(
+        [&] (const LockHolder&) -> bool {
+            return m_lastServedTicket >= ticket;
+        });
 }
…
 {
 #if ENABLE(DFG_JIT)
-    ASSERT(m_suspendedCompilerWorklists.isEmpty());
-    for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
-            m_suspendedCompilerWorklists.append(worklist);
-            worklist->suspendAllThreads();
-        }
-    }
+    // We ensure the worklists so that it's not possible for the mutator to start a new worklist
+    // after we have suspended the ones that he had started before. That's not very expensive since
+    // the worklists use AutomaticThreads anyway.
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::ensureWorklistForIndex(i).suspendAllThreads();
 #endif
 }
…
 
     RELEASE_ASSERT(m_collectionScope);
+    m_lastCollectionScope = m_collectionScope;
     m_collectionScope = Nullopt;
 
…
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->resumeAllThreads();
-    m_suspendedCompilerWorklists.clear();
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).resumeAllThreads();
 #endif
 }
 
-void Heap::setFullActivityCallback(PassRefPtr<FullGCActivityCallback> activityCallback)
-{
-    m_fullActivityCallback = activityCallback;
-}
-
-void Heap::setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback> activityCallback)
-{
-    m_edenActivityCallback = activityCallback;
-}
-
 GCActivityCallback* Heap::fullActivityCallback()
 {
…
 {
     return m_edenActivityCallback.get();
 }
-
-void Heap::setIncrementalSweeper(std::unique_ptr<IncrementalSweeper> sweeper)
-{
-    m_sweeper = WTFMove(sweeper);
-}
…
 }
 
-bool Heap::shouldCollect()
+bool Heap::canCollect()
 {
     if (isDeferred())
…
     if (collectionScope() || mutatorState() == MutatorState::HelpingGC)
         return false;
+    return true;
+}
+
+bool Heap::shouldCollectHeuristic()
+{
     if (Options::gcMaxHeapSize())
         return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize();
     return m_bytesAllocatedThisCycle > m_maxEdenSize;
+}
+
+bool Heap::shouldCollect()
+{
+    return canCollect() && shouldCollectHeuristic();
 }
…
 bool Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
 {
-    if (!shouldCollect())
+    if (!canCollect())
+        return false;
+
+    if (deferralContext) {
+        deferralContext->m_shouldGC |=
+            !!(m_worldState.load() & (shouldStopBit | needFinalizeBit | gcDidJITBit));
+    } else
+        stopIfNecessary();
+
+    if (!shouldCollectHeuristic())
         return false;
 
…
         deferralContext->m_shouldGC = true;
     else
-        collect();
+        collectAsync();
     return true;
…
 
     if (randomNumber() < Options::deferGCProbability()) {
-        collect();
+        collectAsync();
         return;
…
 }
 
+#if USE(CF)
+void Heap::setRunLoop(CFRunLoopRef runLoop)
+{
+    m_runLoop = runLoop;
+    m_fullActivityCallback->setRunLoop(runLoop);
+    m_edenActivityCallback->setRunLoop(runLoop);
+    m_sweeper->setRunLoop(runLoop);
+}
+#endif // USE(CF)
+
 } // namespace JSC
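From an embedder's point of view, the new entry points are simple. A hedged usage sketch, assuming a live JSC::VM with the API lock held:

```cpp
// How the replacement APIs are meant to be called, per the ChangeLog.
void gcExamples(JSC::VM& vm)
{
    // Fire-and-forget: schedules a full collection on the collector thread and
    // returns immediately; redundant requests of equal or lesser strength are
    // coalesced against the pending queue.
    vm.heap.collectAsync(JSC::CollectionScope::Full);

    // Debugging/benchmark-harness path: take a ticket and block until the
    // collector thread has served it, waiting out any in-flight GC first.
    vm.heap.collectSync(JSC::CollectionScope::Full);
}
```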
trunk/Source/JavaScriptCore/heap/Heap.h
r207855 → r208306
 #include "WriteBarrierBuffer.h"
 #include "WriteBarrierSupport.h"
+#include <wtf/AutomaticThread.h>
+#include <wtf/Deque.h>
 #include <wtf/HashCountedSet.h>
 #include <wtf/HashSet.h>
…
 class LLIntOffsetsExtractor;
 class MarkedArgumentBuffer;
+class StopIfNecessaryTimer;
 class VM;
…
     JS_EXPORT_PRIVATE GCActivityCallback* fullActivityCallback();
     JS_EXPORT_PRIVATE GCActivityCallback* edenActivityCallback();
-    JS_EXPORT_PRIVATE void setFullActivityCallback(PassRefPtr<FullGCActivityCallback>);
-    JS_EXPORT_PRIVATE void setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback>);
     JS_EXPORT_PRIVATE void setGarbageCollectionTimerEnabled(bool);
 
     JS_EXPORT_PRIVATE IncrementalSweeper* sweeper();
-    JS_EXPORT_PRIVATE void setIncrementalSweeper(std::unique_ptr<IncrementalSweeper>);
 
     void addObserver(HeapObserver* observer) { m_observers.append(observer); }
…
     MutatorState mutatorState() const { return m_mutatorState; }
     Optional<CollectionScope> collectionScope() const { return m_collectionScope; }
+    bool hasHeapAccess() const;
+    bool mutatorIsStopped() const;
+    bool collectorBelievesThatTheWorldIsStopped() const;
 
     // We're always busy on the collection threads. On the main thread, this returns true if we're
…
     JS_EXPORT_PRIVATE void collectAllGarbage();
 
+    bool canCollect();
+    bool shouldCollectHeuristic();
     bool shouldCollect();
-    JS_EXPORT_PRIVATE void collect(Optional<CollectionScope> = Nullopt);
+
+    // Queue up a collection. Returns immediately. This will not queue a collection if a collection
+    // of equal or greater strength exists. Full collections are stronger than Nullopt collections
+    // and Nullopt collections are stronger than Eden collections. Nullopt means that the GC can
+    // choose Eden or Full. This implies that if you request a GC while that GC is ongoing, nothing
+    // will happen.
+    JS_EXPORT_PRIVATE void collectAsync(Optional<CollectionScope> = Nullopt);
+
+    // Queue up a collection and wait for it to complete. This won't return until you get your own
+    // complete collection. For example, if there was an ongoing asynchronous collection at the time
+    // you called this, then this would wait for that one to complete and then trigger your
+    // collection and then return. In weird cases, there could be multiple GC requests in the backlog
+    // and this will wait for that backlog before running its GC and returning.
+    JS_EXPORT_PRIVATE void collectSync(Optional<CollectionScope> = Nullopt);
+
     bool collectIfNecessaryOrDefer(GCDeferralContext* = nullptr); // Returns true if it did collect.
     void collectAccordingToDeferGCProbability();
…
     const unsigned* addressOfBarrierThreshold() const { return &m_barrierThreshold; }
 
+    // If true, the GC believes that the mutator is currently messing with the heap. We call this
+    // "having heap access". The GC may block if the mutator is in this state. If false, the GC may
+    // currently be doing things to the heap that make the heap unsafe to access for the mutator.
+    bool hasAccess() const;
+
+    // If the mutator does not currently have heap access, this function will acquire it. If the GC
+    // is currently using the lack of heap access to do dangerous things to the heap then this
+    // function will block, waiting for the GC to finish. It's not valid to call this if the mutator
+    // already has heap access. The mutator is required to precisely track whether or not it has
+    // heap access.
+    //
+    // It's totally fine to acquireAccess() upon VM instantiation and keep it that way. This is how
+    // WebCore uses us. For most other clients, JSLock does acquireAccess()/releaseAccess() for you.
+    void acquireAccess();
+
+    // Releases heap access. If the GC is blocking waiting to do bad things to the heap, it will be
+    // allowed to run now.
+    //
+    // Ordinarily, you should use the ReleaseHeapAccessScope to release and then reacquire heap
+    // access. You should do this anytime you're about to perform a blocking operation, like waiting
+    // on the ParkingLot.
+    void releaseAccess();
+
+    // This is like a super optimized way of saying:
+    //
+    //     releaseAccess()
+    //     acquireAccess()
+    //
+    // The fast path is an inlined relaxed load and branch. The slow path will block the mutator if
+    // the GC wants to do bad things to the heap.
+    //
+    // All allocations logically call this. As an optimization to improve GC progress, you can call
+    // this anywhere that you can afford a load-branch and where an object allocation would have been
+    // safe.
+    //
+    // The GC will also push a stopIfNecessary() event onto the runloop of the thread that
+    // instantiated the VM whenever it wants the mutator to stop. This means that if you never block
+    // but instead use the runloop to wait for events, then you could safely run in a mode where the
+    // mutator has permanent heap access (like the DOM does). If you have good event handling
+    // discipline (i.e. you don't block the runloop) then you can be sure that stopIfNecessary() will
+    // already be called for you at the right times.
+    void stopIfNecessary();
+
 #if USE(CF)
     CFRunLoopRef runLoop() const { return m_runLoop.get(); }
+    JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
 #endif // USE(CF)
…
     friend class VM;
     friend class WeakSet;
+
+    class Thread;
+    friend class Thread;
+
     template<typename T> friend void* allocateCell(Heap&);
     template<typename T> friend void* allocateCell(Heap&, size_t);
     template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*);
     template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*, size_t);
-
-    void collectWithoutAnySweep(Optional<CollectionScope> = Nullopt);
 
     void* allocateWithDestructor(size_t); // For use with objects with destructors.
…
     JS_EXPORT_PRIVATE void reportExtraMemoryAllocatedSlowCase(size_t);
     JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t);
-
-    void collectImpl(Optional<CollectionScope>, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
-
+
+    bool shouldCollectInThread(const LockHolder&);
+    void collectInThread();
+
+    void stopTheWorld();
+    void resumeTheWorld();
+
+    void stopTheMutator();
+    void resumeTheMutator();
+
+    void stopIfNecessarySlow();
+    bool stopIfNecessarySlow(unsigned extraStateBits);
+
+    template<typename Func>
+    void waitForCollector(const Func&);
+
+    JS_EXPORT_PRIVATE void acquireAccessSlow();
+    JS_EXPORT_PRIVATE void releaseAccessSlow();
+
+    bool handleGCDidJIT(unsigned);
+    bool handleNeedFinalize(unsigned);
+    void handleGCDidJIT();
+    void handleNeedFinalize();
+
+    void setGCDidJIT();
+    void setNeedFinalize();
+    void waitWhileNeedFinalize();
+
+    unsigned setMutatorWaiting();
+    void clearMutatorWaiting();
+    void notifyThreadStopping(const LockHolder&);
+
+    typedef uint64_t Ticket;
+    Ticket requestCollection(Optional<CollectionScope>);
+    void waitForCollection(Ticket);
+
     void suspendCompilerThreads();
     void willStartCollection(Optional<CollectionScope>);
…
     void prepareForMarking();
 
-    void markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
-    void gatherStackRoots(ConservativeRoots&, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
+    void markRoots(double gcStartTime);
+    void gatherStackRoots(ConservativeRoots&);
     void gatherJSStackRoots(ConservativeRoots&);
     void gatherScratchBufferRoots(ConservativeRoots&);
…
     void gatherExtraHeapSnapshotData(HeapProfiler&);
     void removeDeadHeapSnapshotNodes(HeapProfiler&);
+    void finalize();
     void sweepLargeAllocations();
…
     Optional<CollectionScope> m_collectionScope;
+    Optional<CollectionScope> m_lastCollectionScope;
     MutatorState m_mutatorState { MutatorState::Running };
     StructureIDTable m_structureIDTable;
…
     RefPtr<FullGCActivityCallback> m_fullActivityCallback;
     RefPtr<GCActivityCallback> m_edenActivityCallback;
-    std::unique_ptr<IncrementalSweeper> m_sweeper;
+    RefPtr<IncrementalSweeper> m_sweeper;
+    RefPtr<StopIfNecessaryTimer> m_stopIfNecessaryTimer;
 
     Vector<HeapObserver*> m_observers;
 
     unsigned m_deferralDepth;
-    Vector<DFG::Worklist*> m_suspendedCompilerWorklists;
 
     std::unique_ptr<HeapVerifier> m_verifier;
…
     size_t m_externalMemorySize { 0 };
 #endif
+
+    static const unsigned shouldStopBit = 1u << 0u;
+    static const unsigned stoppedBit = 1u << 1u;
+    static const unsigned hasAccessBit = 1u << 2u;
+    static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid.
+    static const unsigned needFinalizeBit = 1u << 4u;
+    static const unsigned mutatorWaitingBit = 1u << 5u; // Allows the mutator to use this as a condition variable.
+    Atomic<unsigned> m_worldState;
+    bool m_collectorBelievesThatTheWorldIsStopped { false };
+
+    Deque<Optional<CollectionScope>> m_requests;
+    Ticket m_lastServedTicket { 0 };
+    Ticket m_lastGrantedTicket { 0 };
+    bool m_threadShouldStop { false };
+    bool m_threadIsStopping { false };
+    Box<Lock> m_threadLock;
+    RefPtr<AutomaticThreadCondition> m_threadCondition; // The mutator must not wait on this. It would cause a deadlock.
+    RefPtr<AutomaticThread> m_thread;
 };
trunk/Source/JavaScriptCore/heap/HeapInlines.h
r207714 → r208306
 }
 
+inline bool Heap::hasHeapAccess() const
+{
+    return m_worldState.load() & hasAccessBit;
+}
+
+inline bool Heap::mutatorIsStopped() const
+{
+    unsigned state = m_worldState.load();
+    bool shouldStop = state & shouldStopBit;
+    bool stopped = state & stoppedBit;
+    // I only got it right when I considered all four configurations of shouldStop/stopped:
+    // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we
+    //     should return false.
+    // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested
+    //     that we resume, so we should return false.
+    // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.
+    //     The mutator should be able to do whatever it wants in this state, as if we were not
+    //     stopped. So return false.
+    // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is
+    //     stopped, so return true.
+    return shouldStop & stopped;
+}
+
+inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
+{
+    return m_collectorBelievesThatTheWorldIsStopped;
+}
+
 ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
 {
…
 }
 
+inline void Heap::acquireAccess()
+{
+    if (m_worldState.compareExchangeWeak(0, hasAccessBit))
+        return;
+    acquireAccessSlow();
+}
+
+inline bool Heap::hasAccess() const
+{
+    return m_worldState.loadRelaxed() & hasAccessBit;
+}
+
+inline void Heap::releaseAccess()
+{
+    if (m_worldState.compareExchangeWeak(hasAccessBit, 0))
+        return;
+    releaseAccessSlow();
+}
+
+inline void Heap::stopIfNecessary()
+{
+    if (m_worldState.loadRelaxed() == hasAccessBit)
+        return;
+    stopIfNecessarySlow();
+}
+
 } // namespace JSC
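These are the single-instruction fast paths (load/CAS/CAS) the ChangeLog promises. A freestanding model using std::atomic in place of WTF::Atomic, with the bit values mirroring Heap.h; everything else here is illustrative, not JSC code:

```cpp
#include <atomic>

namespace model {

constexpr unsigned shouldStopBit = 1u << 0u;
constexpr unsigned stoppedBit = 1u << 1u;
constexpr unsigned hasAccessBit = 1u << 2u;

std::atomic<unsigned> worldState { 0 };

// Fast path of acquireAccess(): a single CAS that only succeeds when no
// bits are set at all. Any pending collector request forces the slow path.
bool tryAcquireAccessFast()
{
    unsigned expected = 0;
    return worldState.compare_exchange_weak(expected, hasAccessBit);
}

// Fast path of releaseAccess(): a single CAS that only succeeds when the
// sole set bit is hasAccessBit, i.e. nothing is pending.
bool tryReleaseAccessFast()
{
    unsigned expected = hasAccessBit;
    return worldState.compare_exchange_weak(expected, 0);
}

// Fast path of stopIfNecessary(): one relaxed load and branch. Any state
// other than "just hasAccessBit" (stop request, finalize request, JIT
// fence request) diverts to the slow path.
bool needsStopSlowPath()
{
    return worldState.load(std::memory_order_relaxed) != hasAccessBit;
}

} // namespace model
```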
trunk/Source/JavaScriptCore/heap/HeapTimer.cpp
r207855 r208306 100 100 { 101 101 CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + intervalInSeconds); 102 m_isScheduled = true; 102 103 } 103 104 … … 105 106 { 106 107 CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + s_decade); 108 m_isScheduled = false; 107 109 } 108 110 … … 153 155 double targetTime = currentTime() + intervalInSeconds; 154 156 ecore_timer_interval_set(m_timer, targetTime); 157 m_isScheduled = true; 155 158 } 156 159 … … 158 161 { 159 162 ecore_timer_freeze(m_timer); 163 m_isScheduled = false; 160 164 } 161 165 #elif USE(GLIB) … … 220 224 ASSERT(targetTime >= currentTime); 221 225 g_source_set_ready_time(m_timer.get(), targetTime); 226 m_isScheduled = true; 222 227 } 223 228 … … 225 230 { 226 231 g_source_set_ready_time(m_timer.get(), -1); 232 m_isScheduled = false; 227 233 } 228 234 #else -
trunk/Source/JavaScriptCore/heap/HeapTimer.h
r207855 r208306 45 45 class VM; 46 46 47 class HeapTimer {47 class HeapTimer : public ThreadSafeRefCounted<HeapTimer> { 48 48 public: 49 49 HeapTimer(VM*); … … 57 57 void scheduleTimer(double intervalInSeconds); 58 58 void cancelTimer(); 59 bool isScheduled() const { return m_isScheduled; } 59 60 60 61 #if USE(CF) … … 66 67 67 68 RefPtr<JSLock> m_apiLock; 69 bool m_isScheduled { false }; 68 70 #if USE(CF) 69 71 static const CFTimeInterval s_decade; -
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.cpp
r207855 r208306 72 72 bool IncrementalSweeper::sweepNextBlock() 73 73 { 74 m_vm->heap.stopIfNecessary(); 75 74 76 MarkedBlock::Handle* block = nullptr; 75 77 -
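The new stopIfNecessary() call is an instance of the general safepoint-polling pattern: long-running mutator-side loops poll between bounded chunks of work so the collector thread can stop the world promptly. A sketch, with hasMoreWork() and doBoundedChunkOfWork() as made-up names:

    while (hasMoreWork()) {
        // Cheap relaxed load when the collector wants nothing; parks this
        // thread if a stop-the-world request is pending.
        m_vm->heap.stopIfNecessary();
        doBoundedChunkOfWork();
    }

-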
trunk/Source/JavaScriptCore/heap/IncrementalSweeper.h
r207855 r208306 35 35 36 36 class IncrementalSweeper : public HeapTimer { 37 WTF_MAKE_FAST_ALLOCATED;38 37 public: 39 38 JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*); -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp
r199762 r208306 1 1 /* 2 * Copyright (C) 2003-2009, 2015 Apple Inc. All rights reserved.2 * Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2007 Eric Seidel <eric@webkit.org> 4 4 * Copyright (C) 2009 Acision BV. All rights reserved. … … 300 300 delete t; 301 301 } 302 }303 304 SUPPRESS_ASAN305 void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters)306 {307 void* registersBegin = &calleeSavedRegisters;308 void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(&calleeSavedRegisters + 1)));309 conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks);310 311 conservativeRoots.add(stackTop, stackOrigin, jitStubRoutines, codeBlocks);312 302 } 313 303 … … 1019 1009 } 1020 1010 1021 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters) 1022 { 1023 gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, stackOrigin, stackTop, calleeSavedRegisters); 1024 1011 void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks) 1012 { 1025 1013 size_t size; 1026 1014 size_t capacity = 0; -
trunk/Source/JavaScriptCore/heap/MachineStackMarker.h
r206525 r208306 2 2 * Copyright (C) 1999-2000 Harri Porten (porten@kde.org) 3 3 * Copyright (C) 2001 Peter Kelly (pmk@post.com) 4 * Copyright (C) 2003 , 2004, 2005, 2006, 2007, 2008, 2009, 2015Apple Inc. All rights reserved.4 * Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved. 5 5 * 6 6 * This library is free software; you can redistribute it and/or … … 66 66 ~MachineThreads(); 67 67 68 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet& , void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);68 void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&); 69 69 70 70 JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads. … … 146 146 147 147 private: 148 void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);149 150 148 void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*); 151 149 bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*); -
trunk/Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
r207444 r208306 456 456 void InspectorDebuggerAgent::setBreakpoint(JSC::Breakpoint& breakpoint, bool& existing) 457 457 { 458 JSC::JSLockHolder locker(m_scriptDebugServer.vm()); 458 459 m_scriptDebugServer.setBreakpoint(breakpoint, existing); 459 460 } … … 470 471 m_injectedScriptManager.releaseObjectGroup(objectGroupForBreakpointAction(action)); 471 472 473 JSC::JSLockHolder locker(m_scriptDebugServer.vm()); 472 474 m_scriptDebugServer.removeBreakpointActions(breakpointID); 473 475 m_scriptDebugServer.removeBreakpoint(breakpointID); … … 561 563 m_breakReason = breakReason; 562 564 m_breakAuxData = WTFMove(data); 565 JSC::JSLockHolder locker(m_scriptDebugServer.vm()); 563 566 m_scriptDebugServer.setPauseOnNextStatement(true); 564 567 } … … 882 885 void InspectorDebuggerAgent::clearDebuggerBreakpointState() 883 886 { 884 m_scriptDebugServer.clearBreakpointActions(); 885 m_scriptDebugServer.clearBreakpoints(); 886 m_scriptDebugServer.clearBlacklist(); 887 { 888 JSC::JSLockHolder holder(m_scriptDebugServer.vm()); 889 m_scriptDebugServer.clearBreakpointActions(); 890 m_scriptDebugServer.clearBreakpoints(); 891 m_scriptDebugServer.clearBlacklist(); 892 } 887 893 888 894 m_pausedScriptState = nullptr; -
trunk/Source/JavaScriptCore/jit/JITWorklist.cpp
r207566 r208306 159 159 } 160 160 161 void JITWorklist::completeAllForVM(VM& vm) 162 { 161 bool JITWorklist::completeAllForVM(VM& vm) 162 { 163 bool result = false; 163 164 DeferGC deferGC(vm.heap); 164 165 for (;;) { … … 187 188 // whether we found some unfinished plans. 188 189 if (!didFindUnfinishedPlan) 189 return ;190 return result; 190 191 191 192 m_condition->wait(*m_lock); … … 193 194 } 194 195 196 RELEASE_ASSERT(!myPlans.isEmpty()); 197 result = true; 195 198 finalizePlans(myPlans); 196 199 } -
trunk/Source/JavaScriptCore/jit/JITWorklist.h
r207566 r208306 51 51 ~JITWorklist(); 52 52 53 void completeAllForVM(VM&);53 bool completeAllForVM(VM&); // Return true if any JIT work happened. 54 54 void poll(VM&); 55 55 -
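The return value lets a caller find out whether any plans were finalized, which matters because finalization can write new machine code. A plausible, hypothetical use, given the crossModifyingCodeFence() that this same changeset adds to WTF:

    // Hypothetical call site, not part of this hunk: if completing the
    // worklist JITed anything, fence before executing JIT code.
    if (JITWorklist::instance()->completeAllForVM(vm))
        WTF::crossModifyingCodeFence();

-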
trunk/Source/JavaScriptCore/jsc.cpp
r208209 r208306 1678 1678 { 1679 1679 JSLockHolder lock(exec); 1680 exec->heap()->collect (CollectionScope::Full);1680 exec->heap()->collectSync(CollectionScope::Full); 1681 1681 return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastFullCollection())); 1682 1682 } … … 1685 1685 { 1686 1686 JSLockHolder lock(exec); 1687 exec->heap()->collect (CollectionScope::Eden);1687 exec->heap()->collectSync(CollectionScope::Eden); 1688 1688 return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastEdenCollection())); 1689 1689 } -
trunk/Source/JavaScriptCore/runtime/AtomicsObject.cpp
r208209 r208306 30 30 #include "JSTypedArrays.h" 31 31 #include "ObjectPrototype.h" 32 #include "ReleaseHeapAccessScope.h" 32 33 #include "TypedArrayController.h" 33 34 … … 341 342 342 343 bool didPassValidation = false; 343 ParkingLot::ParkResult result = ParkingLot::parkConditionally( 344 ptr, 345 [&] () -> bool { 346 didPassValidation = WTF::atomicLoad(ptr) == expectedValue; 347 return didPassValidation; 348 }, 349 [] () { }, 350 timeout); 344 ParkingLot::ParkResult result; 345 { 346 ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap); 347 result = ParkingLot::parkConditionally( 348 ptr, 349 [&] () -> bool { 350 didPassValidation = WTF::atomicLoad(ptr) == expectedValue; 351 return didPassValidation; 352 }, 353 [] () { }, 354 timeout); 355 } 351 356 const char* resultString; 352 357 if (!didPassValidation) -
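ReleaseHeapAccessScope.h is newly added by this changeset and its contents are not shown in any hunk here. Judging from the usage above, it is presumably a small RAII wrapper; a hedged sketch of what such a scope might look like:

    // Sketch only; the real header may differ.
    class ReleaseHeapAccessScope {
    public:
        ReleaseHeapAccessScope(JSC::Heap& heap)
            : m_heap(heap)
        {
            m_heap.releaseAccess(); // let the GC run while we block in ParkingLot
        }

        ~ReleaseHeapAccessScope()
        {
            m_heap.acquireAccess(); // may park if a collection is stopping the world
        }

    private:
        JSC::Heap& m_heap;
    };

-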
trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp
r198364 r208306 1 1 /* 2 * Copyright (C) 2008, 2015 Apple Inc. All rights reserved.2 * Copyright (C) 2008, 2015-2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 43 43 #include "WriteBarrier.h" 44 44 #include <mutex> 45 #include <wtf/MainThread.h> 46 #include <wtf/Threading.h> 45 47 #include <wtf/dtoa.h> 46 #include <wtf/Threading.h>47 48 #include <wtf/dtoa/cached-powers.h> 48 49 … … 58 59 WTF::double_conversion::initialize(); 59 60 WTF::initializeThreading(); 61 WTF::initializeGCThreads(); 60 62 Options::initialize(); 61 63 if (Options::recordGCPauseTimes()) -
trunk/Source/JavaScriptCore/runtime/JSLock.cpp
r207653 r208306 1 1 /* 2 * Copyright (C) 2005, 2008, 2012, 2014 Apple Inc. All rights reserved.2 * Copyright (C) 2005, 2008, 2012, 2014, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * This library is free software; you can redistribute it and/or … … 129 129 if (!m_vm) 130 130 return; 131 132 WTFThreadData& threadData = wtfThreadData(); 133 ASSERT(!m_entryAtomicStringTable); 134 m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable()); 135 ASSERT(m_entryAtomicStringTable); 136 137 if (m_vm->heap.hasAccess()) 138 m_shouldReleaseHeapAccess = false; 139 else { 140 m_vm->heap.acquireAccess(); 141 m_shouldReleaseHeapAccess = true; 142 } 131 143 132 144 RELEASE_ASSERT(!m_vm->stackPointerAtVMEntry()); … … 134 146 m_vm->setStackPointerAtVMEntry(p); 135 147 136 WTFThreadData& threadData = wtfThreadData();137 148 m_vm->setLastStackTop(threadData.savedLastStackTop()); 138 139 ASSERT(!m_entryAtomicStringTable);140 m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable());141 ASSERT(m_entryAtomicStringTable);142 149 143 150 m_vm->heap.machineThreads().addCurrentThread(); … … 168 175 169 176 if (!m_lockCount) { 170 177 171 178 if (!m_hasExclusiveThread) { 172 179 m_ownerThreadID = std::thread::id(); … … 184 191 vm->heap.releaseDelayedReleasedObjects(); 185 192 vm->setStackPointerAtVMEntry(nullptr); 193 194 if (m_shouldReleaseHeapAccess) 195 vm->heap.releaseAccess(); 186 196 } 187 197 -
trunk/Source/JavaScriptCore/runtime/JSLock.h
r206525 r208306 1 1 /* 2 * Copyright (C) 2005, 2008, 2009, 2014 Apple Inc. All rights reserved.2 * Copyright (C) 2005, 2008, 2009, 2014, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * This library is free software; you can redistribute it and/or … … 137 137 unsigned m_lockDropDepth; 138 138 bool m_hasExclusiveThread; 139 bool m_shouldReleaseHeapAccess; 139 140 VM* m_vm; 140 141 AtomicStringTable* m_entryAtomicStringTable; -
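The net effect of the JSLock changes for API clients: holding the JSLock now implies heap access, and the new m_shouldReleaseHeapAccess bit records whether this particular lock acquisition was what acquired it. Illustrative only; doHeapThings() is hypothetical:

    {
        JSC::JSLockHolder locker(exec); // acquires heap access if this thread lacked it
        doHeapThings(exec);             // the GC treats us as a heap-accessing mutator
    } // heap access is released here only if this lock acquisition acquired it

-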
trunk/Source/JavaScriptCore/runtime/VM.cpp
r206658 r208306 355 355 // no point to doing so. 356 356 for (unsigned i = DFG::numberOfWorklists(); i--;) { 357 if (DFG::Worklist* worklist = DFG:: worklistForIndexOrNull(i)) {357 if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i)) { 358 358 worklist->removeNonCompilingPlansForVM(*this); 359 359 worklist->waitUntilAllPlansForVMAreReady(*this); -
trunk/Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
r208189 r208306 131 131 if (!ensureCurrentThreadOwnsJSLock(exec)) 132 132 return; 133 exec->heap()->collect (CollectionScope::Eden);133 exec->heap()->collectSync(CollectionScope::Eden); 134 134 } 135 135 -
trunk/Source/WTF/ChangeLog
r208305 r208306 1 2016-11-02 Filip Pizlo <fpizlo@apple.com> 2 3 The GC should be in a thread 4 https://bugs.webkit.org/show_bug.cgi?id=163562 5 6 Reviewed by Geoffrey Garen and Andreas Kling. 7 8 This fixes some bugs and adds a few features. 9 10 * wtf/Atomics.h: The GC may do work on behalf of the JIT. If it does, the main thread needs to execute a cross-modifying code fence. This is cpuid on x86 and, I believe, isb on ARM (the analogue of what would have been isync on PPC). 11 (WTF::arm_isb): 12 (WTF::crossModifyingCodeFence): 13 (WTF::x86_ortop): 14 (WTF::x86_cpuid): 15 * wtf/AutomaticThread.cpp: I accidentally had AutomaticThreadCondition inherit from ThreadSafeRefCounted<AutomaticThread> [sic]. This never crashed before because all of our prior AutomaticThreadConditions were immortal. 16 (WTF::AutomaticThread::AutomaticThread): 17 (WTF::AutomaticThread::~AutomaticThread): 18 (WTF::AutomaticThread::start): 19 * wtf/AutomaticThread.h: 20 * wtf/MainThread.cpp: Need to allow initializeGCThreads() to be called separately because it's now more than just a debugging thing. 21 (WTF::initializeGCThreads): 22 1 23 2016-11-02 Carlos Alberto Lopez Perez <clopez@igalia.com> 2 24 -
trunk/Source/WTF/wtf/Atomics.h
r208209 r208306 36 36 #endif 37 37 #include <windows.h> 38 #include <intrin.h> 38 39 #endif 39 40 … … 54 55 55 56 ALWAYS_INLINE T load(std::memory_order order = std::memory_order_seq_cst) const { return value.load(order); } 57 58 ALWAYS_INLINE T loadRelaxed() const { return load(std::memory_order_relaxed); } 56 59 57 60 ALWAYS_INLINE void store(T desired, std::memory_order order = std::memory_order_seq_cst) { value.store(desired, order); } … … 199 202 { 200 203 asm volatile("dmb ishst" ::: "memory"); 204 } 205 206 inline void arm_isb() 207 { 208 asm volatile("isb" ::: "memory"); 201 209 } 202 210 … … 207 215 inline void memoryBarrierAfterLock() { arm_dmb(); } 208 216 inline void memoryBarrierBeforeUnlock() { arm_dmb(); } 217 inline void crossModifyingCodeFence() { arm_isb(); } 209 218 210 219 #elif CPU(X86) || CPU(X86_64) … … 213 222 { 214 223 #if OS(WINDOWS) 215 // I think that this does the equivalent of a dummy interlocked instruction,216 // instead of using the 'mfence' instruction, at least according to MSDN. I217 // know that it is equivalent for our purposes, but it would be good to218 // investigate if that is actually better.219 224 MemoryBarrier(); 220 225 #elif CPU(X86_64) … … 224 229 #else 225 230 asm volatile("lock; orl $0, (%%esp)" ::: "memory"); 231 #endif 232 } 233 234 inline void x86_cpuid() 235 { 236 #if OS(WINDOWS) 237 int info[4]; 238 __cpuid(info, 0); 239 #else 240 intptr_t a = 0, b, c, d; 241 asm volatile( 242 "cpuid" 243 : "+a"(a), "=b"(b), "=c"(c), "=d"(d) 244 : 245 : "memory"); 226 246 #endif 227 247 } … … 233 253 inline void memoryBarrierAfterLock() { compilerFence(); } 234 254 inline void memoryBarrierBeforeUnlock() { compilerFence(); } 255 inline void crossModifyingCodeFence() { x86_cpuid(); } 235 256 236 257 #else … … 242 263 inline void memoryBarrierAfterLock() { std::atomic_thread_fence(std::memory_order_seq_cst); } 243 264 inline void memoryBarrierBeforeUnlock() { std::atomic_thread_fence(std::memory_order_seq_cst); } 265 inline void crossModifyingCodeFence() { std::atomic_thread_fence(std::memory_order_seq_cst); } // Probably not strong enough. 244 266 245 267 #endif -
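crossModifyingCodeFence() addresses the cross-modifying-code hazard: once one thread has written instructions that another thread will execute, the executing thread must serialize its instruction stream before jumping to them, via cpuid on x86 and isb on ARM as defined above. A sketch of the intended usage; runJITedCode() is hypothetical:

    // On the thread about to execute code that another thread may have just written:
    WTF::crossModifyingCodeFence();
    runJITedCode();

-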
trunk/Source/WTF/wtf/AutomaticThread.cpp
r207566 r208306 79 79 void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread) 80 80 { 81 ASSERT(m_threads.contains(thread));82 81 m_threads.removeFirst(thread); 83 82 ASSERT(!m_threads.contains(thread)); … … 93 92 , m_condition(condition) 94 93 { 94 if (verbose) 95 dataLog(RawPointer(this), ": Allocated AutomaticThread.\n"); 95 96 m_condition->add(locker, this); 96 97 } … … 98 99 AutomaticThread::~AutomaticThread() 99 100 { 101 if (verbose) 102 dataLog(RawPointer(this), ": Deleting AutomaticThread.\n"); 100 103 LockHolder locker(*m_lock); 101 104 … … 105 108 } 106 109 110 bool AutomaticThread::tryStop(const LockHolder&) 111 { 112 if (!m_isRunning) 113 return true; 114 if (m_hasUnderlyingThread) 115 return false; 116 m_isRunning = false; 117 return true; 118 } 119 107 120 void AutomaticThread::join() 108 121 { … … 114 127 class AutomaticThread::ThreadScope { 115 128 public: 116 ThreadScope( AutomaticThread&thread)129 ThreadScope(RefPtr<AutomaticThread> thread) 117 130 : m_thread(thread) 118 131 { 119 m_thread .threadDidStart();132 m_thread->threadDidStart(); 120 133 } 121 134 122 135 ~ThreadScope() 123 136 { 124 m_thread.threadWillStop(); 137 m_thread->threadWillStop(); 138 139 LockHolder locker(*m_thread->m_lock); 140 m_thread->m_hasUnderlyingThread = false; 125 141 } 126 142 127 143 private: 128 AutomaticThread&m_thread;144 RefPtr<AutomaticThread> m_thread; 129 145 }; 130 146 131 147 void AutomaticThread::start(const LockHolder&) 132 148 { 149 RELEASE_ASSERT(m_isRunning); 150 133 151 RefPtr<AutomaticThread> preserveThisForThread = this; 152 153 m_hasUnderlyingThread = true; 134 154 135 155 ThreadIdentifier thread = createThread( … … 137 157 [=] () { 138 158 if (verbose) 139 dataLog( "Running automatic thread!\n");140 RefPtr<AutomaticThread> preserveThisInThread = preserveThisForThread;159 dataLog(RawPointer(this), ": Running automatic thread!\n"); 160 ThreadScope threadScope(preserveThisForThread); 141 161 142 {162 if (!ASSERT_DISABLED) { 143 163 LockHolder locker(*m_lock); 144 164 ASSERT(!m_condition->contains(locker, this)); 145 165 } 146 147 ThreadScope threadScope(*this);148 166 149 167 auto stop = [&] (const LockHolder&) { … … 168 186 if (!awokenByNotify) { 169 187 if (verbose) 170 dataLog( "Going to sleep!\n");188 dataLog(RawPointer(this), ": Going to sleep!\n"); 171 189 m_condition->add(locker, this); 172 190 return; -
trunk/Source/WTF/wtf/AutomaticThread.h
r207566 r208306 70 70 class AutomaticThread; 71 71 72 class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThread > {72 class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThreadCondition> { 73 73 public: 74 74 static WTF_EXPORT_PRIVATE RefPtr<AutomaticThreadCondition> create(); … … 113 113 virtual ~AutomaticThread(); 114 114 115 // Sometimes it's possible to optimize for the case that there is no underlying thread. 116 bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; } 117 118 // This attempts to quickly stop the thread. This will succeed if the thread happens to not be 119 // running. Returns true if the thread has been stopped. A good idiom for stopping your automatic 120 // thread is to first try this, and if that doesn't work, to tell the thread using your own 121 // mechanism (set some flag and then notify the condition). 122 bool tryStop(const LockHolder&); 123 115 124 void join(); 116 125 … … 152 161 virtual WorkResult work() = 0; 153 162 154 class ThreadScope;155 friend class ThreadScope;156 157 163 // It's sometimes useful to allocate resources while the thread is running, and to destroy them 158 164 // when the thread dies. These methods let you do this. You can override these methods, and you … … 164 170 friend class AutomaticThreadCondition; 165 171 172 class ThreadScope; 173 friend class ThreadScope; 174 166 175 void start(const LockHolder&); 167 176 … … 169 178 RefPtr<AutomaticThreadCondition> m_condition; 170 179 bool m_isRunning { true }; 180 bool m_hasUnderlyingThread { false }; 171 181 Condition m_isRunningCondition; 172 182 }; -
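The comment on tryStop() describes a two-step stop idiom. A minimal sketch of that idiom from an owning class; the owner, m_shouldStop, and stopMyThread() are hypothetical:

    void Owner::stopMyThread()
    {
        WTF::LockHolder locker(*m_lock);
        if (m_thread->tryStop(locker))
            return; // no underlying thread existed, so it is already stopped

        // Fall back to our own mechanism: set a flag and wake the thread.
        m_shouldStop = true;
        m_condition->notifyAll(locker);
    }

-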
trunk/Source/WTF/wtf/CompilationThread.cpp
r161146 r208306 1 1 /* 2 * Copyright (C) 2013 Apple Inc. All rights reserved.2 * Copyright (C) 2013, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 34 34 namespace WTF { 35 35 36 static ThreadSpecific<bool >* s_isCompilationThread;36 static ThreadSpecific<bool, CanBeGCThread::True>* s_isCompilationThread; 37 37 38 38 static void initializeCompilationThreads() … … 40 40 static std::once_flag initializeCompilationThreadsOnceFlag; 41 41 std::call_once(initializeCompilationThreadsOnceFlag, []{ 42 s_isCompilationThread = new ThreadSpecific<bool >();42 s_isCompilationThread = new ThreadSpecific<bool, CanBeGCThread::True>(); 43 43 }); 44 44 } -
trunk/Source/WTF/wtf/MainThread.cpp
r207653 r208306 191 191 #endif 192 192 193 static ThreadSpecific<Optional<GCThreadType> >* isGCThread;193 static ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>* isGCThread; 194 194 195 195 void initializeGCThreads() 196 196 { 197 isGCThread = new ThreadSpecific<Optional<GCThreadType>>(); 197 static std::once_flag flag; 198 std::call_once( 199 flag, 200 [] { 201 isGCThread = new ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>(); 202 }); 198 203 } 199 204 -
trunk/Source/WTF/wtf/MainThread.h
r207653 r208306 69 69 #endif // USE(WEB_THREAD) 70 70 71 void initializeGCThreads();71 WTF_EXPORT_PRIVATE void initializeGCThreads(); 72 72 73 73 enum class GCThreadType { -
trunk/Source/WTF/wtf/Optional.h
r207237 r208306 1 1 /* 2 * Copyright (C) 2014 Apple Inc. All rights reserved.2 * Copyright (C) 2014, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 29 29 #include <type_traits> 30 30 #include <wtf/Assertions.h> 31 #include <wtf/PrintStream.h> 31 32 #include <wtf/StdLibExtras.h> 32 33 … … 264 265 } 265 266 267 template<typename T> 268 void printInternal(PrintStream& out, const Optional<T>& optional) 269 { 270 if (optional) 271 out.print(*optional); 272 else 273 out.print("Nullopt"); 274 } 275 266 276 } // namespace WTF 267 277 -
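With printInternal() defined, an Optional can be handed straight to WTF's print machinery. A small illustrative example:

    #include <wtf/DataLog.h>
    #include <wtf/Optional.h>

    Optional<int> x = 42;
    Optional<int> y;
    dataLog(x, " ", y, "\n"); // prints "42 Nullopt"

-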
trunk/Source/WTF/wtf/ParkingLot.cpp
r208209 r208306 448 448 ThreadData* myThreadData() 449 449 { 450 static ThreadSpecific<RefPtr<ThreadData> >* threadData;450 static ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>* threadData; 451 451 static std::once_flag initializeOnce; 452 452 std::call_once( 453 453 initializeOnce, 454 454 [] { 455 threadData = new ThreadSpecific<RefPtr<ThreadData> >();455 threadData = new ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>(); 456 456 }); 457 457 -
trunk/Source/WTF/wtf/ThreadSpecific.h
r199762 r208306 1 1 /* 2 * Copyright (C) 2008 Apple Inc. All rights reserved.2 * Copyright (C) 2008, 2016 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2009 Jian Li <jianli@chromium.org> 4 4 * Copyright (C) 2012 Patrick Gansterer <paroga@paroga.com> … … 43 43 #define WTF_ThreadSpecific_h 44 44 45 #include <wtf/MainThread.h> 45 46 #include <wtf/Noncopyable.h> 46 47 #include <wtf/StdLibExtras.h> … … 60 61 #endif 61 62 62 template<typename T> class ThreadSpecific { 63 enum class CanBeGCThread { 64 False, 65 True 66 }; 67 68 template<typename T, CanBeGCThread canBeGCThread = CanBeGCThread::False> class ThreadSpecific { 63 69 WTF_MAKE_NONCOPYABLE(ThreadSpecific); 64 70 public: … … 87 93 WTF_MAKE_NONCOPYABLE(Data); 88 94 public: 89 Data(T* value, ThreadSpecific<T >* owner) : value(value), owner(owner) {}95 Data(T* value, ThreadSpecific<T, canBeGCThread>* owner) : value(value), owner(owner) {} 90 96 91 97 T* value; 92 ThreadSpecific<T >* owner;98 ThreadSpecific<T, canBeGCThread>* owner; 93 99 }; 94 100 … … 128 134 } 129 135 130 template<typename T >131 inline ThreadSpecific<T >::ThreadSpecific()136 template<typename T, CanBeGCThread canBeGCThread> 137 inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific() 132 138 { 133 139 int error = pthread_key_create(&m_key, destroy); … … 136 142 } 137 143 138 template<typename T >139 inline T* ThreadSpecific<T >::get()144 template<typename T, CanBeGCThread canBeGCThread> 145 inline T* ThreadSpecific<T, canBeGCThread>::get() 140 146 { 141 147 Data* data = static_cast<Data*>(pthread_getspecific(m_key)); 142 return data ? data->value : 0; 143 } 144 145 template<typename T> 146 inline void ThreadSpecific<T>::set(T* ptr) 147 { 148 if (data) 149 return data->value; 150 RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread()); 151 return nullptr; 152 } 153 154 template<typename T, CanBeGCThread canBeGCThread> 155 inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr) 156 { 157 RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread()); 148 158 ASSERT(!get()); 149 159 pthread_setspecific(m_key, new Data(ptr, this)); … … 186 196 } 187 197 188 template<typename T >189 inline ThreadSpecific<T >::ThreadSpecific()198 template<typename T, CanBeGCThread canBeGCThread> 199 inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific() 190 200 : m_index(-1) 191 201 { … … 200 210 } 201 211 202 template<typename T >203 inline ThreadSpecific<T >::~ThreadSpecific()212 template<typename T, CanBeGCThread canBeGCThread> 213 inline ThreadSpecific<T, canBeGCThread>::~ThreadSpecific() 204 214 { 205 215 FlsFree(flsKeys()[m_index]); 206 216 } 207 217 208 template<typename T >209 inline T* ThreadSpecific<T >::get()218 template<typename T, CanBeGCThread canBeGCThread> 219 inline T* ThreadSpecific<T, canBeGCThread>::get() 210 220 { 211 221 Data* data = static_cast<Data*>(FlsGetValue(flsKeys()[m_index])); 212 return data ? 
data->value : 0; 213 } 214 215 template<typename T> 216 inline void ThreadSpecific<T>::set(T* ptr) 217 { 222 if (data) 223 return data->value; 224 RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread()); 225 return nullptr; 226 } 227 228 template<typename T, CanBeGCThread canBeGCThread> 229 inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr) 230 { 231 RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread()); 218 232 ASSERT(!get()); 219 233 Data* data = new Data(ptr, this); … … 225 239 #endif 226 240 227 template<typename T >228 inline void THREAD_SPECIFIC_CALL ThreadSpecific<T >::destroy(void* ptr)241 template<typename T, CanBeGCThread canBeGCThread> 242 inline void THREAD_SPECIFIC_CALL ThreadSpecific<T, canBeGCThread>::destroy(void* ptr) 229 243 { 230 244 Data* data = static_cast<Data*>(ptr); … … 250 264 } 251 265 252 template<typename T >253 inline bool ThreadSpecific<T >::isSet()266 template<typename T, CanBeGCThread canBeGCThread> 267 inline bool ThreadSpecific<T, canBeGCThread>::isSet() 254 268 { 255 269 return !!get(); 256 270 } 257 271 258 template<typename T >259 inline ThreadSpecific<T >::operator T*()272 template<typename T, CanBeGCThread canBeGCThread> 273 inline ThreadSpecific<T, canBeGCThread>::operator T*() 260 274 { 261 275 T* ptr = static_cast<T*>(get()); … … 270 284 } 271 285 272 template<typename T >273 inline T* ThreadSpecific<T >::operator->()286 template<typename T, CanBeGCThread canBeGCThread> 287 inline T* ThreadSpecific<T, canBeGCThread>::operator->() 274 288 { 275 289 return operator T*(); 276 290 } 277 291 278 template<typename T >279 inline T& ThreadSpecific<T >::operator*()292 template<typename T, CanBeGCThread canBeGCThread> 293 inline T& ThreadSpecific<T, canBeGCThread>::operator*() 280 294 { 281 295 return *operator T*(); … … 283 297 284 298 #if USE(WEB_THREAD) 285 template<typename T >286 inline void ThreadSpecific<T >::replace(T* newPtr)299 template<typename T, CanBeGCThread canBeGCThread> 300 inline void ThreadSpecific<T, canBeGCThread>::replace(T* newPtr) 287 301 { 288 302 ASSERT(newPtr); -
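The new template parameter defaults to CanBeGCThread::False, so by default a set() from a GC thread, or a get() miss on one, now trips a RELEASE_ASSERT; code that legitimately runs on GC threads opts in, as the ParkingLot hunk above and the WordLock hunk below do. A sketch of the opt-in pattern; MyThreadState is hypothetical:

    static ThreadSpecific<MyThreadState, CanBeGCThread::True>* myThreadState;

    MyThreadState* ensureMyThreadState()
    {
        static std::once_flag initializeOnce;
        std::call_once(
            initializeOnce,
            [] {
                myThreadState = new ThreadSpecific<MyThreadState, CanBeGCThread::True>();
            });
        return *myThreadState; // lazily constructs this thread's instance
    }

-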
trunk/Source/WTF/wtf/WordLock.cpp
r198345 r208306 62 62 }; 63 63 64 ThreadSpecific<ThreadData >* threadData;64 ThreadSpecific<ThreadData, CanBeGCThread::True>* threadData; 65 65 66 66 ThreadData* myThreadData() … … 70 70 initializeOnce, 71 71 [] { 72 threadData = new ThreadSpecific<ThreadData >();72 threadData = new ThreadSpecific<ThreadData, CanBeGCThread::True>(); 73 73 }); 74 74 -
trunk/Source/WTF/wtf/text/AtomicStringImpl.cpp
r201782 r208306 26 26 27 27 #include "AtomicStringTable.h" 28 #include "CommaPrinter.h" 29 #include "DataLog.h" 28 30 #include "HashSet.h" 29 31 #include "IntegerToStringConversion.h" 30 32 #include "StringHash.h" 33 #include "StringPrintStream.h" 31 34 #include "Threading.h" 32 35 #include "WTFThreadData.h" … … 76 79 AtomicStringTableLocker locker; 77 80 78 HashSet<StringImpl*>::AddResult addResult = stringTable().add<HashTranslator>(value); 81 HashSet<StringImpl*>& atomicStringTable = stringTable(); 82 HashSet<StringImpl*>::AddResult addResult = atomicStringTable.add<HashTranslator>(value); 79 83 80 84 // If the string is newly-translated, then we need to adopt it. … … 452 456 HashSet<StringImpl*>::iterator iterator = atomicStringTable.find(string); 453 457 ASSERT_WITH_MESSAGE(iterator != atomicStringTable.end(), "The string being removed is atomic in the string table of an other thread!"); 458 ASSERT(string == *iterator); 454 459 atomicStringTable.remove(iterator); 455 460 } -
trunk/Source/WebCore/ChangeLog
r208304 r208306 1 2016-11-02 Filip Pizlo <fpizlo@apple.com> 2 3 The GC should be in a thread 4 https://bugs.webkit.org/show_bug.cgi?id=163562 5 6 Reviewed by Geoffrey Garen and Andreas Kling. 7 8 No new tests because existing tests cover this. 9 10 We now need to be more careful about using JSLock. This fixes some places that were not 11 holding it. New assertions in the GC are more likely to catch this than before. 12 13 * bindings/js/WorkerScriptController.cpp: 14 (WebCore::WorkerScriptController::WorkerScriptController): 15 1 16 2016-11-02 Joseph Pecoraro <pecoraro@apple.com> 2 17 -
trunk/Source/WebCore/Modules/indexeddb/IDBDatabase.cpp
r207937 r208306 1 1 /* 2 * Copyright (C) 2015 Apple Inc. All rights reserved.2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 56 56 , m_info(resultData.databaseInfo()) 57 57 , m_databaseConnectionIdentifier(resultData.databaseConnectionIdentifier()) 58 , m_eventNames(eventNames()) 58 59 { 59 60 LOG(IndexedDB, "IDBDatabase::IDBDatabase - Creating database %s with version %" PRIu64 " connection %" PRIu64 " (%p)", m_info.name().utf8().data(), m_info.version(), m_databaseConnectionIdentifier, this); … … 74 75 bool IDBDatabase::hasPendingActivity() const 75 76 { 76 ASSERT(currentThread() == originThreadID() );77 ASSERT(currentThread() == originThreadID() || mayBeGCThread()); 77 78 78 79 if (m_closedInServer) … … 82 83 return true; 83 84 84 return hasEventListeners( eventNames().abortEvent) || hasEventListeners(eventNames().errorEvent) || hasEventListeners(eventNames().versionchangeEvent);85 return hasEventListeners(m_eventNames.abortEvent) || hasEventListeners(m_eventNames.errorEvent) || hasEventListeners(m_eventNames.versionchangeEvent); 85 86 } 86 87 … … 253 254 transaction->connectionClosedFromServer(error); 254 255 255 Ref<Event> event = Event::create( eventNames().errorEvent, true, false);256 Ref<Event> event = Event::create(m_eventNames.errorEvent, true, false); 256 257 event->setTarget(this); 257 258 … … 447 448 return; 448 449 } 449 450 Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, eventNames().versionchangeEvent);450 451 Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, m_eventNames.versionchangeEvent); 451 452 event->setTarget(this); 452 453 scriptExecutionContext()->eventQueue().enqueueEvent(WTFMove(event)); … … 460 461 bool result = EventTargetWithInlineData::dispatchEvent(event); 461 462 462 if (event.isVersionChangeEvent() && event.type() == eventNames().versionchangeEvent)463 if (event.isVersionChangeEvent() && event.type() == m_eventNames.versionchangeEvent) 463 464 connectionProxy().didFireVersionChangeEvent(m_databaseConnectionIdentifier, downcast<IDBVersionChangeEvent>(event).requestIdentifier()); 464 465 -
trunk/Source/WebCore/Modules/indexeddb/IDBDatabase.h
r207937 r208306 1 1 /* 2 * Copyright (C) 2015 Apple Inc. All rights reserved.2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 46 46 class IDBTransaction; 47 47 class IDBTransactionInfo; 48 struct EventNames; 48 49 49 50 class IDBDatabase : public ThreadSafeRefCounted<IDBDatabase>, public EventTargetWithInlineData, public IDBActiveDOMObject { … … 130 131 HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_committingTransactions; 131 132 HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_abortingTransactions; 133 134 const EventNames& m_eventNames; // Need to cache this so we can use it from GC threads. 132 135 }; 133 136 -
trunk/Source/WebCore/Modules/indexeddb/IDBRequest.cpp
r208261 r208306 229 229 bool IDBRequest::hasPendingActivity() const 230 230 { 231 ASSERT(currentThread() == originThreadID() );231 ASSERT(currentThread() == originThreadID() || mayBeGCThread()); 232 232 return m_hasPendingActivity; 233 233 } -
trunk/Source/WebCore/Modules/indexeddb/IDBTransaction.cpp
r208261 r208306 1 1 /* 2 * Copyright (C) 2015 Apple Inc. All rights reserved.2 * Copyright (C) 2015, 2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 265 265 bool IDBTransaction::hasPendingActivity() const 266 266 { 267 ASSERT(currentThread() == m_database->originThreadID() );267 ASSERT(currentThread() == m_database->originThreadID() || mayBeGCThread()); 268 268 return !m_contextStopped && m_state != IndexedDB::TransactionState::Finished; 269 269 } -
trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj
r208304 r208306 3885 3885 A456FA2711AD4A830020B420 /* LabelsNodeList.h in Headers */ = {isa = PBXBuildFile; fileRef = A456FA2511AD4A830020B420 /* LabelsNodeList.h */; }; 3886 3886 A501920E132EBF2E008BFE55 /* Autocapitalize.h in Headers */ = {isa = PBXBuildFile; fileRef = A501920C132EBF2E008BFE55 /* Autocapitalize.h */; settings = {ATTRIBUTES = (Private, ); }; }; 3887 A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */; };3888 3887 A5071E801C506B66009951BE /* InspectorMemoryAgent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */; }; 3889 3888 A5071E811C506B69009951BE /* InspectorMemoryAgent.h in Headers */ = {isa = PBXBuildFile; fileRef = A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */; }; … … 5902 5901 CE7B2DB61586ABAD0098B3FA /* TextAlternativeWithRange.mm in Sources */ = {isa = PBXBuildFile; fileRef = CE7B2DB21586ABAD0098B3FA /* TextAlternativeWithRange.mm */; }; 5903 5902 CE7E17831C83A49100AD06AF /* ContentSecurityPolicyHash.h in Headers */ = {isa = PBXBuildFile; fileRef = CE7E17821C83A49100AD06AF /* ContentSecurityPolicyHash.h */; }; 5904 CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */; };5905 5903 CEC337AD1A46071F009B8523 /* ServersSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AC1A46071F009B8523 /* ServersSPI.h */; settings = {ATTRIBUTES = (Private, ); }; }; 5906 5904 CEC337AF1A46086D009B8523 /* GraphicsServicesSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AE1A46086D009B8523 /* GraphicsServicesSPI.h */; settings = {ATTRIBUTES = (Private, ); }; }; … … 11409 11407 A456FA2511AD4A830020B420 /* LabelsNodeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LabelsNodeList.h; sourceTree = "<group>"; }; 11410 11408 A501920C132EBF2E008BFE55 /* Autocapitalize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Autocapitalize.h; sourceTree = "<group>"; }; 11411 A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeGCActivityCallbackIOS.h; sourceTree = "<group>"; };11412 11409 A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InspectorMemoryAgent.cpp; sourceTree = "<group>"; }; 11413 11410 A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InspectorMemoryAgent.h; sourceTree = "<group>"; }; … … 13352 13349 C280833E1C6DC22C001451B6 /* JSFontFace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSFontFace.h; sourceTree = "<group>"; }; 13353 13350 C28083411C6DC96A001451B6 /* JSFontFaceCustom.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSFontFaceCustom.cpp; sourceTree = "<group>"; }; 13354 C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeIncrementalSweeperIOS.h; sourceTree = "<group>"; };13355 13351 C330A22113EC196B0000B45B /* ColorChooser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = 
sourcecode.c.h; path = ColorChooser.h; sourceTree = "<group>"; }; 13356 13352 C33EE5C214FB49610002095A /* BaseClickableWithKeyInputType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BaseClickableWithKeyInputType.cpp; sourceTree = "<group>"; }; … … 19350 19346 CDA29A2E1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.h */, 19351 19347 CDA29A2F1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.mm */, 19352 A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */,19353 C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */,19354 19348 3F42B31B1881191B00278AAC /* WebVideoFullscreenControllerAVKit.h */, 19355 19349 3F42B31C1881191B00278AAC /* WebVideoFullscreenControllerAVKit.mm */, … … 27866 27860 CDA29A0F1CBD9CFE00901CCF /* WebPlaybackSessionModelMediaElement.h in Headers */, 27867 27861 99CC0B6B18BEA1FF006CEBCC /* WebReplayInputs.h in Headers */, 27868 A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */,27869 CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */,27870 27862 1CAF34810A6C405200ABE06E /* WebScriptObject.h in Headers */, 27871 27863 1CAF34830A6C405200ABE06E /* WebScriptObjectPrivate.h in Headers */, -
trunk/Source/WebCore/bindings/js/JSDOMWindowBase.cpp
r205670 r208306 2 2 * Copyright (C) 2000 Harri Porten (porten@kde.org) 3 3 * Copyright (C) 2006 Jon Shier (jshier@iastate.edu) 4 * Copyright (C) 2003-2009, 2014 Apple Inc. All rights reseved.4 * Copyright (C) 2003-2009, 2014, 2016 Apple Inc. All rights reseved. 5 5 * Copyright (C) 2006 Alexey Proskuryakov (ap@webkit.org) 6 6 * Copyright (c) 2015 Canon Inc. All rights reserved. … … 50 50 #if PLATFORM(IOS) 51 51 #include "ChromeClient.h" 52 #include "WebSafeGCActivityCallbackIOS.h"53 #include "WebSafeIncrementalSweeperIOS.h"54 52 #endif 55 53 … … 245 243 ScriptController::initializeThreading(); 246 244 vm = &VM::createLeaked(LargeHeap).leakRef(); 245 vm->heap.acquireAccess(); // At any time, we may do things that affect the GC. 247 246 #if !PLATFORM(IOS) 248 247 vm->setExclusiveThread(std::this_thread::get_id()); 249 248 #else 250 vm->heap.setFullActivityCallback(WebSafeFullGCActivityCallback::create(&vm->heap)); 251 vm->heap.setEdenActivityCallback(WebSafeEdenGCActivityCallback::create(&vm->heap)); 252 253 vm->heap.setIncrementalSweeper(std::make_unique<WebSafeIncrementalSweeper>(&vm->heap)); 249 vm->heap.setRunLoop(WebThreadRunLoop()); 254 250 vm->heap.machineThreads().addCurrentThread(); 255 251 #endif -
trunk/Source/WebCore/bindings/js/WorkerScriptController.cpp
r208008 r208306 52 52 , m_workerGlobalScopeWrapper(*m_vm) 53 53 { 54 m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently. 54 55 m_vm->ensureWatchdog(); 55 56 initNormalWorldClientData(m_vm.get()); … … 189 190 } 190 191 192 void WorkerScriptController::releaseHeapAccess() 193 { 194 m_vm->heap.releaseAccess(); 195 } 196 197 void WorkerScriptController::acquireHeapAccess() 198 { 199 m_vm->heap.acquireAccess(); 200 } 201 191 202 void WorkerScriptController::attachDebugger(JSC::Debugger* debugger) 192 203 { -
trunk/Source/WebCore/bindings/js/WorkerScriptController.h
r208008 r208306 1 1 /* 2 * Copyright (C) 2008, 2015 Apple Inc. All Rights Reserved.2 * Copyright (C) 2008, 2015, 2016 Apple Inc. All Rights Reserved. 3 3 * Copyright (C) 2012 Google Inc. All Rights Reserved. 4 4 * … … 77 77 78 78 JSC::VM& vm() { return *m_vm; } 79 80 void releaseHeapAccess(); 81 void acquireHeapAccess(); 79 82 80 83 void attachDebugger(JSC::Debugger*); -
trunk/Source/WebCore/dom/EventTarget.cpp
r207734 r208306 132 132 return true; 133 133 } 134 return addEventListener(eventType, listener.releaseNonNull());} 134 return addEventListener(eventType, listener.releaseNonNull()); 135 } 135 136 136 137 EventListener* EventTarget::getAttributeEventListener(const AtomicString& eventType) -
trunk/Source/WebCore/testing/Internals.cpp
r208300 r208306 1 1 /* 2 2 * Copyright (C) 2012 Google Inc. All rights reserved. 3 * Copyright (C) 2013-201 5Apple Inc. All rights reserved.3 * Copyright (C) 2013-2016 Apple Inc. All rights reserved. 4 4 * 5 5 * Redistribution and use in source and binary forms, with or without … … 3294 3294 #endif 3295 3295 3296 void Internals::reportBacktrace() 3297 { 3298 WTFReportBacktrace(); 3299 } 3300 3296 3301 } // namespace WebCore -
trunk/Source/WebCore/testing/Internals.h
r208300 r208306 497 497 498 498 bool userPrefersReducedMotion() const; 499 500 void reportBacktrace(); 499 501 500 502 private: -
trunk/Source/WebCore/testing/Internals.idl
r208300 r208306 1 1 /* 2 2 * Copyright (C) 2012 Google Inc. All rights reserved. 3 * Copyright (C) 2013-201 5Apple Inc. All rights reserved.3 * Copyright (C) 2013-2016 Apple Inc. All rights reserved. 4 4 * 5 5 * Redistribution and use in source and binary forms, with or without … … 469 469 470 470 boolean userPrefersReducedMotion(); 471 }; 471 472 void reportBacktrace(); 473 }; -
trunk/Source/WebCore/workers/WorkerRunLoop.cpp
r208010 r208306 174 174 } 175 175 MessageQueueWaitResult result; 176 if (WorkerScriptController* script = context->script()) 177 script->releaseHeapAccess(); 176 178 auto task = m_messageQueue.waitForMessageFilteredWithTimeout(result, predicate, absoluteTime); 179 if (WorkerScriptController* script = context->script()) 180 script->acquireHeapAccess(); 177 181 178 182 // If the context is closing, don't execute any further JavaScript tasks (per section 4.1.1 of the Web Workers spec). However, there may be implementation cleanup tasks in the queue, so keep running through it.