Changeset 207179 in webkit

- Timestamp: Oct 11, 2016, 4:52:02 PM
- Location: trunk/Source
- Files: 1 added, 20 edited
trunk/Source/JavaScriptCore/ChangeLog
(r207178 → r207179)

2016-10-06  Filip Pizlo  <fpizlo@apple.com>

        MarkedBlock should know what objects are live during marking
        https://bugs.webkit.org/show_bug.cgi?id=162309

        Reviewed by Geoffrey Garen.

        It used to be that we would forget which objects are live the moment we started collection.
        That's because the flip at the beginning clears all mark bits.

        But we already have a facility for tracking objects that are live-but-not-marked. It's called
        newlyAllocated. So, instead of clearing mark bits, we want to just transfer them to
        newlyAllocated. Then we want to clear all newlyAllocated after GC.

        This implements such an approach, along with a versioning optimization for newlyAllocated.
        Instead of walking the whole heap to clear newlyAllocated bits at the end of the GC, we bump
        the newlyAllocatedVersion, which causes MarkedBlock to treat newlyAllocated as if it were
        clear.

        We could have even avoided allocating newlyAllocated in most cases, since empirically most
        blocks are either completely empty or completely full. An earlier version of this patch did
        this, but it was not better than this patch. In fact, it seemed to actually be worse for PLT
        and membuster.

        To validate this change, we now run the conservative scan after the beginMarking flip. And it
        totally works!

        This is a huge step towards concurrent GC. It means that we ought to be able to run the
        allocator while marking. Since we already separately made it possible to run the barrier
        while marking, this means that we're pretty much ready for some serious concurrency action.

        This appears to be perf-neutral and space-neutral.

        * JavaScriptCore.xcodeproj/project.pbxproj:
        * bytecode/CodeBlock.cpp:
        * bytecode/CodeBlock.h:
        (JSC::CodeBlockSet::mark): Deleted.
        * heap/CodeBlockSet.cpp:
        (JSC::CodeBlockSet::writeBarrierCurrentlyExecuting):
        (JSC::CodeBlockSet::clearCurrentlyExecuting):
        (JSC::CodeBlockSet::writeBarrierCurrentlyExecutingCodeBlocks): Deleted.
        * heap/CodeBlockSet.h:
        * heap/CodeBlockSetInlines.h: Added.
        (JSC::CodeBlockSet::mark):
        * heap/ConservativeRoots.cpp:
        * heap/Heap.cpp:
        (JSC::Heap::markRoots):
        (JSC::Heap::beginMarking):
        (JSC::Heap::collectImpl):
        (JSC::Heap::writeBarrierCurrentlyExecutingCodeBlocks):
        (JSC::Heap::clearCurrentlyExecutingCodeBlocks):
        * heap/Heap.h:
        * heap/HeapUtil.h:
        (JSC::HeapUtil::findGCObjectPointersForMarking):
        * heap/MarkedAllocator.cpp:
        (JSC::MarkedAllocator::isPagedOut):
        * heap/MarkedBlock.cpp:
        (JSC::MarkedBlock::Handle::Handle):
        (JSC::MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated):
        (JSC::MarkedBlock::Handle::stopAllocating):
        (JSC::MarkedBlock::Handle::lastChanceToFinalize):
        (JSC::MarkedBlock::Handle::resumeAllocating):
        (JSC::MarkedBlock::aboutToMarkSlow):
        (JSC::MarkedBlock::Handle::resetAllocated):
        (JSC::MarkedBlock::resetMarks):
        (JSC::MarkedBlock::setNeedsDestruction):
        (JSC::MarkedBlock::Handle::didAddToAllocator):
        (JSC::MarkedBlock::Handle::isLive):
        (JSC::MarkedBlock::Handle::isLiveCell):
        (JSC::MarkedBlock::clearMarks): Deleted.
        * heap/MarkedBlock.h:
        (JSC::MarkedBlock::Handle::newlyAllocatedVersion):
        (JSC::MarkedBlock::Handle::hasAnyNewlyAllocated): Deleted.
        (JSC::MarkedBlock::Handle::clearNewlyAllocated): Deleted.
        * heap/MarkedBlockInlines.h:
        (JSC::MarkedBlock::Handle::cellsPerBlock):
        (JSC::MarkedBlock::Handle::isLive):
        (JSC::MarkedBlock::Handle::isLiveCell):
        (JSC::MarkedBlock::Handle::isNewlyAllocatedStale):
        (JSC::MarkedBlock::Handle::hasAnyNewlyAllocatedWithSweep):
        (JSC::MarkedBlock::Handle::hasAnyNewlyAllocated):
        (JSC::MarkedBlock::heap):
        (JSC::MarkedBlock::space):
        (JSC::MarkedBlock::Handle::space):
        (JSC::MarkedBlock::resetMarkingVersion): Deleted.
        * heap/MarkedSpace.cpp:
        (JSC::MarkedSpace::beginMarking):
        (JSC::MarkedSpace::endMarking):
        (JSC::MarkedSpace::clearNewlyAllocated): Deleted.
        * heap/MarkedSpace.h:
        (JSC::MarkedSpace::nextVersion):
        (JSC::MarkedSpace::newlyAllocatedVersion):
        (JSC::MarkedSpace::markingVersion): Deleted.
        * runtime/SamplingProfiler.cpp:

2016-10-11  Mark Lam  <mark.lam@apple.com>
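The versioning optimization described in the ChangeLog can be sketched in isolation. The following is a minimal, hypothetical `VersionedBitset` (not the actual JSC classes): instead of walking the heap to clear every block's newlyAllocated bits, the space bumps a global version, and a block whose stored version is stale treats its bits as all-clear. `nextVersion`, `nullVersion`, and `initialVersion` mirror the constants this patch adds to MarkedSpace.h, but the rest is a simplified illustration.

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <cstdint>

using HeapVersion = uint32_t;
static const HeapVersion nullVersion = 0;    // freshly created; bits are meaningless
static const HeapVersion initialVersion = 2; // chosen so nextVersion(nullVersion) != initialVersion

static HeapVersion nextVersion(HeapVersion version)
{
    version++;
    if (version == nullVersion)
        version = initialVersion; // skip nullVersion on wraparound
    return version;
}

// Hypothetical stand-in for a per-block "newlyAllocated" bitmap. A block whose
// stored version does not match the space's current version is treated as if
// every bit were clear, without touching its memory.
template<size_t numBits>
struct VersionedBitset {
    std::bitset<numBits> bits;
    HeapVersion version { nullVersion };

    bool get(size_t i, HeapVersion currentVersion) const
    {
        if (version != currentVersion)
            return false; // stale: logically all-clear
        return bits[i];
    }

    void set(size_t i, HeapVersion currentVersion)
    {
        if (version != currentVersion) {
            bits.reset(); // lazily clear on first use after a version bump
            version = currentVersion;
        }
        bits[i] = true;
    }
};
```

With this scheme, "clear newlyAllocated on every block" costs a single `current = nextVersion(current)` instead of a heap walk, which is what `MarkedSpace::endMarking()` does with `m_newlyAllocatedVersion` in this patch.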
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
(r207166 → r207179)

     0F64B27A1A7957B2006E4E66 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2781A7957B2006E4E66 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
     0F64EAF31C4ECD0600621E9B /* AirArgInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64EAF21C4ECD0600621E9B /* AirArgInlines.h */; };
    +0F664CE81DA304EF00B00A11 /* CodeBlockSetInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */; };
     0F666EC0183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F666EBE183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
     0F666EC1183566F900D017F1 /* FullBytecodeLiveness.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F666EBF183566F900D017F1 /* FullBytecodeLiveness.h */; };
…
     0F64B2781A7957B2006E4E66 /* CallEdge.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdge.h; sourceTree = "<group>"; };
     0F64EAF21C4ECD0600621E9B /* AirArgInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirArgInlines.h; path = b3/air/AirArgInlines.h; sourceTree = "<group>"; };
    +0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CodeBlockSetInlines.h; sourceTree = "<group>"; };
     0F666EBE183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeLivenessAnalysisInlines.h; sourceTree = "<group>"; };
     0F666EBF183566F900D017F1 /* FullBytecodeLiveness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FullBytecodeLiveness.h; sourceTree = "<group>"; };
…
     0FD8A31117D4326C00CA2C40 /* CodeBlockSet.cpp */,
     0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */,
    +0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */,
     146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */,
     149DAAF212EB559D0083B12B /* ConservativeRoots.h */,
…
     657CF45919BF6662004ACBF2 /* JSCallee.h in Headers */,
     A7D801A91880D6A80026C39B /* JSCBuiltins.h in Headers */,
    +0F664CE81DA304EF00B00A11 /* CodeBlockSetInlines.h in Headers */,
     BC1167DA0E19BCC9008066DD /* JSCell.h in Headers */,
     0F9749711687ADE400A4FF6A /* JSCellInlines.h in Headers */,
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
(r206853 → r207179)

     #include "BytecodeUseDef.h"
     #include "CallLinkStatus.h"
    +#include "CodeBlockSet.h"
     #include "DFGCapabilities.h"
     #include "DFGCommon.h"
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
(r206525 → r207179)

     #include "CallReturnOffsetToBytecodeOffset.h"
     #include "CodeBlockHash.h"
    -#include "CodeBlockSet.h"
     #include "CodeOrigin.h"
     #include "CodeType.h"
…
     class BytecodeLivenessAnalysis;
    +class CodeBlockSet;
     class ExecState;
     class JSModuleEnvironment;
…
    -inline void CodeBlockSet::mark(const LockHolder& locker, void* candidateCodeBlock)
    -{
    -    ASSERT(m_lock.isLocked());
    -    // We have to check for 0 and -1 because those are used by the HashMap as markers.
    -    uintptr_t value = reinterpret_cast<uintptr_t>(candidateCodeBlock);
    -
    -    // This checks for both of those nasty cases in one go.
    -    // 0 + 1 = 1
    -    // -1 + 1 = 0
    -    if (value + 1 <= 1)
    -        return;
    -
    -    CodeBlock* codeBlock = static_cast<CodeBlock*>(candidateCodeBlock);
    -    if (!m_oldCodeBlocks.contains(codeBlock) && !m_newCodeBlocks.contains(codeBlock))
    -        return;
    -
    -    mark(locker, codeBlock);
    -}
    -
    -inline void CodeBlockSet::mark(const LockHolder&, CodeBlock* codeBlock)
    -{
    -    if (!codeBlock)
    -        return;
    -
    -    // Try to recover gracefully if we forget to execute a barrier for a
    -    // CodeBlock that does value profiling. This is probably overkill, but we
    -    // have always done it.
    -    Heap::heap(codeBlock)->writeBarrier(codeBlock);
    -
    -    m_currentlyExecuting.add(codeBlock);
    -}
    -
     template <typename Functor> inline void ScriptExecutable::forEachCodeBlock(Functor&& functor)
trunk/Source/JavaScriptCore/heap/CodeBlockSet.cpp
(r206267 → r207179)

    -void CodeBlockSet::writeBarrierCurrentlyExecutingCodeBlocks(Heap* heap)
    +void CodeBlockSet::writeBarrierCurrentlyExecuting(Heap* heap)
     {
         LockHolder locker(&m_lock);
…
         for (CodeBlock* codeBlock : m_currentlyExecuting)
             heap->writeBarrier(codeBlock);
    +}

    -    // It's safe to clear this set because we won't delete the CodeBlocks
    -    // in it until the next GC, and we'll recompute it at that time.
    +void CodeBlockSet::clearCurrentlyExecuting()
    +{
         m_currentlyExecuting.clear();
     }
trunk/Source/JavaScriptCore/heap/CodeBlockSet.h
(r206525 → r207179)

     // Add all currently executing CodeBlocks to the remembered set to be
     // re-scanned during the next collection.
    -    void writeBarrierCurrentlyExecutingCodeBlocks(Heap*);
    +    void writeBarrierCurrentlyExecuting(Heap*);
    +
    +    void clearCurrentlyExecuting();

     bool contains(const LockHolder&, void* candidateCodeBlock);
trunk/Source/JavaScriptCore/heap/ConservativeRoots.cpp
(r206172 → r207179)

     #include "CodeBlock.h"
    -#include "CodeBlockSet.h"
    +#include "CodeBlockSetInlines.h"
     #include "HeapInlines.h"
     #include "HeapUtil.h"
trunk/Source/JavaScriptCore/heap/Heap.cpp
(r206555 → r207179)

     #include "CodeBlock.h"
    +#include "CodeBlockSet.h"
     #include "ConservativeRoots.h"
     #include "DFGWorklist.h"
…
     HeapRootVisitor heapRootVisitor(m_slotVisitor);

    -    ConservativeRoots conservativeRoots(*this);
     {
         TimingScope preConvergenceTimingScope(*this, "Heap::markRoots before convergence");
    -        // We gather conservative roots before clearing mark bits because conservative
    -        // gathering uses the mark bits to determine whether a reference is valid.
    -        {
    -            TimingScope preConvergenceTimingScope(*this, "Heap::markRoots conservative scan");
    -            SuperSamplerScope superSamplerScope(false);
    -            gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
    -            gatherJSStackRoots(conservativeRoots);
    -            gatherScratchBufferRoots(conservativeRoots);
    -        }

     #if ENABLE(DFG_JIT)
…
     m_slotVisitor.donateAndDrain();
    +
    +    {
    +        TimingScope preConvergenceTimingScope(*this, "Heap::markRoots conservative scan");
    +        ConservativeRoots conservativeRoots(*this);
    +        SuperSamplerScope superSamplerScope(false);
    +        gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
    +        gatherJSStackRoots(conservativeRoots);
    +        gatherScratchBufferRoots(conservativeRoots);
    +        visitConservativeRoots(conservativeRoots);
    +
    +        // We want to do this to conservatively ensure that we rescan any code blocks that are
    +        // running right now. However, we need to be sure to do it *after* we mark the code block
    +        // so that we know for sure if it really needs a barrier.
    +        m_codeBlocks->writeBarrierCurrentlyExecuting(this);
    +    }
    +
     visitExternalRememberedSet();
     visitSmallStrings();
    -    visitConservativeRoots(conservativeRoots);
     visitProtectedObjects(heapRootVisitor);
     visitArgumentBuffers(heapRootVisitor);
…
     if (m_operationInProgress == FullCollection)
         m_codeBlocks->clearMarksForFullCollection();
    -
    -    {
    -        TimingScope clearNewlyAllocatedTimingScope(*this, "m_objectSpace.clearNewlyAllocated");
    -        m_objectSpace.clearNewlyAllocated();
    -    }

     {
…
     notifyIncrementalSweeper();
    -    writeBarrierCurrentlyExecutingCodeBlocks();
    +    m_codeBlocks->writeBarrierCurrentlyExecuting(this);
    +    m_codeBlocks->clearCurrentlyExecuting();

     prepareForAllocation();
…
     m_sweeper->startSweeping();
    -}
    -
    -void Heap::writeBarrierCurrentlyExecutingCodeBlocks()
    -{
    -    m_codeBlocks->writeBarrierCurrentlyExecutingCodeBlocks(this);
     }
trunk/Source/JavaScriptCore/heap/Heap.h
(r206555 → r207179)

     void deleteSourceProviderCaches();
     void notifyIncrementalSweeper();
    -    void writeBarrierCurrentlyExecutingCodeBlocks();
     void prepareForAllocation();
     void harvestWeakReferences();
trunk/Source/JavaScriptCore/heap/HeapUtil.h
(r206172 → r207179)

     const HashSet<MarkedBlock*>& set = heap.objectSpace().blocks().set();

    +    ASSERT(heap.objectSpace().isMarking());
    +    static const bool isMarking = true;
    +
     char* pointer = static_cast<char*>(passedPointer);
…
         && previousCandidate->handle().cellKind() == HeapCell::Auxiliary) {
         previousPointer = static_cast<char*>(previousCandidate->handle().cellAlign(previousPointer));
    -        if (previousCandidate->handle().isLiveCell(markingVersion, previousPointer))
    +        if (previousCandidate->handle().isLiveCell(markingVersion, isMarking, previousPointer))
             func(previousPointer);
     }
…
     auto tryPointer = [&] (void* pointer) {
    -        if (candidate->handle().isLiveCell(markingVersion, pointer))
    +        if (candidate->handle().isLiveCell(markingVersion, isMarking, pointer))
             func(pointer);
     };
trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp
(r206555 → r207179)

     for (size_t index = 0; index < m_blocks.size(); ++index) {
         MarkedBlock::Handle* block = m_blocks[index];
    -        if (block) {
    -            // Forces us to touch the memory of the block, but has no semantic effect.
    -            if (block->areMarksStale())
    -                block->block().resetMarkingVersion();
    -        }
    +        if (block)
    +            block->block().updateNeedsDestruction();
         ++itersSinceLastTimeCheck;
         if (itersSinceLastTimeCheck >= Heap::s_timeCheckResolution) {
…
     m_unswept.forEachSetBit(
         [&] (size_t index) {
    -            m_blocks[index]->sweep();
    +            MarkedBlock::Handle* block = m_blocks[index];
    +            block->sweep();
         });
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
(r206709 → r207179)

     MarkedBlock::Handle::Handle(Heap& heap, void* blockSpace)
         : m_weakSet(heap.vm(), CellContainer())
    +    , m_newlyAllocatedVersion(MarkedSpace::nullVersion)
     {
         m_block = new (NotNull, blockSpace) MarkedBlock(*heap.vm(), *this);
…
     if (emptyMode == NotEmpty
         && ((marksMode == MarksNotStale && block.m_marks.get(i))
    -            || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated->get(i)))) {
    +            || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated.get(i)))) {
         isEmpty = false;
         continue;
     }
…
     // otherwise we would lose information on what's currently alive.
     if (sweepMode == SweepToFreeList && newlyAllocatedMode == HasNewlyAllocated)
    -        m_newlyAllocated = nullptr;
    +        m_newlyAllocatedVersion = MarkedSpace::nullVersion;

     FreeList result = FreeList::list(head, count * cellSize());
…
     FreeList MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated(SweepMode sweepMode)
     {
    -    if (m_newlyAllocated)
    +    if (hasAnyNewlyAllocated())
         return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, HasNewlyAllocated>(sweepMode);
         return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>(sweepMode);
…
     // way to tell what's live vs dead.

    -    ASSERT(!m_newlyAllocated);
    -    m_newlyAllocated = std::make_unique<WTF::Bitmap<atomsPerBlock>>();
    +    m_newlyAllocated.clearAll();
    +    m_newlyAllocatedVersion = heap()->objectSpace().newlyAllocatedVersion();

     SetNewlyAllocatedFunctor functor(this);
…
     allocator()->setIsAllocated(this, false);
    -    m_block->clearMarks();
    +    m_block->m_marks.clearAll();
    +    m_block->clearHasAnyMarked();
    +    m_block->m_markingVersion = heap()->objectSpace().markingVersion();
     m_weakSet.lastChanceToFinalize();
    -
    -    clearNewlyAllocated();
    +    m_newlyAllocated.clearAll();
    +    m_newlyAllocatedVersion = heap()->objectSpace().newlyAllocatedVersion();
     sweep();
…
     ASSERT(!isFreeListed());

    -    if (!m_newlyAllocated) {
    +    if (!hasAnyNewlyAllocated()) {
         // This means we had already exhausted the block when we stopped allocation.
         return FreeList();
…
     ASSERT(vm()->heap.objectSpace().isMarking());
     LockHolder locker(m_lock);
    -    if (areMarksStale(markingVersion)) {
    -        clearMarks(markingVersion);
    -        // This means we're the first ones to mark any object in this block.
    -        handle().allocator()->atomicSetAndCheckIsMarkingNotEmpty(&handle(), true);
    -    }
    -}
    -
    -void MarkedBlock::clearMarks()
    -{
    -    clearMarks(vm()->heap.objectSpace().markingVersion());
    -}
    -
    -void MarkedBlock::clearMarks(HeapVersion markingVersion)
    -{
    -    m_marks.clearAll();
    +    if (!areMarksStale(markingVersion))
    +        return;
    +
    +    if (handle().allocator()->isAllocated(&handle())
    +        || !marksConveyLivenessDuringMarking(markingVersion)) {
    +        // We already know that the block is full and is already recognized as such, or that the
    +        // block did not survive the previous GC. So, we can clear mark bits the old fashioned
    +        // way. Note that it's possible for such a block to have newlyAllocated with an up-to-
    +        // date version! If it does, then we want to leave the newlyAllocated alone, since that
    +        // means that we had allocated in this previously empty block but did not fill it up, so
    +        // we created a newlyAllocated.
    +        m_marks.clearAll();
    +    } else {
    +        HeapVersion newlyAllocatedVersion = space()->newlyAllocatedVersion();
    +        if (handle().m_newlyAllocatedVersion == newlyAllocatedVersion) {
    +            // Merge the contents of marked into newlyAllocated. If we get the full set of bits
    +            // then invalidate newlyAllocated and set allocated.
    +            handle().m_newlyAllocated.mergeAndClear(m_marks);
    +        } else {
    +            // Replace the contents of newlyAllocated with marked. If we get the full set of
    +            // bits then invalidate newlyAllocated and set allocated.
    +            handle().m_newlyAllocated.setAndClear(m_marks);
    +        }
    +        handle().m_newlyAllocatedVersion = newlyAllocatedVersion;
    +    }
     clearHasAnyMarked();
     WTF::storeStoreFence();
     m_markingVersion = markingVersion;
    +
    +    // This means we're the first ones to mark any object in this block.
    +    handle().allocator()->atomicSetAndCheckIsMarkingNotEmpty(&handle(), true);
    +}
    +
    +void MarkedBlock::Handle::resetAllocated()
    +{
    +    m_newlyAllocated.clearAll();
    +    m_newlyAllocatedVersion = MarkedSpace::nullVersion;
    +}
    +
    +void MarkedBlock::resetMarks()
    +{
    +    // We want aboutToMarkSlow() to see what the mark bits were after the last collection. It uses
    +    // the version number to distinguish between the marks having already been stale before
    +    // beginMarking(), or just stale now that beginMarking() bumped the version. If we have a version
    +    // wraparound, then we will call this method before resetting the version to null. When the
    +    // version is null, aboutToMarkSlow() will assume that the marks were not stale as of before
    +    // beginMarking(). Hence the need to whip the marks into shape.
    +    if (areMarksStale())
    +        m_marks.clearAll();
    +    m_markingVersion = MarkedSpace::nullVersion;
     }
…
    +void MarkedBlock::updateNeedsDestruction()
    +{
    +    m_needsDestruction = handle().needsDestruction();
    +}
    +
     void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t index)
     {
…
     RELEASE_ASSERT(m_attributes.destruction == DoesNotNeedDestruction);

    -    block().m_needsDestruction = needsDestruction();
    -
    -    unsigned cellsPerBlock = MarkedSpace::blockPayload / cellSize;
    -    double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock);
    +    block().updateNeedsDestruction();
    +
    +    double markCountBias = -(Options::minMarkedBlockUtilization() * cellsPerBlock());

     // The mark count bias should be comfortably within this range.
…
     bool MarkedBlock::Handle::isLive(const HeapCell* cell)
     {
    -    return isLive(vm()->heap.objectSpace().markingVersion(), cell);
    +    return isLive(space()->markingVersion(), space()->isMarking(), cell);
     }

     bool MarkedBlock::Handle::isLiveCell(const void* p)
     {
    -    return isLiveCell(vm()->heap.objectSpace().markingVersion(), p);
    +    return isLiveCell(space()->markingVersion(), space()->isMarking(), p);
     }
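The core of the `aboutToMarkSlow()` change above, transferring stale mark bits into newlyAllocated instead of discarding them, can be sketched as a free function. This is a hypothetical miniature using `std::bitset` (the real code uses `WTF::Bitmap` members on the block handle); `transferMarksToNewlyAllocated` and its parameters are illustrative names, not JSC API.

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <cstdint>

using HeapVersion = uint32_t;

// Hypothetical miniature of the aboutToMarkSlow() transfer: when marking first
// touches a block whose marks still convey liveness, the old mark bits move
// into newlyAllocated instead of being thrown away, so live-but-not-yet-marked
// objects stay recognizable during a concurrent collection.
template<size_t n>
void transferMarksToNewlyAllocated(std::bitset<n>& marks,
                                   std::bitset<n>& newlyAllocated,
                                   HeapVersion& newlyAllocatedVersion,
                                   HeapVersion currentNewlyAllocatedVersion)
{
    if (newlyAllocatedVersion == currentNewlyAllocatedVersion) {
        // newlyAllocated is up to date: fold the marks in (the mergeAndClear case).
        newlyAllocated |= marks;
    } else {
        // newlyAllocated is stale, i.e. logically empty: take the marks
        // wholesale (the setAndClear case).
        newlyAllocated = marks;
    }
    marks.reset(); // the "AndClear" half: the mark bits start fresh for this GC
    newlyAllocatedVersion = currentNewlyAllocatedVersion;
}
```

The version comparison decides between merge and replace, which is exactly the branch the patch adds on `handle().m_newlyAllocatedVersion` before choosing `mergeAndClear` or `setAndClear`.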
trunk/Source/JavaScriptCore/heap/MarkedBlock.h
(r206709 → r207179)

     class JSCell;
     class MarkedAllocator;
    +class MarkedSpace;

     typedef uintptr_t Bits;
…
     MarkedAllocator* allocator() const;
     Heap* heap() const;
    +    inline MarkedSpace* space() const;
     VM* vm() const;
     WeakSet& weakSet();
…
     FreeList resumeAllocating(); // Call this if you canonicalized a block for some non-collection related purpose.

    -    // Returns true if the "newly allocated" bitmap was non-null
    -    // and was successfully cleared and false otherwise.
    -    bool clearNewlyAllocated();
    -
     size_t cellSize();
    +    inline unsigned cellsPerBlock();
    +
     const AllocatorAttributes& attributes() const;
     DestructionMode destruction() const;
…
     size_t markCount();
     size_t size();

    -    inline bool isLive(HeapVersion markingVersion, const HeapCell*);
    -    inline bool isLiveCell(HeapVersion markingVersion, const void*);
    +    inline bool isLive(HeapVersion markingVersion, bool isMarking, const HeapCell*);
    +    inline bool isLiveCell(HeapVersion markingVersion, bool isMarking, const void*);

     bool isLive(const HeapCell*);
…
     void clearNewlyAllocated(const void*);

    -    bool hasAnyNewlyAllocated() const { return !!m_newlyAllocated; }
    -
    +    HeapVersion newlyAllocatedVersion() const { return m_newlyAllocatedVersion; }
    +
    +    inline bool isNewlyAllocatedStale() const;
    +
    +    inline bool hasAnyNewlyAllocated();
    +    void resetAllocated();
    +
     template <typename Functor> IterationStatus forEachCell(const Functor&);
     template <typename Functor> inline IterationStatus forEachLiveCell(const Functor&);
…
     private:
         Handle(Heap&, void*);

         template<DestructionMode>
         FreeList sweepHelperSelectScribbleMode(SweepMode = SweepOnly);
…
     size_t m_endAtom { std::numeric_limits<size_t>::max() }; // This is a fuzzy end. Always test for < m_endAtom.

    -    std::unique_ptr<WTF::Bitmap<atomsPerBlock>> m_newlyAllocated;
    +    WTF::Bitmap<atomsPerBlock> m_newlyAllocated;

     AllocatorAttributes m_attributes;
…
     size_t m_index { std::numeric_limits<size_t>::max() };
     WeakSet m_weakSet;
    +
    +    HeapVersion m_newlyAllocatedVersion;

     MarkedBlock* m_block { nullptr };
…
     VM* vm() const;
    +    inline Heap* heap() const;
    +    inline MarkedSpace* space() const;

     static bool isAtomAligned(const void*);
…
     bool needsDestruction() const { return m_needsDestruction; }

    -    inline void resetMarkingVersion();
    +    // This is usually a no-op, and we use it as a no-op that touches the page in isPagedOut().
    +    void updateNeedsDestruction();
    +
    +    void resetMarks();

     private:
…
     void aboutToMarkSlow(HeapVersion markingVersion);
    -    void clearMarks();
    -    void clearMarks(HeapVersion markingVersion);
     void clearHasAnyMarked();

     void noteMarkedSlow();
    +
    +    inline bool marksConveyLivenessDuringMarking(HeapVersion markingVersion);

    -    WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_marks;
    +    WTF::Bitmap<atomsPerBlock> m_marks;

     bool m_needsDestruction;
…
     inline bool MarkedBlock::Handle::isNewlyAllocated(const void* p)
     {
    -    return m_newlyAllocated->get(m_block->atomNumber(p));
    +    return m_newlyAllocated.get(m_block->atomNumber(p));
     }

     inline void MarkedBlock::Handle::setNewlyAllocated(const void* p)
     {
    -    m_newlyAllocated->set(m_block->atomNumber(p));
    +    m_newlyAllocated.set(m_block->atomNumber(p));
     }

     inline void MarkedBlock::Handle::clearNewlyAllocated(const void* p)
     {
    -    m_newlyAllocated->clear(m_block->atomNumber(p));
    -}
    -
    -inline bool MarkedBlock::Handle::clearNewlyAllocated()
    -{
    -    if (m_newlyAllocated) {
    -        m_newlyAllocated = nullptr;
    -        return true;
    -    }
    -    return false;
    +    m_newlyAllocated.clear(m_block->atomNumber(p));
     }
trunk/Source/JavaScriptCore/heap/MarkedBlockInlines.h
(r206172 → r207179)

     #include "MarkedAllocator.h"
     #include "MarkedBlock.h"
    +#include "MarkedSpace.h"

     namespace JSC {

    -inline bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, const HeapCell* cell)
    +inline unsigned MarkedBlock::Handle::cellsPerBlock()
    +{
    +    return MarkedSpace::blockPayload / cellSize();
    +}
    +
    +inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVersion)
    +{
    +    // This returns true if any of these is true:
    +    // - We just created the block and so the bits are clear already.
    +    // - This block has objects marked during the last GC, and so its version was up-to-date just
    +    //   before the current collection did beginMarking(). This means that any objects that have
    +    //   their mark bit set are valid objects that were never deleted, and so are candidates for
    +    //   marking in any conservative scan. Using our jargon, they are "live".
    +    // - We did ~2^32 collections and rotated the version back to null, so we needed to hard-reset
    +    //   everything. If the marks had been stale, we would have cleared them. So, we can be sure
    +    //   that any set mark bit reflects objects marked during last GC, i.e. "live" objects.
    +    // It would be absurd to use this method when not collecting, since this special "one version
    +    // back" state only makes sense when we're in a concurrent collection and have to be
    +    // conservative.
    +    ASSERT(space()->isMarking());
    +    return m_markingVersion == MarkedSpace::nullVersion
    +        || MarkedSpace::nextVersion(m_markingVersion) == markingVersion;
    +}
    +
    +inline bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, bool isMarking, const HeapCell* cell)
     {
         ASSERT(!isFreeListed());
…
     }

    -    MarkedBlock& block = this->block();
    -
     if (allocator()->isAllocated(this))
         return true;

    -    if (block.areMarksStale(markingVersion))
    -        return false;
    +    MarkedBlock& block = this->block();
    +
    +    if (block.areMarksStale()) {
    +        if (!isMarking)
    +            return false;
    +        if (!block.marksConveyLivenessDuringMarking(markingVersion))
    +            return false;
    +    }

    -    return block.isMarked(cell);
    +    return block.m_marks.get(block.atomNumber(cell));
     }

    -inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, const void* p)
    +inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, bool isMarking, const void* p)
     {
         if (!m_block->isAtom(p))
             return false;
    -    return isLive(markingVersion, static_cast<const HeapCell*>(p));
    +    return isLive(markingVersion, isMarking, static_cast<const HeapCell*>(p));
    +}
    +
    +inline bool MarkedBlock::Handle::isNewlyAllocatedStale() const
    +{
    +    return m_newlyAllocatedVersion != space()->newlyAllocatedVersion();
    +}
    +
    +inline bool MarkedBlock::Handle::hasAnyNewlyAllocated()
    +{
    +    return !isNewlyAllocatedStale();
     }
…
    -inline void MarkedBlock::resetMarkingVersion()
    +inline Heap* MarkedBlock::heap() const
     {
    -    m_markingVersion = MarkedSpace::nullVersion;
    +    return &vm()->heap;
    +}
    +
    +inline MarkedSpace* MarkedBlock::space() const
    +{
    +    return &heap()->objectSpace();
    +}
    +
    +inline MarkedSpace* MarkedBlock::Handle::space() const
    +{
    +    return &heap()->objectSpace();
     }
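The liveness rule that `marksConveyLivenessDuringMarking()` adds above can be restated as a small standalone predicate. This is a sketch with free-standing parameters rather than the actual member function; `nextVersion` and the version constants mirror the ones this patch puts in MarkedSpace.h.

```cpp
#include <cassert>
#include <cstdint>

using HeapVersion = uint32_t;
static const HeapVersion nullVersion = 0;
static const HeapVersion initialVersion = 2;

static HeapVersion nextVersion(HeapVersion version)
{
    version++;
    if (version == nullVersion)
        version = initialVersion; // skip nullVersion on wraparound
    return version;
}

// Sketch: do a block's mark bits still convey liveness while a collection is
// in progress? True if the block is brand new (nullVersion, bits already
// clear), or if its marks were up to date just before the current collection
// bumped the space's version, i.e. the block is exactly one version behind.
bool marksConveyLivenessDuringMarking(HeapVersion blockMarkingVersion,
                                      HeapVersion spaceMarkingVersion)
{
    return blockMarkingVersion == nullVersion
        || nextVersion(blockMarkingVersion) == spaceMarkingVersion;
}
```

A block two or more versions behind had stale marks even before this collection began, so its set bits say nothing about liveness; the "one version back" state is only meaningful mid-collection, which is why the real function asserts `space()->isMarking()`.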
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
(r206555 → r207179)

    -void MarkedSpace::clearNewlyAllocated()
    -{
    -    forEachAllocator(
    -        [&] (MarkedAllocator& allocator) -> IterationStatus {
    -            if (MarkedBlock::Handle* block = allocator.takeLastActiveBlock())
    -                block->clearNewlyAllocated();
    -            return IterationStatus::Continue;
    -        });
    -
    -    for (unsigned i = m_largeAllocationsOffsetForThisCollection; i < m_largeAllocations.size(); ++i)
    -        m_largeAllocations[i]->clearNewlyAllocated();
    -
    -    if (!ASSERT_DISABLED) {
    -        forEachBlock(
    -            [&] (MarkedBlock::Handle* block) {
    -                ASSERT_UNUSED(block, !block->clearNewlyAllocated());
    -            });
    -
    -        for (LargeAllocation* allocation : m_largeAllocations)
    -            ASSERT_UNUSED(allocation, !allocation->isNewlyAllocated());
    -    }
    -}
    -
     void MarkedSpace::beginMarking()
     {
…
         });

    -    m_markingVersion = nextVersion(m_markingVersion);
    -
    -    if (UNLIKELY(m_markingVersion == initialVersion)) {
    -        // Oh no! Version wrap-around! We handle this by setting all block versions to null.
    +    if (UNLIKELY(nextVersion(m_markingVersion) == initialVersion)) {
         forEachBlock(
             [&] (MarkedBlock::Handle* handle) {
    -                handle->block().resetMarkingVersion();
    +                handle->block().resetMarks();
             });
     }
    +
    +    m_markingVersion = nextVersion(m_markingVersion);

     for (LargeAllocation* allocation : m_largeAllocations)
…
     void MarkedSpace::endMarking()
     {
    +    if (UNLIKELY(nextVersion(m_newlyAllocatedVersion) == initialVersion)) {
    +        forEachBlock(
    +            [&] (MarkedBlock::Handle* handle) {
    +                handle->resetAllocated();
    +            });
    +    }
    +
    +    m_newlyAllocatedVersion = nextVersion(m_newlyAllocatedVersion);
    +
    +    for (unsigned i = m_largeAllocationsOffsetForThisCollection; i < m_largeAllocations.size(); ++i)
    +        m_largeAllocations[i]->clearNewlyAllocated();
    +
    +    if (!ASSERT_DISABLED) {
    +        for (LargeAllocation* allocation : m_largeAllocations)
    +            ASSERT_UNUSED(allocation, !allocation->isNewlyAllocated());
    +    }
    +
     forEachAllocator(
         [&] (MarkedAllocator& allocator) -> IterationStatus {
trunk/Source/JavaScriptCore/heap/MarkedSpace.h
(r206555 → r207179)

     static const HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
    -    static const HeapVersion initialVersion = 1; // The version that the heap starts out with.
    -
    -    HeapVersion nextVersion(HeapVersion version)
    +    static const HeapVersion initialVersion = 2; // The version that the heap starts out with. Set to make sure that nextVersion(nullVersion) != initialVersion.
    +
    +    static HeapVersion nextVersion(HeapVersion version)
     {
         version++;
         if (version == nullVersion)
    -            version++;
    +            version = initialVersion;
         return version;
     }
…
     HeapVersion markingVersion() const { return m_markingVersion; }
    +    HeapVersion newlyAllocatedVersion() const { return m_newlyAllocatedVersion; }

     const Vector<LargeAllocation*>& largeAllocations() const { return m_largeAllocations; }
…
     Heap* m_heap;
     HeapVersion m_markingVersion { initialVersion };
    +    HeapVersion m_newlyAllocatedVersion { initialVersion };
     size_t m_capacity;
     bool m_isIterating;
trunk/Source/JavaScriptCore/heap/SlotVisitor.cpp
r206530 r207179
…
     validate(cell);
 #endif
-
-    //dataLog("    Marking ", RawPointer(cell), "\n");
 
     if (cell->isLargeAllocation())
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
r206459 r207179
…
 #include "CallFrame.h"
 #include "CodeBlock.h"
+#include "CodeBlockSet.h"
 #include "Executable.h"
 #include "HeapInlines.h"
trunk/Source/WTF/ChangeLog
r207156 r207179
+2016-10-08  Filip Pizlo  <fpizlo@apple.com>
+
+        MarkedBlock should know what objects are live during marking
+        https://bugs.webkit.org/show_bug.cgi?id=162309
+
+        Reviewed by Geoffrey Garen.
+
+        This removes the atomicity mode, because it's not really used: it only affects the
+        concurrentBlah methods, but their only users turn on atomicity. This was useful because
+        previously, some binary Bitmap methods (like merge(const Bitmap&)) couldn't be used
+        effectively in the GC because some of the GC's bitmaps set the atomic mode and some didn't.
+        Removing this useless mode is the best solution.
+
+        Also added some new binary Bitmap methods: mergeAndClear(Bitmap& other) and
+        setAndClear(Bitmap& other). They perform their action on 'this' (either merge or set,
+        respectively) while also clearing the contents of 'other'. This is great for one of the GC
+        hot paths.
+
+        * wtf/Bitmap.h:
+        (WTF::WordType>::Bitmap):
+        (WTF::WordType>::get):
+        (WTF::WordType>::set):
+        (WTF::WordType>::testAndSet):
+        (WTF::WordType>::testAndClear):
+        (WTF::WordType>::concurrentTestAndSet):
+        (WTF::WordType>::concurrentTestAndClear):
+        (WTF::WordType>::clear):
+        (WTF::WordType>::clearAll):
+        (WTF::WordType>::nextPossiblyUnset):
+        (WTF::WordType>::findRunOfZeros):
+        (WTF::WordType>::count):
+        (WTF::WordType>::isEmpty):
+        (WTF::WordType>::isFull):
+        (WTF::WordType>::merge):
+        (WTF::WordType>::filter):
+        (WTF::WordType>::exclude):
+        (WTF::WordType>::forEachSetBit):
+        (WTF::WordType>::mergeAndClear):
+        (WTF::WordType>::setAndClear):
+        (WTF::=):
+        (WTF::WordType>::hash):
+
 2016-10-11  Said Abou-Hallawa  <sabouhallawa@apple.com>
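The semantics of the two new operations described above can be shown with a standalone sketch. This is not the WTF implementation; `TinyBitmap` is an invented stand-in that mirrors the word-array layout so the single-pass read-and-clear behavior is visible.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of the mergeAndClear/setAndClear semantics: each reads from
// 'other' and zeroes it in the same pass, touching every word once.
struct TinyBitmap {
    std::array<uint32_t, 2> words { { 0, 0 } }; // 64 bits total

    void set(size_t n) { words[n / 32] |= 1u << (n % 32); }
    bool get(size_t n) const { return words[n / 32] & (1u << (n % 32)); }

    // this |= other; other = 0  -- fused into one loop over the words.
    void mergeAndClear(TinyBitmap& other)
    {
        for (size_t i = 0; i < words.size(); ++i) {
            words[i] |= other.words[i];
            other.words[i] = 0;
        }
    }

    // this = other; other = 0  -- fused into one loop over the words.
    void setAndClear(TinyBitmap& other)
    {
        for (size_t i = 0; i < words.size(); ++i) {
            words[i] = other.words[i];
            other.words[i] = 0;
        }
    }
};
```

Fusing the read and the clear is the point: on a GC hot path it halves the number of passes over the bitmap compared to a `merge` followed by a `clearAll` on the source.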
trunk/Source/WTF/wtf/Bitmap.h
r203365 r207179
…
 namespace WTF {
 
-enum BitmapAtomicMode {
-    // This makes concurrentTestAndSet behave just like testAndSet.
-    BitmapNotAtomic,
-
-    // This makes concurrentTestAndSet use compareAndSwap, so that it's
-    // atomic even when used concurrently.
-    BitmapAtomic
-};
-
-template<size_t bitmapSize, BitmapAtomicMode atomicMode = BitmapNotAtomic, typename WordType = uint32_t>
+template<size_t bitmapSize, typename WordType = uint32_t>
 class Bitmap {
     WTF_MAKE_FAST_ALLOCATED;
…
     void clear(size_t);
     void clearAll();
-    int64_t findRunOfZeros(size_t) const;
-    size_t count(size_t = 0) const;
+    int64_t findRunOfZeros(size_t runLength) const;
+    size_t count(size_t start = 0) const;
     size_t isEmpty() const;
     size_t isFull() const;
…
     template<typename Func>
     void forEachSetBit(const Func&) const;
+
+    void mergeAndClear(Bitmap&);
+    void setAndClear(Bitmap&);
 
     bool operator==(const Bitmap&) const;
…
 };
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline Bitmap<bitmapSize, atomicMode, WordType>::Bitmap()
+template<size_t bitmapSize, typename WordType>
+inline Bitmap<bitmapSize, WordType>::Bitmap()
 {
     clearAll();
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::get(size_t n) const
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::get(size_t n) const
 {
     return !!(bits[n / wordSize] & (one << (n % wordSize)));
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::set(size_t n)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::set(size_t n)
 {
     bits[n / wordSize] |= (one << (n % wordSize));
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::set(size_t n, bool value)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::set(size_t n, bool value)
 {
     if (value)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::testAndSet(size_t n)
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::testAndSet(size_t n)
 {
     WordType mask = one << (n % wordSize);
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::testAndClear(size_t n)
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::testAndClear(size_t n)
 {
     WordType mask = one << (n % wordSize);
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::concurrentTestAndSet(size_t n)
-{
-    if (atomicMode == BitmapNotAtomic)
-        return testAndSet(n);
-
-    ASSERT(atomicMode == BitmapAtomic);
-
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::concurrentTestAndSet(size_t n)
+{
     WordType mask = one << (n % wordSize);
     size_t index = n / wordSize;
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::concurrentTestAndClear(size_t n)
-{
-    if (atomicMode == BitmapNotAtomic)
-        return testAndClear(n);
-
-    ASSERT(atomicMode == BitmapAtomic);
-
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::concurrentTestAndClear(size_t n)
+{
     WordType mask = one << (n % wordSize);
     size_t index = n / wordSize;
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::clear(size_t n)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::clear(size_t n)
 {
     bits[n / wordSize] &= ~(one << (n % wordSize));
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::clearAll()
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::clearAll()
 {
     memset(bits.data(), 0, sizeof(bits));
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline size_t Bitmap<bitmapSize, atomicMode, WordType>::nextPossiblyUnset(size_t start) const
+template<size_t bitmapSize, typename WordType>
+inline size_t Bitmap<bitmapSize, WordType>::nextPossiblyUnset(size_t start) const
 {
     if (!~bits[start / wordSize])
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline int64_t Bitmap<bitmapSize, atomicMode, WordType>::findRunOfZeros(size_t runLength) const
+template<size_t bitmapSize, typename WordType>
+inline int64_t Bitmap<bitmapSize, WordType>::findRunOfZeros(size_t runLength) const
 {
     if (!runLength)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline size_t Bitmap<bitmapSize, atomicMode, WordType>::count(size_t start) const
+template<size_t bitmapSize, typename WordType>
+inline size_t Bitmap<bitmapSize, WordType>::count(size_t start) const
 {
     size_t result = 0;
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline size_t Bitmap<bitmapSize, atomicMode, WordType>::isEmpty() const
+template<size_t bitmapSize, typename WordType>
+inline size_t Bitmap<bitmapSize, WordType>::isEmpty() const
 {
     for (size_t i = 0; i < words; ++i)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline size_t Bitmap<bitmapSize, atomicMode, WordType>::isFull() const
+template<size_t bitmapSize, typename WordType>
+inline size_t Bitmap<bitmapSize, WordType>::isFull() const
 {
     for (size_t i = 0; i < words; ++i)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::merge(const Bitmap& other)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::merge(const Bitmap& other)
 {
     for (size_t i = 0; i < words; ++i)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::filter(const Bitmap& other)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::filter(const Bitmap& other)
 {
     for (size_t i = 0; i < words; ++i)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::exclude(const Bitmap& other)
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::exclude(const Bitmap& other)
 {
     for (size_t i = 0; i < words; ++i)
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
+template<size_t bitmapSize, typename WordType>
 template<typename Func>
-inline void Bitmap<bitmapSize, atomicMode, WordType>::forEachSetBit(const Func& func) const
+inline void Bitmap<bitmapSize, WordType>::forEachSetBit(const Func& func) const
 {
     for (size_t i = 0; i < words; ++i) {
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::operator==(const Bitmap& other) const
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::mergeAndClear(Bitmap& other)
+{
+    for (size_t i = 0; i < words; ++i) {
+        bits[i] |= other.bits[i];
+        other.bits[i] = 0;
+    }
+}
+
+template<size_t bitmapSize, typename WordType>
+inline void Bitmap<bitmapSize, WordType>::setAndClear(Bitmap& other)
+{
+    for (size_t i = 0; i < words; ++i) {
+        bits[i] = other.bits[i];
+        other.bits[i] = 0;
+    }
+}
+
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::operator==(const Bitmap& other) const
 {
     for (size_t i = 0; i < words; ++i) {
…
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline bool Bitmap<bitmapSize, atomicMode, WordType>::operator!=(const Bitmap& other) const
+template<size_t bitmapSize, typename WordType>
+inline bool Bitmap<bitmapSize, WordType>::operator!=(const Bitmap& other) const
 {
     return !(*this == other);
 }
 
-template<size_t bitmapSize, BitmapAtomicMode atomicMode, typename WordType>
-inline unsigned Bitmap<bitmapSize, atomicMode, WordType>::hash() const
+template<size_t bitmapSize, typename WordType>
+inline unsigned Bitmap<bitmapSize, WordType>::hash() const
 {
     unsigned result = 0;