Changeset 114698 in webkit

- Timestamp: Apr 19, 2012, 5:05:37 PM
- Location: trunk/Source/JavaScriptCore
- Files: 19 edited
trunk/Source/JavaScriptCore/ChangeLog
(r114695 → r114698; new entry added at the top)

2012-04-19  Mark Hahnenberg  <mhahnenberg@apple.com>

        We're collecting pathologically due to small allocations
        https://bugs.webkit.org/show_bug.cgi?id=84404

        Reviewed by Geoffrey Garen.

        No change in performance on run-jsc-benchmarks.

        * dfg/DFGSpeculativeJIT.h: Replaced m_firstFreeCell with m_freeList.
        (JSC::DFG::SpeculativeJIT::emitAllocateBasicJSObject):
        * heap/CopiedSpace.cpp: Removed all water-mark-related code, since it is no
        longer useful.
        (JSC::CopiedSpace::CopiedSpace):
        (JSC::CopiedSpace::tryAllocateSlowCase): We now only call didAllocate here
        rather than carrying out a somewhat complicated accounting job for our old
        water mark throughout CopiedSpace.
        (JSC::CopiedSpace::tryAllocateOversize): Call the new didAllocate to notify
        the Heap of newly allocated memory.
        (JSC::CopiedSpace::tryReallocateOversize):
        (JSC::CopiedSpace::doneFillingBlock):
        (JSC::CopiedSpace::doneCopying):
        (JSC::CopiedSpace::destroy):
        * heap/CopiedSpace.h:
        (CopiedSpace):
        * heap/CopiedSpaceInlineMethods.h:
        (JSC::CopiedSpace::startedCopying):
        * heap/Heap.cpp: Removed water-mark-related code; replaced it with the new
        bytesAllocated and bytesAllocatedLimit fields to track how much memory has
        been allocated since the last collection.
        (JSC::Heap::Heap):
        (JSC::Heap::reportExtraMemoryCostSlowCase):
        (JSC::Heap::collect): We now set the limit of bytes we can allocate before
        triggering a collection to the size of the Heap after the previous
        collection. Thus, we still have our 2x allocation amount.
        (JSC::Heap::didAllocate): Notifies the GC activity timer of how many bytes
        have been allocated so far, then adds the new number of bytes to the
        current total.
        (JSC):
        * heap/Heap.h: Removed water-mark-related code.
        (JSC::Heap::notifyIsSafeToCollect):
        (Heap):
        (JSC::Heap::shouldCollect):
        (JSC):
        * heap/MarkedAllocator.cpp:
        (JSC::MarkedAllocator::tryAllocateHelper): Refactored to use MarkedBlock's
        new FreeList struct.
        (JSC::MarkedAllocator::allocateSlowCase):
        (JSC::MarkedAllocator::addBlock):
        * heap/MarkedAllocator.h:
        (MarkedAllocator):
        (JSC::MarkedAllocator::MarkedAllocator):
        (JSC::MarkedAllocator::allocate):
        (JSC::MarkedAllocator::zapFreeList): Refactored to take a FreeList instead
        of a FreeCell.
        * heap/MarkedBlock.cpp:
        (JSC::MarkedBlock::specializedSweep):
        (JSC::MarkedBlock::sweep):
        (JSC::MarkedBlock::sweepHelper):
        (JSC::MarkedBlock::zapFreeList):
        * heap/MarkedBlock.h:
        (FreeList): Added a new struct that tracks the current MarkedAllocator's
        free list, including the number of bytes in the free list, so that when the
        free list is exhausted the correct amount can be reported to the Heap.
        (MarkedBlock):
        (JSC::MarkedBlock::FreeList::FreeList):
        (JSC):
        * heap/MarkedSpace.cpp: Removed all water-mark-related code.
        (JSC::MarkedSpace::MarkedSpace):
        (JSC::MarkedSpace::resetAllocators):
        * heap/MarkedSpace.h:
        (MarkedSpace):
        (JSC):
        * heap/WeakSet.cpp:
        (JSC::WeakSet::findAllocator): Refactored to use the didAllocate interface
        with the Heap. This function still needs work, though, now that the Heap
        knows how many bytes have been allocated since the last collection.
        * jit/JITInlineMethods.h: Refactored to use MarkedBlock's new FreeList
        struct.
        (JSC::JIT::emitAllocateBasicJSObject): Ditto.
        * llint/LowLevelInterpreter.asm: Ditto.
        * runtime/GCActivityCallback.cpp:
        (JSC::DefaultGCActivityCallback::didAllocate):
        * runtime/GCActivityCallback.h:
        (JSC::GCActivityCallback::didAllocate): Renamed willAllocate to didAllocate
        to indicate that the allocation being reported has already taken place.
        (DefaultGCActivityCallback):
        * runtime/GCActivityCallbackCF.cpp:
        (JSC):
        (JSC::DefaultGCActivityCallback::didAllocate): Refactored to return early
        if the amount of allocation since the last collection is not above a
        threshold (initially, and arbitrarily, chosen to be 128KB).

2012-04-19  Filip Pizlo  <fpizlo@apple.com>
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
(r114675 → r114698)

      allocator = &m_jit.globalData()->heap.allocatorForObjectWithoutDestructor(sizeof(ClassType));

-     m_jit.loadPtr(&allocator->m_firstFreeCell, resultGPR);
+     m_jit.loadPtr(&allocator->m_freeList.head, resultGPR);
      slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, resultGPR));
…
      // Now that we have scratchGPR back, remove the object from the free list
      m_jit.loadPtr(MacroAssembler::Address(resultGPR), scratchGPR);
-     m_jit.storePtr(scratchGPR, &allocator->m_firstFreeCell);
+     m_jit.storePtr(scratchGPR, &allocator->m_freeList.head);

      // Initialize the object's classInfo pointer
trunk/Source/JavaScriptCore/heap/CopiedSpace.cpp
(r114511 → r114698)

      , m_inCopyingPhase(false)
      , m_numberOfLoanedBlocks(0)
-     , m_waterMark(0)
  {
  }
…
  CheckedBoolean CopiedSpace::tryAllocateSlowCase(size_t bytes, void** outPtr)
  {
-     m_heap->activityCallback()->willAllocate();
-
      if (isOversize(bytes))
          return tryAllocateOversize(bytes, outPtr);

-     m_waterMark += m_allocator.currentCapacity();
+     m_heap->didAllocate(m_allocator.currentCapacity());

      if (!addNewBlock()) {
…
      *outPtr = allocateFromBlock(block, bytes);

-     m_waterMark += block->capacity();
+     m_heap->didAllocate(blockSize);

      return true;
…
      CopiedBlock* oldBlock = oversizeBlockFor(oldPtr);
      m_oversizeBlocks.remove(oldBlock);
-     m_waterMark -= oldBlock->capacity();
      oldBlock->m_allocation.deallocate();
…
      m_toSpaceSet.add(block);
      m_toSpaceFilter.add(reinterpret_cast<Bits>(block));
-     }
-
-     {
-         MutexLocker locker(m_memoryStatsLock);
-         m_waterMark += block->capacity();
      }
…
          block->m_isPinned = false;
          m_toSpace->push(block);
-         m_waterMark += block->capacity();
          continue;
      }
…
          m_oversizeBlocks.remove(curr);
          curr->m_allocation.deallocate();
-     } else {
+     } else
          curr->m_isPinned = false;
-         m_waterMark += curr->capacity();
-     }
      curr = next;
…
          block->m_allocation.deallocate();
      }
-
-     m_waterMark = 0;
  }
trunk/Source/JavaScriptCore/heap/CopiedSpace.h
(r111877 → r114698)

      bool contains(void*, CopiedBlock*&);

-     size_t waterMark() { return m_waterMark; }
      size_t size();
      size_t capacity();
…
      static CopiedBlock* oversizeBlockFor(void* ptr);

-     size_t calculateWaterMark();
-
      Heap* m_heap;
…
      size_t m_numberOfLoanedBlocks;

-     Mutex m_memoryStatsLock;
-     size_t m_waterMark;
-
      static const size_t s_maxAllocationSize = 32 * KB;
      static const size_t s_initialBlockNum = 16;
trunk/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h
(r111877 → r114698)

      m_toSpaceFilter.reset();
      m_allocator.startedCopying();
-
-     m_waterMark = 0;

      ASSERT(!m_inCopyingPhase);
trunk/Source/JavaScriptCore/heap/Heap.cpp
(r114511 → r114698)

      , m_minBytesPerCycle(heapSizeForHint(heapSize))
      , m_lastFullGCSize(0)
-     , m_highWaterMark(m_minBytesPerCycle)
+     , m_bytesAllocatedLimit(m_minBytesPerCycle)
+     , m_bytesAllocated(0)
      , m_operationInProgress(NoOperation)
      , m_objectSpace(this)
…
      // collecting more frequently as long as it stays alive.

-     addToWaterMark(cost);
+     didAllocate(cost);
+     if (shouldCollect())
+         collect(DoNotSweep);
  }
…
-     // To avoid pathological GC churn in large heaps, we set the allocation high
-     // water mark to be proportional to the current size of the heap. The exact
-     // proportion is a bit arbitrary. A 2X multiplier gives a 1:1 (heap size :
+     // To avoid pathological GC churn in large heaps, we set the new allocation
+     // limit to be the current size of the heap. This heuristic
+     // is a bit arbitrary. Using the current size of the heap after this
+     // collection gives us a 2X multiplier, which is a 1:1 (heap size :
      // new bytes allocated) proportion, and seems to work well in benchmarks.
      size_t newSize = size();
-     size_t proportionalBytes = 2 * newSize;
      if (fullGC) {
          m_lastFullGCSize = newSize;
-         m_highWaterMark = max(proportionalBytes, m_minBytesPerCycle);
+         m_bytesAllocatedLimit = max(newSize, m_minBytesPerCycle);
      }
+     m_bytesAllocated = 0;
      double lastGCEndTime = WTF::currentTime();
      m_lastGCLength = lastGCEndTime - lastGCStartTime;
…
  {
      return m_activityCallback.get();
  }
+
+ void Heap::didAllocate(size_t bytes)
+ {
+     m_activityCallback->didAllocate(m_bytesAllocated);
+     m_bytesAllocated += bytes;
+ }
trunk/Source/JavaScriptCore/heap/Heap.h
(r114511 → r114698)

      void notifyIsSafeToCollect() { m_isSafeToCollect = true; }
+
      JS_EXPORT_PRIVATE void collectAllGarbage();
+     enum SweepToggle { DoNotSweep, DoSweep };
+     bool shouldCollect();
+     void collect(SweepToggle);
      void reportExtraMemoryCost(size_t cost);
…
      void getConservativeRegisterRoots(HashSet<JSCell*>& roots);

-     void addToWaterMark(size_t);
-
      double lastGCLength() { return m_lastGCLength; }

      JS_EXPORT_PRIVATE void discardAllCompiledCode();
+
+     void didAllocate(size_t);

  private:
…
      void* allocateWithoutDestructor(size_t);

-     size_t waterMark();
-     size_t highWaterMark();
-     bool shouldCollect();
-
      static const size_t minExtraCost = 256;
      static const size_t maxExtraCost = 1024 * 1024;
…
      void finalizeUnconditionalFinalizers();

-     enum SweepToggle { DoNotSweep, DoSweep };
-     void collect(SweepToggle);
      void shrink();
      void releaseFreeBlocks();
…
      const size_t m_minBytesPerCycle;
      size_t m_lastFullGCSize;
-     size_t m_highWaterMark;
+
+     size_t m_bytesAllocatedLimit;
+     size_t m_bytesAllocated;

      OperationInProgress m_operationInProgress;
…
          return m_objectSpace.nurseryWaterMark() >= m_minBytesPerCycle && m_isSafeToCollect;
  #else
-     return waterMark() >= highWaterMark() && m_isSafeToCollect;
+     return m_bytesAllocated > m_bytesAllocatedLimit && m_isSafeToCollect;
  #endif
…
  {
      MarkedBlock::blockFor(cell)->setMarked(cell);
  }
-
- inline size_t Heap::waterMark()
- {
-     return m_objectSpace.waterMark() + m_storageSpace.waterMark();
- }
-
- inline size_t Heap::highWaterMark()
- {
-     return m_highWaterMark;
- }
-
- inline void Heap::addToWaterMark(size_t size)
- {
-     m_objectSpace.addToWaterMark(size);
-     if (waterMark() > highWaterMark())
-         collect(DoNotSweep);
- }
trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp
(r114511 → r114698)

  inline void* MarkedAllocator::tryAllocateHelper()
  {
-     MarkedBlock::FreeCell* firstFreeCell = m_firstFreeCell;
-     if (!firstFreeCell) {
+     if (!m_freeList.head) {
          for (MarkedBlock*& block = m_currentBlock; block; block = static_cast<MarkedBlock*>(block->next())) {
-             firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
-             if (firstFreeCell)
+             m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
+             if (m_freeList.head)
                  break;
-             m_markedSpace->didConsumeFreeList(block);
              block->didConsumeFreeList();
          }

-         if (!firstFreeCell)
+         if (!m_freeList.head)
              return 0;
      }

-     ASSERT(firstFreeCell);
-     m_firstFreeCell = firstFreeCell->next;
-     return firstFreeCell;
+     MarkedBlock::FreeCell* head = m_freeList.head;
+     m_freeList.head = head->next;
+     ASSERT(head);
+     return head;
  }
…
  #endif

-     m_heap->activityCallback()->willAllocate();
+     ASSERT(!m_freeList.head);
+     m_heap->didAllocate(m_freeList.bytes);

      void* result = tryAllocate();
…
          return result;

-     ASSERT(m_heap->waterMark() < m_heap->highWaterMark());
+     ASSERT(!m_heap->shouldCollect());

      addBlock(allocateBlock(AllocationMustSucceed));
…
  {
      ASSERT(!m_currentBlock);
-     ASSERT(!m_firstFreeCell);
+     ASSERT(!m_freeList.head);

      m_blockList.append(block);
      m_currentBlock = block;
-     m_firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
+     m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
  }
trunk/Source/JavaScriptCore/heap/MarkedAllocator.h
(r108444 → r114698)

      MarkedBlock* allocateBlock(AllocationEffort);

-     MarkedBlock::FreeCell* m_firstFreeCell;
+     MarkedBlock::FreeList m_freeList;
      MarkedBlock* m_currentBlock;
      DoublyLinkedList<HeapBlock> m_blockList;
…
  inline MarkedAllocator::MarkedAllocator()
-     : m_firstFreeCell(0)
-     , m_currentBlock(0)
+     : m_currentBlock(0)
      , m_cellSize(0)
      , m_cellsNeedDestruction(true)
…
  inline void* MarkedAllocator::allocate()
  {
-     MarkedBlock::FreeCell* firstFreeCell = m_firstFreeCell;
+     MarkedBlock::FreeCell* head = m_freeList.head;
      // This is a light-weight fast path to cover the most common case.
-     if (UNLIKELY(!firstFreeCell))
+     if (UNLIKELY(!head))
          return allocateSlowCase();

-     m_firstFreeCell = firstFreeCell->next;
-     return firstFreeCell;
+     m_freeList.head = head->next;
+     return head;
  }
…
  {
      if (!m_currentBlock) {
-         ASSERT(!m_firstFreeCell);
+         ASSERT(!m_freeList.head);
          return;
      }

-     m_currentBlock->zapFreeList(m_firstFreeCell);
-     m_firstFreeCell = 0;
+     m_currentBlock->zapFreeList(m_freeList);
+     m_freeList.head = 0;
  }
trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp
(r107495 → r114698)

  template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode, bool destructorCallNeeded>
- MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
+ MarkedBlock::FreeList MarkedBlock::specializedSweep()
  {
      ASSERT(blockState != Allocated && blockState != FreeListed);
…
      // order of the free list.
      FreeCell* head = 0;
+     size_t count = 0;
      for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
          if (blockState == Marked && m_marks.get(i))
…
              freeCell->next = head;
              head = freeCell;
+             ++count;
          }
      }

      m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
-     return head;
- }
-
- MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
+     return FreeList(head, count * cellSize());
+ }
+
+ MarkedBlock::FreeList MarkedBlock::sweep(SweepMode sweepMode)
  {
      HEAP_LOG_BLOCK_STATE_TRANSITION(this);

      if (sweepMode == SweepOnly && !m_cellsNeedDestruction)
-         return 0;
+         return FreeList();

      if (m_cellsNeedDestruction)
…
  template<bool destructorCallNeeded>
- MarkedBlock::FreeCell* MarkedBlock::sweepHelper(SweepMode sweepMode)
+ MarkedBlock::FreeList MarkedBlock::sweepHelper(SweepMode sweepMode)
  {
      switch (m_state) {
…
          // Happens when a block transitions to fully allocated.
          ASSERT(sweepMode == SweepToFreeList);
-         return 0;
+         return FreeList();
      case Allocated:
          ASSERT_NOT_REACHED();
-         return 0;
+         return FreeList();
      case Marked:
          return sweepMode == SweepToFreeList
…
      ASSERT_NOT_REACHED();
-     return 0;
- }
-
- void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
+     return FreeList();
+ }
+
+ void MarkedBlock::zapFreeList(const FreeList& freeList)
  {
      HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+     FreeCell* head = freeList.head;

      if (m_state == Marked) {
…
          // Hence if the block is Marked we need to leave it Marked.

-         ASSERT(!firstFreeCell);
+         ASSERT(!head);

          return;
…
          // non-zero vtables, which is consistent with the block being zapped.

-         ASSERT(!firstFreeCell);
+         ASSERT(!head);

          return;
…
      FreeCell* next;
-     for (FreeCell* current = firstFreeCell; current; current = next) {
+     for (FreeCell* current = head; current; current = next) {
          next = current->next;
          reinterpret_cast<JSCell*>(current)->zap();
trunk/Source/JavaScriptCore/heap/MarkedBlock.h
(r110134 → r114698)

  };

+ struct FreeList {
+     FreeCell* head;
+     size_t bytes;
+
+     FreeList();
+     FreeList(FreeCell*, size_t);
+ };
+
  struct VoidFunctor {
      typedef void ReturnType;
…
      enum SweepMode { SweepOnly, SweepToFreeList };
-     FreeCell* sweep(SweepMode = SweepOnly);
+     FreeList sweep(SweepMode = SweepOnly);

      // While allocating from a free list, MarkedBlock temporarily has bogus
…
      void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
-     void zapFreeList(FreeCell* firstFreeCell); // Call this to undo the free list.
+     void zapFreeList(const FreeList&); // Call this to undo the free list.

      void clearMarks();
…
      enum BlockState { New, FreeListed, Allocated, Marked, Zapped };
-     template<bool destructorCallNeeded> FreeCell* sweepHelper(SweepMode = SweepOnly);
+     template<bool destructorCallNeeded> FreeList sweepHelper(SweepMode = SweepOnly);

      typedef char Atom[atomSize];
…
      size_t atomNumber(const void*);
      void callDestructor(JSCell*);
-     template<BlockState, SweepMode, bool destructorCallNeeded> FreeCell* specializedSweep();
+     template<BlockState, SweepMode, bool destructorCallNeeded> FreeList specializedSweep();

  #if ENABLE(GGC)
…
      Heap* m_heap;
  };
+
+ inline MarkedBlock::FreeList::FreeList()
+     : head(0)
+     , bytes(0)
+ {
+ }
+
+ inline MarkedBlock::FreeList::FreeList(FreeCell* head, size_t bytes)
+     : head(head)
+     , bytes(bytes)
+ {
+ }

  inline size_t MarkedBlock::firstAtom()
trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp
(r113141 → r114698)

  MarkedSpace::MarkedSpace(Heap* heap)
-     : m_waterMark(0)
-     , m_heap(heap)
+     : m_heap(heap)
  {
      for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
…
  void MarkedSpace::resetAllocators()
  {
-     m_waterMark = 0;
-
      for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
          allocatorFor(cellSize).reset();
trunk/Source/JavaScriptCore/heap/MarkedSpace.h
(r113141 → r114698)

      void canonicalizeCellLivenessData();

-     size_t waterMark();
-     void addToWaterMark(size_t);
-
      typedef HashSet<MarkedBlock*>::iterator BlockIterator;
…
      Subspace m_normalSpace;

-     size_t m_waterMark;
      Heap* m_heap;
      MarkedBlockSet m_blocks;
  };
-
- inline size_t MarkedSpace::waterMark()
- {
-     return m_waterMark;
- }
-
- inline void MarkedSpace::addToWaterMark(size_t size)
- {
-     m_waterMark += size;
- }

  template<typename Functor> inline typename Functor::ReturnType MarkedSpace::forEachCell(Functor& functor)
…
- inline void MarkedSpace::didConsumeFreeList(MarkedBlock* block)
- {
-     m_waterMark += block->capacity();
- }
-
  } // namespace JSC
trunk/Source/JavaScriptCore/heap/WeakSet.cpp
(r113508 → r114698)

          return allocator;

-     m_heap->addToWaterMark(WeakBlock::blockSize);
-
-     // addToWaterMark() may cause a GC, so try again.
-     if (WeakBlock::FreeCell* allocator = tryFindAllocator())
-         return allocator;
+     // FIXME: This reporting of the amount allocated isn't quite accurate and
+     // probably should be reworked eventually.
+     m_heap->didAllocate(WeakBlock::blockSize);
+     if (m_heap->shouldCollect()) {
+         m_heap->collect(Heap::DoNotSweep);
+
+         if (WeakBlock::FreeCell* allocator = tryFindAllocator())
+             return allocator;
+     }

      return addAllocator();
trunk/Source/JavaScriptCore/jit/JITInlineMethods.h
(r114539 → r114698)

      else
          allocator = &m_globalData->heap.allocatorForObjectWithoutDestructor(sizeof(ClassType));
-     loadPtr(&allocator->m_firstFreeCell, result);
+     loadPtr(&allocator->m_freeList.head, result);
      addSlowCase(branchTestPtr(Zero, result));

      // remove the object from the free list
      loadPtr(Address(result), storagePtr);
-     storePtr(storagePtr, &allocator->m_firstFreeCell);
+     storePtr(storagePtr, &allocator->m_freeList.head);

      // initialize the object's structure
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
(r113930 → r114698)

      sizeClassIndex * sizeof MarkedAllocator

+ const offsetOfFirstFreeCell =
+     MarkedAllocator::m_freeList +
+     MarkedBlock::FreeList::head
+
  # FIXME: we can get the global data in one load from the stack.
  loadp CodeBlock[cfr], scratch1
  loadp CodeBlock::m_globalData[scratch1], scratch1

  # Get the object from the free list.
- loadp offsetOfMySizeClass + MarkedAllocator::m_firstFreeCell[scratch1], result
+ loadp offsetOfMySizeClass + offsetOfFirstFreeCell[scratch1], result
  btpz result, slowCase

  # Remove the object from the free list.
  loadp [result], scratch2
- storep scratch2, offsetOfMySizeClass + MarkedAllocator::m_firstFreeCell[scratch1]
+ storep scratch2, offsetOfMySizeClass + offsetOfFirstFreeCell[scratch1]

  # Initialize the object.
trunk/Source/JavaScriptCore/runtime/GCActivityCallback.cpp
(r114511 → r114698)

- void DefaultGCActivityCallback::willAllocate()
+ void DefaultGCActivityCallback::didAllocate(size_t)
  {
  }
trunk/Source/JavaScriptCore/runtime/GCActivityCallback.h
(r114511 → r114698)

  public:
      virtual ~GCActivityCallback() { }
-     virtual void willAllocate() { }
+     virtual void didAllocate(size_t) { }
      virtual void didCollect() { }
      virtual void didAbandonObjectGraph() { }
…
      virtual ~DefaultGCActivityCallback();

-     virtual void willAllocate();
+     virtual void didAllocate(size_t);
      virtual void didCollect();
      virtual void didAbandonObjectGraph();
trunk/Source/JavaScriptCore/runtime/GCActivityCallbackCF.cpp
(r114511 → r114698)

  const CFTimeInterval decade = 60 * 60 * 24 * 365 * 10;
  const CFTimeInterval hour = 60 * 60;
+ const size_t minBytesBeforeCollect = 128 * KB;

  void DefaultGCActivityCallbackPlatformData::timerDidFire(CFRunLoopTimerRef, void *info)
…
- void DefaultGCActivityCallback::willAllocate()
+ void DefaultGCActivityCallback::didAllocate(size_t bytes)
  {
+     if (bytes < minBytesBeforeCollect)
+         return;
      scheduleTimer(d.get());
  }