Changeset 162017 in webkit


Timestamp:
Jan 14, 2014 3:03:01 PM
Author:
mhahnenberg@apple.com
Message:

Copying should be generational
https://bugs.webkit.org/show_bug.cgi?id=126555

Reviewed by Geoffrey Garen.

This patch adds support for copying to our generational collector. Eden collections
always trigger copying. Full collections use our normal fragmentation-based heuristics.

CopiedSpace now maintains two sets of CopiedBlocks: an old generation and a new generation. During each mutator cycle, new CopiedSpace allocations reside
in the new generation. When a collection occurs, those blocks are moved to the old generation.

One key thing to remember is that both new and old generation objects in the MarkedSpace can
refer to old or new generation allocations in CopiedSpace. This is why we must fire write barriers
when assigning to an old (MarkedSpace) object's Butterfly.
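The generational structure described above can be sketched in miniature (hypothetical names, not the actual JSC classes): fresh blocks always start in the new generation, and a collection promotes survivors to the old generation.

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-ins for CopiedBlock / CopiedSpace; not the real JSC types.
struct Block {
    bool isOld = false;
};

struct MiniCopiedSpace {
    std::vector<Block*> newGen;
    std::vector<Block*> oldGen;

    Block* allocateBlock()
    {
        Block* b = new Block();
        newGen.push_back(b); // fresh allocations always land in the new generation
        return b;
    }

    // At collection time, surviving new-generation blocks are promoted.
    void didCollect()
    {
        for (Block* b : newGen) {
            b->isOld = true;
            oldGen.push_back(b);
        }
        newGen.clear();
    }
};
```

Because old-generation objects can point at new-generation copied storage, an Eden collection can only be correct if barriers record those old-to-new edges, which is the write-barrier requirement stated above.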

  • heap/CopiedAllocator.h:

(JSC::CopiedAllocator::tryAllocateDuringCopying):

  • heap/CopiedBlock.h:

(JSC::CopiedBlock::CopiedBlock):
(JSC::CopiedBlock::didEvacuateBytes):
(JSC::CopiedBlock::isOld):
(JSC::CopiedBlock::didPromote):

  • heap/CopiedBlockInlines.h:

(JSC::CopiedBlock::reportLiveBytes):
(JSC::CopiedBlock::reportLiveBytesDuringCopying):

  • heap/CopiedSpace.cpp:

(JSC::CopiedSpace::CopiedSpace):
(JSC::CopiedSpace::~CopiedSpace):
(JSC::CopiedSpace::init):
(JSC::CopiedSpace::tryAllocateOversize):
(JSC::CopiedSpace::tryReallocateOversize):
(JSC::CopiedSpace::doneFillingBlock):
(JSC::CopiedSpace::didStartFullCollection):
(JSC::CopiedSpace::doneCopying):
(JSC::CopiedSpace::size):
(JSC::CopiedSpace::capacity):
(JSC::CopiedSpace::isPagedOut):

  • heap/CopiedSpace.h:

(JSC::CopiedSpace::CopiedGeneration::CopiedGeneration):

  • heap/CopiedSpaceInlines.h:

(JSC::CopiedSpace::contains):
(JSC::CopiedSpace::recycleEvacuatedBlock):
(JSC::CopiedSpace::allocateBlock):
(JSC::CopiedSpace::startedCopying):

  • heap/CopyVisitor.cpp:

(JSC::CopyVisitor::copyFromShared):

  • heap/CopyVisitorInlines.h:

(JSC::CopyVisitor::allocateNewSpace):
(JSC::CopyVisitor::allocateNewSpaceSlow):

  • heap/GCThreadSharedData.cpp:

(JSC::GCThreadSharedData::didStartCopying):

  • heap/Heap.cpp:

(JSC::Heap::copyBackingStores):

  • heap/SlotVisitorInlines.h:

(JSC::SlotVisitor::copyLater):

  • heap/TinyBloomFilter.h:

(JSC::TinyBloomFilter::add):

Location:
trunk/Source/JavaScriptCore
Files:
13 edited

  • trunk/Source/JavaScriptCore/ChangeLog

    r162006 r162017  
     12014-01-10  Mark Hahnenberg  <mhahnenberg@apple.com>
     2
     3        Copying should be generational
     4        https://bugs.webkit.org/show_bug.cgi?id=126555
     5
     6        Reviewed by Geoffrey Garen.
     7
     8        This patch adds support for copying to our generational collector. Eden collections
     9        always trigger copying. Full collections use our normal fragmentation-based heuristics.
     10
     11        The way this works is that the CopiedSpace now has the notion of an old generation set of CopiedBlocks
     12        and a new generation of CopiedBlocks. During each mutator cycle new CopiedSpace allocations reside
     13        in the new generation. When a collection occurs, those blocks are moved to the old generation.
     14
     15        One key thing to remember is that both new and old generation objects in the MarkedSpace can
     16        refer to old or new generation allocations in CopiedSpace. This is why we must fire write barriers
     17        when assigning to an old (MarkedSpace) object's Butterfly.
     18
     19        * heap/CopiedAllocator.h:
     20        (JSC::CopiedAllocator::tryAllocateDuringCopying):
     21        * heap/CopiedBlock.h:
     22        (JSC::CopiedBlock::CopiedBlock):
     23        (JSC::CopiedBlock::didEvacuateBytes):
     24        (JSC::CopiedBlock::isOld):
     25        (JSC::CopiedBlock::didPromote):
     26        * heap/CopiedBlockInlines.h:
     27        (JSC::CopiedBlock::reportLiveBytes):
     28        (JSC::CopiedBlock::reportLiveBytesDuringCopying):
     29        * heap/CopiedSpace.cpp:
     30        (JSC::CopiedSpace::CopiedSpace):
     31        (JSC::CopiedSpace::~CopiedSpace):
     32        (JSC::CopiedSpace::init):
     33        (JSC::CopiedSpace::tryAllocateOversize):
     34        (JSC::CopiedSpace::tryReallocateOversize):
     35        (JSC::CopiedSpace::doneFillingBlock):
     36        (JSC::CopiedSpace::didStartFullCollection):
     37        (JSC::CopiedSpace::doneCopying):
     38        (JSC::CopiedSpace::size):
     39        (JSC::CopiedSpace::capacity):
     40        (JSC::CopiedSpace::isPagedOut):
     41        * heap/CopiedSpace.h:
     42        (JSC::CopiedSpace::CopiedGeneration::CopiedGeneration):
     43        * heap/CopiedSpaceInlines.h:
     44        (JSC::CopiedSpace::contains):
     45        (JSC::CopiedSpace::recycleEvacuatedBlock):
     46        (JSC::CopiedSpace::allocateBlock):
     47        (JSC::CopiedSpace::startedCopying):
     48        * heap/CopyVisitor.cpp:
     49        (JSC::CopyVisitor::copyFromShared):
     50        * heap/CopyVisitorInlines.h:
     51        (JSC::CopyVisitor::allocateNewSpace):
     52        (JSC::CopyVisitor::allocateNewSpaceSlow):
     53        * heap/GCThreadSharedData.cpp:
     54        (JSC::GCThreadSharedData::didStartCopying):
     55        * heap/Heap.cpp:
     56        (JSC::Heap::copyBackingStores):
     57        * heap/SlotVisitorInlines.h:
     58        (JSC::SlotVisitor::copyLater):
     59        * heap/TinyBloomFilter.h:
     60        (JSC::TinyBloomFilter::add):
     61
    1622014-01-14  Mark Lam  <mark.lam@apple.com>
    263
  • trunk/Source/JavaScriptCore/heap/CopiedAllocator.h

    r122768 r162017  
    3939    bool fastPathShouldSucceed(size_t bytes) const;
    4040    CheckedBoolean tryAllocate(size_t bytes, void** outPtr);
     41    CheckedBoolean tryAllocateDuringCopying(size_t bytes, void** outPtr);
    4142    CheckedBoolean tryReallocate(void *oldPtr, size_t oldBytes, size_t newBytes);
    4243    void* forceAllocate(size_t bytes);
     
    9192    ASSERT(is8ByteAligned(*outPtr));
    9293
     94    return true;
     95}
     96
     97inline CheckedBoolean CopiedAllocator::tryAllocateDuringCopying(size_t bytes, void** outPtr)
     98{
     99    if (!tryAllocate(bytes, outPtr))
     100        return false;
     101    m_currentBlock->reportLiveBytesDuringCopying(bytes);
    93102    return true;
    94103}
  • trunk/Source/JavaScriptCore/heap/CopiedBlock.h

    r155487 r162017  
    5050    bool isPinned();
    5151
     52    bool isOld();
    5253    bool isOversize();
     54    void didPromote();
    5355
    5456    unsigned liveBytes();
    55     void reportLiveBytes(JSCell*, CopyToken, unsigned);
     57    bool shouldReportLiveBytes(SpinLockHolder&, JSCell* owner);
     58    void reportLiveBytes(SpinLockHolder&, JSCell*, CopyToken, unsigned);
     59    void reportLiveBytesDuringCopying(unsigned);
    5660    void didSurviveGC();
    5761    void didEvacuateBytes(unsigned);
     
    8286    bool hasWorkList();
    8387    CopyWorkList& workList();
     88    SpinLock& workListLock() { return m_workListLock; }
    8489
    8590private:
     
    8994    void checkConsistency();
    9095
    91 #if ENABLE(PARALLEL_GC)
    9296    SpinLock m_workListLock;
    93 #endif
    9497    OwnPtr<CopyWorkList> m_workList;
    9598
    9699    size_t m_remaining;
    97     uintptr_t m_isPinned;
     100    bool m_isPinned : 1;
     101    bool m_isOld : 1;
    98102    unsigned m_liveBytes;
    99103#ifndef NDEBUG
     
    131135    , m_remaining(payloadCapacity())
    132136    , m_isPinned(false)
     137    , m_isOld(false)
    133138    , m_liveBytes(0)
    134139#ifndef NDEBUG
     
    136141#endif
    137142{
    138 #if ENABLE(PARALLEL_GC)
    139143    m_workListLock.Init();
    140 #endif
    141144    ASSERT(is8ByteAligned(reinterpret_cast<void*>(m_remaining)));
    142145}
     
    157160{
    158161    ASSERT(m_liveBytes >= bytes);
     162    ASSERT(m_liveObjects);
    159163    checkConsistency();
    160164    m_liveBytes -= bytes;
     
    189193}
    190194
     195inline bool CopiedBlock::isOld()
     196{
     197    return m_isOld;
     198}
     199
     200inline void CopiedBlock::didPromote()
     201{
     202    m_isOld = true;
     203}
     204
    191205inline bool CopiedBlock::isOversize()
    192206{
  • trunk/Source/JavaScriptCore/heap/CopiedBlockInlines.h

    r161615 r162017  
    2727#define CopiedBlockInlines_h
    2828
     29#include "ClassInfo.h"
    2930#include "CopiedBlock.h"
    3031#include "Heap.h"
     32#include "MarkedBlock.h"
    3133
    3234namespace JSC {
    3335   
    34 inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigned bytes)
     36inline bool CopiedBlock::shouldReportLiveBytes(SpinLockHolder&, JSCell* owner)
    3537{
    36 #if ENABLE(PARALLEL_GC)
    37     SpinLockHolder locker(&m_workListLock);
    38 #endif
     38    // We want to add to live bytes if the owner isn't part of the remembered set or
     39    // if this block was allocated during the last cycle.
     40    // If we always added live bytes we would double count for elements in the remembered
     41    // set across collections.
     42    // If we didn't always add live bytes to new blocks, we'd get too few.
     43    bool ownerIsRemembered = MarkedBlock::blockFor(owner)->isRemembered(owner);
     44    return !ownerIsRemembered || !m_isOld;
     45}
     46
     47inline void CopiedBlock::reportLiveBytes(SpinLockHolder&, JSCell* owner, CopyToken token, unsigned bytes)
     48{
     49    checkConsistency();
    3950#ifndef NDEBUG
    40     checkConsistency();
    4151    m_liveObjects++;
    4252#endif
    4353    m_liveBytes += bytes;
     54    checkConsistency();
     55    ASSERT(m_liveBytes <= CopiedBlock::blockSize);
    4456
    4557    if (isPinned())
     
    5769}
    5870
     71inline void CopiedBlock::reportLiveBytesDuringCopying(unsigned bytes)
     72{
     73    checkConsistency();
     74    // This doesn't need to be locked because the thread that calls this function owns the current block.
     75    m_isOld = true;
     76#ifndef NDEBUG
     77    m_liveObjects++;
     78#endif
     79    m_liveBytes += bytes;
     80    checkConsistency();
     81    ASSERT(m_liveBytes <= CopiedBlock::blockSize);
     82}
     83
    5984} // namespace JSC
    6085
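The remembered-set rule in `shouldReportLiveBytes()` above reduces to a small predicate (a standalone sketch; the two booleans stand in for the real `MarkedBlock::isRemembered()` and `m_isOld` queries): report live bytes unless the owner is remembered *and* the block is old, because that combination has already been counted in a previous cycle.

```cpp
#include <cassert>

// Add to a block's live bytes unless doing so would double count: a remembered
// owner has already reported against an old block in an earlier collection.
bool shouldReportLiveBytes(bool ownerIsRemembered, bool blockIsOld)
{
    return !ownerIsRemembered || !blockIsOld;
}
```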
  • trunk/Source/JavaScriptCore/heap/CopiedSpace.cpp

    r161615 r162017  
    3636CopiedSpace::CopiedSpace(Heap* heap)
    3737    : m_heap(heap)
    38     , m_toSpace(0)
    39     , m_fromSpace(0)
    4038    , m_inCopyingPhase(false)
    4139    , m_shouldDoCopyPhase(false)
     
    4745CopiedSpace::~CopiedSpace()
    4846{
    49     while (!m_toSpace->isEmpty())
    50         m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_toSpace->removeHead()));
    51 
    52     while (!m_fromSpace->isEmpty())
    53         m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_fromSpace->removeHead()));
    54 
    55     while (!m_oversizeBlocks.isEmpty())
    56         m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(m_oversizeBlocks.removeHead()));
     47    while (!m_oldGen.toSpace->isEmpty())
     48        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_oldGen.toSpace->removeHead()));
     49
     50    while (!m_oldGen.fromSpace->isEmpty())
     51        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_oldGen.fromSpace->removeHead()));
     52
     53    while (!m_oldGen.oversizeBlocks.isEmpty())
     54        m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(m_oldGen.oversizeBlocks.removeHead()));
     55
     56    while (!m_newGen.toSpace->isEmpty())
     57        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_newGen.toSpace->removeHead()));
     58
     59    while (!m_newGen.fromSpace->isEmpty())
     60        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_newGen.fromSpace->removeHead()));
     61
     62    while (!m_newGen.oversizeBlocks.isEmpty())
     63        m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(m_newGen.oversizeBlocks.removeHead()));
     64
     65    ASSERT(m_oldGen.toSpace->isEmpty());
     66    ASSERT(m_oldGen.fromSpace->isEmpty());
     67    ASSERT(m_oldGen.oversizeBlocks.isEmpty());
     68    ASSERT(m_newGen.toSpace->isEmpty());
     69    ASSERT(m_newGen.fromSpace->isEmpty());
     70    ASSERT(m_newGen.oversizeBlocks.isEmpty());
    5771}
    5872
    5973void CopiedSpace::init()
    6074{
    61     m_toSpace = &m_blocks1;
    62     m_fromSpace = &m_blocks2;
    63    
     75    m_oldGen.toSpace = &m_oldGen.blocks1;
     76    m_oldGen.fromSpace = &m_oldGen.blocks2;
     77   
     78    m_newGen.toSpace = &m_newGen.blocks1;
     79    m_newGen.fromSpace = &m_newGen.blocks2;
     80
    6481    allocateBlock();
    6582}   
     
    84101   
    85102    CopiedBlock* block = CopiedBlock::create(m_heap->blockAllocator().allocateCustomSize(sizeof(CopiedBlock) + bytes, CopiedBlock::blockSize));
    86     m_oversizeBlocks.push(block);
    87     m_blockFilter.add(reinterpret_cast<Bits>(block));
     103    m_newGen.oversizeBlocks.push(block);
     104    m_newGen.blockFilter.add(reinterpret_cast<Bits>(block));
    88105    m_blockSet.add(block);
     106    ASSERT(!block->isOld());
    89107   
    90108    CopiedAllocator allocator;
     
    139157    CopiedBlock* oldBlock = CopiedSpace::blockFor(oldPtr);
    140158    if (oldBlock->isOversize()) {
    141         m_oversizeBlocks.remove(oldBlock);
     159        if (oldBlock->isOld())
     160            m_oldGen.oversizeBlocks.remove(oldBlock);
     161        else
     162            m_newGen.oversizeBlocks.remove(oldBlock);
    142163        m_blockSet.remove(oldBlock);
    143164        m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(oldBlock));
     
    166187
    167188    {
     189        // Always put the block into the old gen because it's being promoted!
    168190        SpinLockHolder locker(&m_toSpaceLock);
    169         m_toSpace->push(block);
     191        m_oldGen.toSpace->push(block);
    170192        m_blockSet.add(block);
    171         m_blockFilter.add(reinterpret_cast<Bits>(block));
     193        m_oldGen.blockFilter.add(reinterpret_cast<Bits>(block));
    172194    }
    173195
     
    182204}
    183205
    184 void CopiedSpace::startedCopying()
    185 {
    186     std::swap(m_fromSpace, m_toSpace);
    187 
    188     m_blockFilter.reset();
    189     m_allocator.resetCurrentBlock();
    190 
    191     CopiedBlock* next = 0;
    192     size_t totalLiveBytes = 0;
    193     size_t totalUsableBytes = 0;
    194     for (CopiedBlock* block = m_fromSpace->head(); block; block = next) {
    195         next = block->next();
    196         if (!block->isPinned() && block->canBeRecycled()) {
    197             recycleEvacuatedBlock(block);
    198             continue;
    199         }
    200         totalLiveBytes += block->liveBytes();
    201         totalUsableBytes += block->payloadCapacity();
    202     }
    203 
    204     CopiedBlock* block = m_oversizeBlocks.head();
    205     while (block) {
    206         CopiedBlock* next = block->next();
    207         if (block->isPinned()) {
    208             m_blockFilter.add(reinterpret_cast<Bits>(block));
    209             totalLiveBytes += block->payloadCapacity();
    210             totalUsableBytes += block->payloadCapacity();
    211             block->didSurviveGC();
    212         } else {
    213             m_oversizeBlocks.remove(block);
    214             m_blockSet.remove(block);
    215             m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(block));
    216         }
    217         block = next;
    218     }
    219 
    220     double markedSpaceBytes = m_heap->objectSpace().capacity();
    221     double totalFragmentation = ((double)totalLiveBytes + markedSpaceBytes) / ((double)totalUsableBytes + markedSpaceBytes);
    222     m_shouldDoCopyPhase = totalFragmentation <= Options::minHeapUtilization();
    223     if (!m_shouldDoCopyPhase)
    224         return;
    225 
    226     ASSERT(m_shouldDoCopyPhase);
    227     ASSERT(!m_inCopyingPhase);
    228     ASSERT(!m_numberOfLoanedBlocks);
    229     m_inCopyingPhase = true;
     206void CopiedSpace::didStartFullCollection()
     207{
     208    ASSERT(heap()->operationInProgress() == FullCollection);
     209    ASSERT(m_oldGen.fromSpace->isEmpty());
     210    ASSERT(m_newGen.fromSpace->isEmpty());
     211
     212#ifndef NDEBUG
     213    for (CopiedBlock* block = m_newGen.toSpace->head(); block; block = block->next())
     214        ASSERT(!block->liveBytes());
     215
     216    for (CopiedBlock* block = m_newGen.oversizeBlocks.head(); block; block = block->next())
     217        ASSERT(!block->liveBytes());
     218#endif
     219
     220    for (CopiedBlock* block = m_oldGen.toSpace->head(); block; block = block->next())
     221        block->didSurviveGC();
     222
     223    for (CopiedBlock* block = m_oldGen.oversizeBlocks.head(); block; block = block->next())
     224        block->didSurviveGC();
    230225}
    231226
     
    241236    m_inCopyingPhase = false;
    242237
    243     while (!m_fromSpace->isEmpty()) {
    244         CopiedBlock* block = m_fromSpace->removeHead();
    245         // All non-pinned blocks in from-space should have been reclaimed as they were evacuated.
    246         ASSERT(block->isPinned() || !m_shouldDoCopyPhase);
    247         block->didSurviveGC();
     238    DoublyLinkedList<CopiedBlock>* toSpace;
     239    DoublyLinkedList<CopiedBlock>* fromSpace;
     240    TinyBloomFilter* blockFilter;
     241    if (heap()->operationInProgress() == FullCollection) {
     242        toSpace = m_oldGen.toSpace;
     243        fromSpace = m_oldGen.fromSpace;
     244        blockFilter = &m_oldGen.blockFilter;
     245    } else {
     246        toSpace = m_newGen.toSpace;
     247        fromSpace = m_newGen.fromSpace;
     248        blockFilter = &m_newGen.blockFilter;
     249    }
     250
     251    while (!fromSpace->isEmpty()) {
     252        CopiedBlock* block = fromSpace->removeHead();
    248253        // We don't add the block to the blockSet because it was never removed.
    249254        ASSERT(m_blockSet.contains(block));
    250         m_blockFilter.add(reinterpret_cast<Bits>(block));
    251         m_toSpace->push(block);
    252     }
    253 
    254     if (!m_toSpace->head())
    255         allocateBlock();
    256     else
    257         m_allocator.setCurrentBlock(m_toSpace->head());
     255        blockFilter->add(reinterpret_cast<Bits>(block));
     256        toSpace->push(block);
     257    }
     258
     259    if (heap()->operationInProgress() == EdenCollection) {
     260        m_oldGen.toSpace->append(*m_newGen.toSpace);
     261        m_oldGen.oversizeBlocks.append(m_newGen.oversizeBlocks);
     262        m_oldGen.blockFilter.add(m_newGen.blockFilter);
     263        m_newGen.blockFilter.reset();
     264    }
     265
     266    ASSERT(m_newGen.toSpace->isEmpty());
     267    ASSERT(m_newGen.fromSpace->isEmpty());
     268    ASSERT(m_newGen.oversizeBlocks.isEmpty());
     269
     270    allocateBlock();
    258271
    259272    m_shouldDoCopyPhase = false;
     
    264277    size_t calculatedSize = 0;
    265278
    266     for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
    267         calculatedSize += block->size();
    268 
    269     for (CopiedBlock* block = m_fromSpace->head(); block; block = block->next())
    270         calculatedSize += block->size();
    271 
    272     for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
     279    for (CopiedBlock* block = m_oldGen.toSpace->head(); block; block = block->next())
     280        calculatedSize += block->size();
     281
     282    for (CopiedBlock* block = m_oldGen.fromSpace->head(); block; block = block->next())
     283        calculatedSize += block->size();
     284
     285    for (CopiedBlock* block = m_oldGen.oversizeBlocks.head(); block; block = block->next())
     286        calculatedSize += block->size();
     287
     288    for (CopiedBlock* block = m_newGen.toSpace->head(); block; block = block->next())
     289        calculatedSize += block->size();
     290
     291    for (CopiedBlock* block = m_newGen.fromSpace->head(); block; block = block->next())
     292        calculatedSize += block->size();
     293
     294    for (CopiedBlock* block = m_newGen.oversizeBlocks.head(); block; block = block->next())
    273295        calculatedSize += block->size();
    274296
     
    280302    size_t calculatedCapacity = 0;
    281303
    282     for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
    283         calculatedCapacity += block->capacity();
    284 
    285     for (CopiedBlock* block = m_fromSpace->head(); block; block = block->next())
    286         calculatedCapacity += block->capacity();
    287 
    288     for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
     304    for (CopiedBlock* block = m_oldGen.toSpace->head(); block; block = block->next())
     305        calculatedCapacity += block->capacity();
     306
     307    for (CopiedBlock* block = m_oldGen.fromSpace->head(); block; block = block->next())
     308        calculatedCapacity += block->capacity();
     309
     310    for (CopiedBlock* block = m_oldGen.oversizeBlocks.head(); block; block = block->next())
     311        calculatedCapacity += block->capacity();
     312
     313    for (CopiedBlock* block = m_newGen.toSpace->head(); block; block = block->next())
     314        calculatedCapacity += block->capacity();
     315
     316    for (CopiedBlock* block = m_newGen.fromSpace->head(); block; block = block->next())
     317        calculatedCapacity += block->capacity();
     318
     319    for (CopiedBlock* block = m_newGen.oversizeBlocks.head(); block; block = block->next())
    289320        calculatedCapacity += block->capacity();
    290321
     
    312343bool CopiedSpace::isPagedOut(double deadline)
    313344{
    314     return isBlockListPagedOut(deadline, m_toSpace)
    315         || isBlockListPagedOut(deadline, m_fromSpace)
    316         || isBlockListPagedOut(deadline, &m_oversizeBlocks);
    317 }
    318 
    319 void CopiedSpace::didStartFullCollection()
    320 {
    321     ASSERT(heap()->operationInProgress() == FullCollection);
    322 
    323     ASSERT(m_fromSpace->isEmpty());
    324 
    325     for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
    326         block->didSurviveGC();
    327 
    328     for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
    329         block->didSurviveGC();
     345    return isBlockListPagedOut(deadline, m_oldGen.toSpace)
     346        || isBlockListPagedOut(deadline, m_oldGen.fromSpace)
     347        || isBlockListPagedOut(deadline, &m_oldGen.oversizeBlocks)
     348        || isBlockListPagedOut(deadline, m_newGen.toSpace)
     349        || isBlockListPagedOut(deadline, m_newGen.fromSpace)
     350        || isBlockListPagedOut(deadline, &m_newGen.oversizeBlocks);
    330351}
    331352
  • trunk/Source/JavaScriptCore/heap/CopiedSpace.h

    r161615 r162017  
    2929#include "CopiedAllocator.h"
    3030#include "HeapBlock.h"
     31#include "HeapOperation.h"
    3132#include "TinyBloomFilter.h"
    3233#include <wtf/Assertions.h>
     
    6364    void didStartFullCollection();
    6465
     66    template <HeapOperation collectionType>
    6567    void startedCopying();
     68    void startedEdenCopy();
     69    void startedFullCopy();
    6670    void doneCopying();
    6771    bool isInCopyPhase() { return m_inCopyingPhase; }
     
    96100
    97101    void doneFillingBlock(CopiedBlock*, CopiedBlock**);
    98     void recycleEvacuatedBlock(CopiedBlock*);
     102    void recycleEvacuatedBlock(CopiedBlock*, HeapOperation collectionType);
    99103    void recycleBorrowedBlock(CopiedBlock*);
    100104
     
    103107    CopiedAllocator m_allocator;
    104108
    105     TinyBloomFilter m_blockFilter;
    106109    HashSet<CopiedBlock*> m_blockSet;
    107110
    108111    SpinLock m_toSpaceLock;
    109112
    110     DoublyLinkedList<CopiedBlock>* m_toSpace;
    111     DoublyLinkedList<CopiedBlock>* m_fromSpace;
    112    
    113     DoublyLinkedList<CopiedBlock> m_blocks1;
    114     DoublyLinkedList<CopiedBlock> m_blocks2;
    115     DoublyLinkedList<CopiedBlock> m_oversizeBlocks;
     113    struct CopiedGeneration {
     114        CopiedGeneration()
     115            : toSpace(0)
     116            , fromSpace(0)
     117        {
     118        }
     119
     120        DoublyLinkedList<CopiedBlock>* toSpace;
     121        DoublyLinkedList<CopiedBlock>* fromSpace;
     122       
     123        DoublyLinkedList<CopiedBlock> blocks1;
     124        DoublyLinkedList<CopiedBlock> blocks2;
     125        DoublyLinkedList<CopiedBlock> oversizeBlocks;
     126
     127        TinyBloomFilter blockFilter;
     128    };
     129
     130    CopiedGeneration m_oldGen;
     131    CopiedGeneration m_newGen;
    116132   
    117133    bool m_inCopyingPhase;
  • trunk/Source/JavaScriptCore/heap/CopiedSpaceInlines.h

    r158583 r162017  
    3838inline bool CopiedSpace::contains(CopiedBlock* block)
    3939{
    40     return !m_blockFilter.ruleOut(reinterpret_cast<Bits>(block)) && m_blockSet.contains(block);
     40    return (!m_newGen.blockFilter.ruleOut(reinterpret_cast<Bits>(block)) || !m_oldGen.blockFilter.ruleOut(reinterpret_cast<Bits>(block)))
     41        && m_blockSet.contains(block);
    4142}
    4243
     
    9394}
    9495
    95 inline void CopiedSpace::recycleEvacuatedBlock(CopiedBlock* block)
     96inline void CopiedSpace::recycleEvacuatedBlock(CopiedBlock* block, HeapOperation collectionType)
    9697{
    9798    ASSERT(block);
     
    101102        SpinLockHolder locker(&m_toSpaceLock);
    102103        m_blockSet.remove(block);
    103         m_fromSpace->remove(block);
     104        if (collectionType == EdenCollection)
     105            m_newGen.fromSpace->remove(block);
     106        else
     107            m_oldGen.fromSpace->remove(block);
    104108    }
    105109    m_heap->blockAllocator().deallocate(CopiedBlock::destroy(block));
     
    142146    CopiedBlock* block = CopiedBlock::create(m_heap->blockAllocator().allocate<CopiedBlock>());
    143147       
    144     m_toSpace->push(block);
    145     m_blockFilter.add(reinterpret_cast<Bits>(block));
     148    m_newGen.toSpace->push(block);
     149    m_newGen.blockFilter.add(reinterpret_cast<Bits>(block));
    146150    m_blockSet.add(block);
    147151    m_allocator.setCurrentBlock(block);
     
    175179}
    176180
     181template <HeapOperation collectionType>
     182inline void CopiedSpace::startedCopying()
     183{
     184    DoublyLinkedList<CopiedBlock>* fromSpace;
     185    DoublyLinkedList<CopiedBlock>* oversizeBlocks;
     186    TinyBloomFilter* blockFilter;
     187    if (collectionType == FullCollection) {
     188        ASSERT(m_oldGen.fromSpace->isEmpty());
     189        ASSERT(m_newGen.fromSpace->isEmpty());
     190
     191        m_oldGen.toSpace->append(*m_newGen.toSpace);
     192        m_oldGen.oversizeBlocks.append(m_newGen.oversizeBlocks);
     193
     194        ASSERT(m_newGen.toSpace->isEmpty());
     195        ASSERT(m_newGen.fromSpace->isEmpty());
     196        ASSERT(m_newGen.oversizeBlocks.isEmpty());
     197
     198        std::swap(m_oldGen.fromSpace, m_oldGen.toSpace);
     199        fromSpace = m_oldGen.fromSpace;
     200        oversizeBlocks = &m_oldGen.oversizeBlocks;
     201        blockFilter = &m_oldGen.blockFilter;
     202    } else {
     203        std::swap(m_newGen.fromSpace, m_newGen.toSpace);
     204        fromSpace = m_newGen.fromSpace;
     205        oversizeBlocks = &m_newGen.oversizeBlocks;
     206        blockFilter = &m_newGen.blockFilter;
     207    }
     208
     209    blockFilter->reset();
     210    m_allocator.resetCurrentBlock();
     211
     212    CopiedBlock* next = 0;
     213    size_t totalLiveBytes = 0;
     214    size_t totalUsableBytes = 0;
     215    for (CopiedBlock* block = fromSpace->head(); block; block = next) {
     216        next = block->next();
     217        if (!block->isPinned() && block->canBeRecycled()) {
     218            recycleEvacuatedBlock(block, collectionType);
     219            continue;
     220        }
     221        ASSERT(block->liveBytes() <= CopiedBlock::blockSize);
     222        totalLiveBytes += block->liveBytes();
     223        totalUsableBytes += block->payloadCapacity();
     224        block->didPromote();
     225    }
     226
     227    CopiedBlock* block = oversizeBlocks->head();
     228    while (block) {
     229        CopiedBlock* next = block->next();
     230        if (block->isPinned()) {
     231            blockFilter->add(reinterpret_cast<Bits>(block));
     232            totalLiveBytes += block->payloadCapacity();
     233            totalUsableBytes += block->payloadCapacity();
     234            block->didPromote();
     235        } else {
     236            oversizeBlocks->remove(block);
     237            m_blockSet.remove(block);
     238            m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(block));
     239        }
     240        block = next;
     241    }
     242
     243    double markedSpaceBytes = m_heap->objectSpace().capacity();
     244    double totalFragmentation = static_cast<double>(totalLiveBytes + markedSpaceBytes) / static_cast<double>(totalUsableBytes + markedSpaceBytes);
     245    m_shouldDoCopyPhase = m_heap->operationInProgress() == EdenCollection || totalFragmentation <= Options::minHeapUtilization();
     246    if (!m_shouldDoCopyPhase) {
     247        if (Options::logGC())
     248            dataLog("Skipped copying, ");
     249        return;
     250    }
     251
     252    if (Options::logGC())
     253        dataLogF("Did copy, ");
     254    ASSERT(m_shouldDoCopyPhase);
     255    ASSERT(!m_numberOfLoanedBlocks);
     256    ASSERT(!m_inCopyingPhase);
     257    m_inCopyingPhase = true;
     258}
     259
    177260} // namespace JSC
    178261
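The copy-phase decision at the end of `startedCopying()` above can be sketched as a pure function (the threshold 0.8 is an assumed stand-in for `Options::minHeapUtilization()`): Eden collections always copy, while full collections copy only when utilization is low enough that compaction pays off.

```cpp
#include <cassert>

// Sketch of the heuristic: copy when the collection is an Eden collection, or
// when the combined utilization of copied and marked space falls at or below
// the (assumed) minimum-utilization threshold.
bool shouldDoCopyPhase(bool isEdenCollection, double totalLiveBytes,
    double totalUsableBytes, double markedSpaceBytes)
{
    double totalFragmentation = (totalLiveBytes + markedSpaceBytes)
        / (totalUsableBytes + markedSpaceBytes);
    return isEdenCollection || totalFragmentation <= 0.8;
}
```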
  • trunk/Source/JavaScriptCore/heap/CopyVisitor.cpp

    r153720 r162017  
    5858
    5959            ASSERT(!block->liveBytes());
    60             m_shared.m_copiedSpace->recycleEvacuatedBlock(block);
     60            m_shared.m_copiedSpace->recycleEvacuatedBlock(block, m_shared.m_vm->heap.operationInProgress());
    6161        }
    6262        m_shared.getNextBlocksToCopy(next, end);
  • trunk/Source/JavaScriptCore/heap/CopyVisitorInlines.h

    r155487 r162017  
    5656{
    5757    void* result = 0; // Compilers don't realize that this will be assigned.
    58     if (LIKELY(m_copiedAllocator.tryAllocate(bytes, &result)))
     58    if (LIKELY(m_copiedAllocator.tryAllocateDuringCopying(bytes, &result)))
    5959        return result;
    6060   
     
    7171
    7272    void* result = 0;
    73     CheckedBoolean didSucceed = m_copiedAllocator.tryAllocate(bytes, &result);
     73    CheckedBoolean didSucceed = m_copiedAllocator.tryAllocateDuringCopying(bytes, &result);
    7474    ASSERT(didSucceed);
    7575    return result;
  • trunk/Source/JavaScriptCore/heap/GCThreadSharedData.cpp

    r155317 r162017  
    182182    {
    183183        SpinLockHolder locker(&m_copyLock);
    184         WTF::copyToVector(m_copiedSpace->m_blockSet, m_blocksToCopy);
     184        if (m_vm->heap.operationInProgress() == EdenCollection) {
     185            // Reset the vector to be empty, but don't throw away the backing store.
     186            m_blocksToCopy.shrink(0);
     187            for (CopiedBlock* block = m_copiedSpace->m_newGen.fromSpace->head(); block; block = block->next())
     188                m_blocksToCopy.append(block);
     189        } else {
     190            ASSERT(m_vm->heap.operationInProgress() == FullCollection);
     191            WTF::copyToVector(m_copiedSpace->m_blockSet, m_blocksToCopy);
     192        }
    185193        m_copyIndex = 0;
    186194    }
  • trunk/Source/JavaScriptCore/heap/Heap.cpp

    r161914 r162017  
    642642void Heap::copyBackingStores()
    643643{
    644     if (collectionType == EdenCollection)
    645         return;
    646 
    647     m_storageSpace.startedCopying();
     644    m_storageSpace.startedCopying<collectionType>();
    648645    if (m_storageSpace.shouldDoCopyPhase()) {
    649646        m_sharedData.didStartCopying();
  • trunk/Source/JavaScriptCore/heap/SlotVisitorInlines.h

    r161914 r162017  
    226226{
    227227    ASSERT(bytes);
    228     // We don't do any copying during EdenCollections.
    229     ASSERT(heap()->operationInProgress() != EdenCollection);
    230 
    231     m_bytesCopied += bytes;
    232 
    233228    CopiedBlock* block = CopiedSpace::blockFor(ptr);
    234229    if (block->isOversize()) {
     
    237232    }
    238233
    239     block->reportLiveBytes(owner, token, bytes);
     234    SpinLockHolder locker(&block->workListLock());
     235    if (heap()->operationInProgress() == FullCollection || block->shouldReportLiveBytes(locker, owner)) {
     236        m_bytesCopied += bytes;
     237        block->reportLiveBytes(locker, owner, token, bytes);
     238    }
    240239}
    241240   
  • trunk/Source/JavaScriptCore/heap/TinyBloomFilter.h

    r105442 r162017  
    3636
    3737    void add(Bits);
     38    void add(TinyBloomFilter&);
    3839    bool ruleOut(Bits) const; // True for 0.
    3940    void reset();
     
    5152{
    5253    m_bits |= bits;
     54}
     55
     56inline void TinyBloomFilter::add(TinyBloomFilter& other)
     57{
     58    m_bits |= other.m_bits;
    5359}
    5460
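The new `TinyBloomFilter::add(TinyBloomFilter&)` overload above is a filter union: OR-ing the bit sets, which is what lets an Eden collection fold the new generation's filter into the old generation's without rebuilding it. A minimal sketch (hypothetical `TinyFilter` type, mirroring the real class's semantics):

```cpp
#include <cassert>
#include <cstdint>

struct TinyFilter {
    uint64_t bits = 0;
    void add(uint64_t b) { bits |= b; }
    void add(const TinyFilter& other) { bits |= other.bits; } // union of filters
    // True means "definitely not present"; false means "maybe present".
    bool ruleOut(uint64_t b) const { return !(bits & b); }
};
```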