Changeset 180797 in webkit


Timestamp:
Feb 27, 2015 4:29:22 PM
Author:
ggaren@apple.com
Message:

bmalloc: Pathological madvise churn on the free(malloc(x)) benchmark
https://bugs.webkit.org/show_bug.cgi?id=142058

Reviewed by Andreas Kling.

The churn was caused by repeatedly splitting an object with physical
pages from an object without, and then merging them back together.
The merge would conservatively forget that we had physical pages, forcing
a new call to madvise on the next allocation.

This patch more strictly segregates objects in the heap from objects in
the VM heap, with these changes:

(1) Objects in the heap are not allowed to merge with objects in the VM
heap, and vice versa -- since that would erase our precise knowledge of
which physical pages had been allocated.

(2) The VM heap is exclusively responsible for allocating and deallocating
physical pages.

(3) The heap free list must consider entries for objects that are in the
VM heap to be invalid, and vice versa. (This condition can arise
because the free list does not eagerly remove items.)

With these changes, we can know that any valid object in the heap's free
list already has physical pages, and does not need to call madvise.

Note that the VM heap -- as before -- might sometimes contain ranges
or pieces of ranges that have physical pages, since we allow splitting
of ranges at granularities smaller than the VM page size. These ranges
can eventually merge with ranges in the heap during scavenging.
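
The merge restriction in (1) can be sketched as follows. This is a minimal standalone illustration, not the patch's actual code: `Tag` and `canMerge` are hypothetical simplifications of `BoundaryTag` and the owner check inside `LargeObject::merge`.

```cpp
#include <cassert>
#include <cstddef>

// One bit records which heap owns a large object (sketch of Owner.h).
enum class Owner { Heap, VMHeap };

// Simplified stand-in for a boundary tag.
struct Tag {
    bool isFree;
    Owner owner;
    size_t size;
};

// Merge two adjacent free ranges only when they belong to the same heap,
// so an object known to have physical pages never absorbs one that may not.
bool canMerge(const Tag& a, const Tag& b)
{
    return a.isFree && b.isFree && a.owner == b.owner;
}
```

Under this rule, a heap-owned object and a VM-heap-owned neighbor stay separate until scavenging hands the heap object back to the VM heap, at which point both carry the same owner and can merge.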

  • bmalloc.xcodeproj/project.pbxproj:
  • bmalloc/BoundaryTag.h:

(bmalloc::BoundaryTag::owner):
(bmalloc::BoundaryTag::setOwner):
(bmalloc::BoundaryTag::initSentinel):
(bmalloc::BoundaryTag::hasPhysicalPages): Deleted.
(bmalloc::BoundaryTag::setHasPhysicalPages): Deleted. Replaced the concept
of "has physical pages" with a bit indicating which heap owns the large
object. This is a more precise concept, since the old bit was really a
Yes / Maybe bit.
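
As a rough sketch of the new representation (field names follow the diff below, but the exact contents of Owner.h are an assumption):

```cpp
#include <cstdint>

// Assumed sketch of Owner.h: a single bit distinguishes the two heaps.
enum class Owner : uint8_t { VMHeap = 0, Heap = 1 };

// Simplified from BoundaryTag.h: the one-bit owner replaces the old
// m_hasPhysicalPages bit in the packed tag.
struct BoundaryTagBits {
    bool isFree : 1;
    bool isEnd : 1;
    Owner owner : 1; // replaces bool m_hasPhysicalPages : 1
    bool isMarked : 1;
};

// Owner::Heap now implies "physical pages definitely present", whereas the
// old bit could only promise yes / maybe.
inline bool definitelyHasPhysicalPages(Owner owner)
{
    return owner == Owner::Heap;
}
```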

  • bmalloc/Deallocator.cpp:
  • bmalloc/FreeList.cpp: Adopt the new Owner API.

(bmalloc::FreeList::takeGreedy):
(bmalloc::FreeList::take):
(bmalloc::FreeList::removeInvalidAndDuplicateEntries):

  • bmalloc/FreeList.h:

(bmalloc::FreeList::push): Added API for considering the owner when
deciding if a free list entry is valid.
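
The owner-aware validity check can be illustrated with a simplified free list. This is a sketch only: `Entry` and the linear scan stand in for the real `FreeList` vector and `isValidAndFree`.

```cpp
#include <cstddef>
#include <vector>

enum class Owner { Heap, VMHeap };

// Stand-in for a free-list entry plus the object state it may be stale about.
struct Entry { bool isFree; Owner owner; size_t size; };

// An entry is usable only if the object is still free, still this size, and
// still owned by the heap this free list serves.
inline bool isValidAndFree(const Entry& e, Owner expectedOwner, size_t expectedSize)
{
    return e.isFree && e.owner == expectedOwner && e.size == expectedSize;
}

// take(): search back-to-front, lazily popping stale entries along the way,
// mirroring the note above that the free list does not eagerly remove items.
inline bool take(std::vector<Entry>& list, Owner owner, size_t size, Entry& out)
{
    for (size_t i = list.size(); i-- > 0; ) {
        if (!isValidAndFree(list[i], owner, list[i].size)) {
            list.erase(list.begin() + i); // stale: belongs to the other heap
            continue;
        }
        if (list[i].size < size)
            continue;
        out = list[i];
        list.erase(list.begin() + i);
        return true;
    }
    return false;
}
```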

  • bmalloc/Heap.cpp:

(bmalloc::Heap::Heap): Adopt new API.

(bmalloc::Heap::scavengeLargeRanges): Scavenge all ranges with no minimum,
since some ranges might be able to merge with ranges in the VM heap, and
they won't be allowed to until we scavenge them.

(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::allocateMediumPage):
(bmalloc::Heap::allocateLarge): New VM heap API makes this function
simpler, since we always get back physical pages now.

  • bmalloc/Heap.h:
  • bmalloc/LargeObject.h:

(bmalloc::LargeObject::end):
(bmalloc::LargeObject::owner):
(bmalloc::LargeObject::setOwner):
(bmalloc::LargeObject::isValidAndFree):
(bmalloc::LargeObject::merge): Do not merge objects across heaps since
that causes madvise churn.
(bmalloc::LargeObject::validateSelf):
(bmalloc::LargeObject::init):
(bmalloc::LargeObject::hasPhysicalPages): Deleted.
(bmalloc::LargeObject::setHasPhysicalPages): Deleted. Propagate the Owner API.

  • bmalloc/Owner.h: Added.
  • bmalloc/SegregatedFreeList.cpp:

(bmalloc::SegregatedFreeList::SegregatedFreeList):
(bmalloc::SegregatedFreeList::insert):
(bmalloc::SegregatedFreeList::takeGreedy):
(bmalloc::SegregatedFreeList::take):

  • bmalloc/SegregatedFreeList.h: Propagate the Owner API.
  • bmalloc/VMAllocate.h:

(bmalloc::vmDeallocatePhysicalPagesSloppy):
(bmalloc::vmAllocatePhysicalPagesSloppy): Clarified these functions and
removed an edge case.
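
The clarified rounding behavior can be sketched like so (constants and helper names are simplified; the real code uses the roundUpToMultipleOf / roundDownToMultipleOf templates):

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t vmPageSize = 4096;

inline uintptr_t roundUp(uintptr_t x) { return (x + vmPageSize - 1) & ~(vmPageSize - 1); }
inline uintptr_t roundDown(uintptr_t x) { return x & ~(vmPageSize - 1); }

// Sketch of the "sloppy" trimming in vmDeallocatePhysicalPagesSloppy:
// partial pages at either end are trimmed inward, and a request that covers
// no complete page simply deallocates nothing.
inline size_t deallocatableBytes(uintptr_t p, size_t size)
{
    uintptr_t begin = roundUp(p);        // trim partial page at the start
    uintptr_t end = roundDown(p + size); // trim partial page at the end
    if (begin >= end)                    // no whole page covered: do nothing
        return 0;
    return end - begin;
}
```

The removed edge case is the old BASSERT(size >= vmPageSize): with the begin >= end check, a sub-page request is now a well-defined no-op rather than an assertion failure.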

  • bmalloc/VMHeap.cpp:

(bmalloc::VMHeap::VMHeap):

  • bmalloc/VMHeap.h:

(bmalloc::VMHeap::allocateSmallPage):
(bmalloc::VMHeap::allocateMediumPage):
(bmalloc::VMHeap::allocateLargeObject):
(bmalloc::VMHeap::deallocateLargeObject): Be sure to give each object
a new chance to merge, since it might have been prohibited from merging
before by virtue of not being in the VM heap.

(bmalloc::VMHeap::allocateLargeRange): Deleted.
(bmalloc::VMHeap::deallocateLargeRange): Deleted.

Location:
trunk/Source/bmalloc
Files:
1 added
14 edited

  • trunk/Source/bmalloc/ChangeLog

  • trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj

    r180694 r180797  
    2727                14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C919C818FCC59F0028DB43 /* BPlatform.h */; settings = {ATTRIBUTES = (Private, ); }; };
    2828                14CC394C18EA8858004AFE34 /* libbmalloc.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 14F271BE18EA3963008C152F /* libbmalloc.a */; };
     29                14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */ = {isa = PBXBuildFile; fileRef = 14D2CD9A1AA12CFB00770440 /* Owner.h */; settings = {ATTRIBUTES = (Private, ); }; };
    2930                14DD788C18F48CAE00950702 /* LargeChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147AAA8818CD17CE002201E4 /* LargeChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
    3031                14DD788D18F48CC600950702 /* BeginTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64518B54A700076FA3F /* BeginTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
     
    139140                14C919C818FCC59F0028DB43 /* BPlatform.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BPlatform.h; path = bmalloc/BPlatform.h; sourceTree = "<group>"; };
    140141                14CC394418EA8743004AFE34 /* libmbmalloc.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libmbmalloc.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
     142                14D2CD9A1AA12CFB00770440 /* Owner.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Owner.h; path = bmalloc/Owner.h; sourceTree = "<group>"; };
    141143                14D9DB4517F2447100EAAB79 /* FixedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = FixedVector.h; path = bmalloc/FixedVector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
    142144                14DA32071885F9E6007269E0 /* Line.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Line.h; path = bmalloc/Line.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
     
    225227                                147AAA8818CD17CE002201E4 /* LargeChunk.h */,
    226228                                14C6216E1A9A9A6200E72293 /* LargeObject.h */,
     229                                14D2CD9A1AA12CFB00770440 /* Owner.h */,
    227230                                146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */,
    228231                                146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */,
     
    354357                                14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
    355358                                14DD78B718F48D6B00950702 /* MediumLine.h in Headers */,
     359                                14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */,
    356360                                14DD78B618F48D6B00950702 /* MediumChunk.h in Headers */,
    357361                                14DD78BC18F48D6B00950702 /* SmallLine.h in Headers */,
  • trunk/Source/bmalloc/bmalloc/BoundaryTag.h

    r180701 r180797  
    2828
    2929#include "BAssert.h"
     30#include "Owner.h"
    3031#include "Range.h"
    3132#include "Sizes.h"
     
    5051    void setEnd(bool isEnd) { m_isEnd = isEnd; }
    5152
    52     bool hasPhysicalPages() { return m_hasPhysicalPages; }
    53     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
     53    Owner owner() { return m_owner; }
     54    void setOwner(Owner owner) { m_owner = owner; }
    5455   
    5556    bool isMarked() { return m_isMarked; }
     
    8586    bool m_isFree: 1;
    8687    bool m_isEnd: 1;
    87     bool m_hasPhysicalPages: 1;
     88    Owner m_owner: 1;
    8889    bool m_isMarked: 1;
    8990    unsigned m_compactBegin: compactBeginBits;
     
    122123    setRange(Range(nullptr, largeMin));
    123124    setFree(false);
     125    setOwner(Owner::VMHeap);
    124126}
    125127
  • trunk/Source/bmalloc/bmalloc/Deallocator.cpp

    r178621 r180797  
    2525
    2626#include "BAssert.h"
    27 #include "BeginTag.h"
    2827#include "LargeChunk.h"
    2928#include "Deallocator.h"
  • trunk/Source/bmalloc/bmalloc/FreeList.cpp

    r180701 r180797  
    2424 */
    2525
    26 #include "BeginTag.h"
    2726#include "LargeChunk.h"
    2827#include "FreeList.h"
     
    3130namespace bmalloc {
    3231
    33 LargeObject FreeList::takeGreedy(size_t size)
     32LargeObject FreeList::takeGreedy(Owner owner)
    3433{
    3534    for (size_t i = m_vector.size(); i-- > 0; ) {
     
    3736        // so we need to validate each free list entry before using it.
    3837        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
    39         if (!largeObject.isValidAndFree(m_vector[i].size())) {
     38        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
    4039            m_vector.pop(i);
    4140            continue;
    4241        }
    43 
    44         if (largeObject.size() < size)
    45             continue;
    4642
    4743        m_vector.pop(i);
     
    5248}
    5349
    54 LargeObject FreeList::take(size_t size)
     50LargeObject FreeList::take(Owner owner, size_t size)
    5551{
    5652    LargeObject first;
     
    6056        // we need to validate each free list entry before using it.
    6157        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
    62         if (!largeObject.isValidAndFree(m_vector[i].size())) {
     58        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
    6359            m_vector.pop(i);
    6460            continue;
     
    7773}
    7874
    79 LargeObject FreeList::take(size_t alignment, size_t size, size_t unalignedSize)
     75LargeObject FreeList::take(Owner owner, size_t alignment, size_t size, size_t unalignedSize)
    8076{
    8177    BASSERT(isPowerOfTwo(alignment));
     
    8884        // we need to validate each free list entry before using it.
    8985        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
    90         if (!largeObject.isValidAndFree(m_vector[i].size())) {
     86        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
    9187            m_vector.pop(i);
    9288            continue;
     
    108104}
    109105
    110 void FreeList::removeInvalidAndDuplicateEntries()
     106void FreeList::removeInvalidAndDuplicateEntries(Owner owner)
    111107{
    112108    for (size_t i = m_vector.size(); i-- > 0; ) {
    113109        LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
    114         if (!largeObject.isValidAndFree(m_vector[i].size())) {
     110        if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
    115111            m_vector.pop(i);
    116112            continue;
  • trunk/Source/bmalloc/bmalloc/FreeList.h

    r180701 r180797  
    3838    FreeList();
    3939
    40     void push(const LargeObject&);
     40    void push(Owner, const LargeObject&);
    4141
    42     LargeObject take(size_t);
    43     LargeObject take(size_t alignment, size_t, size_t unalignedSize);
     42    LargeObject take(Owner, size_t);
     43    LargeObject take(Owner, size_t alignment, size_t, size_t unalignedSize);
    4444   
    45     LargeObject takeGreedy(size_t);
     45    LargeObject takeGreedy(Owner);
    4646
    47     void removeInvalidAndDuplicateEntries();
     47    void removeInvalidAndDuplicateEntries(Owner);
    4848   
    4949private:
     
    5858}
    5959
    60 inline void FreeList::push(const LargeObject& largeObject)
     60inline void FreeList::push(Owner owner, const LargeObject& largeObject)
    6161{
    6262    BASSERT(largeObject.isFree());
    6363    if (m_vector.size() == m_limit) {
    64         removeInvalidAndDuplicateEntries();
     64        removeInvalidAndDuplicateEntries(owner);
    6565        m_limit = std::max(m_vector.size() * freeListGrowFactor, freeListSearchDepth);
    6666    }
  • trunk/Source/bmalloc/bmalloc/Heap.cpp

    r180576 r180797  
    4747
    4848Heap::Heap(std::lock_guard<StaticMutex>&)
    49     : m_isAllocatingPages(false)
     49    : m_largeObjects(Owner::Heap)
     50    , m_isAllocatingPages(false)
    5051    , m_scavenger(*this, &Heap::concurrentScavenge)
    5152{
     
    145146        }
    146147
    147         LargeObject largeObject = m_largeObjects.takeGreedy(vmPageSize);
     148        LargeObject largeObject = m_largeObjects.takeGreedy();
    148149        if (!largeObject)
    149150            return;
    150         m_vmHeap.deallocateLargeRange(lock, largeObject);
     151        m_vmHeap.deallocateLargeObject(lock, largeObject);
    151152    }
    152153}
     
    241242        if (m_smallPages.size())
    242243            return m_smallPages.pop();
    243        
    244         SmallPage* page = m_vmHeap.allocateSmallPage();
    245         vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
    246         return page;
     244        return m_vmHeap.allocateSmallPage();
    247245    }();
    248246
     
    266264        if (m_mediumPages.size())
    267265            return m_mediumPages.pop();
    268        
    269         MediumPage* page = m_vmHeap.allocateMediumPage();
    270         vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
    271         return page;
     266        return m_vmHeap.allocateMediumPage();
    272267    }();
    273268
     
    376371
    377372    largeObject.setFree(false);
    378 
    379     if (!largeObject.hasPhysicalPages()) {
    380         vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
    381         largeObject.setHasPhysicalPages(true);
    382     }
    383    
    384373    return largeObject.begin();
    385374}
     
    395384    LargeObject largeObject = m_largeObjects.take(size);
    396385    if (!largeObject)
    397         largeObject = m_vmHeap.allocateLargeRange(size);
     386        largeObject = m_vmHeap.allocateLargeObject(size);
    398387
    399388    return allocateLarge(lock, largeObject, size);
     
    416405    LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
    417406    if (!largeObject)
    418         largeObject = m_vmHeap.allocateLargeRange(alignment, size, unalignedSize);
     407        largeObject = m_vmHeap.allocateLargeObject(alignment, size, unalignedSize);
    419408
    420409    size_t alignmentMask = alignment - 1;
    421     if (!test(largeObject.begin(), alignmentMask))
    422         return allocateLarge(lock, largeObject, size);
    423 
    424     // Because we allocate VM left-to-right, we must explicitly allocate the
    425     // unaligned space on the left in order to break off the aligned space
    426     // we want in the middle.
    427     size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
    428     std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
    429     allocateLarge(lock, pair.first, prefixSize);
    430     allocateLarge(lock, pair.second, size);
    431     deallocateLarge(lock, pair.first);
    432     return pair.second.begin();
     410    if (test(largeObject.begin(), alignmentMask)) {
     411        size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
     412        std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
     413        m_largeObjects.insert(pair.first);
     414        largeObject = pair.second;
     415    }
     416
     417    return allocateLarge(lock, largeObject, size);
    433418}
    434419
  • trunk/Source/bmalloc/bmalloc/Heap.h

    r180576 r180797  
    8787    void splitLarge(BeginTag*, size_t, EndTag*&, Range&);
    8888    void mergeLarge(BeginTag*&, EndTag*&, Range&);
    89     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
    90     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
     89    void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     90    void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
    9191   
    9292    void concurrentScavenge();
  • trunk/Source/bmalloc/bmalloc/LargeObject.h

    r180701 r180797  
    4747
    4848    char* begin() const { return static_cast<char*>(m_object); }
     49    char* end() const { return begin() + size(); }
    4950    size_t size() const { return m_beginTag->size(); }
    5051    Range range() const { return Range(m_object, size()); }
     
    5354    bool isFree() const;
    5455   
    55     bool hasPhysicalPages() const;
    56     void setHasPhysicalPages(bool) const;
     56    Owner owner() const;
     57    void setOwner(Owner) const;
    5758   
    5859    bool isMarked() const;
    5960    void setMarked(bool) const;
    6061   
    61     bool isValidAndFree(size_t) const;
     62    bool isValidAndFree(Owner, size_t) const;
    6263
    6364    LargeObject merge() const;
     
    117118}
    118119
    119 inline bool LargeObject::hasPhysicalPages() const
    120 {
    121     validate();
    122     return m_beginTag->hasPhysicalPages();
    123 }
    124 
    125 inline void LargeObject::setHasPhysicalPages(bool hasPhysicalPages) const
    126 {
    127     validate();
    128     m_beginTag->setHasPhysicalPages(hasPhysicalPages);
    129     m_endTag->setHasPhysicalPages(hasPhysicalPages);
     120inline Owner LargeObject::owner() const
     121{
     122    validate();
     123    return m_beginTag->owner();
     124}
     125
     126inline void LargeObject::setOwner(Owner owner) const
     127{
     128    validate();
     129    m_beginTag->setOwner(owner);
     130    m_endTag->setOwner(owner);
    130131}
    131132
     
    143144}
    144145
    145 inline bool LargeObject::isValidAndFree(size_t expectedSize) const
     146inline bool LargeObject::isValidAndFree(Owner expectedOwner, size_t expectedSize) const
    146147{
    147148    if (!m_beginTag->isFree())
     
    157158        return false;
    158159
     160    if (m_beginTag->owner() != expectedOwner)
     161        return false;
     162   
    159163    return true;
    160164}
     
    164168    validate();
    165169    BASSERT(isFree());
    166 
    167     bool hasPhysicalPages = m_beginTag->hasPhysicalPages();
    168170
    169171    BeginTag* beginTag = m_beginTag;
    170172    EndTag* endTag = m_endTag;
    171173    Range range = this->range();
     174    Owner owner = this->owner();
    172175   
    173176    EndTag* prev = beginTag->prev();
    174     if (prev->isFree()) {
     177    if (prev->isFree() && prev->owner() == owner) {
    175178        Range left(range.begin() - prev->size(), prev->size());
    176179        range = Range(left.begin(), left.size() + range.size());
    177         hasPhysicalPages &= prev->hasPhysicalPages();
    178180
    179181        prev->clear();
     
    184186
    185187    BeginTag* next = endTag->next();
    186     if (next->isFree()) {
     188    if (next->isFree() && next->owner() == owner) {
    187189        Range right(range.end(), next->size());
    188190        range = Range(range.begin(), range.size() + right.size());
    189191
    190         hasPhysicalPages &= next->hasPhysicalPages();
    191 
    192192        endTag->clear();
    193193        next->clear();
     
    198198    beginTag->setRange(range);
    199199    beginTag->setFree(true);
    200     beginTag->setHasPhysicalPages(hasPhysicalPages);
     200    beginTag->setOwner(owner);
    201201    endTag->init(beginTag);
    202202
     
    239239    BASSERT(m_beginTag->size() == m_endTag->size());
    240240    BASSERT(m_beginTag->isFree() == m_endTag->isFree());
    241     BASSERT(m_beginTag->hasPhysicalPages() == m_endTag->hasPhysicalPages());
     241    BASSERT(m_beginTag->owner() == m_endTag->owner());
    242242    BASSERT(m_beginTag->isMarked() == m_endTag->isMarked());
    243243}
     
    265265    beginTag->setRange(range);
    266266    beginTag->setFree(true);
    267     beginTag->setHasPhysicalPages(false);
     267    beginTag->setOwner(Owner::VMHeap);
    268268
    269269    EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
  • trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp

    r180693 r180797  
    2828namespace bmalloc {
    2929
    30 SegregatedFreeList::SegregatedFreeList()
     30SegregatedFreeList::SegregatedFreeList(Owner owner)
     31    : m_owner(owner)
    3132{
    3233    BASSERT(static_cast<size_t>(&select(largeMax) - m_freeLists.begin()) == m_freeLists.size() - 1);
     
    3536void SegregatedFreeList::insert(const LargeObject& largeObject)
    3637{
     38    BASSERT(largeObject.owner() == m_owner);
    3739    auto& list = select(largeObject.size());
    38     list.push(largeObject);
     40    list.push(m_owner, largeObject);
    3941}
    4042
    41 LargeObject SegregatedFreeList::takeGreedy(size_t size)
     43LargeObject SegregatedFreeList::takeGreedy()
    4244{
    4345    for (size_t i = m_freeLists.size(); i-- > 0; ) {
    44         LargeObject largeObject = m_freeLists[i].takeGreedy(size);
     46        LargeObject largeObject = m_freeLists[i].takeGreedy(m_owner);
    4547        if (!largeObject)
    4648            continue;
     
    5456{
    5557    for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
    56         LargeObject largeObject = list->take(size);
     58        LargeObject largeObject = list->take(m_owner, size);
    5759        if (!largeObject)
    5860            continue;
     
    6668{
    6769    for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
    68         LargeObject largeObject = list->take(alignment, size, unalignedSize);
     70        LargeObject largeObject = list->take(m_owner, alignment, size, unalignedSize);
    6971        if (!largeObject)
    7072            continue;
  • trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h

    r180693 r180797  
    3434class SegregatedFreeList {
    3535public:
    36     SegregatedFreeList();
     36    SegregatedFreeList(Owner);
    3737
    3838    void insert(const LargeObject&);
     
    5555    // removes stale items from the free list while searching. Eagerly removes
    5656    // the returned object from the free list.
    57     LargeObject takeGreedy(size_t);
    58    
     57    LargeObject takeGreedy();
     58
    5959private:
    6060    FreeList& select(size_t);
    6161
     62    Owner m_owner;
    6263    std::array<FreeList, 19> m_freeLists;
    6364};
  • trunk/Source/bmalloc/bmalloc/VMAllocate.h

    r179923 r180797  
    132132}
    133133
    134 // Trims requests that are un-page-aligned. NOTE: size must be at least a page.
     134// Trims requests that are un-page-aligned.
    135135inline void vmDeallocatePhysicalPagesSloppy(void* p, size_t size)
    136136{
    137     BASSERT(size >= vmPageSize);
    138 
    139137    char* begin = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p));
    140138    char* end = roundDownToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
    141139
    142     Range range(begin, end - begin);
    143     if (!range)
     140    if (begin >= end)
    144141        return;
    145     vmDeallocatePhysicalPages(range.begin(), range.size());
     142
     143    vmDeallocatePhysicalPages(begin, end - begin);
    146144}
    147145
     
    152150    char* end = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);
    153151
    154     Range range(begin, end - begin);
    155     if (!range)
     152    if (begin >= end)
    156153        return;
    157     vmAllocatePhysicalPages(range.begin(), range.size());
     154
     155    vmAllocatePhysicalPages(begin, end - begin);
    158156}
    159157
  • trunk/Source/bmalloc/bmalloc/VMHeap.cpp

    r180693 r180797  
    3434
    3535VMHeap::VMHeap()
     36    : m_largeObjects(Owner::VMHeap)
    3637{
    3738}
  • trunk/Source/bmalloc/bmalloc/VMHeap.h

    r180604 r180797  
    5353    SmallPage* allocateSmallPage();
    5454    MediumPage* allocateMediumPage();
    55     LargeObject allocateLargeRange(size_t);
    56     LargeObject allocateLargeRange(size_t alignment, size_t, size_t unalignedSize);
     55    LargeObject allocateLargeObject(size_t);
     56    LargeObject allocateLargeObject(size_t alignment, size_t, size_t unalignedSize);
    5757
    5858    void deallocateSmallPage(std::unique_lock<StaticMutex>&, SmallPage*);
    5959    void deallocateMediumPage(std::unique_lock<StaticMutex>&, MediumPage*);
    60     void deallocateLargeRange(std::unique_lock<StaticMutex>&, LargeObject&);
     60    void deallocateLargeObject(std::unique_lock<StaticMutex>&, LargeObject&);
    6161
    6262private:
     63    LargeObject allocateLargeObject(LargeObject&, size_t);
    6364    void grow();
    6465
     
    7677        grow();
    7778
    78     return m_smallPages.pop();
     79    SmallPage* page = m_smallPages.pop();
     80    vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
     81    return page;
    7982}
    8083
     
    8487        grow();
    8588
    86     return m_mediumPages.pop();
     89    MediumPage* page = m_mediumPages.pop();
     90    vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
     91    return page;
    8792}
    8893
    89 inline LargeObject VMHeap::allocateLargeRange(size_t size)
     94inline LargeObject VMHeap::allocateLargeObject(LargeObject& largeObject, size_t size)
     95{
     96    BASSERT(largeObject.isFree());
     97
     98    if (largeObject.size() - size > largeMin) {
     99        std::pair<LargeObject, LargeObject> split = largeObject.split(size);
     100        largeObject = split.first;
     101        m_largeObjects.insert(split.second);
     102    }
     103
     104    vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
     105    largeObject.setOwner(Owner::Heap);
     106    return largeObject.begin();
     107}
     108
     109inline LargeObject VMHeap::allocateLargeObject(size_t size)
    90110{
    91111    LargeObject largeObject = m_largeObjects.take(size);
     
    95115        BASSERT(largeObject);
    96116    }
    97     return largeObject;
     117
     118    return allocateLargeObject(largeObject, size);
    98119}
    99120
    100 inline LargeObject VMHeap::allocateLargeRange(size_t alignment, size_t size, size_t unalignedSize)
     121inline LargeObject VMHeap::allocateLargeObject(size_t alignment, size_t size, size_t unalignedSize)
    101122{
    102123    LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
     
    106127        BASSERT(largeObject);
    107128    }
    108     return largeObject;
     129
     130    size_t alignmentMask = alignment - 1;
     131    if (test(largeObject.begin(), alignmentMask))
     132        return allocateLargeObject(largeObject, unalignedSize);
     133    return allocateLargeObject(largeObject, size);
    109134}
    110135
     
    127152}
    128153
    129 inline void VMHeap::deallocateLargeRange(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
     154inline void VMHeap::deallocateLargeObject(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
    130155{
    131     // Temporarily mark this range as allocated to prevent clients from merging
    132     // with it and then reallocating it while we're messing with its physical pages.
    133     largeObject.setFree(false);
     156    largeObject.setOwner(Owner::VMHeap);
     157   
     158    // If we couldn't merge with our neighbors before because they were in the
     159    // VM heap, we can merge with them now.
     160    LargeObject merged = largeObject.merge();
     161
     162    // Temporarily mark this object as allocated to prevent clients from merging
     163    // with it or allocating it while we're messing with its physical pages.
     164    merged.setFree(false);
    134165
    135166    lock.unlock();
    136     vmDeallocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
     167    vmDeallocatePhysicalPagesSloppy(merged.begin(), merged.size());
    137168    lock.lock();
    138169
    139     largeObject.setFree(true);
    140     largeObject.setHasPhysicalPages(false);
     170    merged.setFree(true);
    141171
    142     m_largeObjects.insert(largeObject);
     172    m_largeObjects.insert(merged);
    143173}
    144174