Changeset 180797 in WebKit
- Timestamp: Feb 27, 2015, 4:29:22 PM
- Location: trunk/Source/bmalloc
- Files: 1 added, 14 edited
trunk/Source/bmalloc/ChangeLog
r180701 → r180797

2015-02-27  Geoffrey Garen  <ggaren@apple.com>

        bmalloc: Pathological madvise churn on the free(malloc(x)) benchmark
        https://bugs.webkit.org/show_bug.cgi?id=142058

        Reviewed by Andreas Kling.

        The churn was caused by repeatedly splitting an object with physical
        pages from an object without, and then merging them back together again.
        The merge would conservatively forget that we had physical pages, forcing
        a new call to madvise on the next allocation.

        This patch more strictly segregates objects in the heap from objects in
        the VM heap, with these changes:

        (1) Objects in the heap are not allowed to merge with objects in the VM
        heap, and vice versa -- since that would erase our precise knowledge of
        which physical pages had been allocated.

        (2) The VM heap is exclusively responsible for allocating and deallocating
        physical pages.

        (3) The heap free list must consider entries for objects that are in the
        VM heap to be invalid, and vice versa. (This condition can arise
        because the free list does not eagerly remove items.)

        With these changes, we can know that any valid object in the heap's free
        list already has physical pages, and does not need to call madvise.

        Note that the VM heap -- as before -- might sometimes contain ranges
        or pieces of ranges that have physical pages, since we allow splitting
        of ranges at granularities smaller than the VM page size. These ranges
        can eventually merge with ranges in the heap during scavenging.

        * bmalloc.xcodeproj/project.pbxproj:

        * bmalloc/BoundaryTag.h:
        (bmalloc::BoundaryTag::owner):
        (bmalloc::BoundaryTag::setOwner):
        (bmalloc::BoundaryTag::initSentinel):
        (bmalloc::BoundaryTag::hasPhysicalPages): Deleted.
        (bmalloc::BoundaryTag::setHasPhysicalPages): Deleted. Replaced the concept
        of "has physical pages" with a bit indicating which heap owns the large
        object. This is a more precise concept, since the old bit was really a
        Yes / Maybe bit.

        * bmalloc/Deallocator.cpp:

        * bmalloc/FreeList.cpp: Adopted the owner API.
        (bmalloc::FreeList::takeGreedy):
        (bmalloc::FreeList::take):
        (bmalloc::FreeList::removeInvalidAndDuplicateEntries):
        * bmalloc/FreeList.h:
        (bmalloc::FreeList::push): Added API for considering the owner when
        deciding if a free list entry is valid.

        * bmalloc/Heap.cpp:
        (bmalloc::Heap::Heap): Adopt new API.

        (bmalloc::Heap::scavengeLargeRanges): Scavenge all ranges with no minimum,
        since some ranges might be able to merge with ranges in the VM heap, and
        they won't be allowed to until we scavenge them.

        (bmalloc::Heap::allocateSmallPage):
        (bmalloc::Heap::allocateMediumPage):
        (bmalloc::Heap::allocateLarge): New VM heap API makes this function
        simpler, since we always get back physical pages now.

        * bmalloc/Heap.h:
        * bmalloc/LargeObject.h:
        (bmalloc::LargeObject::end):
        (bmalloc::LargeObject::owner):
        (bmalloc::LargeObject::setOwner):
        (bmalloc::LargeObject::isValidAndFree):
        (bmalloc::LargeObject::merge): Do not merge objects across heaps since
        that causes madvise churn.
        (bmalloc::LargeObject::validateSelf):
        (bmalloc::LargeObject::init):
        (bmalloc::LargeObject::hasPhysicalPages): Deleted.
        (bmalloc::LargeObject::setHasPhysicalPages): Deleted. Propagated the Owner API.

        * bmalloc/Owner.h: Added.

        * bmalloc/SegregatedFreeList.cpp:
        (bmalloc::SegregatedFreeList::SegregatedFreeList):
        (bmalloc::SegregatedFreeList::insert):
        (bmalloc::SegregatedFreeList::takeGreedy):
        (bmalloc::SegregatedFreeList::take):
        * bmalloc/SegregatedFreeList.h: Propagated the owner API.

        * bmalloc/VMAllocate.h:
        (bmalloc::vmDeallocatePhysicalPagesSloppy):
        (bmalloc::vmAllocatePhysicalPagesSloppy): Clarified these functions and
        removed an edge case.

        * bmalloc/VMHeap.cpp:
        (bmalloc::VMHeap::VMHeap):
        * bmalloc/VMHeap.h:
        (bmalloc::VMHeap::allocateSmallPage):
        (bmalloc::VMHeap::allocateMediumPage):
        (bmalloc::VMHeap::allocateLargeObject):
        (bmalloc::VMHeap::deallocateLargeObject): Be sure to give each object
        a new chance to merge, since it might have been prohibited from merging
        before by virtue of not being in the VM heap.

        (bmalloc::VMHeap::allocateLargeRange): Deleted.
        (bmalloc::VMHeap::deallocateLargeRange): Deleted.

2015-02-26  Geoffrey Garen  <ggaren@apple.com>
        …
trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
r180694 → r180797

…
  14C919C918FCC59F0028DB43 /* BPlatform.h in Headers */ = {isa = PBXBuildFile; fileRef = 14C919C818FCC59F0028DB43 /* BPlatform.h */; settings = {ATTRIBUTES = (Private, ); }; };
  14CC394C18EA8858004AFE34 /* libbmalloc.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 14F271BE18EA3963008C152F /* libbmalloc.a */; };
+ 14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */ = {isa = PBXBuildFile; fileRef = 14D2CD9A1AA12CFB00770440 /* Owner.h */; settings = {ATTRIBUTES = (Private, ); }; };
  14DD788C18F48CAE00950702 /* LargeChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 147AAA8818CD17CE002201E4 /* LargeChunk.h */; settings = {ATTRIBUTES = (Private, ); }; };
  14DD788D18F48CC600950702 /* BeginTag.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F64518B54A700076FA3F /* BeginTag.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
  14C919C818FCC59F0028DB43 /* BPlatform.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BPlatform.h; path = bmalloc/BPlatform.h; sourceTree = "<group>"; };
  14CC394418EA8743004AFE34 /* libmbmalloc.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libmbmalloc.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
+ 14D2CD9A1AA12CFB00770440 /* Owner.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Owner.h; path = bmalloc/Owner.h; sourceTree = "<group>"; };
  14D9DB4517F2447100EAAB79 /* FixedVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = FixedVector.h; path = bmalloc/FixedVector.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
  14DA32071885F9E6007269E0 /* Line.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = Line.h; path = bmalloc/Line.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
…
  147AAA8818CD17CE002201E4 /* LargeChunk.h */,
  14C6216E1A9A9A6200E72293 /* LargeObject.h */,
+ 14D2CD9A1AA12CFB00770440 /* Owner.h */,
  146BEE2118C845AE0002D5A2 /* SegregatedFreeList.cpp */,
  146BEE1E18C841C50002D5A2 /* SegregatedFreeList.h */,
…
  14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
  14DD78B718F48D6B00950702 /* MediumLine.h in Headers */,
+ 14D2CD9B1AA12CFB00770440 /* Owner.h in Headers */,
  14DD78B618F48D6B00950702 /* MediumChunk.h in Headers */,
  14DD78BC18F48D6B00950702 /* SmallLine.h in Headers */,
trunk/Source/bmalloc/bmalloc/BoundaryTag.h
r180701 → r180797

  #include "BAssert.h"
+ #include "Owner.h"
  #include "Range.h"
  #include "Sizes.h"
…
  void setEnd(bool isEnd) { m_isEnd = isEnd; }

- bool hasPhysicalPages() { return m_hasPhysicalPages; }
- void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
+ Owner owner() { return m_owner; }
+ void setOwner(Owner owner) { m_owner = owner; }

  bool isMarked() { return m_isMarked; }
…
  bool m_isFree: 1;
  bool m_isEnd: 1;
- bool m_hasPhysicalPages: 1;
+ Owner m_owner: 1;
  bool m_isMarked: 1;
  unsigned m_compactBegin: compactBeginBits;
…
  setRange(Range(nullptr, largeMin));
  setFree(false);
+ setOwner(Owner::VMHeap);
trunk/Source/bmalloc/bmalloc/Deallocator.cpp
r178621 → r180797

  #include "BAssert.h"
- #include "BeginTag.h"
  #include "LargeChunk.h"
  #include "Deallocator.h"
trunk/Source/bmalloc/bmalloc/FreeList.cpp
r180701 → r180797

- #include "BeginTag.h"
  #include "LargeChunk.h"
  #include "FreeList.h"
…
  namespace bmalloc {

- LargeObject FreeList::takeGreedy(size_t size)
+ LargeObject FreeList::takeGreedy(Owner owner)
  {
      for (size_t i = m_vector.size(); i-- > 0; ) {
          // ... so we need to validate each free list entry before using it.
          LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-         if (!largeObject.isValidAndFree(m_vector[i].size())) {
+         if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
              m_vector.pop(i);
              continue;
          }
-
-         if (largeObject.size() < size)
-             continue;

          m_vector.pop(i);
…
  }

- LargeObject FreeList::take(size_t size)
+ LargeObject FreeList::take(Owner owner, size_t size)
  {
      LargeObject first;
…
          // we need to validate each free list entry before using it.
          LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-         if (!largeObject.isValidAndFree(m_vector[i].size())) {
+         if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
              m_vector.pop(i);
              continue;
          }
…
  }

- LargeObject FreeList::take(size_t alignment, size_t size, size_t unalignedSize)
+ LargeObject FreeList::take(Owner owner, size_t alignment, size_t size, size_t unalignedSize)
  {
      BASSERT(isPowerOfTwo(alignment));
…
          // we need to validate each free list entry before using it.
          LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-         if (!largeObject.isValidAndFree(m_vector[i].size())) {
+         if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
              m_vector.pop(i);
              continue;
          }
…
  }

- void FreeList::removeInvalidAndDuplicateEntries()
+ void FreeList::removeInvalidAndDuplicateEntries(Owner owner)
  {
      for (size_t i = m_vector.size(); i-- > 0; ) {
          LargeObject largeObject(LargeObject::DoNotValidate, m_vector[i].begin());
-         if (!largeObject.isValidAndFree(m_vector[i].size())) {
+         if (!largeObject.isValidAndFree(owner, m_vector[i].size())) {
              m_vector.pop(i);
              continue;
…
trunk/Source/bmalloc/bmalloc/FreeList.h
r180701 → r180797

  FreeList();

- void push(const LargeObject&);
+ void push(Owner, const LargeObject&);

- LargeObject take(size_t);
- LargeObject take(size_t alignment, size_t, size_t unalignedSize);
+ LargeObject take(Owner, size_t);
+ LargeObject take(Owner, size_t alignment, size_t, size_t unalignedSize);

- LargeObject takeGreedy(size_t);
+ LargeObject takeGreedy(Owner);

- void removeInvalidAndDuplicateEntries();
+ void removeInvalidAndDuplicateEntries(Owner);

  private:
…
- inline void FreeList::push(const LargeObject& largeObject)
+ inline void FreeList::push(Owner owner, const LargeObject& largeObject)
  {
      BASSERT(largeObject.isFree());
      if (m_vector.size() == m_limit) {
-         removeInvalidAndDuplicateEntries();
+         removeInvalidAndDuplicateEntries(owner);
          m_limit = std::max(m_vector.size() * freeListGrowFactor, freeListSearchDepth);
      }
trunk/Source/bmalloc/bmalloc/Heap.cpp
r180576 → r180797

  Heap::Heap(std::lock_guard<StaticMutex>&)
-     : m_isAllocatingPages(false)
+     : m_largeObjects(Owner::Heap)
+     , m_isAllocatingPages(false)
      , m_scavenger(*this, &Heap::concurrentScavenge)
  {
…
-     LargeObject largeObject = m_largeObjects.takeGreedy(vmPageSize);
+     LargeObject largeObject = m_largeObjects.takeGreedy();
      if (!largeObject)
          return;
-     m_vmHeap.deallocateLargeRange(lock, largeObject);
+     m_vmHeap.deallocateLargeObject(lock, largeObject);
  }
…
      if (m_smallPages.size())
          return m_smallPages.pop();
-
-     SmallPage* page = m_vmHeap.allocateSmallPage();
-     vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
-     return page;
+     return m_vmHeap.allocateSmallPage();
  }();
…
      if (m_mediumPages.size())
          return m_mediumPages.pop();
-
-     MediumPage* page = m_vmHeap.allocateMediumPage();
-     vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
-     return page;
+     return m_vmHeap.allocateMediumPage();
  }();
…
      largeObject.setFree(false);
-
-     if (!largeObject.hasPhysicalPages()) {
-         vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
-         largeObject.setHasPhysicalPages(true);
-     }
-
      return largeObject.begin();
  }
…
      LargeObject largeObject = m_largeObjects.take(size);
      if (!largeObject)
-         largeObject = m_vmHeap.allocateLargeRange(size);
+         largeObject = m_vmHeap.allocateLargeObject(size);

      return allocateLarge(lock, largeObject, size);
…
      LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
      if (!largeObject)
-         largeObject = m_vmHeap.allocateLargeRange(alignment, size, unalignedSize);
+         largeObject = m_vmHeap.allocateLargeObject(alignment, size, unalignedSize);

      size_t alignmentMask = alignment - 1;
-     if (!test(largeObject.begin(), alignmentMask))
-         return allocateLarge(lock, largeObject, size);
-
-     // Because we allocate VM left-to-right, we must explicitly allocate the
-     // unaligned space on the left in order to break off the aligned space
-     // we want in the middle.
-     size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
-     std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
-     allocateLarge(lock, pair.first, prefixSize);
-     allocateLarge(lock, pair.second, size);
-     deallocateLarge(lock, pair.first);
-     return pair.second.begin();
+     if (test(largeObject.begin(), alignmentMask)) {
+         size_t prefixSize = roundUpToMultipleOf(alignment, largeObject.begin() + largeMin) - largeObject.begin();
+         std::pair<LargeObject, LargeObject> pair = largeObject.split(prefixSize);
+         m_largeObjects.insert(pair.first);
+         largeObject = pair.second;
+     }
+
+     return allocateLarge(lock, largeObject, size);
trunk/Source/bmalloc/bmalloc/Heap.h
r180576 → r180797

  void splitLarge(BeginTag*, size_t, EndTag*&, Range&);
  void mergeLarge(BeginTag*&, EndTag*&, Range&);
- void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
- void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& hasPhysicalPages);
+ void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
+ void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);

  void concurrentScavenge();
trunk/Source/bmalloc/bmalloc/LargeObject.h
r180701 → r180797

  char* begin() const { return static_cast<char*>(m_object); }
+ char* end() const { return begin() + size(); }
  size_t size() const { return m_beginTag->size(); }
  Range range() const { return Range(m_object, size()); }
…
  bool isFree() const;

- bool hasPhysicalPages() const;
- void setHasPhysicalPages(bool) const;
+ Owner owner() const;
+ void setOwner(Owner) const;

  bool isMarked() const;
  void setMarked(bool) const;

- bool isValidAndFree(size_t) const;
+ bool isValidAndFree(Owner, size_t) const;

  LargeObject merge() const;
…
- inline bool LargeObject::hasPhysicalPages() const
- {
-     validate();
-     return m_beginTag->hasPhysicalPages();
- }
-
- inline void LargeObject::setHasPhysicalPages(bool hasPhysicalPages) const
- {
-     validate();
-     m_beginTag->setHasPhysicalPages(hasPhysicalPages);
-     m_endTag->setHasPhysicalPages(hasPhysicalPages);
- }
+ inline Owner LargeObject::owner() const
+ {
+     validate();
+     return m_beginTag->owner();
+ }
+
+ inline void LargeObject::setOwner(Owner owner) const
+ {
+     validate();
+     m_beginTag->setOwner(owner);
+     m_endTag->setOwner(owner);
+ }
…
- inline bool LargeObject::isValidAndFree(size_t expectedSize) const
+ inline bool LargeObject::isValidAndFree(Owner expectedOwner, size_t expectedSize) const
  {
      if (!m_beginTag->isFree())
…
          return false;

+     if (m_beginTag->owner() != expectedOwner)
+         return false;
+
      return true;
  }
…
      validate();
      BASSERT(isFree());
-
-     bool hasPhysicalPages = m_beginTag->hasPhysicalPages();

      BeginTag* beginTag = m_beginTag;
      EndTag* endTag = m_endTag;
      Range range = this->range();
+     Owner owner = this->owner();

      EndTag* prev = beginTag->prev();
-     if (prev->isFree()) {
+     if (prev->isFree() && prev->owner() == owner) {
          Range left(range.begin() - prev->size(), prev->size());
          range = Range(left.begin(), left.size() + range.size());
-         hasPhysicalPages &= prev->hasPhysicalPages();

          prev->clear();
…
      BeginTag* next = endTag->next();
-     if (next->isFree()) {
+     if (next->isFree() && next->owner() == owner) {
          Range right(range.end(), next->size());
          range = Range(range.begin(), range.size() + right.size());
-
-         hasPhysicalPages &= next->hasPhysicalPages();

          endTag->clear();
          next->clear();
…
      beginTag->setRange(range);
      beginTag->setFree(true);
-     beginTag->setHasPhysicalPages(hasPhysicalPages);
+     beginTag->setOwner(owner);
      endTag->init(beginTag);
…
      BASSERT(m_beginTag->size() == m_endTag->size());
      BASSERT(m_beginTag->isFree() == m_endTag->isFree());
-     BASSERT(m_beginTag->hasPhysicalPages() == m_endTag->hasPhysicalPages());
+     BASSERT(m_beginTag->owner() == m_endTag->owner());
      BASSERT(m_beginTag->isMarked() == m_endTag->isMarked());
…
      beginTag->setRange(range);
      beginTag->setFree(true);
-     beginTag->setHasPhysicalPages(false);
+     beginTag->setOwner(Owner::VMHeap);

      EndTag* endTag = LargeChunk::endTag(range.begin(), range.size());
trunk/Source/bmalloc/bmalloc/SegregatedFreeList.cpp
r180693 → r180797

  namespace bmalloc {

- SegregatedFreeList::SegregatedFreeList()
+ SegregatedFreeList::SegregatedFreeList(Owner owner)
+     : m_owner(owner)
  {
      BASSERT(static_cast<size_t>(&select(largeMax) - m_freeLists.begin()) == m_freeLists.size() - 1);
…
  void SegregatedFreeList::insert(const LargeObject& largeObject)
  {
+     BASSERT(largeObject.owner() == m_owner);
      auto& list = select(largeObject.size());
-     list.push(largeObject);
+     list.push(m_owner, largeObject);
  }

- LargeObject SegregatedFreeList::takeGreedy(size_t size)
+ LargeObject SegregatedFreeList::takeGreedy()
  {
      for (size_t i = m_freeLists.size(); i-- > 0; ) {
-         LargeObject largeObject = m_freeLists[i].takeGreedy(size);
+         LargeObject largeObject = m_freeLists[i].takeGreedy(m_owner);
          if (!largeObject)
              continue;
…
  {
      for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-         LargeObject largeObject = list->take(size);
+         LargeObject largeObject = list->take(m_owner, size);
          if (!largeObject)
              continue;
…
  {
      for (auto* list = &select(size); list != m_freeLists.end(); ++list) {
-         LargeObject largeObject = list->take(alignment, size, unalignedSize);
+         LargeObject largeObject = list->take(m_owner, alignment, size, unalignedSize);
          if (!largeObject)
              continue;
trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h
r180693 → r180797

  class SegregatedFreeList {
  public:
-     SegregatedFreeList();
+     SegregatedFreeList(Owner);

      void insert(const LargeObject&);
…
      // removes stale items from the free list while searching. Eagerly removes
      // the returned object from the free list.
-     LargeObject takeGreedy(size_t);
+     LargeObject takeGreedy();

  private:
      FreeList& select(size_t);

+     Owner m_owner;
      std::array<FreeList, 19> m_freeLists;
  };
trunk/Source/bmalloc/bmalloc/VMAllocate.h
r179923 → r180797

- // Trims requests that are un-page-aligned. NOTE: size must be at least a page.
+ // Trims requests that are un-page-aligned.
  inline void vmDeallocatePhysicalPagesSloppy(void* p, size_t size)
  {
-     BASSERT(size >= vmPageSize);
-
      char* begin = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p));
      char* end = roundDownToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);

-     Range range(begin, end - begin);
-     if (!range)
+     if (begin >= end)
          return;
-     vmDeallocatePhysicalPages(range.begin(), range.size());
+
+     vmDeallocatePhysicalPages(begin, end - begin);
  }
…
      char* end = roundUpToMultipleOf<vmPageSize>(static_cast<char*>(p) + size);

-     Range range(begin, end - begin);
-     if (!range)
+     if (begin >= end)
          return;
-     vmAllocatePhysicalPages(range.begin(), range.size());
+
+     vmAllocatePhysicalPages(begin, end - begin);
  }
trunk/Source/bmalloc/bmalloc/VMHeap.cpp
r180693 → r180797

  VMHeap::VMHeap()
+     : m_largeObjects(Owner::VMHeap)
  {
  }
trunk/Source/bmalloc/bmalloc/VMHeap.h
r180604 → r180797

  SmallPage* allocateSmallPage();
  MediumPage* allocateMediumPage();
- LargeObject allocateLargeRange(size_t);
- LargeObject allocateLargeRange(size_t alignment, size_t, size_t unalignedSize);
+ LargeObject allocateLargeObject(size_t);
+ LargeObject allocateLargeObject(size_t alignment, size_t, size_t unalignedSize);

  void deallocateSmallPage(std::unique_lock<StaticMutex>&, SmallPage*);
  void deallocateMediumPage(std::unique_lock<StaticMutex>&, MediumPage*);
- void deallocateLargeRange(std::unique_lock<StaticMutex>&, LargeObject&);
+ void deallocateLargeObject(std::unique_lock<StaticMutex>&, LargeObject&);

  private:
+ LargeObject allocateLargeObject(LargeObject&, size_t);
  void grow();
…
      grow();

- return m_smallPages.pop();
+ SmallPage* page = m_smallPages.pop();
+ vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
+ return page;
  }
…
      grow();

- return m_mediumPages.pop();
+ MediumPage* page = m_mediumPages.pop();
+ vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
+ return page;
  }

- inline LargeObject VMHeap::allocateLargeRange(size_t size)
+ inline LargeObject VMHeap::allocateLargeObject(LargeObject& largeObject, size_t size)
+ {
+     BASSERT(largeObject.isFree());
+
+     if (largeObject.size() - size > largeMin) {
+         std::pair<LargeObject, LargeObject> split = largeObject.split(size);
+         largeObject = split.first;
+         m_largeObjects.insert(split.second);
+     }
+
+     vmAllocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
+     largeObject.setOwner(Owner::Heap);
+     return largeObject.begin();
+ }
+
+ inline LargeObject VMHeap::allocateLargeObject(size_t size)
  {
      LargeObject largeObject = m_largeObjects.take(size);
…
      BASSERT(largeObject);
  }
- return largeObject;
+
+ return allocateLargeObject(largeObject, size);
  }

- inline LargeObject VMHeap::allocateLargeRange(size_t alignment, size_t size, size_t unalignedSize)
+ inline LargeObject VMHeap::allocateLargeObject(size_t alignment, size_t size, size_t unalignedSize)
  {
      LargeObject largeObject = m_largeObjects.take(alignment, size, unalignedSize);
…
      BASSERT(largeObject);
  }
- return largeObject;
+
+ size_t alignmentMask = alignment - 1;
+ if (test(largeObject.begin(), alignmentMask))
+     return allocateLargeObject(largeObject, unalignedSize);
+ return allocateLargeObject(largeObject, size);
  }
…
- inline void VMHeap::deallocateLargeRange(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
+ inline void VMHeap::deallocateLargeObject(std::unique_lock<StaticMutex>& lock, LargeObject& largeObject)
  {
-     // Temporarily mark this range as allocated to prevent clients from merging
-     // with it and then reallocating it while we're messing with its physical pages.
-     largeObject.setFree(false);
+     largeObject.setOwner(Owner::VMHeap);
+
+     // If we couldn't merge with our neighbors before because they were in the
+     // VM heap, we can merge with them now.
+     LargeObject merged = largeObject.merge();
+
+     // Temporarily mark this object as allocated to prevent clients from merging
+     // with it or allocating it while we're messing with its physical pages.
+     merged.setFree(false);

      lock.unlock();
-     vmDeallocatePhysicalPagesSloppy(largeObject.begin(), largeObject.size());
+     vmDeallocatePhysicalPagesSloppy(merged.begin(), merged.size());
      lock.lock();

-     largeObject.setFree(true);
-     largeObject.setHasPhysicalPages(false);
+     merged.setFree(true);

-     m_largeObjects.insert(largeObject);
+     m_largeObjects.insert(merged);
  }