Changeset 167502 in webkit
- Timestamp: Apr 18, 2014 1:17:59 PM
- Location: trunk/Source/bmalloc
- Files: 5 added, 21 edited
trunk/Source/bmalloc/ChangeLog
(r167292 → r167502)

2014-04-18  Geoffrey Garen  <ggaren@apple.com>

        bmalloc: Added an XSmall line size
        https://bugs.webkit.org/show_bug.cgi?id=131851

        Reviewed by Sam Weinig.

        Reduces malloc footprint on Membuster recordings by 10%.

        This is a throughput regression, but we're still way ahead of TCMalloc.
        I have some ideas for how to recover the regression -- but I wanted to
        get this win in first.

        Full set of benchmark results:

        bmalloc> ~/webkit/PerformanceTests/MallocBench/run-malloc-benchmarks --measure-heap nopatch:~/scratch/Build-nopatch/Release/ patch:~/webkit/WebKitBuild/Release/

                                           nopatch     patch      Δ
        Peak Memory:
            reddit_memory_warning          7,896kB     7,532kB    ^ 1.05x smaller
            flickr_memory_warning          12,968kB    12,324kB   ^ 1.05x smaller
            theverge_memory_warning        16,672kB    15,200kB   ^ 1.1x smaller

            <geometric mean>               11,952kB    11,216kB   ^ 1.07x smaller
            <arithmetic mean>              12,512kB    11,685kB   ^ 1.07x smaller
            <harmonic mean>                11,375kB    10,726kB   ^ 1.06x smaller

        Memory at End:
            reddit_memory_warning          7,320kB     6,856kB    ^ 1.07x smaller
            flickr_memory_warning          10,848kB    9,692kB    ^ 1.12x smaller
            theverge_memory_warning        16,380kB    14,872kB   ^ 1.1x smaller

            <geometric mean>               10,916kB    9,961kB    ^ 1.1x smaller
            <arithmetic mean>              11,516kB    10,473kB   ^ 1.1x smaller
            <harmonic mean>                10,350kB    9,485kB    ^ 1.09x smaller

        MallocBench> ~/webkit/PerformanceTests/MallocBench/run-malloc-benchmarks nopatch:~/scratch/Build-nopatch/Release/ patch:~/webkit/WebKitBuild/Release/

                                           nopatch     patch      Δ
        Execution Time:
            churn                          127ms       151ms      ! 1.19x slower
            list_allocate                  130ms       164ms      ! 1.26x slower
            tree_allocate                  109ms       127ms      ! 1.17x slower
            tree_churn                     115ms       120ms      ! 1.04x slower
            facebook                       240ms       259ms      ! 1.08x slower
            fragment                       91ms        131ms      ! 1.44x slower
            fragment_iterate               105ms       106ms      ! 1.01x slower
            message_one                    260ms       259ms      ^ 1.0x faster
            message_many                   149ms       154ms      ! 1.03x slower
            medium                         194ms       248ms      ! 1.28x slower
            big                            157ms       160ms      ! 1.02x slower

            <geometric mean>               144ms       163ms      ! 1.13x slower
            <arithmetic mean>              152ms       171ms      ! 1.12x slower
            <harmonic mean>                137ms       156ms      ! 1.14x slower

        MallocBench> ~/webkit/PerformanceTests/MallocBench/run-malloc-benchmarks nopatch:~/scratch/Build-nopatch/Release/ patch:~/webkit/WebKitBuild/Release/

                                           nopatch     patch      Δ
        Execution Time:
            churn                          126ms       148ms      ! 1.17x slower
            churn --parallel               62ms        76ms       ! 1.23x slower
            list_allocate                  130ms       164ms      ! 1.26x slower
            list_allocate --parallel       120ms       175ms      ! 1.46x slower
            tree_allocate                  111ms       127ms      ! 1.14x slower
            tree_allocate --parallel       95ms        135ms      ! 1.42x slower
            tree_churn                     115ms       124ms      ! 1.08x slower
            tree_churn --parallel          107ms       126ms      ! 1.18x slower
            facebook                       240ms       276ms      ! 1.15x slower
            facebook --parallel            802ms       1,088ms    ! 1.36x slower
            fragment                       92ms        130ms      ! 1.41x slower
            fragment --parallel            66ms        124ms      ! 1.88x slower
            fragment_iterate               109ms       127ms      ! 1.17x slower
            fragment_iterate --parallel    55ms        64ms       ! 1.16x slower
            message_one                    260ms       260ms
            message_many                   170ms       238ms      ! 1.4x slower
            medium                         185ms       250ms      ! 1.35x slower
            medium --parallel              210ms       334ms      ! 1.59x slower
            big                            150ms       169ms      ! 1.13x slower
            big --parallel                 138ms       144ms      ! 1.04x slower

            <geometric mean>               135ms       170ms      ! 1.26x slower
            <arithmetic mean>              167ms       214ms      ! 1.28x slower
            <harmonic mean>                117ms       148ms      ! 1.26x slower

        MallocBench> ~/webkit/PerformanceTests/MallocBench/run-malloc-benchmarks TC:~/scratch/Build-TCMalloc/Release/ patch:~/webkit/WebKitBuild/Release/

                                           TC          patch      Δ
        Peak Memory:
            reddit_memory_warning          13,836kB    13,436kB   ^ 1.03x smaller
            flickr_memory_warning          24,868kB    25,188kB   ! 1.01x bigger
            theverge_memory_warning        24,504kB    26,636kB   ! 1.09x bigger

            <geometric mean>               20,353kB    20,812kB   ! 1.02x bigger
            <arithmetic mean>              21,069kB    21,753kB   ! 1.03x bigger
            <harmonic mean>                19,570kB    19,780kB   ! 1.01x bigger

        Memory at End:
            reddit_memory_warning          8,656kB     10,016kB   ! 1.16x bigger
            flickr_memory_warning          11,844kB    13,784kB   ! 1.16x bigger
            theverge_memory_warning        18,516kB    22,748kB   ! 1.23x bigger

            <geometric mean>               12,382kB    14,644kB   ! 1.18x bigger
            <arithmetic mean>              13,005kB    15,516kB   ! 1.19x bigger
            <harmonic mean>                11,813kB    13,867kB   ! 1.17x bigger

        MallocBench> ~/webkit/PerformanceTests/MallocBench/run-malloc-benchmarks TC:~/scratch/Build-TCMalloc/Release/ patch:~/webkit/WebKitBuild/Release/

                                           TC          patch      Δ
        Execution Time:
            churn                          416ms       148ms      ^ 2.81x faster
            list_allocate                  463ms       164ms      ^ 2.82x faster
            tree_allocate                  292ms       127ms      ^ 2.3x faster
            tree_churn                     157ms       120ms      ^ 1.31x faster
            facebook                       327ms       276ms      ^ 1.18x faster
            fragment                       335ms       129ms      ^ 2.6x faster
            fragment_iterate               344ms       108ms      ^ 3.19x faster
            message_one                    386ms       258ms      ^ 1.5x faster
            message_many                   410ms       154ms      ^ 2.66x faster
            medium                         391ms       245ms      ^ 1.6x faster
            big                            261ms       167ms      ^ 1.56x faster

            <geometric mean>               332ms       164ms      ^ 2.02x faster
            <arithmetic mean>              344ms       172ms      ^ 1.99x faster
            <harmonic mean>                317ms       157ms      ^ 2.02x faster

        * bmalloc.xcodeproj/project.pbxproj:
        * bmalloc/Allocator.cpp:
        (bmalloc::Allocator::Allocator): Don't assume that each allocator's
        index corresponds with its size. Instead, use the size selection function
        explicitly. Now that we have XSmall, some small allocator entries are
        unused.

        (bmalloc::Allocator::scavenge):
        (bmalloc::Allocator::log):
        (bmalloc::Allocator::processXSmallAllocatorLog):
        (bmalloc::Allocator::allocateSlowCase):
        * bmalloc/Allocator.h:
        (bmalloc::Allocator::xSmallAllocatorFor):
        (bmalloc::Allocator::allocateFastCase):
        * bmalloc/Chunk.h:
        * bmalloc/Deallocator.cpp:
        (bmalloc::Deallocator::scavenge):
        (bmalloc::Deallocator::processObjectLog):
        (bmalloc::Deallocator::deallocateSlowCase):
        (bmalloc::Deallocator::deallocateXSmallLine):
        (bmalloc::Deallocator::allocateXSmallLine):
        * bmalloc/Deallocator.h:
        (bmalloc::Deallocator::deallocateFastCase):
        * bmalloc/Heap.cpp:
        (bmalloc::Heap::scavenge):
        (bmalloc::Heap::scavengeXSmallPages):
        (bmalloc::Heap::allocateXSmallLineSlowCase):
        * bmalloc/Heap.h:
        (bmalloc::Heap::deallocateXSmallLine):
        (bmalloc::Heap::allocateXSmallLine):
        * bmalloc/LargeChunk.h:
        (bmalloc::LargeChunk::get):
        (bmalloc::LargeChunk::endTag):
        * bmalloc/Line.h:
        * bmalloc/MediumAllocator.h:
        (bmalloc::MediumAllocator::allocate):
        (bmalloc::MediumAllocator::refill):
        * bmalloc/ObjectType.cpp:
        (bmalloc::objectType):
        * bmalloc/ObjectType.h:
        (bmalloc::isXSmall):
        (bmalloc::isSmall):
        (bmalloc::isMedium):
        (bmalloc::isLarge):
        (bmalloc::isSmallOrMedium): Deleted.
        * bmalloc/SegregatedFreeList.h: I boiler-plate copied existing code for
        handling small objects. There's probably a reasonable way to share this
        code in the future -- I'll look into that once it's stopped changing.

        * bmalloc/Sizes.h: Tweaked size classes to make Membuster happy. This
        is the main reason things got slower.

        * bmalloc/SmallAllocator.h:
        (bmalloc::SmallAllocator::allocate):
        * bmalloc/SmallTraits.h:
        * bmalloc/VMHeap.cpp:
        (bmalloc::VMHeap::allocateXSmallChunk):
        * bmalloc/VMHeap.h:
        (bmalloc::VMHeap::allocateXSmallPage):
        (bmalloc::VMHeap::deallocateXSmallPage):
        * bmalloc/XSmallAllocator.h: Added.
        (bmalloc::XSmallAllocator::isNull):
        (bmalloc::XSmallAllocator::canAllocate):
        (bmalloc::XSmallAllocator::XSmallAllocator):
        (bmalloc::XSmallAllocator::line):
        (bmalloc::XSmallAllocator::allocate):
        (bmalloc::XSmallAllocator::objectCount):
        (bmalloc::XSmallAllocator::derefCount):
        (bmalloc::XSmallAllocator::refill):
        * bmalloc/XSmallChunk.h: Added.
        * bmalloc/XSmallLine.h: Added.
        * bmalloc/XSmallPage.h: Added.
        * bmalloc/XSmallTraits.h: Added.
        * bmalloc/bmalloc.h:
        (bmalloc::api::realloc): Boiler-plate copy, as above.

2014-04-14  Geoffrey Garen  <ggaren@apple.com>
trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
(r167289 → r167502)

Added PBXBuildFile entries:

+    142FCC78190080B8009032D4 /* XSmallChunk.h in Headers */ = {isa = PBXBuildFile; fileRef = 142FCC74190080B8009032D4 /* XSmallChunk.h */; };
+    142FCC79190080B8009032D4 /* XSmallLine.h in Headers */ = {isa = PBXBuildFile; fileRef = 142FCC75190080B8009032D4 /* XSmallLine.h */; };
+    142FCC7A190080B8009032D4 /* XSmallPage.h in Headers */ = {isa = PBXBuildFile; fileRef = 142FCC76190080B8009032D4 /* XSmallPage.h */; };
+    142FCC7B190080B8009032D4 /* XSmallTraits.h in Headers */ = {isa = PBXBuildFile; fileRef = 142FCC77190080B8009032D4 /* XSmallTraits.h */; };
+    142FCC7D1900815E009032D4 /* XSmallAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 142FCC7C1900815E009032D4 /* XSmallAllocator.h */; };

Added PBXFileReference entries:

+    142FCC74190080B8009032D4 /* XSmallChunk.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XSmallChunk.h; path = bmalloc/XSmallChunk.h; sourceTree = "<group>"; };
+    142FCC75190080B8009032D4 /* XSmallLine.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XSmallLine.h; path = bmalloc/XSmallLine.h; sourceTree = "<group>"; };
+    142FCC76190080B8009032D4 /* XSmallPage.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XSmallPage.h; path = bmalloc/XSmallPage.h; sourceTree = "<group>"; };
+    142FCC77190080B8009032D4 /* XSmallTraits.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XSmallTraits.h; path = bmalloc/XSmallTraits.h; sourceTree = "<group>"; };
+    142FCC7C1900815E009032D4 /* XSmallAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = XSmallAllocator.h; path = bmalloc/XSmallAllocator.h; sourceTree = "<group>"; };

Group and build-phase changes: XSmallChunk.h, XSmallLine.h, XSmallPage.h and
XSmallTraits.h were added to the "heap: small | medium" group, XSmallAllocator.h
was added to the cache group, and all five headers were added to the Headers
build phase.
trunk/Source/bmalloc/bmalloc/Allocator.cpp
(r167292 → r167502)

Allocator::Allocator:
     {
         unsigned short size = alignment;
-        for (auto& allocator : m_smallAllocators) {
-            allocator = SmallAllocator(size);
-            size += alignment;
-        }
+        for ( ; size <= xSmallMax; size += alignment)
+            xSmallAllocatorFor(size) = XSmallAllocator(size);
+
+        for ( ; size <= smallMax; size += alignment)
+            smallAllocatorFor(size) = SmallAllocator(size);
     }

Allocator::scavenge:
+        for (auto& allocator : m_xSmallAllocators)
+            log(allocator);
+        processXSmallAllocatorLog();
+
         for (auto& allocator : m_smallAllocators)
             log(allocator);

New functions:
+    void Allocator::log(XSmallAllocator& allocator)
+    {
+        if (allocator.isNull())
+            return;
+
+        if (m_xSmallAllocatorLog.size() == m_xSmallAllocatorLog.capacity())
+            processXSmallAllocatorLog();
+
+        m_xSmallAllocatorLog.push(std::make_pair(allocator.line(), allocator.derefCount()));
+    }
+
+    void Allocator::processXSmallAllocatorLog()
+    {
+        std::lock_guard<Mutex> lock(PerProcess<Heap>::mutex());
+
+        for (auto& logEntry : m_xSmallAllocatorLog) {
+            if (!logEntry.first->deref(lock, logEntry.second))
+                continue;
+            m_deallocator.deallocateXSmallLine(lock, logEntry.first);
+        }
+        m_xSmallAllocatorLog.clear();
+    }

Allocator::log(SmallAllocator&) and Allocator::log(MediumAllocator&): the
isNull() early return now comes before the capacity check, so a null allocator
no longer triggers a premature log flush.

Allocator::allocateSlowCase:
     BASSERT(!allocateFastCase(size, dummy));

+        if (size <= xSmallMax) {
+            XSmallAllocator& allocator = xSmallAllocatorFor(size);
+            log(allocator);
+            allocator.refill(m_deallocator.allocateXSmallLine());
+            return allocator.allocate();
+        }
+
         if (size <= smallMax) {
             SmallAllocator& allocator = smallAllocatorFor(size);
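The new XSmall log mirrors the existing small/medium pattern: retired (line, derefCount) pairs are buffered in a fixed-capacity log, and the per-process heap mutex is taken once per batch in processXSmallAllocatorLog rather than once per retired line. A standalone sketch of that batching idea (the names, `std::vector`, and `int` counters are illustrative simplifications, not bmalloc's types, which use a FixedVector of (XSmallLine*, unsigned char) pairs):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Line { int refCount = 0; };

class AllocatorLog {
public:
    // Stands in for xSmallAllocatorLogCapacity from Sizes.h.
    static constexpr std::size_t capacity = 32;

    // Buffer a deferred deref; flush only when the log is full.
    void log(Line* line, int derefCount)
    {
        if (m_log.size() == capacity)
            process();
        m_log.push_back({ line, derefCount });
    }

    // In bmalloc, this is the one place the heap mutex is acquired.
    void process()
    {
        for (auto& entry : m_log)
            entry.first->refCount -= entry.second;
        m_log.clear();
    }

private:
    std::vector<std::pair<Line*, int>> m_log;
};
```

The payoff is that the common path (log) touches only thread-local state; lock acquisition cost is amortized over a whole batch of line derefs.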
trunk/Source/bmalloc/bmalloc/Allocator.h
(r167292 → r167502)

+    #include "XSmallAllocator.h"

Declarations added alongside their Small/Medium counterparts:

+        XSmallAllocator& xSmallAllocatorFor(size_t);
+        void log(XSmallAllocator&);
+        void processXSmallAllocatorLog();
+
+        std::array<XSmallAllocator, xSmallMax / alignment> m_xSmallAllocators;
+        FixedVector<std::pair<XSmallLine*, unsigned char>, xSmallAllocatorLogCapacity> m_xSmallAllocatorLog;

New inline function:
+    inline XSmallAllocator& Allocator::xSmallAllocatorFor(size_t size)
+    {
+        size_t index = mask((size - 1ul) / alignment, m_xSmallAllocators.size() - 1);
+        return m_xSmallAllocators[index];
+    }

Allocator::allocateFastCase now dispatches by size class instead of a single
smallMax check:

-        if (size > smallMax)
-            return false;
-
-        SmallAllocator& allocator = smallAllocatorFor(size);
-        if (!allocator.canAllocate())
-            return false;
-
-        object = allocator.allocate();
-        return true;
+        if (size <= xSmallMax) {
+            XSmallAllocator& allocator = xSmallAllocatorFor(size);
+            if (!allocator.canAllocate())
+                return false;
+
+            object = allocator.allocate();
+            return true;
+        }
+
+        if (size <= smallMax) {
+            SmallAllocator& allocator = smallAllocatorFor(size);
+            if (!allocator.canAllocate())
+                return false;
+
+            object = allocator.allocate();
+            return true;
+        }
+
+        return false;
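The xSmallAllocatorFor index math above maps a request size to its size class with one subtract, shift, and mask rather than a lookup table. A minimal sketch of the same arithmetic (assuming bmalloc's `alignment` is 8, giving xSmallMax / alignment = 8 XSmall size classes; `maskIndex` stands in for bmalloc's `mask` helper):

```cpp
#include <cstddef>

// Constants from the Sizes.h in this changeset; `alignment` = 8 is an
// assumption for illustration.
constexpr std::size_t alignment = 8;
constexpr std::size_t xSmallMax = 64;
constexpr std::size_t xSmallAllocatorCount = xSmallMax / alignment;

// bmalloc's mask() is a bitwise AND; the allocator count is a power of
// two, so AND-ing with (count - 1) keeps the index in range.
constexpr std::size_t maskIndex(std::size_t value, std::size_t m) { return value & m; }

// Same arithmetic as Allocator::xSmallAllocatorFor: sizes 1..8 map to
// index 0, sizes 9..16 to index 1, ..., sizes 57..64 to index 7.
constexpr std::size_t xSmallIndexFor(std::size_t size)
{
    return maskIndex((size - 1ul) / alignment, xSmallAllocatorCount - 1);
}
```

The `size - 1` makes every range of `alignment` sizes that rounds up to the same class share one allocator, which is exactly why the constructor can no longer assume index i holds size (i + 1) * alignment unconditionally.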
trunk/Source/bmalloc/bmalloc/Chunk.h
(r166956 → r167502)

Chunk<Traits>::get:
     {
-        BASSERT(isSmallOrMedium(object));
+        BASSERT(!isLarge(object));
         return static_cast<Chunk*>(mask(object, chunkMask));
     }
trunk/Source/bmalloc/bmalloc/Deallocator.cpp
(r167292 → r167502)

+    #include "XSmallChunk.h"

Deallocator::scavenge:
+        while (m_xSmallLineCache.size())
+            heap->deallocateXSmallLine(lock, m_xSmallLineCache.pop());
         while (m_smallLineCache.size())
             heap->deallocateSmallLine(lock, m_smallLineCache.pop());

Deallocator::processObjectLog:
         for (auto object : m_objectLog) {
-            if (isSmall(object)) {
+            if (isXSmall(object)) {
+                XSmallLine* line = XSmallLine::get(object);
+                if (!line->deref(lock))
+                    continue;
+                deallocateXSmallLine(lock, line);
+            } else if (isSmall(object)) {
                 SmallLine* line = SmallLine::get(object);
                 if (!line->deref(lock))
                     continue;
                 deallocateSmallLine(lock, line);
             } else {
-                BASSERT(isSmallOrMedium(object));
+                BASSERT(isMedium(object));
                 MediumLine* line = MediumLine::get(object);

Deallocator::deallocateSlowCase:
-        if (isSmallOrMedium(object)) {
+        if (!isLarge(object)) {
             processObjectLog();
             m_objectLog.push(object);

New functions:
+    void Deallocator::deallocateXSmallLine(std::lock_guard<Mutex>& lock, XSmallLine* line)
+    {
+        if (m_xSmallLineCache.size() == m_xSmallLineCache.capacity())
+            return PerProcess<Heap>::getFastCase()->deallocateXSmallLine(lock, line);
+
+        m_xSmallLineCache.push(line);
+    }
+
+    XSmallLine* Deallocator::allocateXSmallLine()
+    {
+        if (!m_xSmallLineCache.size()) {
+            std::lock_guard<Mutex> lock(PerProcess<Heap>::mutex());
+            Heap* heap = PerProcess<Heap>::getFastCase();
+
+            while (m_xSmallLineCache.size() != m_xSmallLineCache.capacity())
+                m_xSmallLineCache.push(heap->allocateXSmallLine(lock));
+        }
+
+        return m_xSmallLineCache.pop();
+    }
trunk/Source/bmalloc/bmalloc/Deallocator.h
(r167292 → r167502)

+    #include "XSmallLine.h"

+        void deallocateXSmallLine(std::lock_guard<Mutex>&, XSmallLine*);
+        XSmallLine* allocateXSmallLine();
+
+        FixedVector<XSmallLine*, xSmallLineCacheCapacity> m_xSmallLineCache;

Deallocator::deallocateFastCase:
-        if (!isSmallOrMedium(object))
+        if (isLarge(object))
             return false;
trunk/Source/bmalloc/bmalloc/Heap.cpp
(r167292 → r167502)

+    #include "XSmallChunk.h"

Heap::scavenge:
+        scavengeXSmallPages(lock, sleepDuration);
         scavengeSmallPages(lock, sleepDuration);
         scavengeMediumPages(lock, sleepDuration);

New functions:
+    void Heap::scavengeXSmallPages(std::unique_lock<Mutex>& lock, std::chrono::milliseconds sleepDuration)
+    {
+        while (1) {
+            if (m_isAllocatingPages) {
+                m_isAllocatingPages = false;
+
+                sleep(lock, sleepDuration);
+                continue;
+            }
+
+            if (!m_xSmallPages.size())
+                return;
+            m_vmHeap.deallocateXSmallPage(lock, m_xSmallPages.pop());
+        }
+    }
+
+    XSmallLine* Heap::allocateXSmallLineSlowCase(std::lock_guard<Mutex>& lock)
+    {
+        m_isAllocatingPages = true;
+
+        XSmallPage* page = [this]() {
+            if (m_xSmallPages.size())
+                return m_xSmallPages.pop();
+
+            XSmallPage* page = m_vmHeap.allocateXSmallPage();
+            vmAllocatePhysicalPages(page->begin()->begin(), vmPageSize);
+            return page;
+        }();
+
+        XSmallLine* line = page->begin();
+        for (auto it = line + 1; it != page->end(); ++it)
+            m_xSmallLines.push(it);
+
+        page->ref(lock);
+        return line;
+    }
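allocateXSmallLineSlowCase hands the first line of a fresh page to the caller and queues every remaining line on the heap's free-line list. A sketch of just that carving step (xSmallLineSize = 256 comes from the Sizes.h in this changeset; the 4096-byte VM page size is an assumption for illustration, as it is platform dependent):

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t xSmallLineSize = 256;  // from Sizes.h
constexpr std::size_t vmPageSize = 4096;     // assumed for illustration
constexpr std::size_t linesPerPage = vmPageSize / xSmallLineSize;

struct Page { char bytes[vmPageSize]; };

// Mirrors the carving loop in allocateXSmallLineSlowCase: return the
// first line and push every remaining line onto the free list.
char* carvePage(Page& page, std::vector<char*>& freeLines)
{
    char* firstLine = page.bytes;
    for (std::size_t i = 1; i < linesPerPage; ++i)
        freeLines.push_back(page.bytes + i * xSmallLineSize);
    return firstLine;
}
```

With these numbers a page yields 16 lines: one returned immediately, fifteen queued, so subsequent XSmall line requests are satisfied without touching the VM heap again.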
trunk/Source/bmalloc/bmalloc/Heap.h
(r167292 → r167502)

Includes: MediumLine.h, SmallPage.h, MediumPage.h, and SmallLine.h removed;
XSmallChunk.h added.

Declarations added alongside their Small/Medium counterparts:

+        XSmallLine* allocateXSmallLine(std::lock_guard<Mutex>&);
+        void deallocateXSmallLine(std::lock_guard<Mutex>&, XSmallLine*);
+        XSmallLine* allocateXSmallLineSlowCase(std::lock_guard<Mutex>&);
+        void scavengeXSmallPages(std::unique_lock<Mutex>&, std::chrono::milliseconds);
+
+        Vector<XSmallLine*> m_xSmallLines;
+        Vector<XSmallPage*> m_xSmallPages;

New inline functions:
+    inline void Heap::deallocateXSmallLine(std::lock_guard<Mutex>& lock, XSmallLine* line)
+    {
+        XSmallPage* page = XSmallPage::get(line);
+        if (page->deref(lock)) {
+            m_xSmallPages.push(page);
+            m_scavenger.run();
+            return;
+        }
+        m_xSmallLines.push(line);
+    }
+
+    inline XSmallLine* Heap::allocateXSmallLine(std::lock_guard<Mutex>& lock)
+    {
+        while (m_xSmallLines.size()) {
+            XSmallLine* line = m_xSmallLines.pop();
+            XSmallPage* page = XSmallPage::get(line);
+            if (!page->refCount(lock)) // The line was promoted to the small pages list.
+                continue;
+            page->ref(lock);
+            return line;
+        }
+
+        return allocateXSmallLineSlowCase(lock);
+    }
trunk/Source/bmalloc/bmalloc/LargeChunk.h
(r166956 → r167502)

LargeChunk::get:
-        BASSERT(!isSmallOrMedium(object));
+        BASSERT(isLarge(object));
         return static_cast<LargeChunk*>(mask(object, largeChunkMask));

LargeChunk::endTag:
-        BASSERT(!isSmallOrMedium(object));
+        BASSERT(isLarge(object));

         LargeChunk* chunk = get(object);
trunk/Source/bmalloc/bmalloc/Line.h
(r166956 → r167502)

Line<Traits>::get:
-        BASSERT(isSmallOrMedium(object));
+        BASSERT(!isLarge(object));
         Chunk* chunk = Chunk::get(object);
         size_t lineNumber = (reinterpret_cast<char*>(object) - reinterpret_cast<char*>(chunk)) / lineSize;
trunk/Source/bmalloc/bmalloc/MediumAllocator.h
(r166956 → r167502)

MediumAllocator::allocate:
         m_remaining -= size;
         void* object = m_end - m_remaining - size;
-        BASSERT(isSmallOrMedium(object) && !isSmall(object));
+        BASSERT(objectType(object) == Medium);

         ++m_objectCount;

MediumAllocator::refill:
         m_remaining = mediumLineSize;
         m_objectCount = 0;
+        BASSERT(objectType(m_end - 1) == Medium);
trunk/Source/bmalloc/bmalloc/ObjectType.cpp
(r166893 → r167502)

objectType() now switches on the address's type bits instead of testing
predicates in sequence:

-        if (isSmallOrMedium(object)) {
-            if (isSmall(object))
-                return Small;
-            return Medium;
-        }
-
-        BeginTag* beginTag = LargeChunk::beginTag(object);
-        if (!beginTag->isXLarge())
-            return Large;
-        return XLarge;
+        switch (mask(reinterpret_cast<uintptr_t>(object), typeMask)) {
+        case xSmallType: {
+            return XSmall;
+        }
+        case smallType: {
+            return Small;
+        }
+        case mediumType: {
+            return Medium;
+        }
+        case largeType: {
+            BeginTag* beginTag = LargeChunk::beginTag(object);
+            if (!beginTag->isXLarge())
+                return Large;
+            return XLarge;
+        }
+        default: {
+            RELEASE_BASSERT(false);
+            return XLarge;
+        }
+        }
trunk/Source/bmalloc/bmalloc/ObjectType.h
(r166956 → r167502)

-    enum ObjectType { Small, Medium, Large, XLarge };
+    enum ObjectType { XSmall, Small, Medium, Large, XLarge };

     ObjectType objectType(void*);

-    inline bool isSmallOrMedium(void* object)
-    {
-        return test(object, smallOrMediumTypeMask);
-    }
-
-    inline bool isSmall(void* smallOrMedium)
-    {
-        BASSERT(isSmallOrMedium(smallOrMedium));
-        return test(smallOrMedium, smallOrMediumSmallTypeMask);
-    }
+    inline bool isXSmall(void* object)
+    {
+        return mask(reinterpret_cast<uintptr_t>(object), typeMask) == xSmallType;
+    }
+
+    inline bool isSmall(void* object)
+    {
+        return mask(reinterpret_cast<uintptr_t>(object), typeMask) == smallType;
+    }
+
+    inline bool isMedium(void* object)
+    {
+        return mask(reinterpret_cast<uintptr_t>(object), typeMask) == mediumType;
+    }
+
+    inline bool isLarge(void* object)
+    {
+        return mask(reinterpret_cast<uintptr_t>(object), typeMask) == largeType;
+    }
trunk/Source/bmalloc/bmalloc/SegregatedFreeList.h
(r166893 → r167502)

     Range takeGreedy(List&, size_t);

-    std::array<List, 19> m_lists;
+    std::array<List, 18> m_lists;
trunk/Source/bmalloc/bmalloc/Sizes.h
(r166893 → r167502)

     static const size_t superChunkSize = 32 * MB;

-    static const size_t smallMax = 256;
-    static const size_t smallLineSize = 512;
+    static const size_t xSmallMax = 64;
+    static const size_t xSmallLineSize = 256;
+    static const size_t xSmallLineMask = ~(xSmallLineSize - 1ul);
+
+    static const size_t xSmallChunkSize = superChunkSize / 4;
+    static const size_t xSmallChunkOffset = superChunkSize * 1 / 4;
+    static const size_t xSmallChunkMask = ~(xSmallChunkSize - 1ul);
+
+    static const size_t smallMax = 128;
+    static const size_t smallLineSize = 256;
     static const size_t smallLineMask = ~(smallLineSize - 1ul);

     static const size_t smallChunkSize = superChunkSize / 4;
-    static const size_t smallChunkOffset = superChunkSize * 3 / 4;
+    static const size_t smallChunkOffset = superChunkSize * 2 / 4;
     static const size_t smallChunkMask = ~(smallChunkSize - 1ul);

-    static const size_t mediumMax = 1024;
-    static const size_t mediumLineSize = 2048;
+    static const size_t mediumMax = 256;
+    static const size_t mediumLineSize = 512;
     static const size_t mediumLineMask = ~(mediumLineSize - 1ul);

     static const size_t mediumChunkSize = superChunkSize / 4;
-    static const size_t mediumChunkOffset = superChunkSize * 2 / 4;
+    static const size_t mediumChunkOffset = superChunkSize * 3 / 4;
     static const size_t mediumChunkMask = ~(mediumChunkSize - 1ul);

-    static const size_t largeChunkSize = superChunkSize / 2;
-    static const size_t largeChunkOffset = 0;
+    static const size_t largeChunkSize = superChunkSize / 4;
+    static const size_t largeChunkOffset = superChunkSize * 0 / 4;
     static const size_t largeChunkMask = ~(largeChunkSize - 1ul);

     static const size_t largeAlignment = 64;
     static const size_t largeMax = largeChunkSize * 99 / 100; // Plenty of room for metadata.
-    static const size_t largeMin = 1024;
+    static const size_t largeMin = mediumMax;

     static const size_t segregatedFreeListSearchDepth = 16;

     static const uintptr_t typeMask = (superChunkSize - 1) & ~((superChunkSize / 4) - 1); // 4 taggable chunks
+    static const uintptr_t xSmallType = (superChunkSize + xSmallChunkOffset) & typeMask;
     static const uintptr_t smallType = (superChunkSize + smallChunkOffset) & typeMask;
     static const uintptr_t mediumType = (superChunkSize + mediumChunkOffset) & typeMask;
-    static const uintptr_t largeTypeMask = ~(mediumType & smallType);
-    static const uintptr_t smallOrMediumTypeMask = mediumType & smallType;
-    static const uintptr_t smallOrMediumSmallTypeMask = smallType ^ mediumType; // Only valid if object is known to be small or medium.
+    static const uintptr_t largeType = (superChunkSize + largeChunkOffset) & typeMask;

     static const size_t deallocatorLogCapacity = 256;

+    static const size_t xSmallLineCacheCapacity = 32;
     static const size_t smallLineCacheCapacity = 16;
     static const size_t mediumLineCacheCapacity = 8;

+    static const size_t xSmallAllocatorLogCapacity = 32;
     static const size_t smallAllocatorLogCapacity = 16;
     static const size_t mediumAllocatorLogCapacity = 8;
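These constants make object classification a pure bit test: each 32MB superchunk is split into four 8MB chunks at fixed offsets, so two bits of an object's address identify its chunk type, which is what the new isXSmall/isSmall/isMedium/isLarge predicates compare against. The arithmetic can be checked standalone (constants copied from the Sizes.h above; `hasType` is an illustrative stand-in for the predicates, and 0x40000000 in the checks below is just an arbitrary superchunk-aligned base address):

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t MB = 1024 * 1024;
constexpr std::size_t superChunkSize = 32 * MB;

constexpr std::size_t largeChunkOffset  = superChunkSize * 0 / 4;
constexpr std::size_t xSmallChunkOffset = superChunkSize * 1 / 4;
constexpr std::size_t smallChunkOffset  = superChunkSize * 2 / 4;
constexpr std::size_t mediumChunkOffset = superChunkSize * 3 / 4;

// Keeps exactly the two address bits that distinguish the four 8MB
// chunk slots within a 32MB superchunk.
constexpr std::uintptr_t typeMask = (superChunkSize - 1) & ~((superChunkSize / 4) - 1);

constexpr std::uintptr_t xSmallType = (superChunkSize + xSmallChunkOffset) & typeMask;
constexpr std::uintptr_t smallType  = (superChunkSize + smallChunkOffset) & typeMask;
constexpr std::uintptr_t mediumType = (superChunkSize + mediumChunkOffset) & typeMask;
constexpr std::uintptr_t largeType  = (superChunkSize + largeChunkOffset) & typeMask;

// Each isXSmall/isSmall/isMedium/isLarge predicate reduces to this.
constexpr bool hasType(std::uintptr_t address, std::uintptr_t type)
{
    return (address & typeMask) == type;
}
```

This is also why the old largeTypeMask/smallOrMediumTypeMask derivations could be deleted: with four chunk slots, every type gets its own distinct two-bit tag and a direct equality compare suffices.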
trunk/Source/bmalloc/bmalloc/SmallAllocator.h
(r166956 → r167502)

     #include "BAssert.h"
     #include "SmallChunk.h"
-    #include "SmallLine.h"

SmallAllocator::allocate:
         char* result = m_ptr;
         m_ptr += m_size;
-        BASSERT(isSmall(result));
+        BASSERT(objectType(result) == Small);
         return result;
trunk/Source/bmalloc/bmalloc/SmallTraits.h
(r166893 → r167502)

     static const size_t lineSize = smallLineSize;
-    static const size_t minimumObjectSize = alignment;
+    static const size_t minimumObjectSize = xSmallMax + alignment;
     static const size_t chunkSize = smallChunkSize;
     static const size_t chunkOffset = smallChunkOffset;
trunk/Source/bmalloc/bmalloc/VMHeap.cpp
(r166893 → r167502)

New function:
+    void VMHeap::allocateXSmallChunk()
+    {
+        XSmallChunk* chunk = XSmallChunk::create();
+        for (auto* it = chunk->begin(); it != chunk->end(); ++it)
+            m_xSmallPages.push(it);
+    }
trunk/Source/bmalloc/bmalloc/VMHeap.h
(r166893 → r167502)

+    #include "XSmallChunk.h"

Declarations added alongside their Small/Medium counterparts:

+        XSmallPage* allocateXSmallPage();
+        void deallocateXSmallPage(std::unique_lock<Mutex>&, XSmallPage*);
+        void allocateXSmallChunk();
+        Vector<XSmallPage*> m_xSmallPages;

New inline functions:
+    inline XSmallPage* VMHeap::allocateXSmallPage()
+    {
+        if (!m_xSmallPages.size())
+            allocateXSmallChunk();
+
+        return m_xSmallPages.pop();
+    }
+
+    inline void VMHeap::deallocateXSmallPage(std::unique_lock<Mutex>& lock, XSmallPage* page)
+    {
+        lock.unlock();
+        vmDeallocatePhysicalPages(page->begin()->begin(), vmPageSize);
+        lock.lock();
+
+        m_xSmallPages.push(page);
+    }
trunk/Source/bmalloc/bmalloc/bmalloc.h
(r167292 → r167502)

api::realloc:
         size_t oldSize = 0;
-        switch(objectType(object)) {
+        switch (objectType(object)) {
+        case XSmall: {
+            // We don't have an exact size, but we can calculate a maximum.
+            void* end = roundUpToMultipleOf<xSmallLineSize>(static_cast<char*>(object) + 1);
+            oldSize = static_cast<char*>(end) - static_cast<char*>(object);
+            break;
+        }
         case Small: {
             // We don't have an exact size, but we can calculate a maximum.
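The XSmall branch of realloc cannot recover the exact allocation size, so it uses the distance to the next line boundary as an upper bound: an object always ends at or before the end of its 256-byte line. The arithmetic, sketched standalone (this runtime `roundUpToMultipleOf` is a simplified stand-in for bmalloc's compile-time template, valid only for power-of-two divisors):

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t xSmallLineSize = 256; // from Sizes.h

// Round x up to the next multiple of a power-of-two divisor.
constexpr std::uintptr_t roundUpToMultipleOf(std::size_t divisor, std::uintptr_t x)
{
    return (x + divisor - 1) & ~(static_cast<std::uintptr_t>(divisor) - 1);
}

// Conservative oldSize for an XSmall object: the distance from the
// object's address to the next xSmallLineSize boundary. The `+ 1`
// ensures an object starting exactly on a boundary gets a full line.
constexpr std::size_t maxOldSize(std::uintptr_t object)
{
    return roundUpToMultipleOf(xSmallLineSize, object + 1) - object;
}
```

Overestimating is safe here: realloc only uses oldSize to bound how many bytes to copy into the new allocation, and reading to the end of the line stays within memory the allocator owns.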