Changeset 64695 in webkit
- Timestamp: Aug 4, 2010 5:18:58 PM
- Location: trunk
- Files: 2 added, 13 edited, 1 copied
trunk/JavaScriptCore/ChangeLog
r64684 → r64695

2010-08-04  Gavin Barraclough  <barraclough@apple.com>

        Reviewed by Sam Weinig.

        Bug 43515 - Fix small design issues with PageAllocation, split out PageReservation.

        The PageAllocation class has a number of issues:
        * Changes in bug #43269 accidentally switched SYMBIAN over to use malloc/free to
          allocate blocks of memory for the GC heap, instead of allocating RChunks.
          Revert this change in behaviour.
        * In order for PageAllocation to work correctly on WinCE we should be decommitting
          memory before deallocating. In order to simplify understanding the expected
          state at deallocate, split behaviour out into PageAllocation and PageReservation
          classes. Require that all memory be decommitted before calling deallocate on a
          PageReservation, add asserts to enforce this.
        * add many missing asserts.
        * inline more functions.
        * remove ability to create sub-PageAllocations from an existing PageAllocations
          object - this presented an interface that would allow sub regions to be
          deallocated, which would not have provided expected behaviour.
        * remove writable/executable arguments to commit, this value can be cached at
          the point the memory is reserved.
        * remove writable/executable arguments to allocateAligned, protection other than
          RW is not supported.
        * add missing checks for overflow & failed allocation to mmap path through
          allocateAligned.

        * JavaScriptCore.xcodeproj/project.pbxproj:
        * jit/ExecutableAllocator.cpp:
        (JSC::ExecutableAllocator::intializePageSize):
        * jit/ExecutableAllocator.h:
        (JSC::ExecutablePool::Allocation::Allocation):
        (JSC::ExecutablePool::Allocation::base):
        (JSC::ExecutablePool::Allocation::size):
        (JSC::ExecutablePool::Allocation::operator!):
        * jit/ExecutableAllocatorFixedVMPool.cpp:
        (JSC::FixedVMPoolAllocator::reuse):
        (JSC::FixedVMPoolAllocator::coalesceFreeSpace):
        (JSC::FixedVMPoolAllocator::FixedVMPoolAllocator):
        (JSC::FixedVMPoolAllocator::alloc):
        (JSC::FixedVMPoolAllocator::free):
        (JSC::FixedVMPoolAllocator::allocInternal):
        * runtime/AlignedMemoryAllocator.h:
        (JSC::::allocate):
        (JSC::::AlignedMemoryAllocator):
        * runtime/Collector.cpp:
        (JSC::Heap::allocateBlock):
        * runtime/Collector.h:
        * wtf/PageAllocation.cpp:
        * wtf/PageAllocation.h:
        (WTF::PageAllocation::operator!):
        (WTF::PageAllocation::allocate):
        (WTF::PageAllocation::allocateAt):
        (WTF::PageAllocation::allocateAligned):
        (WTF::PageAllocation::deallocate):
        (WTF::PageAllocation::pageSize):
        (WTF::PageAllocation::systemAllocate):
        (WTF::PageAllocation::systemAllocateAt):
        (WTF::PageAllocation::systemAllocateAligned):
        (WTF::PageAllocation::systemDeallocate):
        (WTF::PageAllocation::systemPageSize):
        * wtf/PageReservation.h: Copied from JavaScriptCore/wtf/PageAllocation.h.
        (WTF::PageReservation::PageReservation):
        (WTF::PageReservation::commit):
        (WTF::PageReservation::decommit):
        (WTF::PageReservation::reserve):
        (WTF::PageReservation::reserveAt):
        (WTF::PageReservation::deallocate):
        (WTF::PageReservation::systemCommit):
        (WTF::PageReservation::systemDecommit):
        (WTF::PageReservation::systemReserve):
        (WTF::PageReservation::systemReserveAt):
        * wtf/Platform.h:

2010-08-04  Sheriff Bot  <webkit.review.bot@gmail.com>
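The last bullet's overflow check is the standard unsigned-wraparound test: an unsigned sum that wraps is smaller than either operand. A minimal sketch of the pattern (the helper name is illustrative, not code from this changeset):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Returns true if size + extra would wrap around past SIZE_MAX.
// Unsigned arithmetic wraps modulo 2^N, so the sum is smaller than
// an operand exactly when overflow occurred.
bool additionWouldOverflow(size_t size, size_t extra)
{
    return size + extra < size;
}
```

This is why the patched allocateAligned path can bail out with a single comparison before calling mmap.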
trunk/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r64626 → r64695

…
  868BFA18117CF19900B908B1 /* WTFString.h in Headers */ = {isa = PBXBuildFile; fileRef = 868BFA16117CF19900B908B1 /* WTFString.h */; settings = {ATTRIBUTES = (Private, ); }; };
  868BFA60117D048200B908B1 /* StaticConstructors.h in Headers */ = {isa = PBXBuildFile; fileRef = 868BFA5F117D048200B908B1 /* StaticConstructors.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 8690231512092D5C00630AF9 /* PageReservation.h in Headers */ = {isa = PBXBuildFile; fileRef = 8690231412092D5C00630AF9 /* PageReservation.h */; settings = {ATTRIBUTES = (Private, ); }; };
  8698B86910D44D9400D8D01B /* StringBuilder.h in Headers */ = {isa = PBXBuildFile; fileRef = 8698B86810D44D9400D8D01B /* StringBuilder.h */; settings = {ATTRIBUTES = (Private, ); }; };
  8698BB3910D86BAF00D8D01B /* UStringImpl.h in Headers */ = {isa = PBXBuildFile; fileRef = 8698BB3710D86BAF00D8D01B /* UStringImpl.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
  868BFA16117CF19900B908B1 /* WTFString.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = WTFString.h; path = text/WTFString.h; sourceTree = "<group>"; };
  868BFA5F117D048200B908B1 /* StaticConstructors.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StaticConstructors.h; sourceTree = "<group>"; };
+ 8690231412092D5C00630AF9 /* PageReservation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PageReservation.h; sourceTree = "<group>"; };
  8698B86810D44D9400D8D01B /* StringBuilder.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StringBuilder.h; sourceTree = "<group>"; };
  8698BB3710D86BAF00D8D01B /* UStringImpl.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = UStringImpl.h; sourceTree = "<group>"; };
…
  8627E5E911F1281900A313B5 /* PageAllocation.cpp */,
  8627E5EA11F1281900A313B5 /* PageAllocation.h */,
+ 8690231412092D5C00630AF9 /* PageReservation.h */,
  44DD48520FAEA85000D6B4EB /* PassOwnPtr.h */,
  6580F795094070560082C219 /* PassRefPtr.h */,
…
  4409D8470FAF80A200523B87 /* OwnPtrCommon.h in Headers */,
  8627E5EC11F1281900A313B5 /* PageAllocation.h in Headers */,
+ 8690231512092D5C00630AF9 /* PageReservation.h in Headers */,
  BC18C44B0E16F5CD00B34460 /* Parser.h in Headers */,
  93052C350FB792190048FDC3 /* ParserArena.h in Headers */,
…
  buildConfigurationList = 149C277108902AFE008A9EFC /* Build configuration list for PBXProject "JavaScriptCore" */;
  compatibilityVersion = "Xcode 2.4";
- developmentRegion = English;
  hasScannedForEncodings = 1;
  knownRegions = (
trunk/JavaScriptCore/jit/ExecutableAllocator.cpp
r64608 → r64695

     ExecutableAllocator::pageSize = 256 * 1024;
 #else
-    ExecutableAllocator::pageSize = PageAllocation::pagesize();
+    ExecutableAllocator::pageSize = PageAllocation::pageSize();
 #endif
 }
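The renamed pageSize() accessor ultimately wraps the platform page-size query; on POSIX that query is getpagesize() or sysconf. A small self-contained sketch (the function name and fallback value are assumptions, not WebKit code):

```cpp
#include <cstddef>
#include <unistd.h>

// Query the system page size via POSIX sysconf; getpagesize() is the
// older BSD spelling the patch uses. Falls back to a common default
// if the query fails (an assumption for illustration only).
size_t systemPageSize()
{
    long result = sysconf(_SC_PAGESIZE);
    return result > 0 ? static_cast<size_t>(result) : 4096;
}
```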
trunk/JavaScriptCore/jit/ExecutableAllocator.h
r64619 → r64695

 class ExecutablePool : public RefCounted<ExecutablePool> {
-private:
+public:
+#if ENABLE(EXECUTABLE_ALLOCATOR_DEMAND)
     typedef PageAllocation Allocation;
+#else
+    class Allocation {
+    public:
+        Allocation(void* base, size_t size)
+            : m_base(base)
+            , m_size(size)
+        {
+        }
+        void* base() { return m_base; }
+        size_t size() { return m_size; }
+        bool operator!() const { return !m_base; }
+
+    private:
+        void* m_base;
+        size_t m_size;
+    };
+#endif
     typedef Vector<Allocation, 2> AllocationList;
 
-public:
     static PassRefPtr<ExecutablePool> create(size_t n);
trunk/JavaScriptCore/jit/ExecutableAllocatorFixedVMPool.cpp
r64608 → r64695

 #include <unistd.h>
 #include <wtf/AVLTree.h>
+#include <wtf/PageReservation.h>
 #include <wtf/VMTags.h>
…
     void reuse(void* position, size_t size)
     {
-        bool okay = m_allocation.commit(position, size, EXECUTABLE_POOL_WRITABLE, true);
+        bool okay = m_allocation.commit(position, size);
         ASSERT_UNUSED(okay, okay);
     }
 
     // All addition to the free list should go through this method, rather than
-    // calling insert directly, to avoid multiple entries beging added with the
+    // calling insert directly, to avoid multiple entries being added with the
     // same key. All nodes being added should be singletons, they should not
     // already be a part of a chain.
…
     // We do not attempt to coalesce addition, which may lead to fragmentation;
-    // instead we periodically perform a sweep to try to coalesce neigboring
+    // instead we periodically perform a sweep to try to coalesce neighboring
     // entries in m_freeList. Presently this is triggered at the point 16MB
     // of memory has been released.
…
             // Each entry in m_freeList might correspond to multiple
             // free chunks of memory (of the same size). Walk the chain
-            // (this is likely of couse only be one entry long!) adding
+            // (this is likely of course only be one entry long!) adding
             // each entry to the Vector (at reseting the next in chain
             // pointer to separate each node out).
…
         // But! - as a temporary workaround for some plugin problems (rdar://problem/6812854),
         // for now instead of 2^26 bits of ASLR lets stick with 25 bits of randomization plus
-        // 2^24, which should put up somewhere in the middle of use space (in the address range
+        // 2^24, which should put up somewhere in the middle of userspace (in the address range
         // 0x200000000000 .. 0x5fffffffffff).
 #if VM_POOL_ASLR
…
         randomLocation += (1 << 24);
         randomLocation <<= 21;
-        m_allocation = PageAllocation::reserveAt(reinterpret_cast<void*>(randomLocation), false, totalHeapSize, PageAllocation::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true);
+        m_allocation = PageReservation::reserveAt(reinterpret_cast<void*>(randomLocation), false, totalHeapSize, PageAllocation::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true);
 #else
-        m_allocation = PageAllocation::reserve(totalHeapSize, PageAllocation::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true);
+        m_allocation = PageReservation::reserve(totalHeapSize, PageAllocation::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true);
 #endif
…
-    PageAllocation alloc(size_t size)
-    {
-        return PageAllocation(allocInternal(size), size, m_allocation);
-    }
-
-    void free(PageAllocation allocation)
+    ExecutablePool::Allocation alloc(size_t size)
+    {
+        return ExecutablePool::Allocation(allocInternal(size), size);
+    }
+
+    void free(ExecutablePool::Allocation allocation)
     {
         void* pointer = allocation.base();
…
             m_commonSizedAllocations.removeLast();
         } else {
-            // Serach m_freeList for a suitable sized chunk to allocate memory from.
+            // Search m_freeList for a suitable sized chunk to allocate memory from.
             FreeListEntry* entry = m_freeList.search(size, m_freeList.GREATER_EQUAL);
 
             // This is bad news.
             if (!entry) {
-                // Errk! Lets take a last-ditch desparation attempt at defragmentation...
+                // Errk! Lets take a last-ditch desperation attempt at defragmentation...
                 coalesceFreeSpace();
                 // Did that free up a large enough chunk?
…
     size_t m_countFreedSinceLastCoalesce;
 
-    PageAllocation m_allocation;
+    PageReservation m_allocation;
 };
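The coalesceFreeSpace() sweep described in the comments merges neighboring free chunks into larger ones. A simplified illustration of the idea over plain (base, size) pairs (this is a hypothetical helper, not the AVL-tree-based implementation in this file):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct FreeChunk {
    uintptr_t base;
    size_t size;
};

// Sort free chunks by address, then merge any chunk that starts exactly
// where the previous one ends - the "neighboring entries" case the
// allocator's periodic sweep is after.
std::vector<FreeChunk> coalesce(std::vector<FreeChunk> chunks)
{
    std::sort(chunks.begin(), chunks.end(),
              [](const FreeChunk& a, const FreeChunk& b) { return a.base < b.base; });
    std::vector<FreeChunk> merged;
    for (const FreeChunk& chunk : chunks) {
        if (!merged.empty() && merged.back().base + merged.back().size == chunk.base)
            merged.back().size += chunk.size; // adjacent: extend previous entry
        else
            merged.push_back(chunk);
    }
    return merged;
}
```

Coalescing like this is what can turn many small fragments back into a chunk large enough to satisfy an allocation that would otherwise fail.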
trunk/JavaScriptCore/runtime/AlignedMemoryAllocator.h
r64624 → r64695

 #include <wtf/Bitmap.h>
-#include <wtf/PageAllocation.h>
+#include <wtf/PageReservation.h>
…
 template<size_t blockSize> class AlignedMemoryAllocator;
 
-#if HAVE(ALIGNED_ALLOCATE)
+#if HAVE(PAGE_ALLOCATE_ALIGNED)
…
 inline AlignedMemory<blockSize> AlignedMemoryAllocator<blockSize>::allocate()
 {
-    return AlignedMemory<blockSize>(PageAllocation::allocateAligned(blockSize));
+    return AlignedMemory<blockSize>(PageAllocation::allocateAligned(blockSize, PageAllocation::JSGCHeapPages));
 }
…
     static const size_t bitmapSize = reservationSize / blockSize;
 
-    PageAllocation m_reservation;
+    PageReservation m_reservation;
     size_t m_nextFree;
     uintptr_t m_reservationBase;
…
 template<size_t blockSize>
 AlignedMemoryAllocator<blockSize>::AlignedMemoryAllocator()
-    : m_reservation(PageAllocation::reserve(reservationSize + blockSize))
+    : m_reservation(PageReservation::reserve(reservationSize + blockSize, PageAllocation::JSGCHeapPages))
     , m_nextFree(0)
 {
…
     // check that blockSize is a multiple of pageSize and that
     // reservationSize is a multiple of blockSize
-    ASSERT(!(blockSize & (PageAllocation::pagesize() - 1)));
+    ASSERT(!(blockSize & (PageAllocation::pageSize() - 1)));
     ASSERT(!(reservationSize & (blockSize - 1)));
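Reserving reservationSize + blockSize leaves enough slack to round the reservation base up to a block boundary, so every block in the region is blockSize-aligned. The round-up itself is the usual power-of-two mask computation (a sketch; the function name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Round a reservation base up to the next blockSize boundary.
// Over-reserving by blockSize bytes guarantees the aligned region
// of reservationSize bytes still fits inside the reservation.
uintptr_t roundUpToBlockBoundary(uintptr_t base, uintptr_t blockSize)
{
    assert(!(blockSize & (blockSize - 1))); // blockSize must be a power of two
    return (base + blockSize - 1) & ~(blockSize - 1);
}
```

The same `!(x & (mask - 1))` trick powers the ASSERTs above: for a power-of-two divisor, a value is a multiple exactly when its low bits are clear.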
trunk/JavaScriptCore/runtime/Collector.cpp
r64684 → r64695

     AlignedBlock allocation = m_blockallocator.allocate();
     CollectorBlock* block = static_cast<CollectorBlock*>(allocation.base());
+    if (!block)
+        CRASH();
 
     // Initialize block.
trunk/JavaScriptCore/runtime/Collector.h
r64624 → r64695

 const size_t BLOCK_SIZE = 64 * 1024; // 64k
 #else
-const size_t BLOCK_SIZE = 64 * 4096; // 256k
+const size_t BLOCK_SIZE = 256 * 1024; // 256k
 #endif
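The old and new spellings denote the same constant (64 pages of 4KB is 262144 bytes, i.e. 256KB); the rewrite only makes the literal match its "256k" comment. A quick check:

```cpp
#include <cstddef>

const size_t BLOCK_SIZE_AS_PAGES = 64 * 4096; // 64 pages of 4KB each
const size_t BLOCK_SIZE = 256 * 1024;         // 256KB, the same value
```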
trunk/JavaScriptCore/wtf/PageAllocation.cpp
r64456 → r64695

 #include "config.h"
 #include "PageAllocation.h"
-
-#if HAVE(ERRNO_H)
-#include <errno.h>
-#endif
-
-#if HAVE(MMAP)
-#include <sys/mman.h>
-#include <unistd.h>
-#endif
-
-#if OS(WINDOWS)
-#include "windows.h"
-#endif
-
-#if OS(SYMBIAN)
-#include <e32hal.h>
-#endif
+#include "PageReservation.h"
 
 namespace WTF {
 
-#if HAVE(MMAP)
-
-bool PageAllocation::commit(void* start, size_t size, bool, bool) const
-{
-#if HAVE(MADV_FREE_REUSE)
-    while (madvise(start, size, MADV_FREE_REUSE) == -1 && errno == EAGAIN) { }
-#else
-    UNUSED_PARAM(start);
-    UNUSED_PARAM(size);
-#endif
-    return true;
-}
-
-void PageAllocation::decommit(void* start, size_t size) const
-{
-#if HAVE(MADV_FREE_REUSE)
-    while (madvise(start, size, MADV_FREE_REUSABLE) == -1 && errno == EAGAIN) { }
-#elif HAVE(MADV_FREE)
-    while (madvise(start, size, MADV_FREE) == -1 && errno == EAGAIN) { }
-#elif HAVE(MADV_DONTNEED)
-    while (madvise(start, size, MADV_DONTNEED) == -1 && errno == EAGAIN) { }
-#else
-    UNUSED_PARAM(start);
-    UNUSED_PARAM(size);
-#endif
-}
-
-PageAllocation PageAllocation::allocate(size_t size, Usage usage, bool writable, bool executable)
-{
-    return allocateAt(0, false, size, usage, writable, executable);
-}
-
-PageAllocation PageAllocation::reserve(size_t size, Usage usage, bool writable, bool executable)
-{
-    return reserveAt(0, false, size, usage, writable, executable);
-}
-
-PageAllocation PageAllocation::allocateAt(void* address, bool fixed, size_t size, Usage usage, bool writable, bool executable)
-{
-    int flags = MAP_PRIVATE | MAP_ANON;
-    if (fixed)
-        flags |= MAP_FIXED;
-
-    int protection = PROT_READ;
-    if (writable)
-        protection |= PROT_WRITE;
-    if (executable)
-        protection |= PROT_EXEC;
-
-    void* base = mmap(address, size, protection, flags, usage, 0);
-    if (base == MAP_FAILED)
-        base = 0;
-
-    return PageAllocation(base, size);
-}
-
-PageAllocation PageAllocation::reserveAt(void* address, bool fixed, size_t size, Usage usage, bool writable, bool executable)
-{
-    PageAllocation result = allocateAt(address, fixed, size, usage, writable, executable);
-    if (!!result)
-        result.decommit(result.base(), size);
-    return result;
-}
-
-void PageAllocation::deallocate()
-{
-    int result = munmap(m_base, m_size);
-    ASSERT_UNUSED(result, !result);
-    m_base = 0;
-}
-
-size_t PageAllocation::pagesize()
-{
-    static size_t size = 0;
-    if (!size)
-        size = getpagesize();
-    return size;
-}
-
-#elif HAVE(VIRTUALALLOC)
-
-static DWORD protection(bool writable, bool executable)
-{
-    if (executable)
-        return writable ? PAGE_EXECUTE_READWRITE : PAGE_EXECUTE_READ;
-    return writable ? PAGE_READWRITE : PAGE_READONLY;
-}
-
-bool PageAllocation::commit(void* start, size_t size, bool writable, bool executable) const
-{
-    return VirtualAlloc(start, size, MEM_COMMIT, protection(writable, executable)) == start;
-}
-
-void PageAllocation::decommit(void* start, size_t size) const
-{
-    VirtualFree(start, size, MEM_DECOMMIT);
-}
-
-PageAllocation PageAllocation::allocate(size_t size, Usage, bool writable, bool executable)
-{
-    return PageAllocation(VirtualAlloc(0, size, MEM_COMMIT | MEM_RESERVE, protection(writable, executable)), size);
-}
-
-PageAllocation PageAllocation::reserve(size_t size, Usage usage, bool writable, bool executable)
-{
-    return PageAllocation(VirtualAlloc(0, size, MEM_RESERVE, protection(writable, executable)), size);
-}
-
-void PageAllocation::deallocate()
-{
-    VirtualFree(m_base, 0, MEM_RELEASE);
-    m_base = 0;
-}
-
-size_t PageAllocation::pagesize()
-{
-    static size_t size = 0;
-    if (!size) {
-        SYSTEM_INFO system_info;
-        GetSystemInfo(&system_info);
-        size = system_info.dwPageSize;
-    }
-    return size;
-}
-
-#elif OS(SYMBIAN)
-
-bool PageAllocation::commit(void* start, size_t size, bool writable, bool executable) const
-{
-    if (m_chunk) {
-        intptr_t offset = reinterpret_cast<intptr_t>(base()) - reinterpret_cast<intptr_t>(start);
-        m_chunk->Commit(offset, size);
-    }
-    return true;
-}
-
-void PageAllocation::decommit(void* start, size_t size) const
-{
-    if (m_chunk) {
-        intptr_t offset = reinterpret_cast<intptr_t>(base()) - reinterpret_cast<intptr_t>(start);
-        m_chunk->Decommit(offset, size);
-    }
-}
-
-PageAllocation PageAllocation::allocate(size_t size, Usage usage, bool writable, bool executable)
-{
-    if (!executable)
-        return PageAllocation(fastMalloc(size), size, 0);
-    RChunk* rchunk = new RChunk();
-    TInt errorCode = rchunk->CreateLocalCode(size, size);
-    return PageAllocation(rchunk->Base(), size, rchunk);
-}
-
-PageAllocation PageAllocation::reserve(size_t size, Usage usage, bool writable, bool executable)
-{
-    if (!executable)
-        return PageAllocation(fastMalloc(size), size, 0);
-    RChunk* rchunk = new RChunk();
-    TInt errorCode = rchunk->CreateLocalCode(0, size);
-    return PageAllocation(rchunk->Base(), size, rchunk);
-}
-
-void PageAllocation::deallocate()
-{
-    if (m_chunk) {
-        m_chunk->Close();
-        delete m_chunk;
-    } else
-        fastFree(m_base);
-    m_base = 0;
-}
-
-size_t PageAllocation::pagesize()
-{
-    static TInt page_size = 0;
-    if (!page_size)
-        UserHal::PageSizeInBytes(page_size);
-    return page_size;
-}
-
-#endif
+size_t PageAllocation::s_pageSize = 0;
 
 }
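The new s_pageSize static replaces the per-platform function-local caches: the OS is queried once, then the cached value is reused. The pattern it implements looks like this on POSIX (a sketch, not the WTF implementation; not thread-safe on first call, matching the original's simplicity):

```cpp
#include <cstddef>
#include <unistd.h>

// Lazily cached system page size, queried from the OS on first use.
static size_t s_pageSize = 0;

size_t pageSize()
{
    if (!s_pageSize)
        s_pageSize = static_cast<size_t>(getpagesize());
    return s_pageSize;
}
```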
trunk/JavaScriptCore/wtf/PageAllocation.h
r64626 → r64695

 #include <wtf/VMTags.h>
 
+#if OS(DARWIN)
+#include <mach/mach_init.h>
+#include <mach/vm_map.h>
+#endif
+
+#if OS(HAIKU)
+#include <OS.h>
+#endif
+
+#if OS(WINDOWS)
+#include <malloc.h>
+#include <windows.h>
+#endif
+
 #if OS(SYMBIAN)
+#include <e32hal.h>
 #include <e32std.h>
 #endif
 
-#if HAVE(MMAP)
-#define PAGE_ALLOCATION_ALLOCATE_AT 1
-#else
-#define PAGE_ALLOCATION_ALLOCATE_AT 0
+#if HAVE(ERRNO_H)
+#include <errno.h>
 #endif
…
 #endif
 
-#if OS(DARWIN)
-
-#include <mach/mach_init.h>
-#include <mach/mach_port.h>
-#include <mach/task.h>
-#include <mach/thread_act.h>
-#include <mach/vm_map.h>
-
-#elif OS(WINDOWS)
-
-#include <malloc.h>
-#include <windows.h>
-
-#elif OS(HAIKU)
-
-#include <OS.h>
-
-#elif OS(UNIX)
-
-#include <stdlib.h>
-
-#if OS(SOLARIS)
-#include <thread.h>
-#else
-#include <pthread.h>
-#endif
-
-#if HAVE(PTHREAD_NP_H)
-#include <pthread_np.h>
-#endif
-
-#if OS(QNX)
-#include <errno.h>
-#include <fcntl.h>
-#include <stdio.h>
-#include <sys/procfs.h>
-#endif
-
-#endif
-
 namespace WTF {
 
+/*
+    PageAllocation
+
+    The PageAllocation class provides a cross-platform memory allocation interface
+    with similar capabilities to posix mmap/munmap. Memory is allocated by calling
+    PageAllocation::allocate, and deallocated by calling deallocate on the
+    PageAllocation object. The PageAllocation holds the allocation's base pointer
+    and size.
+
+    The allocate method is passed the size required (which must be a multiple of
+    the system page size, which can be accessed using PageAllocation::pageSize).
+    Callers may also optionally provide a flag indicating the usage (for use by
+    system memory usage tracking tools, where implemented), and boolean values
+    specifying the required protection (defaulting to writable, non-executable).
+
+    Where HAVE(PAGE_ALLOCATE_AT) and HAVE(PAGE_ALLOCATE_ALIGNED) are available
+    memory may also be allocated at a specified address, or with a specified
+    alignment, respectively. PageAllocation::allocateAt takes an address to try
+    to allocate at, and a boolean indicating whether this behaviour is strictly
+    required (if this address is unavailable, should memory at another address
+    be allocated instead). PageAllocation::allocateAligned requires that the
+    size is a power of two that is >= system page size.
+*/
 class PageAllocation {
 public:
…
     }
 
-    // Create a PageAllocation object representing a sub-region of an existing allocation;
-    // deallocate should never be called on an object represnting a subregion, only on the
-    // initial allocation.
-    PageAllocation(void* base, size_t size, const PageAllocation& parent)
-        : m_base(base)
-        , m_size(size)
-#if OS(SYMBIAN)
-        , m_chunk(parent.m_chunk)
-#endif
-    {
-#if defined(NDEBUG) && !OS(SYMBIAN)
-        UNUSED_PARAM(parent);
-#endif
-        ASSERT(!base || base >= parent.m_base);
-        ASSERT(!base || size <= parent.m_size);
-        ASSERT(!base || static_cast<char*>(base) + size <= static_cast<char*>(parent.m_base) + parent.m_size);
-    }
-
+    bool operator!() const { return !m_base; }
     void* base() const { return m_base; }
     size_t size() const { return m_size; }
 
-    bool operator!() const { return !m_base; }
-#if COMPILER(WINSCW)
-    operator bool const { return m_base; }
-#else
-    typedef void* PageAllocation::*UnspecifiedBoolType;
-    operator UnspecifiedBoolType() const { return m_base ? &PageAllocation::m_base : 0; }
-#endif
-
-    bool commit(void*, size_t, bool writable = true, bool executable = false) const;
-    void decommit(void*, size_t) const;
-    void deallocate();
-
-    static PageAllocation allocate(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false);
-    static PageAllocation reserve(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false);
-#if PAGE_ALLOCATION_ALLOCATE_AT
-    static PageAllocation allocateAt(void* address, bool fixed, size_t, Usage = UnknownUsage, bool writable = true, bool executable = false);
-    static PageAllocation reserveAt(void* address, bool fixed, size_t, Usage = UnknownUsage, bool writable = true, bool executable = false);
-#endif
-
-#if HAVE(ALIGNED_ALLOCATE)
-    static PageAllocation allocateAligned(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false);
-#endif
-
-    static size_t pagesize();
-
-private:
+    static PageAllocation allocate(size_t size, Usage usage = UnknownUsage, bool writable = true, bool executable = false)
+    {
+        // size must be a multiple of pageSize.
+        ASSERT(!(size & (pageSize() - 1)));
+        return systemAllocate(size, usage, writable, executable);
+    }
+
+#if HAVE(PAGE_ALLOCATE_AT)
+    static PageAllocation allocateAt(void* address, bool fixed, size_t size, Usage usage = UnknownUsage, bool writable = true, bool executable = false)
+    {
+        // size must be a multiple of pageSize.
+        ASSERT(!(size & (pageSize() - 1)));
+        // address must be aligned to pageSize.
+        ASSERT(!(reinterpret_cast<intptr_t>(address) & (pageSize() - 1)));
+        return systemAllocateAt(address, fixed, size, usage, writable, executable);
+    }
+#endif
+
+#if HAVE(PAGE_ALLOCATE_ALIGNED)
+    static PageAllocation allocateAligned(size_t size, Usage usage = UnknownUsage)
+    {
+        // size must be a multiple of pageSize.
+        ASSERT(!(size & (pageSize() - 1)));
+        // size must be a power of two.
+        ASSERT(!(size & (size - 1)));
+        return systemAllocateAligned(size, usage);
+    }
+#endif
+
+    void deallocate()
+    {
+        ASSERT(m_base);
+        systemDeallocate(true);
+    }
+
+    static size_t pageSize()
+    {
+        if (!s_pageSize) {
+            s_pageSize = systemPageSize();
+            // system page size must be a power of two.
+            ASSERT(!(s_pageSize & (s_pageSize - 1)));
+        }
+        return s_pageSize;
+    }
+
+protected:
 #if OS(SYMBIAN)
     PageAllocation(void* base, size_t size, RChunk* chunk)
…
 #endif
 
+    static PageAllocation systemAllocate(size_t, Usage, bool, bool);
+#if HAVE(PAGE_ALLOCATE_AT)
+    static PageAllocation systemAllocateAt(void*, bool, size_t, Usage, bool, bool);
+#endif
+#if HAVE(PAGE_ALLOCATE_ALIGNED)
+    static PageAllocation systemAllocateAligned(size_t, Usage);
+#endif
+    // systemDeallocate takes a parameter indicating whether memory is currently committed
+    // (this should always be true for PageAllocation, false for PageReservation).
+    void systemDeallocate(bool committed);
+    static size_t systemPageSize();
+
     void* m_base;
     size_t m_size;
 #if OS(SYMBIAN)
     RChunk* m_chunk;
 #endif
+
+    static JS_EXPORTDATA size_t s_pageSize;
 };
 
-#if HAVE(ALIGNED_ALLOCATE)
-
-#if OS(DARWIN)
-
-inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage, bool writable, bool executable)
-{
-    ASSERT(!(size & (size - 1)));
-    vm_address_t address = 0;
-    vm_prot_t protection = VM_PROT_READ;
-    protection |= executable ? VM_PROT_EXECUTE : 0;
-    protection |= writable ? VM_PROT_WRITE : 0;
-    vm_map(current_task(), &address, size, (size - 1), VM_FLAGS_ANYWHERE | VM_TAG_FOR_COLLECTOR_MEMORY, MEMORY_OBJECT_NULL, 0, FALSE, protection, protection, VM_INHERIT_DEFAULT);
-    return PageAllocation(reinterpret_cast<void*>(address), size);
-}
-
-#elif OS(WINDOWS)
-
-inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable)
-{
-    ASSERT(writable && !executable);
-#if COMPILER(MINGW) && !COMPILER(MINGW64)
-    void* address = __mingw_aligned_malloc(size, size);
-#else
-    void* address = _aligned_malloc(size, size);
-#endif
-    memset(address, 0, size);
-    return PageAllocation(address, size);
-}
-
-#elif HAVE(POSIX_MEMALIGN)
-
-inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable)
-{
-    ASSERT(writable && !executable);
-
-    void* address;
-    posix_memalign(&address, size, size);
-    return PageAllocation(address, size);
-}
-
-#elif HAVE(MMAP)
-
-inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable)
-{
-    ASSERT(!(size & (size - 1)));
-    ASSERT(writable && !executable);
-    static size_t pagesize = getpagesize();
-
-    size_t extra = 0;
-    if (size > pagesize)
-        extra = size - pagesize;
-
-    int flags = MAP_PRIVATE | MAP_ANON;
-
+#if HAVE(MMAP)
+
+inline PageAllocation PageAllocation::systemAllocate(size_t size, Usage usage, bool writable, bool executable)
+{
+    return systemAllocateAt(0, false, size, usage, writable, executable);
+}
+
+inline PageAllocation PageAllocation::systemAllocateAt(void* address, bool fixed, size_t size, Usage usage, bool writable, bool executable)
+{
     int protection = PROT_READ;
     if (writable)
         protection |= PROT_WRITE;
     if (executable)
         protection |= PROT_EXEC;
 
-    // use page allocation
-    void* mmapResult = mmap(0, size + extra, protection, flags, usage, 0);
+    int flags = MAP_PRIVATE | MAP_ANON;
+    if (fixed)
+        flags |= MAP_FIXED;
+
+#if OS(DARWIN) && !defined(BUILDING_ON_TIGER)
+    int fd = usage;
+#else
+    int fd = -1;
+#endif
+
+    void* base = mmap(address, size, protection, flags, fd, 0);
+    if (base == MAP_FAILED)
+        base = 0;
+    return PageAllocation(base, size);
+}
+
+inline PageAllocation PageAllocation::systemAllocateAligned(size_t size, Usage usage)
+{
+#if OS(DARWIN)
+    vm_address_t address = 0;
+    int flags = VM_FLAGS_ANYWHERE;
+    if (usage != -1)
+        flags |= usage;
+    vm_map(current_task(), &address, size, (size - 1), flags, MEMORY_OBJECT_NULL, 0, FALSE, PROT_READ | PROT_WRITE, PROT_READ | PROT_WRITE | PROT_EXEC, VM_INHERIT_DEFAULT);
+    return PageAllocation(reinterpret_cast<void*>(address), size);
+#elif HAVE(POSIX_MEMALIGN)
+    void* address;
+    posix_memalign(&address, size, size);
+    return PageAllocation(address, size);
+#else
+    size_t extra = size - pageSize();
+
+    // Check for overflow.
+    if ((size + extra) < size)
+        return PageAllocation(0, size);
+
+#if OS(DARWIN) && !defined(BUILDING_ON_TIGER)
+    int fd = usage;
+#else
+    int fd = -1;
+#endif
+    void* mmapResult = mmap(0, size + extra, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, fd, 0);
+    if (mmapResult == MAP_FAILED)
+        return PageAllocation(0, size);
     uintptr_t address = reinterpret_cast<uintptr_t>(mmapResult);
 
     size_t adjust = 0;
     if ((address & (size - 1)))
         adjust = size - (address & (size - 1));
-
     if (adjust > 0)
         munmap(reinterpret_cast<char*>(address), adjust);
-
     if (adjust < extra)
         munmap(reinterpret_cast<char*>(address + adjust + size), extra - adjust);
-
     address += adjust;
 
     return PageAllocation(reinterpret_cast<void*>(address), size);
-}
-
-#endif
-
-#endif
+#endif
+}
+
+inline void PageAllocation::systemDeallocate(bool)
+{
+    int result = munmap(m_base, m_size);
+    ASSERT_UNUSED(result, !result);
+    m_base = 0;
+}
+
+inline size_t PageAllocation::systemPageSize()
+{
+    return getpagesize();
+}
+
+
+#elif HAVE(VIRTUALALLOC)
+
+
+inline PageAllocation PageAllocation::systemAllocate(size_t size, Usage, bool writable, bool executable)
+{
+    DWORD protection = executable ?
+        (writable ? PAGE_EXECUTE_READWRITE : PAGE_EXECUTE_READ) :
+        (writable ? PAGE_READWRITE : PAGE_READONLY);
+    return PageAllocation(VirtualAlloc(0, size, MEM_COMMIT | MEM_RESERVE, protection), size);
+}
+
+#if HAVE(ALIGNED_MALLOC)
+inline PageAllocation PageAllocation::systemAllocateAligned(size_t size, Usage usage)
+{
+#if COMPILER(MINGW) && !COMPILER(MINGW64)
+    void* address = __mingw_aligned_malloc(size, size);
+#else
+    void* address = _aligned_malloc(size, size);
+#endif
+    memset(address, 0, size);
+    return PageAllocation(address, size);
+}
+#endif
+
+inline void PageAllocation::systemDeallocate(bool committed)
+{
+#if OS(WINCE)
+    if (committed)
+        VirtualFree(m_base, m_size, MEM_DECOMMIT);
+#else
+    UNUSED_PARAM(committed);
+#endif
+    VirtualFree(m_base, 0, MEM_RELEASE);
+    m_base = 0;
+}
+
+inline size_t PageAllocation::systemPageSize()
+{
+    static size_t size = 0;
+    SYSTEM_INFO system_info;
+    GetSystemInfo(&system_info);
+    size = system_info.dwPageSize;
+    return size;
+}
+
+
+#elif OS(SYMBIAN)
+
+
+inline PageAllocation PageAllocation::systemAllocate(size_t size, Usage usage, bool writable, bool executable)
+{
+    RChunk* rchunk = new RChunk();
+    if (executable)
+        rchunk->CreateLocalCode(size, size);
+    else
+        rchunk->CreateLocal(size, size);
+    return PageAllocation(rchunk->Base(), size, rchunk);
+}
+
+inline void PageAllocation::systemDeallocate(bool)
+{
+    m_chunk->Close();
+    delete m_chunk;
+    m_base = 0;
+}
+
+inline size_t PageAllocation::systemPageSize()
+{
+    static TInt page_size = 0;
+    UserHal::PageSizeInBytes(page_size);
+    return page_size;
+}
+
+
+#endif
+
 
 }
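The mmap fallback path of systemAllocateAligned over-allocates by `size - pageSize` extra bytes, then munmaps the misaligned head and the unused tail. An equivalent self-contained POSIX sketch of the same technique (illustrative names, assuming size is a power of two that is a multiple of the page size):

```cpp
#include <cstddef>
#include <cstdint>
#include <sys/mman.h>
#include <unistd.h>

// Allocate size bytes aligned to size, using the over-map-and-trim trick.
// Precondition: size is a power of two and a multiple of the page size.
void* mmapAligned(size_t size)
{
    size_t extra = size - static_cast<size_t>(getpagesize());
    if (size + extra < size) // overflow check, as added in this patch
        return nullptr;
    void* result = mmap(nullptr, size + extra, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANON, -1, 0);
    if (result == MAP_FAILED)
        return nullptr;
    uintptr_t address = reinterpret_cast<uintptr_t>(result);

    // Distance from the returned address up to the next size boundary.
    size_t adjust = 0;
    if (address & (size - 1))
        adjust = size - (address & (size - 1));
    if (adjust)
        munmap(reinterpret_cast<void*>(address), adjust); // trim misaligned head
    if (adjust < extra)
        munmap(reinterpret_cast<void*>(address + adjust + size), extra - adjust); // trim tail
    return reinterpret_cast<void*>(address + adjust);
}
```

Because any `size + extra` window must contain a size-aligned address, the trimmed region is guaranteed to be exactly size bytes at a size-aligned base.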
trunk/JavaScriptCore/wtf/PageReservation.h
r64626 r64695 24 24 */ 25 25 26 #ifndef PageAllocation_h 27 #define PageAllocation_h 28 29 #include <wtf/Assertions.h> 30 #include <wtf/UnusedParam.h> 31 #include <wtf/VMTags.h> 32 33 #if OS(SYMBIAN) 34 #include <e32std.h> 35 #endif 36 37 #if HAVE(MMAP) 38 #define PAGE_ALLOCATION_ALLOCATE_AT 1 39 #else 40 #define PAGE_ALLOCATION_ALLOCATE_AT 0 41 #endif 42 43 #if HAVE(MMAP) 44 #include <sys/mman.h> 45 #include <unistd.h> 46 #endif 47 48 #if OS(DARWIN) 49 50 #include <mach/mach_init.h> 51 #include <mach/mach_port.h> 52 #include <mach/task.h> 53 #include <mach/thread_act.h> 54 #include <mach/vm_map.h> 55 56 #elif OS(WINDOWS) 57 58 #include <malloc.h> 59 #include <windows.h> 60 61 #elif OS(HAIKU) 62 63 #include <OS.h> 64 65 #elif OS(UNIX) 66 67 #include <stdlib.h> 68 69 #if OS(SOLARIS) 70 #include <thread.h> 71 #else 72 #include <pthread.h> 73 #endif 74 75 #if HAVE(PTHREAD_NP_H) 76 #include <pthread_np.h> 77 #endif 78 79 #if OS(QNX) 80 #include <errno.h> 81 #include <fcntl.h> 82 #include <stdio.h> 83 #include <sys/procfs.h> 84 #endif 85 86 #endif 26 #ifndef PageReservation_h 27 #define PageReservation_h 28 29 #include <wtf/PageAllocation.h> 87 30 88 31 namespace WTF { 89 32 90 class PageAllocation { 33 /* 34 PageReservation 35 36 Like PageAllocation, the PageReservation class provides a cross-platform memory 37 allocation interface, but with a set of capabilities more similar to that of 38 VirtualAlloc than POSIX mmap. PageReservation can be used to allocate virtual 39 memory without committing physical memory pages using PageReservation::reserve. 40 Following a call to reserve, all memory in the region is in a decommitted state, 41 in which the memory should not be used (accessing the memory may cause a fault). 42 43 Before using memory it must be committed by calling commit, which is passed start 44 and size values (both of which require system page size granularity). 
Once the 45 committed memory is no longer needed 'decommit' may be called to return the 46 memory to its decommitted state. Commit should only be called on memory that is 47 currently decommitted, and decommit should only be called on memory regions that 48 are currently committed. All memory should be decommitted before the reservation 49 is deallocated. Values in memory may not be retained across a pair of calls if 50 the region of memory is decommitted and then committed again. 51 52 Where HAVE(PAGE_ALLOCATE_AT) is available a PageReservation::reserveAt method 53 also exists, with behaviour mirroring PageAllocation::allocateAt. 54 55 Memory protection should not be changed on decommitted memory, and if protection 56 is changed on memory while it is committed it should be returned to the original 57 protection before decommit is called. 58 59 Note: Inherits from PageAllocation privately to prevent clients accidentally 60 calling PageAllocation::deallocate on a PageReservation. 61 */ 62 class PageReservation : private PageAllocation { 91 63 public: 92 enum Usage { 93 UnknownUsage = -1, 94 FastMallocPages = VM_TAG_FOR_TCMALLOC_MEMORY, 95 JSGCHeapPages = VM_TAG_FOR_COLLECTOR_MEMORY, 96 JSVMStackPages = VM_TAG_FOR_REGISTERFILE_MEMORY, 97 JSJITCodePages = VM_TAG_FOR_EXECUTABLEALLOCATOR_MEMORY, 98 }; 99 100 PageAllocation() 101 : m_base(0) 102 , m_size(0) 103 #if OS(SYMBIAN) 104 , m_chunk(0) 105 #endif 106 { 107 } 108 109 // Create a PageAllocation object representing a sub-region of an existing allocation; 110 // deallocate should never be called on an object representing a subregion, only on the 111 initial allocation. 
112 PageAllocation(void* base, size_t size, const PageAllocation& parent) 113 : m_base(base) 114 , m_size(size) 115 #if OS(SYMBIAN) 116 , m_chunk(parent.m_chunk) 117 #endif 118 { 119 #if defined(NDEBUG) && !OS(SYMBIAN) 120 UNUSED_PARAM(parent); 121 #endif 122 ASSERT(!base || base >= parent.m_base); 123 ASSERT(!base || size <= parent.m_size); 124 ASSERT(!base || static_cast<char*>(base) + size <= static_cast<char*>(parent.m_base) + parent.m_size); 125 } 126 127 void* base() const { return m_base; } 128 size_t size() const { return m_size; } 129 130 bool operator!() const { return !m_base; } 131 #if COMPILER(WINSCW) 132 operator bool const { return m_base; } 133 #else 134 typedef void* PageAllocation::*UnspecifiedBoolType; 135 operator UnspecifiedBoolType() const { return m_base ? &PageAllocation::m_base : 0; } 136 #endif 137 138 bool commit(void*, size_t, bool writable = true, bool executable = false) const; 139 void decommit(void*, size_t) const; 140 void deallocate(); 141 142 static PageAllocation allocate(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false); 143 static PageAllocation reserve(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false); 144 #if PAGE_ALLOCATION_ALLOCATE_AT 145 static PageAllocation allocateAt(void* address, bool fixed, size_t, Usage = UnknownUsage, bool writable = true, bool executable = false); 146 static PageAllocation reserveAt(void* address, bool fixed, size_t, Usage = UnknownUsage, bool writable = true, bool executable = false); 147 #endif 148 149 #if HAVE(ALIGNED_ALLOCATE) 150 static PageAllocation allocateAligned(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false); 151 #endif 152 153 static size_t pagesize(); 64 PageReservation() 65 { 66 } 67 68 using PageAllocation::operator!; 69 using PageAllocation::base; 70 using PageAllocation::size; 71 72 bool commit(void* start, size_t size) 73 { 74 ASSERT(m_base); 75 // size must be a multiple of pageSize. 
76 ASSERT(!(size & (pageSize() - 1))); 77 // address must be aligned to pageSize. 78 ASSERT(!(reinterpret_cast<intptr_t>(start) & (pageSize() - 1))); 79 80 bool committed = systemCommit(start, size); 81 #ifndef NDEBUG 82 if (committed) 83 m_committed += size; 84 #endif 85 return committed; 86 } 87 void decommit(void* start, size_t size) 88 { 89 ASSERT(m_base); 90 // size must be a multiple of pageSize. 91 ASSERT(!(size & (pageSize() - 1))); 92 // address must be aligned to pageSize. 93 ASSERT(!(reinterpret_cast<intptr_t>(start) & (pageSize() - 1))); 94 95 #ifndef NDEBUG 96 m_committed -= size; 97 #endif 98 systemDecommit(start, size); 99 } 100 101 static PageReservation reserve(size_t size, Usage usage = UnknownUsage, bool writable = true, bool executable = false) 102 { 103 // size must be a multiple of pageSize. 104 ASSERT(!(size & (pageSize() - 1))); 105 return systemReserve(size, usage, writable, executable); 106 } 107 108 #if HAVE(PAGE_ALLOCATE_AT) 109 static PageReservation reserveAt(void* address, bool fixed, size_t size, Usage usage = UnknownUsage, bool writable = true, bool executable = false) 110 { 111 // size must be a multiple of pageSize. 112 ASSERT(!(size & (pageSize() - 1))); 113 // address must be aligned to pageSize. 
114 ASSERT(!(reinterpret_cast<intptr_t>(address) & (pageSize() - 1))); 115 return systemReserveAt(address, fixed, size, usage, writable, executable); 116 } 117 #endif 118 119 void deallocate() 120 { 121 ASSERT(m_base); 122 ASSERT(!m_committed); 123 systemDeallocate(false); 124 } 154 125 155 126 private: 156 127 #if OS(SYMBIAN) 157 PageAllocation(void* base, size_t size, RChunk* chunk) 158 : m_base(base) 159 , m_size(size) 160 , m_chunk(chunk) 161 { 162 } 128 PageReservation(void* base, size_t size, RChunk* chunk) 129 : PageAllocation(base, size, chunk) 163 130 #else 164 PageAllocation(void* base, size_t size) 165 : m_base(base) 166 , m_size(size) 167 { 168 } 169 #endif 170 171 void* m_base; 172 size_t m_size; 173 #if OS(SYMBIAN) 174 RChunk* m_chunk; 131 PageReservation(void* base, size_t size) 132 : PageAllocation(base, size) 133 #endif 134 #ifndef NDEBUG 135 , m_committed(0) 136 #endif 137 { 138 } 139 140 bool systemCommit(void*, size_t); 141 void systemDecommit(void*, size_t); 142 static PageReservation systemReserve(size_t, Usage, bool, bool); 143 #if HAVE(PAGE_ALLOCATE_AT) 144 static PageReservation systemReserveAt(void*, bool, size_t, Usage, bool, bool); 145 #endif 146 147 #if HAVE(VIRTUALALLOC) 148 DWORD m_protection; 149 #endif 150 #ifndef NDEBUG 151 size_t m_committed; 175 152 #endif 176 153 }; 177 154 178 #if HAVE(ALIGNED_ALLOCATE) 179 180 #if OS(DARWIN) 181 182 inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage, bool writable, bool executable) 183 { 184 ASSERT(!(size & (size - 1))); 185 vm_address_t address = 0; 186 vm_prot_t protection = VM_PROT_READ; 187 protection |= executable ? VM_PROT_EXECUTE : 0; 188 protection |= writable ? 
VM_PROT_WRITE : 0; 189 vm_map(current_task(), &address, size, (size - 1), VM_FLAGS_ANYWHERE | VM_TAG_FOR_COLLECTOR_MEMORY, MEMORY_OBJECT_NULL, 0, FALSE, protection, protection, VM_INHERIT_DEFAULT); 190 return PageAllocation(reinterpret_cast<void*>(address), size); 191 } 192 193 #elif OS(WINDOWS) 194 195 inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable) 196 { 197 ASSERT(writable && !executable); 198 #if COMPILER(MINGW) && !COMPILER(MINGW64) 199 void* address = __mingw_aligned_malloc(size, size); 155 156 #if HAVE(MMAP) 157 158 159 inline bool PageReservation::systemCommit(void* start, size_t size) 160 { 161 #if HAVE(MADV_FREE_REUSE) 162 while (madvise(start, size, MADV_FREE_REUSE) == -1 && errno == EAGAIN) { } 200 163 #else 201 void* address = _aligned_malloc(size, size); 202 #endif 203 memset(address, 0, size); 204 return PageAllocation(address, size); 205 } 206 207 #elif HAVE(POSIX_MEMALIGN) 208 209 inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable) 210 { 211 ASSERT(writable && !executable); 212 213 void* address; 214 posix_memalign(&address, size, size); 215 return PageAllocation(address, size); 216 } 217 218 #elif HAVE(MMAP) 219 220 inline PageAllocation PageAllocation::allocateAligned(size_t size, Usage usage, bool writable, bool executable) 221 { 222 ASSERT(!(size & (size - 1))); 223 ASSERT(writable && !executable); 224 static size_t pagesize = getpagesize(); 225 226 size_t extra = 0; 227 if (size > pagesize) 228 extra = size - pagesize; 229 230 int flags = MAP_PRIVATE | MAP_ANON; 231 232 int protection = PROT_READ; 233 if (writable) 234 protection |= PROT_WRITE; 164 UNUSED_PARAM(start); 165 UNUSED_PARAM(size); 166 #endif 167 return true; 168 } 169 170 inline void PageReservation::systemDecommit(void* start, size_t size) 171 { 172 #if HAVE(MADV_FREE_REUSE) 173 while (madvise(start, size, MADV_FREE_REUSABLE) == -1 && errno == EAGAIN) { } 174 
#elif HAVE(MADV_FREE) 175 while (madvise(start, size, MADV_FREE) == -1 && errno == EAGAIN) { } 176 #elif HAVE(MADV_DONTNEED) 177 while (madvise(start, size, MADV_DONTNEED) == -1 && errno == EAGAIN) { } 178 #else 179 UNUSED_PARAM(start); 180 UNUSED_PARAM(size); 181 #endif 182 } 183 184 inline PageReservation PageReservation::systemReserve(size_t size, Usage usage, bool writable, bool executable) 185 { 186 return systemReserveAt(0, false, size, usage, writable, executable); 187 } 188 189 inline PageReservation PageReservation::systemReserveAt(void* address, bool fixed, size_t size, Usage usage, bool writable, bool executable) 190 { 191 void* base = systemAllocateAt(address, fixed, size, usage, writable, executable).base(); 192 #if HAVE(MADV_FREE_REUSE) 193 // When using MADV_FREE_REUSE we keep all decommitted memory marked as REUSABLE. 194 // We call REUSE on commit, and REUSABLE on decommit. 195 if (base) 196 while (madvise(base, size, MADV_FREE_REUSABLE) == -1 && errno == EAGAIN) { } 197 #endif 198 return PageReservation(base, size); 199 } 200 201 202 #elif HAVE(VIRTUALALLOC) 203 204 205 inline bool PageReservation::systemCommit(void* start, size_t size) 206 { 207 return VirtualAlloc(start, size, MEM_COMMIT, m_protection) == start; 208 } 209 210 inline void PageReservation::systemDecommit(void* start, size_t size) 211 { 212 VirtualFree(start, size, MEM_DECOMMIT); 213 } 214 215 inline PageReservation PageReservation::systemReserve(size_t size, Usage usage, bool writable, bool executable) 216 { 217 // Record the protection for use during commit. 218 m_protection = executable ? 219 (writable ? PAGE_EXECUTE_READWRITE : PAGE_EXECUTE_READ) : 220 (writable ? 
PAGE_READWRITE : PAGE_READONLY); 221 return PageReservation(VirtualAlloc(0, size, MEM_RESERVE, m_protection), size); 222 } 223 224 225 #elif OS(SYMBIAN) 226 227 228 inline bool PageReservation::systemCommit(void* start, size_t size) 229 { 230 intptr_t offset = reinterpret_cast<intptr_t>(m_base) - reinterpret_cast<intptr_t>(start); 231 m_chunk->Commit(offset, size); 232 return true; 233 } 234 235 inline void PageReservation::systemDecommit(void* start, size_t size) 236 { 237 intptr_t offset = reinterpret_cast<intptr_t>(m_base) - reinterpret_cast<intptr_t>(start); 238 m_chunk->Decommit(offset, size); 239 } 240 241 inline PageReservation PageReservation::systemReserve(size_t size, Usage usage, bool writable, bool executable) 242 { 243 RChunk* rchunk = new RChunk(); 235 244 if (executable) 236 protection |= PROT_EXEC; 237 238 // use page allocation 239 void* mmapResult = mmap(0, size + extra, protection, flags, usage, 0); 240 uintptr_t address = reinterpret_cast<uintptr_t>(mmapResult); 241 242 size_t adjust = 0; 243 if ((address & (size - 1))) 244 adjust = size - (address & (size - 1)); 245 246 if (adjust > 0) 247 munmap(reinterpret_cast<char*>(address), adjust); 248 249 if (adjust < extra) 250 munmap(reinterpret_cast<char*>(address + adjust + size), extra - adjust); 251 252 address += adjust; 253 254 return PageAllocation(reinterpret_cast<void*>(address), size); 255 } 256 257 #endif 258 259 #endif 260 261 } 262 263 using WTF::PageAllocation; 264 265 #endif // PageAllocation_h 245 rchunk->CreateLocalCode(0, size); 246 else 247 rchunk->CreateDisconnectedLocal(0, 0, size); 248 return PageReservation(rchunk->Base(), size, rchunk); 249 } 250 251 252 #endif 253 254 255 } 256 257 using WTF::PageReservation; 258 259 #endif // PageReservation_h -
trunk/JavaScriptCore/wtf/Platform.h
r64658 r64695 709 709 #define HAVE_LANGINFO_H 1 710 710 #define HAVE_MMAP 1 711 #define HAVE_ALIGNED_ALLOCATE 1712 711 #define HAVE_MERGESORT 1 713 712 #define HAVE_SBRK 1 … … 738 737 #if OS(WINCE) 739 738 #define HAVE_ERRNO_H 0 740 #define HAVE_ALIGNED_ALLOCATE 0741 739 #else 742 740 #define HAVE_SYS_TIMEB_H 1 743 #define HAVE_ALIGNED_ ALLOCATE1741 #define HAVE_ALIGNED_MALLOC 1 744 742 #endif 745 743 #define HAVE_VIRTUALALLOC 1 … … 798 796 #endif 799 797 800 #if !defined(HAVE_ALIGNED_ALLOCATE) 801 #if HAVE(POSIX_MEMALIGN) 802 #define HAVE_ALIGNED_ALLOCATE 1 803 #else 804 #define HAVE_ALIGNED_ALLOCATE 0 805 #endif 798 #if HAVE(MMAP) || (HAVE(VIRTUALALLOC) && HAVE(ALIGNED_MALLOC)) 799 #define HAVE_PAGE_ALLOCATE_ALIGNED 1 800 #endif 801 #if HAVE(MMAP) 802 #define HAVE_PAGE_ALLOCATE_AT 1 806 803 #endif 807 804 -
trunk/JavaScriptGlue/ChangeLog
r64684 r64695 1 2010-08-04 Gavin Barraclough <barraclough@apple.com> 2 3 Reviewed by Sam Weinig. 4 5 Bug 43515 - Fix small design issues with PageAllocation, split out PageReservation. 6 (add forwarding headers) 7 8 * ForwardingHeaders/wtf/Bitmap.h: Added. 9 * ForwardingHeaders/wtf/PageReservation.h: Added. 10 1 11 2010-08-04 Sheriff Bot <webkit.review.bot@gmail.com> 2 12 -
trunk/WebCore/ChangeLog
r64686 r64695 1 2010-08-04 Gavin Barraclough <barraclough@apple.com> 2 3 Reviewed by Sam Weinig. 4 5 Bug 43515 - Fix small design issues with PageAllocation, split out PageReservation. 6 (add forwarding headers) 7 8 * ForwardingHeaders/wtf/Bitmap.h: Added. 9 * ForwardingHeaders/wtf/PageReservation.h: Added. 10 1 11 2010-08-04 Zhenyao Mo <zmo@google.com> 2 12