Changeset 227951 in webkit


Timestamp: Jan 31, 2018 9:36:40 PM
Author: sbarati@apple.com
Message:

Replace tryLargeMemalignVirtual with tryLargeZeroedMemalignVirtual and use it to allocate large zeroed memory in Wasm
https://bugs.webkit.org/show_bug.cgi?id=182064
<rdar://problem/36840132>

Reviewed by Geoffrey Garen.

Source/bmalloc:

This patch replaces the tryLargeMemalignVirtual API with tryLargeZeroedMemalignVirtual.
By doing that, we're able to remove the AllocationKind enum. To zero the memory,
tryLargeZeroedMemalignVirtual uses mmap(... MAP_ANON ...) over previously mmapped
memory. This both purges any resident memory for the virtual range and ensures
that the pages in the range are zeroed. Most OSes should implement this by taking a
page fault and zero-filling on first access. Therefore, this API returns pages
that will result in page faults on first access; hence the name 'virtual' in the API.
This API differs from the old API in that users of it need not call madvise themselves.
The memory is ready to go.
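
For illustration, here is a minimal sketch of that zero-and-purge trick against plain POSIX mmap. The helper name is hypothetical; bmalloc's real version (vmZeroAndPurge, shown later in this changeset) additionally passes its BMALLOC_NORESERVE flag and VM tag:

    #include <sys/mman.h>
    #include <cassert>
    #include <cstddef>

    // Remap an existing anonymous mapping in place. MAP_FIXED | MAP_ANON
    // atomically replaces the old pages, so any resident memory in the range
    // is purged, and the fresh anonymous pages are guaranteed to read as zero.
    // Nothing is committed up front; the kernel zero-fills each page on first
    // access, via a page fault.
    void zeroAndPurge(void* p, size_t size)
    {
        void* result = mmap(p, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
        assert(result == p);
    }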

  • bmalloc.xcodeproj/project.pbxproj:
  • bmalloc/AllocationKind.h: Removed.
  • bmalloc/DebugHeap.cpp:
    (bmalloc::DebugHeap::memalignLarge):
    (bmalloc::DebugHeap::freeLarge):
  • bmalloc/DebugHeap.h:
  • bmalloc/Heap.cpp:
    (bmalloc::Heap::splitAndAllocate):
    (bmalloc::Heap::tryAllocateLarge):
    (bmalloc::Heap::allocateLarge):
    (bmalloc::Heap::shrinkLarge):
    (bmalloc::Heap::deallocateLarge):
  • bmalloc/Heap.h:
  • bmalloc/IsoPage.cpp:
    (bmalloc::IsoPageBase::allocatePageMemory):
  • bmalloc/VMAllocate.h:
    (bmalloc::vmZeroAndPurge):
  • bmalloc/VMHeap.cpp:
    (bmalloc::VMHeap::tryAllocateLargeChunk):
  • bmalloc/VMHeap.h:
  • bmalloc/bmalloc.cpp:
    (bmalloc::api::tryLargeZeroedMemalignVirtual):
    (bmalloc::api::freeLargeVirtual):
    (bmalloc::api::tryLargeMemalignVirtual): Deleted.
  • bmalloc/bmalloc.h:

Source/JavaScriptCore:

This patch switches WebAssembly Memory to always use bmalloc's
zeroed virtual allocation API. This makes it so that we don't
dirty the memory to zero it. It's a huge compile time speedup
on WasmBench on iOS.
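
The saving comes from skipping the eager zeroing. A sketch of the before/after behavior, using plain POSIX calls rather than the Gigacage entry points (sizes are illustrative):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstring>

    int main()
    {
        size_t size = 64 * 1024 * 1024;

        // Before: commit the pages and memset them to zero. This faults in and
        // dirties every page of the Wasm memory up front, even if the module
        // never touches most of it.
        void* eager = mmap(nullptr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (eager != MAP_FAILED)
            std::memset(eager, 0, size);

        // After: a fresh anonymous mapping already reads as zero, and the
        // kernel zero-fills each page lazily, on first touch.
        void* lazy = mmap(nullptr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);

        if (eager != MAP_FAILED)
            munmap(eager, size);
        if (lazy != MAP_FAILED)
            munmap(lazy, size);
        return 0;
    }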

  • wasm/WasmMemory.cpp:
    (JSC::Wasm::Memory::create):
    (JSC::Wasm::Memory::~Memory):
    (JSC::Wasm::Memory::addressIsInActiveFastMemory):
    (JSC::Wasm::Memory::grow):
    (JSC::Wasm::commitZeroPages): Deleted.

Source/WTF:

  • wtf/Gigacage.cpp:
    (Gigacage::tryAllocateZeroedVirtualPages):
    (Gigacage::freeVirtualPages):
    (Gigacage::tryAllocateVirtualPages): Deleted.
  • wtf/Gigacage.h:
  • wtf/OSAllocator.h:
Location: trunk/Source
Files: 1 deleted, 18 edited

  • trunk/Source/JavaScriptCore/ChangeLog

    r227929 → r227951

    +2018-01-31  Saam Barati  <sbarati@apple.com>
    +
    +        Replace tryLargeMemalignVirtual with tryLargeZeroedMemalignVirtual and use it to allocate large zeroed memory in Wasm
    +        https://bugs.webkit.org/show_bug.cgi?id=182064
    +        <rdar://problem/36840132>
    +
    +        Reviewed by Geoffrey Garen.
    +
    +        This patch switches WebAssembly Memory to always use bmalloc's
    +        zeroed virtual allocation API. This makes it so that we don't
    +        dirty the memory to zero it. It's a huge compile time speedup
    +        on WasmBench on iOS.
    +
    +        * wasm/WasmMemory.cpp:
    +        (JSC::Wasm::Memory::create):
    +        (JSC::Wasm::Memory::~Memory):
    +        (JSC::Wasm::Memory::addressIsInActiveFastMemory):
    +        (JSC::Wasm::Memory::grow):
    +        (JSC::Wasm::commitZeroPages): Deleted.
    +
     2018-01-31  Mark Lam  <mark.lam@apple.com>
  • trunk/Source/JavaScriptCore/wasm/WasmMemory.cpp

    r226461 → r227951

 public:
     MemoryManager()
-        : m_maxCount(Options::maxNumWebAssemblyFastMemories())
-    {
-    }
-
-    MemoryResult tryAllocateVirtualPages()
+        : m_maxFastMemoryCount(Options::maxNumWebAssemblyFastMemories())
+    {
+    }
+
+    MemoryResult tryAllocateFastMemory()
     {
         MemoryResult result = [&] {
             auto holder = holdLock(m_lock);
-            if (m_memories.size() >= m_maxCount)
+            if (m_fastMemories.size() >= m_maxFastMemoryCount)
                 return MemoryResult(nullptr, MemoryResult::SyncTryToReclaimMemory);

-            void* result = Gigacage::tryAllocateVirtualPages(Gigacage::Primitive, Memory::fastMappedBytes());
+            void* result = Gigacage::tryAllocateZeroedVirtualPages(Gigacage::Primitive, Memory::fastMappedBytes());
             if (!result)
                 return MemoryResult(nullptr, MemoryResult::SyncTryToReclaimMemory);

-            m_memories.append(result);
+            m_fastMemories.append(result);

             return MemoryResult(
                 result,
-                m_memories.size() >= m_maxCount / 2 ? MemoryResult::SuccessAndNotifyMemoryPressure : MemoryResult::Success);
+                m_fastMemories.size() >= m_maxFastMemoryCount / 2 ? MemoryResult::SuccessAndNotifyMemoryPressure : MemoryResult::Success);
         }();
…
     }

-    void freeVirtualPages(void* basePtr)
+    void freeFastMemory(void* basePtr)
     {
         {
             auto holder = holdLock(m_lock);
             Gigacage::freeVirtualPages(Gigacage::Primitive, basePtr, Memory::fastMappedBytes());
-            m_memories.removeFirst(basePtr);
+            m_fastMemories.removeFirst(basePtr);
         }
…
     }

-    bool containsAddress(void* address)
+    bool isAddressInFastMemory(void* address)
     {
         // NOTE: This can be called from a signal handler, but only after we proved that we're in JIT code.
         auto holder = holdLock(m_lock);
-        for (void* memory : m_memories) {
+        for (void* memory : m_fastMemories) {
             char* start = static_cast<char*>(memory);
             if (start <= address && address <= start + Memory::fastMappedBytes())
…
     void dump(PrintStream& out) const
     {
-        out.print("virtual memories =  ", m_memories.size(), "/", m_maxCount, ", bytes = ", m_physicalBytes, "/", memoryLimit());
+        out.print("fast memories =  ", m_fastMemories.size(), "/", m_maxFastMemoryCount, ", bytes = ", m_physicalBytes, "/", memoryLimit());
     }

 private:
     Lock m_lock;
-    unsigned m_maxCount { 0 };
-    Vector<void*> m_memories;
+    unsigned m_maxFastMemoryCount { 0 };
+    Vector<void*> m_fastMemories;
     size_t m_physicalBytes { 0 };
 };
…
 }

-static void commitZeroPages(void* startAddress, size_t sizeInBytes)
-{
-    bool writable = true;
-    bool executable = false;
-#if OS(LINUX)
-    // In Linux, MADV_DONTNEED clears backing pages with zero. Be Careful that MADV_DONTNEED shows different semantics in different OSes.
-    // For example, FreeBSD does not clear backing pages immediately.
-    while (madvise(startAddress, sizeInBytes, MADV_DONTNEED) == -1 && errno == EAGAIN) { }
-    OSAllocator::commit(startAddress, sizeInBytes, writable, executable);
-#else
-    OSAllocator::commit(startAddress, sizeInBytes, writable, executable);
-    memset(startAddress, 0, sizeInBytes);
-#endif
-}
-
 RefPtr<Memory> Memory::create()
 {
…
         tryAllocate(
             [&] () -> MemoryResult::Kind {
-                auto result = memoryManager().tryAllocateVirtualPages();
+                auto result = memoryManager().tryAllocateFastMemory();
                 fastMemory = bitwise_cast<char*>(result.basePtr);
                 return result.kind;
…
         }

-        commitZeroPages(fastMemory, initialBytes);
-
         return adoptRef(new Memory(fastMemory, initial, maximum, Memory::fastMappedBytes(), MemoryMode::Signaling, WTFMove(notifyMemoryPressure), WTFMove(syncTryToReclaimMemory), WTFMove(growSuccessCallback)));
     }
…
         return adoptRef(new Memory(initial, maximum, WTFMove(notifyMemoryPressure), WTFMove(syncTryToReclaimMemory), WTFMove(growSuccessCallback)));

-    void* slowMemory = Gigacage::tryAlignedMalloc(Gigacage::Primitive, WTF::pageSize(), initialBytes);
+    void* slowMemory = Gigacage::tryAllocateZeroedVirtualPages(Gigacage::Primitive, initialBytes);
     if (!slowMemory) {
         memoryManager().freePhysicalBytes(initialBytes);
         return nullptr;
     }
-    memset(slowMemory, 0, initialBytes);
     return adoptRef(new Memory(slowMemory, initial, maximum, initialBytes, MemoryMode::BoundsChecking, WTFMove(notifyMemoryPressure), WTFMove(syncTryToReclaimMemory), WTFMove(growSuccessCallback)));
 }
…
                 RELEASE_ASSERT_NOT_REACHED();
             }
-            memoryManager().freeVirtualPages(m_memory);
+            memoryManager().freeFastMemory(m_memory);
             break;
         case MemoryMode::BoundsChecking:
-            Gigacage::alignedFree(Gigacage::Primitive, m_memory);
+            Gigacage::freeVirtualPages(Gigacage::Primitive, m_memory, m_size);
             break;
         }
…
 bool Memory::addressIsInActiveFastMemory(void* address)
 {
-    return memoryManager().containsAddress(address);
+    return memoryManager().isAddressInFastMemory(address);
 }
…
         RELEASE_ASSERT(maximum().bytes() != 0);

-        void* newMemory = Gigacage::tryAlignedMalloc(Gigacage::Primitive, WTF::pageSize(), desiredSize);
+        void* newMemory = Gigacage::tryAllocateZeroedVirtualPages(Gigacage::Primitive, desiredSize);
         if (!newMemory)
             return makeUnexpected(GrowFailReason::OutOfMemory);

         memcpy(newMemory, m_memory, m_size);
-        memset(static_cast<char*>(newMemory) + m_size, 0, desiredSize - m_size);
         if (m_memory)
-            Gigacage::alignedFree(Gigacage::Primitive, m_memory);
+            Gigacage::freeVirtualPages(Gigacage::Primitive, m_memory, m_size);
         m_memory = newMemory;
         m_mappedCapacity = desiredSize;
…
             RELEASE_ASSERT_NOT_REACHED();
         }
-        commitZeroPages(startAddress, extraBytes);
         m_size = desiredSize;
         m_indexingMask = WTF::computeIndexingMask(desiredSize);
  • trunk/Source/WTF/ChangeLog

    r227940 → r227951

    +2018-01-31  Saam Barati  <sbarati@apple.com>
    +
    +        Replace tryLargeMemalignVirtual with tryLargeZeroedMemalignVirtual and use it to allocate large zeroed memory in Wasm
    +        https://bugs.webkit.org/show_bug.cgi?id=182064
    +        <rdar://problem/36840132>
    +
    +        Reviewed by Geoffrey Garen.
    +
    +        * wtf/Gigacage.cpp:
    +        (Gigacage::tryAllocateZeroedVirtualPages):
    +        (Gigacage::freeVirtualPages):
    +        (Gigacage::tryAllocateVirtualPages): Deleted.
    +        * wtf/Gigacage.h:
    +        * wtf/OSAllocator.h:
    +
     2018-01-31  Mark Lam  <mark.lam@apple.com>
  • trunk/Source/WTF/wtf/Gigacage.cpp

    r225471 → r227951

 }

-void* tryAllocateVirtualPages(Kind, size_t size)
+void* tryAllocateZeroedVirtualPages(Kind, size_t size)
 {
-    return OSAllocator::reserveUncommitted(size);
+    size = roundUpToMultipleOf(WTF::pageSize(), size);
+    void* result = OSAllocator::reserveAndCommit(size);
+#if !ASSERT_DISABLED
+    if (result) {
+        for (size_t i = 0; i < size / sizeof(uintptr_t); ++i)
+            ASSERT(static_cast<uintptr_t*>(result)[i] == 0);
+    }
+#endif
+    return result;
 }

 void freeVirtualPages(Kind, void* basePtr, size_t size)
 {
-    OSAllocator::releaseDecommitted(basePtr, size);
+    OSAllocator::decommitAndRelease(basePtr, size);
 }
…
 }

-void* tryAllocateVirtualPages(Kind kind, size_t size)
+void* tryAllocateZeroedVirtualPages(Kind kind, size_t size)
 {
-    void* result = bmalloc::api::tryLargeMemalignVirtual(WTF::pageSize(), size, bmalloc::heapKind(kind));
+    void* result = bmalloc::api::tryLargeZeroedMemalignVirtual(WTF::pageSize(), size, bmalloc::heapKind(kind));
     WTF::compilerFence();
     return result;
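
A usage sketch of the renamed WTF-side pair, with the signatures exactly as declared in this changeset (the caller and size are illustrative):

    #include <wtf/Gigacage.h>
    #include <cstddef>

    int main()
    {
        size_t size = 64 * 1024;
        // Zeroed pages from the Primitive gigacage; physical memory is only
        // consumed as pages are touched.
        void* p = Gigacage::tryAllocateZeroedVirtualPages(Gigacage::Primitive, size);
        if (!p)
            return 1;
        Gigacage::freeVirtualPages(Gigacage::Primitive, p, size);
        return 0;
    }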
  • trunk/Source/WTF/wtf/Gigacage.h

    r225471 → r227951

 inline void free(Kind, void* p) { fastFree(p); }

-WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(Kind, size_t size);
+WTF_EXPORT_PRIVATE void* tryAllocateZeroedVirtualPages(Kind, size_t size);
 WTF_EXPORT_PRIVATE void freeVirtualPages(Kind, void* basePtr, size_t size);
…
 WTF_EXPORT_PRIVATE void free(Kind, void*);

-WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(Kind, size_t size);
+WTF_EXPORT_PRIVATE void* tryAllocateZeroedVirtualPages(Kind, size_t size);
 WTF_EXPORT_PRIVATE void freeVirtualPages(Kind, void* basePtr, size_t size);
  • trunk/Source/WTF/wtf/OSAllocator.h

    r215340 → r227951

     // These methods are symmetric; reserveUncommitted allocates VM in an uncommitted state,
     // releaseDecommitted should be called on a region of VM allocated by a single reservation,
-    // the memory must all currently be in a decommitted state.
+    // the memory must all currently be in a decommitted state. reserveUncommitted returns to
+    // you memory that is zeroed.
     WTF_EXPORT_PRIVATE static void* reserveUncommitted(size_t, Usage = UnknownUsage, bool writable = true, bool executable = false, bool includesGuardPages = false);
     WTF_EXPORT_PRIVATE static void releaseDecommitted(void*, size_t);
  • trunk/Source/bmalloc/ChangeLog

    r227215 → r227951

    +2018-01-31  Saam Barati  <sbarati@apple.com>
    +
    +        Replace tryLargeMemalignVirtual with tryLargeZeroedMemalignVirtual and use it to allocate large zeroed memory in Wasm
    +        https://bugs.webkit.org/show_bug.cgi?id=182064
    +        <rdar://problem/36840132>
    +
    +        Reviewed by Geoffrey Garen.
    +
    +        This patch replaces the tryLargeMemalignVirtual API with tryLargeZeroedMemalignVirtual.
    +        By doing that, we're able to remove the AllocationKind enum. To zero the memory,
    +        tryLargeZeroedMemalignVirtual uses mmap(... MAP_ANON ...) over previously mmapped
    +        memory. This both purges any resident memory for the virtual range and ensures
    +        that the pages in the range are zeroed. Most OSes should implement this by taking a
    +        page fault and zero-filling on first access. Therefore, this API returns pages
    +        that will result in page faults on first access; hence the name 'virtual' in the API.
    +        This API differs from the old API in that users of it need not call madvise themselves.
    +        The memory is ready to go.
    +
    +        * bmalloc.xcodeproj/project.pbxproj:
    +        * bmalloc/AllocationKind.h: Removed.
    +        * bmalloc/DebugHeap.cpp:
    +        (bmalloc::DebugHeap::memalignLarge):
    +        (bmalloc::DebugHeap::freeLarge):
    +        * bmalloc/DebugHeap.h:
    +        * bmalloc/Heap.cpp:
    +        (bmalloc::Heap::splitAndAllocate):
    +        (bmalloc::Heap::tryAllocateLarge):
    +        (bmalloc::Heap::allocateLarge):
    +        (bmalloc::Heap::shrinkLarge):
    +        (bmalloc::Heap::deallocateLarge):
    +        * bmalloc/Heap.h:
    +        * bmalloc/IsoPage.cpp:
    +        (bmalloc::IsoPageBase::allocatePageMemory):
    +        * bmalloc/VMAllocate.h:
    +        (bmalloc::vmZeroAndPurge):
    +        * bmalloc/VMHeap.cpp:
    +        (bmalloc::VMHeap::tryAllocateLargeChunk):
    +        * bmalloc/VMHeap.h:
    +        * bmalloc/bmalloc.cpp:
    +        (bmalloc::api::tryLargeZeroedMemalignVirtual):
    +        (bmalloc::api::freeLargeVirtual):
    +        (bmalloc::api::tryLargeMemalignVirtual): Deleted.
    +        * bmalloc/bmalloc.h:
    +
     2018-01-19  Keith Miller  <keith_miller@apple.com>
  • trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj

    r226067 → r227951

 /* Begin PBXBuildFile section */
-        0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3DA0131F267AB800342C08 /* AllocationKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
         0F5167741FAD685C008236A8 /* bmalloc.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5167731FAD6852008236A8 /* bmalloc.cpp */; };
         0F5549EF1FB54704007FF75A /* IsoPage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5549EE1FB54701007FF75A /* IsoPage.cpp */; };
…
 /* Begin PBXFileReference section */
-        0F3DA0131F267AB800342C08 /* AllocationKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = AllocationKind.h; path = bmalloc/AllocationKind.h; sourceTree = "<group>"; };
         0F5167731FAD6852008236A8 /* bmalloc.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = bmalloc.cpp; path = bmalloc/bmalloc.cpp; sourceTree = "<group>"; };
         0F5549EE1FB54701007FF75A /* IsoPage.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = IsoPage.cpp; path = bmalloc/IsoPage.cpp; sourceTree = "<group>"; };
…
             isa = PBXGroup;
             children = (
-                0F3DA0131F267AB800342C08 /* AllocationKind.h */,
                 140FA00219CE429C00FFD3C8 /* BumpRange.h */,
                 147DC6E21CA5B70B00724E8D /* Chunk.h */,
…
                 14DD78CB18F48D7500950702 /* PerProcess.h in Headers */,
                 0F7EB8261F9541B000F1ABCB /* IsoAllocatorInlines.h in Headers */,
-                0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */,
                 14DD78CC18F48D7500950702 /* PerThread.h in Headers */,
                 14DD78CD18F48D7500950702 /* Range.h in Headers */,
  • trunk/Source/bmalloc/bmalloc/DebugHeap.cpp

    r220154 → r227951

 // https://bugs.webkit.org/show_bug.cgi?id=175086

-void* DebugHeap::memalignLarge(size_t alignment, size_t size, AllocationKind allocationKind)
+void* DebugHeap::memalignLarge(size_t alignment, size_t size)
 {
     alignment = roundUpToMultipleOf(m_pageSize, alignment);
…
     if (!result)
         return nullptr;
-    if (allocationKind == AllocationKind::Virtual)
-        vmDeallocatePhysicalPages(result, size);
     {
         std::lock_guard<std::mutex> locker(m_lock);
…
 }

-void DebugHeap::freeLarge(void* base, AllocationKind)
+void DebugHeap::freeLarge(void* base)
 {
     if (!base)
  • trunk/Source/bmalloc/bmalloc/DebugHeap.h

    r220154 → r227951

 #pragma once

-#include "AllocationKind.h"
 #include "StaticMutex.h"
 #include <mutex>
…
     void free(void*);

-    void* memalignLarge(size_t alignment, size_t, AllocationKind);
-    void freeLarge(void* base, AllocationKind);
+    void* memalignLarge(size_t alignment, size_t);
+    void freeLarge(void* base);

 private:
  • trunk/Source/bmalloc/bmalloc/Heap.cpp

    r223239 → r227951

 }

-LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size, AllocationKind allocationKind)
+LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
…
     }

-    switch (allocationKind) {
-    case AllocationKind::Virtual:
-        if (range.physicalSize())
-            vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
-        break;
-
-    case AllocationKind::Physical:
-        if (range.physicalSize() < range.size()) {
-            m_scavenger->scheduleIfUnderMemoryPressure(range.size());
-
-            vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
-            range.setPhysicalSize(range.size());
-        }
-        break;
+    if (range.physicalSize() < range.size()) {
+        m_scavenger->scheduleIfUnderMemoryPressure(range.size());
+        vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
+        range.setPhysicalSize(range.size());
     }
…
 }

-void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size, AllocationKind allocationKind)
+void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
 {
     RELEASE_BASSERT(isActiveHeapKind(m_kind));
…
     if (m_debugHeap)
-        return m_debugHeap->memalignLarge(alignment, size, allocationKind);
+        return m_debugHeap->memalignLarge(alignment, size);

     m_scavenger->didStartGrowing();
…
             return nullptr;

-        range = PerProcess<VMHeap>::get()->tryAllocateLargeChunk(alignment, size, allocationKind);
+        range = PerProcess<VMHeap>::get()->tryAllocateLargeChunk(alignment, size);
         if (!range)
             return nullptr;
…
     }

-    return splitAndAllocate(range, alignment, size, allocationKind).begin();
-}
-
-void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, AllocationKind allocationKind)
-{
-    void* result = tryAllocateLarge(lock, alignment, size, allocationKind);
+    return splitAndAllocate(range, alignment, size).begin();
+}
+
+void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
+{
+    void* result = tryAllocateLarge(lock, alignment, size);
     RELEASE_BASSERT(result);
     return result;
…
     size_t size = m_largeAllocated.remove(object.begin());
     LargeRange range = LargeRange(object, size);
-    splitAndAllocate(range, alignment, newSize, AllocationKind::Physical);
+    splitAndAllocate(range, alignment, newSize);

     m_scavenger->schedule(size);
 }

-void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object, AllocationKind allocationKind)
+void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
 {
     if (m_debugHeap)
-        return m_debugHeap->freeLarge(object, allocationKind);
+        return m_debugHeap->freeLarge(object);

     size_t size = m_largeAllocated.remove(object);
-    m_largeFree.add(LargeRange(object, size, allocationKind == AllocationKind::Physical ? size : 0));
+    m_largeFree.add(LargeRange(object, size, size));
     m_scavenger->schedule(size);
 }
  • trunk/Source/bmalloc/bmalloc/Heap.h

    r222982 → r227951

 #define Heap_h

-#include "AllocationKind.h"
 #include "BumpRange.h"
 #include "Chunk.h"
…
     void deallocateLineCache(std::lock_guard<StaticMutex>&, LineCache&);

-    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
-    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
-    void deallocateLarge(std::lock_guard<StaticMutex>&, void*, AllocationKind = AllocationKind::Physical);
+    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
+    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
+    void deallocateLarge(std::lock_guard<StaticMutex>&, void*);

     bool isLarge(std::lock_guard<StaticMutex>&, void*);
…
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);

-    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t, AllocationKind);
+    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t);

     HeapKind m_kind;
  • trunk/Source/bmalloc/bmalloc/IsoPage.cpp

    r225125 → r227951

 void* IsoPageBase::allocatePageMemory()
 {
-    return PerProcess<VMHeap>::get()->tryAllocateLargeChunk(pageSize, pageSize, AllocationKind::Physical).begin();
+    return PerProcess<VMHeap>::get()->tryAllocateLargeChunk(pageSize, pageSize).begin();
 }
  • trunk/Source/bmalloc/bmalloc/VMAllocate.h

    r225912 → r227951

 }

+inline void vmZeroAndPurge(void* p, size_t vmSize)
+{
+    vmValidate(p, vmSize);
+    // MAP_ANON guarantees the memory is zeroed. This will also cause
+    // page faults on accesses to this range following this call.
+    void* result = mmap(p, vmSize, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON | MAP_FIXED | BMALLOC_NORESERVE, BMALLOC_VM_TAG, 0);
+    RELEASE_BASSERT(result == p);
+}
+
 // Allocates vmSize bytes at a specified power-of-two alignment.
 // Use this function to create maskable memory regions.
  • trunk/Source/bmalloc/bmalloc/VMHeap.cpp

    r220118 → r227951

 }

-LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size, AllocationKind allocationKind)
+LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
 {
     // We allocate VM in aligned multiples to increase the chances that
…
         return LargeRange();

-    if (allocationKind == AllocationKind::Virtual)
-        vmDeallocatePhysicalPagesSloppy(memory, size);
-
     Chunk* chunk = static_cast<Chunk*>(memory);
  • trunk/Source/bmalloc/bmalloc/VMHeap.h

    r220118 → r227951

 #define VMHeap_h

-#include "AllocationKind.h"
 #include "Chunk.h"
 #include "FixedVector.h"
…
     VMHeap(std::lock_guard<StaticMutex>&);

-    LargeRange tryAllocateLargeChunk(size_t alignment, size_t, AllocationKind);
+    LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
 };
  • trunk/Source/bmalloc/bmalloc/bmalloc.cpp

    r225125 → r227951

 }

-void* tryLargeMemalignVirtual(size_t alignment, size_t size, HeapKind kind)
+void* tryLargeZeroedMemalignVirtual(size_t alignment, size_t size, HeapKind kind)
 {
+    BASSERT(isPowerOfTwo(alignment));
+
+    size_t pageSize = vmPageSize();
+    alignment = roundUpToMultipleOf(pageSize, alignment);
+    size = roundUpToMultipleOf(pageSize, size);
+
     kind = mapToActiveHeapKind(kind);
     Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
-    std::lock_guard<StaticMutex> lock(Heap::mutex());
-    return heap.tryAllocateLarge(lock, alignment, size, AllocationKind::Virtual);
+
+    void* result;
+    {
+        std::lock_guard<StaticMutex> lock(Heap::mutex());
+        result = heap.tryAllocateLarge(lock, alignment, size);
+    }
+
+    if (result)
+        vmZeroAndPurge(result, size);
+    return result;
 }
…
     Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
     std::lock_guard<StaticMutex> lock(Heap::mutex());
-    heap.deallocateLarge(lock, object, AllocationKind::Virtual);
+    heap.deallocateLarge(lock, object);
 }
  • trunk/Source/bmalloc/bmalloc/bmalloc.h

    r225125 → r227951

 }

-// Returns null for failure
-BEXPORT void* tryLargeMemalignVirtual(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary);
+// Returns null on failure.
+// This API will give you zeroed pages that are ready to be used. These pages
+// will page fault on first access. It returns to you memory that initially only
+// uses up virtual address space, not `size` bytes of physical memory.
+BEXPORT void* tryLargeZeroedMemalignVirtual(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary);

 inline void free(void* object, HeapKind kind = HeapKind::Primary)
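
A caller-side sketch of the new entry point. The include path is illustrative, and the free call assumes the freeLargeVirtual(void*, HeapKind) shape shown in bmalloc.cpp above:

    #include <bmalloc/bmalloc.h>
    #include <cstddef>
    #include <cstdint>

    int main()
    {
        size_t alignment = 4096;   // rounded up to the VM page size internally
        size_t size = 1024 * 1024; // likewise rounded up to a page multiple
        void* p = bmalloc::api::tryLargeZeroedMemalignVirtual(alignment, size);
        if (!p)
            return 1;              // returns null on failure
        // Every byte already reads as zero; this write faults one page in.
        static_cast<uint8_t*>(p)[0] = 1;
        bmalloc::api::freeLargeVirtual(p);
        return 0;
    }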