Changeset 241847 in WebKit


Timestamp: Feb 20, 2019 4:03:17 PM
Author: ysuzuki@apple.com
Message:

[bmalloc] bmalloc::Heap is allocated even though we use system malloc mode
https://bugs.webkit.org/show_bug.cgi?id=194836

Reviewed by Mark Lam.

Previously, bmalloc::Heap held a DebugHeap pointer and delegated allocation and deallocation to the debug heap.
However, bmalloc::Heap is large, and we would like to avoid initializing bmalloc::Heap at all when the
system malloc mode is in use.

This patch extracts DebugHeap out of bmalloc::Heap and logically places it at the boundary of
bmalloc::api: bmalloc::api delegates allocation and deallocation to DebugHeap when the debug heap is
enabled, and otherwise uses bmalloc's usual mechanism. The challenge is keeping bmalloc's fast path fast.

  1. For IsoHeaps, we use a technique similar to the one in Cache. If the debug heap is enabled, we always take the slow path of IsoHeap allocation and keep IsoTLS::get() returning nullptr. In the slow path, we simply fall back to the usual bmalloc::api::tryMalloc implementation. This is efficient because bmalloc keeps using its fast path.
  2. For the other APIs, like freeLargeVirtual, we simply add a DebugHeap check, because these APIs already take a fair amount of time, so the cost of the debug heap check does not matter. (A condensed sketch of this check follows this list.)
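In both cases the check boils down to the new DebugHeap::tryGet() helper, which caches the per-process DebugHeap pointer so that the test is a single load and branch. The sketch below is condensed from the DebugHeap.h and Cache.cpp hunks in this changeset; it is illustrative only, not the full patch.

    // DebugHeap.h: cached lookup; non-null only when the debug heap (system malloc mode) is enabled.
    BINLINE DebugHeap* DebugHeap::tryGet()
    {
        if (debugHeapCache)
            return debugHeapCache;
        if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
            debugHeapCache = PerProcess<DebugHeap>::get();
            return debugHeapCache;
        }
        return nullptr;
    }

    // Cache.cpp: use at the api boundary; delegate to the debug heap if present,
    // otherwise take bmalloc's usual slow path. The fast path never sees this branch.
    BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
    {
        if (auto* debugHeap = DebugHeap::tryGet()) {
            constexpr bool crashOnFailure = false;
            return debugHeap->malloc(size, crashOnFailure);
        }
        return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryAllocate(size);
    }

Because Heap, Scavenger, and IsoTLS now assert that the debug heap is disabled instead of branching on it, none of them is constructed when the system malloc mode is in use.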
  • bmalloc/Allocator.cpp:

(bmalloc::Allocator::reallocateImpl):

  • bmalloc/Cache.cpp:

(bmalloc::Cache::tryAllocateSlowCaseNullCache):
(bmalloc::Cache::allocateSlowCaseNullCache):
(bmalloc::Cache::deallocateSlowCaseNullCache):
(bmalloc::Cache::tryReallocateSlowCaseNullCache):
(bmalloc::Cache::reallocateSlowCaseNullCache):
(): Deleted.
(bmalloc::debugHeap): Deleted.

  • bmalloc/DebugHeap.cpp:
  • bmalloc/DebugHeap.h:

(bmalloc::DebugHeap::tryGet):

  • bmalloc/Heap.cpp:

(bmalloc::Heap::Heap):
(bmalloc::Heap::footprint):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::deallocateLarge):

  • bmalloc/Heap.h:

(bmalloc::Heap::debugHeap): Deleted.

  • bmalloc/IsoTLS.cpp:

(bmalloc::IsoTLS::IsoTLS):
(bmalloc::IsoTLS::isUsingDebugHeap): Deleted.
(bmalloc::IsoTLS::debugMalloc): Deleted.
(bmalloc::IsoTLS::debugFree): Deleted.

  • bmalloc/IsoTLS.h:
  • bmalloc/IsoTLSInlines.h:

(bmalloc::IsoTLS::allocateSlow):
(bmalloc::IsoTLS::deallocateSlow):

  • bmalloc/ObjectType.cpp:

(bmalloc::objectType):

  • bmalloc/ObjectType.h:
  • bmalloc/Scavenger.cpp:

(bmalloc::Scavenger::Scavenger):

  • bmalloc/bmalloc.cpp:

(bmalloc::api::tryLargeZeroedMemalignVirtual):
(bmalloc::api::freeLargeVirtual):
(bmalloc::api::scavenge):
(bmalloc::api::isEnabled):
(bmalloc::api::setScavengerThreadQOSClass):
(bmalloc::api::commitAlignedPhysical):
(bmalloc::api::decommitAlignedPhysical):
(bmalloc::api::enableMiniMode):

Location: trunk/Source/bmalloc
Files: 15 edited

  • trunk/Source/bmalloc/ChangeLog

    (r241841 → r241847)

    + (adds the ChangeLog entry for this change; its text is identical to the Message above)
  • trunk/Source/bmalloc/bmalloc/Allocator.cpp

    (r241832 → r241847)

     {
         size_t oldSize = 0;
    -    switch (objectType(m_heap.kind(), object)) {
    +    switch (objectType(m_heap, object)) {
         case ObjectType::Small: {
    -        BASSERT(objectType(m_heap.kind(), nullptr) == ObjectType::Small);
    +        BASSERT(objectType(m_heap, nullptr) == ObjectType::Small);
             if (!object)
                 break;
  • trunk/Source/bmalloc/bmalloc/Cache.cpp

    (r241837 → r241847)

     namespace bmalloc {

    -static DebugHeap* debugHeapCache { nullptr };
    -
     void Cache::scavenge(HeapKind heapKind)
     {
    ...
     }

    -static BINLINE DebugHeap* debugHeap()
    -{
    -    if (debugHeapCache)
    -        return debugHeapCache;
    -    if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
    -        debugHeapCache = PerProcess<DebugHeap>::get();
    -        return debugHeapCache;
    -    }
    -    return nullptr;
    -}
    -
     Cache::Cache(HeapKind heapKind)
         : m_deallocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind))
    ...
     BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = false;
    -        return heap->malloc(size, crashOnFailure);
    +        return debugHeap->malloc(size, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryAllocate(size);
    ...
     BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = true;
    -        return heap->malloc(size, crashOnFailure);
    +        return debugHeap->malloc(size, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().allocate(size);
    ...
     BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = false;
    -        return heap->memalign(alignment, size, crashOnFailure);
    +        return debugHeap->memalign(alignment, size, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryAllocate(alignment, size);
    ...
     BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = true;
    -        return heap->memalign(alignment, size, crashOnFailure);
    +        return debugHeap->memalign(alignment, size, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().allocate(alignment, size);
    ...
     BNO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
     {
    -    if (auto* heap = debugHeap()) {
    -        heap->free(object);
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
    +        debugHeap->free(object);
             return;
         }
    ...
     BNO_INLINE void* Cache::tryReallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = false;
    -        return heap->realloc(object, newSize, crashOnFailure);
    +        return debugHeap->realloc(object, newSize, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().tryReallocate(object, newSize);
    ...
     BNO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
     {
    -    if (auto* heap = debugHeap()) {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
             constexpr bool crashOnFailure = true;
    -        return heap->realloc(object, newSize, crashOnFailure);
    +        return debugHeap->realloc(object, newSize, crashOnFailure);
         }
         return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(mapToActiveHeapKind(heapKind)).allocator().reallocate(object, newSize);
  • trunk/Source/bmalloc/bmalloc/DebugHeap.cpp

    (r241837 → r241847)

     namespace bmalloc {
    +
    +DebugHeap* debugHeapCache { nullptr };

     #if BOS(DARWIN)
  • trunk/Source/bmalloc/bmalloc/DebugHeap.h

    (r241837 → r241847)

     #pragma once

    +#include "Environment.h"
     #include "Mutex.h"
    +#include "PerProcess.h"
     #include <mutex>
     #include <unordered_map>
    ...
         void freeLarge(void* base);

    +    static DebugHeap* tryGet();
    +
     private:
     #if BOS(DARWIN)
    ...
     };

    +extern BEXPORT DebugHeap* debugHeapCache;
    +BINLINE DebugHeap* DebugHeap::tryGet()
    +{
    +    if (debugHeapCache)
    +        return debugHeapCache;
    +    if (PerProcess<Environment>::get()->isDebugHeapEnabled()) {
    +        debugHeapCache = PerProcess<DebugHeap>::get();
    +        return debugHeapCache;
    +    }
    +    return nullptr;
    +}
    +
     } // namespace bmalloc
  • trunk/Source/bmalloc/bmalloc/Environment.h

    (r230308 → r241847)

     class Environment {
     public:
    -    Environment(std::lock_guard<Mutex>&);
    +    BEXPORT Environment(std::lock_guard<Mutex>&);

         bool isDebugHeapEnabled() { return m_isDebugHeapEnabled; }
  • trunk/Source/bmalloc/bmalloc/Heap.cpp

    (r241305 → r241847)

         : m_kind(kind)
         , m_vmPageSizePhysical(vmPageSizePhysical())
    -    , m_debugHeap(nullptr)
     {
         RELEASE_BASSERT(vmPageSizePhysical() >= smallPageSize);
    ...
         initializePageMetadata();

    -    if (PerProcess<Environment>::get()->isDebugHeapEnabled())
    -        m_debugHeap = PerProcess<DebugHeap>::get();
    -    else {
    -        Gigacage::ensureGigacage();
    +    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
    +
    +    Gigacage::ensureGigacage();
     #if GIGACAGE_ENABLED
    -        if (usingGigacage()) {
    -            RELEASE_BASSERT(gigacageBasePtr());
    -            uint64_t random[2];
    -            cryptoRandom(reinterpret_cast<unsigned char*>(random), sizeof(random));
    -            size_t size = roundDownToMultipleOf(vmPageSize(), gigacageSize() - (random[0] % Gigacage::maximumCageSizeReductionForSlide));
    -            ptrdiff_t offset = roundDownToMultipleOf(vmPageSize(), random[1] % (gigacageSize() - size));
    -            void* base = reinterpret_cast<unsigned char*>(gigacageBasePtr()) + offset;
    -            m_largeFree.add(LargeRange(base, size, 0, 0));
    -        }
    +    if (usingGigacage()) {
    +        RELEASE_BASSERT(gigacageBasePtr());
    +        uint64_t random[2];
    +        cryptoRandom(reinterpret_cast<unsigned char*>(random), sizeof(random));
    +        size_t size = roundDownToMultipleOf(vmPageSize(), gigacageSize() - (random[0] % Gigacage::maximumCageSizeReductionForSlide));
    +        ptrdiff_t offset = roundDownToMultipleOf(vmPageSize(), random[1] % (gigacageSize() - size));
    +        void* base = reinterpret_cast<unsigned char*>(gigacageBasePtr()) + offset;
    +        m_largeFree.add(LargeRange(base, size, 0, 0));
    +    }
     #endif
    -    }

         m_scavenger = PerProcess<Scavenger>::get();
    ...
     size_t Heap::footprint()
     {
    -    BASSERT(!m_debugHeap);
         return m_footprint;
     }
    ...
         BASSERT(isPowerOfTwo(alignment));

    -    if (m_debugHeap)
    -        return m_debugHeap->memalignLarge(alignment, size);
    -
         m_scavenger->didStartGrowing();

    ...
     void Heap::deallocateLarge(std::unique_lock<Mutex>&, void* object)
     {
    -    if (m_debugHeap)
    -        return m_debugHeap->freeLarge(object);
    -
         size_t size = m_largeAllocated.remove(object);
         m_largeFree.add(LargeRange(object, size, size, size));
  • trunk/Source/bmalloc/bmalloc/Heap.h

    (r241305 → r241847)

         HeapKind kind() const { return m_kind; }

    -    DebugHeap* debugHeap() { return m_debugHeap; }
    -
         void allocateSmallBumpRanges(std::unique_lock<Mutex>&, size_t sizeClass,
             BumpAllocator&, BumpRangeCache&, LineCache&);
    ...

         Scavenger* m_scavenger { nullptr };
    -    DebugHeap* m_debugHeap { nullptr };

         size_t m_footprint { 0 };
  • trunk/Source/bmalloc/bmalloc/IsoTLS.cpp

    (r241837 → r241847)

     #include "IsoTLS.h"

    -#include "DebugHeap.h"
     #include "Environment.h"
     #include "Gigacage.h"
    ...
     IsoTLS::IsoTLS()
     {
    +    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());
     }

    ...
     }

    -bool IsoTLS::isUsingDebugHeap()
    -{
    -    return PerProcess<Environment>::get()->isDebugHeapEnabled();
    -}
    -
    -auto IsoTLS::debugMalloc(size_t size) -> DebugMallocResult
    -{
    -    DebugMallocResult result;
    -    if ((result.usingDebugHeap = isUsingDebugHeap())) {
    -        constexpr bool crashOnFailure = true;
    -        result.ptr = PerProcess<DebugHeap>::get()->malloc(size, crashOnFailure);
    -    }
    -    return result;
    -}
    -
    -bool IsoTLS::debugFree(void* p)
    -{
    -    if (isUsingDebugHeap()) {
    -        PerProcess<DebugHeap>::get()->free(p);
    -        return true;
    -    }
    -    return false;
    -}
    -
     void IsoTLS::determineMallocFallbackState()
     {
  • trunk/Source/bmalloc/bmalloc/IsoTLS.h

    (r229694 → r241847)

         BEXPORT static void determineMallocFallbackState();

    -    static bool isUsingDebugHeap();
    -
    -    struct DebugMallocResult {
    -        void* ptr { nullptr };
    -        bool usingDebugHeap { false };
    -    };
    -
    -    BEXPORT static DebugMallocResult debugMalloc(size_t);
    -    BEXPORT static bool debugFree(void*);
    -
         IsoTLSEntry* m_lastEntry { nullptr };
         unsigned m_extent { 0 };
  • trunk/Source/bmalloc/bmalloc/IsoTLSInlines.h

    (r230308 → r241847)

     #pragma once

    +#include "Environment.h"
     #include "IsoHeapImpl.h"
     #include "IsoTLS.h"
    ...
         }

    -    auto debugMallocResult = debugMalloc(Config::objectSize);
    -    if (debugMallocResult.usingDebugHeap)
    -        return debugMallocResult.ptr;
    +    // If debug heap is enabled, s_mallocFallbackState becomes MallocFallbackState::FallBackToMalloc.
    +    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());

         IsoTLS* tls = ensureHeapAndEntries(handle);
    ...
         }

    -    if (debugFree(p))
    -        return;
    +    // If debug heap is enabled, s_mallocFallbackState becomes MallocFallbackState::FallBackToMalloc.
    +    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());

         RELEASE_BASSERT(handle.isInitialized());
  • trunk/Source/bmalloc/bmalloc/ObjectType.cpp

    (r230501 → r241847)

     namespace bmalloc {

    -ObjectType objectType(HeapKind kind, void* object)
    +ObjectType objectType(Heap& heap, void* object)
     {
         if (mightBeLarge(object)) {
    ...

             std::unique_lock<Mutex> lock(Heap::mutex());
    -        if (PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).isLarge(lock, object))
    +        if (heap.isLarge(lock, object))
                 return ObjectType::Large;
         }
  • trunk/Source/bmalloc/bmalloc/ObjectType.h

    (r220118 → r241847)

     namespace bmalloc {

    +class Heap;
    +
     enum class ObjectType : unsigned char { Small, Large };

    -ObjectType objectType(HeapKind, void*);
    +ObjectType objectType(Heap&, void*);

     inline bool mightBeLarge(void* object)
  • trunk/Source/bmalloc/bmalloc/Scavenger.cpp

    (r241580 → r241847)

     Scavenger::Scavenger(std::lock_guard<Mutex>&)
     {
    -    if (PerProcess<Environment>::get()->isDebugHeapEnabled())
    -        return;
    +    BASSERT(!PerProcess<Environment>::get()->isDebugHeapEnabled());

     #if BOS(DARWIN)
  • trunk/Source/bmalloc/bmalloc/bmalloc.cpp

    (r241832 → r241847)

     #include "bmalloc.h"

    +#include "DebugHeap.h"
     #include "Environment.h"
     #include "PerProcess.h"
    ...
         RELEASE_BASSERT(size >= requestedSize);

    -    kind = mapToActiveHeapKind(kind);
    -    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
    +    void* result;
    +    if (auto* debugHeap = DebugHeap::tryGet())
    +        result = debugHeap->memalignLarge(alignment, size);
    +    else {
    +        kind = mapToActiveHeapKind(kind);
    +        Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);

    -    void* result;
    -    {
             std::unique_lock<Mutex> lock(Heap::mutex());
             result = heap.tryAllocateLarge(lock, alignment, size);
    ...
     void freeLargeVirtual(void* object, size_t size, HeapKind kind)
     {
    +    if (auto* debugHeap = DebugHeap::tryGet()) {
    +        debugHeap->freeLarge(object);
    +        return;
    +    }
         kind = mapToActiveHeapKind(kind);
         Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
    ...
         scavengeThisThread();

    -    PerProcess<Scavenger>::get()->scavenge();
    +    if (!DebugHeap::tryGet())
    +        PerProcess<Scavenger>::get()->scavenge();
     }

    ...
     void setScavengerThreadQOSClass(qos_class_t overrideClass)
     {
    +    if (DebugHeap::tryGet())
    +        return;
         std::unique_lock<Mutex> lock(Heap::mutex());
         PerProcess<Scavenger>::get()->setScavengerThreadQOSClass(overrideClass);
    ...
         vmValidatePhysical(object, size);
         vmAllocatePhysicalPages(object, size);
    -    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
    -    heap.externalCommit(object, size);
    +    if (!DebugHeap::tryGet())
    +        PerProcess<PerHeapKind<Heap>>::get()->at(kind).externalCommit(object, size);
     }

    ...
         vmValidatePhysical(object, size);
         vmDeallocatePhysicalPages(object, size);
    -    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
    -    heap.externalDecommit(object, size);
    +    if (!DebugHeap::tryGet())
    +        PerProcess<PerHeapKind<Heap>>::get()->at(kind).externalDecommit(object, size);
     }

     void enableMiniMode()
     {
    -    PerProcess<Scavenger>::get()->enableMiniMode();
    +    if (!DebugHeap::tryGet())
    +        PerProcess<Scavenger>::get()->enableMiniMode();
     }