Changeset 220118 in webkit


Timestamp: Aug 1, 2017, 6:50:16 PM
Author: fpizlo@apple.com
Message:

Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
https://bugs.webkit.org/show_bug.cgi?id=174727

Reviewed by Mark Lam.
Source/bmalloc:


This adds a mechanism for managing multiple isolated heaps in bmalloc. For now, these isoheaps
(isolated heaps) have a very simple relationship with each other and with the rest of bmalloc:

  • You have to choose how many isoheaps you will have statically. See numHeaps in HeapKind.h.
  • Because numHeaps is static, each isoheap gets fast thread-local allocation. Basically, we have a Cache for each heap kind.
  • Each isoheap gets its own Heap.
  • Each Heap gets a scavenger thread.
  • Some things, like Zone/VMHeap/Scavenger, are per-process.

Most of the per-HeapKind functionality is handled by PerHeapKind<>.
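
The PerHeapKind<> idea can be sketched roughly like this (names mirror bmalloc's HeapKind.h and PerHeapKind.h, but this is an illustrative sketch, not the actual implementation):

```cpp
#include <array>
#include <cassert>

// Illustrative sketch of PerHeapKind<>: one T per statically-known HeapKind.
enum class HeapKind { Primary, Gigacage };
constexpr unsigned numHeaps = 2;

template<typename T>
class PerHeapKind {
public:
    static constexpr unsigned size() { return numHeaps; }
    // Index by heap kind; the bound is a compile-time constant.
    T& at(HeapKind kind) { return m_data[static_cast<unsigned>(kind)]; }
    T& operator[](HeapKind kind) { return at(kind); }
private:
    std::array<T, numHeaps> m_data { };
};
```

Because numHeaps is known at compile time, a per-thread Cache can simply be a PerHeapKind<Cache> reached through fast thread-local storage, which is what makes per-isoheap allocation cheap.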

This approach is ideal for supporting special per-HeapKind behaviors. For now we have two heaps:
the Primary heap for normal malloc and the Gigacage. The Gigacage is a 64GB-aligned, 64GB virtual
region that we now use for variable-length random-access allocations. No Primary allocations will
go into the Gigacage.
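
Because the region is both 64GB in size and 64GB-aligned, cage membership reduces to a single unsigned comparison. A minimal sketch, with illustrative names (the real code lives in bmalloc/Gigacage.h):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative Gigacage region math: a 64GB region on a 64GB boundary.
constexpr uintptr_t gigacageSize = 64ull << 30; // 64GB

// A pointer is caged iff it lies in [base, base + gigacageSize).
// With an aligned base this is one subtraction and one unsigned compare;
// an out-of-range pointer underflows to a huge value and fails the test.
inline bool isCaged(uintptr_t base, uintptr_t ptr)
{
    return ptr - base < gigacageSize;
}
```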

  • CMakeLists.txt:
  • bmalloc.xcodeproj/project.pbxproj:
  • bmalloc/AllocationKind.h: Added.
  • bmalloc/Allocator.cpp:

(bmalloc::Allocator::Allocator):
(bmalloc::Allocator::tryAllocate):
(bmalloc::Allocator::allocateImpl):
(bmalloc::Allocator::reallocate):
(bmalloc::Allocator::refillAllocatorSlowCase):
(bmalloc::Allocator::allocateLarge):

  • bmalloc/Allocator.h:
  • bmalloc/BExport.h: Added.
  • bmalloc/Cache.cpp:

(bmalloc::Cache::scavenge):
(bmalloc::Cache::Cache):
(bmalloc::Cache::tryAllocateSlowCaseNullCache):
(bmalloc::Cache::allocateSlowCaseNullCache):
(bmalloc::Cache::deallocateSlowCaseNullCache):
(bmalloc::Cache::reallocateSlowCaseNullCache):
(bmalloc::Cache::operator new): Deleted.
(bmalloc::Cache::operator delete): Deleted.

  • bmalloc/Cache.h:

(bmalloc::Cache::tryAllocate):
(bmalloc::Cache::allocate):
(bmalloc::Cache::deallocate):
(bmalloc::Cache::reallocate):

  • bmalloc/Deallocator.cpp:

(bmalloc::Deallocator::Deallocator):
(bmalloc::Deallocator::scavenge):
(bmalloc::Deallocator::processObjectLog):
(bmalloc::Deallocator::deallocateSlowCase):

  • bmalloc/Deallocator.h:
  • bmalloc/Gigacage.cpp: Added.

(Gigacage::Callback::Callback):
(Gigacage::Callback::function):
(Gigacage::Callbacks::Callbacks):
(Gigacage::ensureGigacage):
(Gigacage::disableGigacage):
(Gigacage::addDisableCallback):
(Gigacage::removeDisableCallback):

  • bmalloc/Gigacage.h: Added.

(Gigacage::caged):
(Gigacage::isCaged):

  • bmalloc/Heap.cpp:

(bmalloc::Heap::Heap):
(bmalloc::Heap::usingGigacage):
(bmalloc::Heap::concurrentScavenge):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):

  • bmalloc/Heap.h:

(bmalloc::Heap::mutex):
(bmalloc::Heap::kind const):
(bmalloc::Heap::setScavengerThreadQOSClass): Deleted.

  • bmalloc/HeapKind.h: Added.
  • bmalloc/ObjectType.cpp:

(bmalloc::objectType):

  • bmalloc/ObjectType.h:
  • bmalloc/PerHeapKind.h: Added.

(bmalloc::PerHeapKindBase::PerHeapKindBase):
(bmalloc::PerHeapKindBase::size):
(bmalloc::PerHeapKindBase::at):
(bmalloc::PerHeapKindBase::at const):
(bmalloc::PerHeapKindBase::operator[]):
(bmalloc::PerHeapKindBase::operator[] const):
(bmalloc::StaticPerHeapKind::StaticPerHeapKind):
(bmalloc::PerHeapKind::PerHeapKind):
(bmalloc::PerHeapKind::~PerHeapKind):

  • bmalloc/PerThread.h:

(bmalloc::PerThread<T>::destructor):
(bmalloc::PerThread<T>::getSlowCase):
(bmalloc::PerThreadStorage<Cache>::get): Deleted.
(bmalloc::PerThreadStorage<Cache>::init): Deleted.

  • bmalloc/Scavenger.cpp: Added.

(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::scavenge):

  • bmalloc/Scavenger.h: Added.

(bmalloc::Scavenger::setScavengerThreadQOSClass):
(bmalloc::Scavenger::requestedScavengerThreadQOSClass const):

  • bmalloc/VMHeap.cpp:

(bmalloc::VMHeap::VMHeap):
(bmalloc::VMHeap::tryAllocateLargeChunk):

  • bmalloc/VMHeap.h:
  • bmalloc/Zone.cpp:

(bmalloc::Zone::Zone):

  • bmalloc/Zone.h:
  • bmalloc/bmalloc.h:

(bmalloc::api::tryMalloc):
(bmalloc::api::malloc):
(bmalloc::api::tryMemalign):
(bmalloc::api::memalign):
(bmalloc::api::realloc):
(bmalloc::api::tryLargeMemalignVirtual):
(bmalloc::api::free):
(bmalloc::api::freeLargeVirtual):
(bmalloc::api::scavengeThisThread):
(bmalloc::api::scavenge):
(bmalloc::api::isEnabled):
(bmalloc::api::setScavengerThreadQOSClass):

  • bmalloc/mbmalloc.cpp:

Source/JavaScriptCore:


This adopts the Gigacage for the GigacageSubspace, which we use for Auxiliary allocations. Also, in
one place in the code - the FTL codegen for butterfly and typed array access - we "cage" the accesses
themselves. Basically, we do masking to ensure that the pointer points into the gigacage.

This is neutral on JetStream.
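
The masking done at access sites can be sketched as a branchless mask-and-merge (a sketch with illustrative constants; the real lowering is LowerDFGToB3::caged(), which emits the equivalent B3 IR):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the masking emitted before caged butterfly / typed-array
// accesses. Rather than branching on a check, the pointer's high bits are
// replaced with the cage base: a corrupted pointer can still be wrong, but
// it can only reach memory inside the 64GB cage.
constexpr uintptr_t cageSize = 64ull << 30;     // 64GB, 64GB-aligned region
constexpr uintptr_t offsetMask = cageSize - 1;  // keeps the low 36 bits

inline uintptr_t caged(uintptr_t cageBase, uintptr_t ptr)
{
    return cageBase | (ptr & offsetMask); // branchless: one AND, one OR
}
```

In-cage pointers pass through unchanged (their high bits already equal the base's), which is why the caging is cheap enough to be performance-neutral.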

  • CMakeLists.txt:
  • JavaScriptCore.xcodeproj/project.pbxproj:
  • b3/B3InsertionSet.cpp:

(JSC::B3::InsertionSet::execute):

  • dfg/DFGAbstractInterpreterInlines.h:

(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):

  • dfg/DFGArgumentsEliminationPhase.cpp:
  • dfg/DFGClobberize.cpp:

(JSC::DFG::readsOverlap):

  • dfg/DFGClobberize.h:

(JSC::DFG::clobberize):

  • dfg/DFGDoesGC.cpp:

(JSC::DFG::doesGC):

  • dfg/DFGFixedButterflyAccessUncagingPhase.cpp: Added.

(JSC::DFG::performFixedButterflyAccessUncaging):

  • dfg/DFGFixedButterflyAccessUncagingPhase.h: Added.
  • dfg/DFGFixupPhase.cpp:

(JSC::DFG::FixupPhase::fixupNode):

  • dfg/DFGHeapLocation.cpp:

(WTF::printInternal):

  • dfg/DFGHeapLocation.h:
  • dfg/DFGNodeType.h:
  • dfg/DFGPlan.cpp:

(JSC::DFG::Plan::compileInThreadImpl):

  • dfg/DFGPredictionPropagationPhase.cpp:
  • dfg/DFGSafeToExecute.h:

(JSC::DFG::safeToExecute):

  • dfg/DFGSpeculativeJIT.cpp:

(JSC::DFG::SpeculativeJIT::compileGetButterfly):

  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGTypeCheckHoistingPhase.cpp:

(JSC::DFG::TypeCheckHoistingPhase::identifyRedundantStructureChecks):
(JSC::DFG::TypeCheckHoistingPhase::identifyRedundantArrayChecks):

  • ftl/FTLCapabilities.cpp:

(JSC::FTL::canCompile):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileGetButterfly):
(JSC::FTL::DFG::LowerDFGToB3::compileGetIndexedPropertyStorage):
(JSC::FTL::DFG::LowerDFGToB3::compileGetByVal):
(JSC::FTL::DFG::LowerDFGToB3::compileStringCharAt):
(JSC::FTL::DFG::LowerDFGToB3::compileStringCharCodeAt):
(JSC::FTL::DFG::LowerDFGToB3::compileGetMapBucket):
(JSC::FTL::DFG::LowerDFGToB3::compileGetDirectPname):
(JSC::FTL::DFG::LowerDFGToB3::compileToLowerCase):
(JSC::FTL::DFG::LowerDFGToB3::caged):

  • heap/GigacageSubspace.cpp: Added.

(JSC::GigacageSubspace::GigacageSubspace):
(JSC::GigacageSubspace::~GigacageSubspace):
(JSC::GigacageSubspace::tryAllocateAlignedMemory):
(JSC::GigacageSubspace::freeAlignedMemory):
(JSC::GigacageSubspace::canTradeBlocksWith):

  • heap/GigacageSubspace.h: Added.
  • heap/Heap.cpp:

(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::finalize):
(JSC::Heap::sweepInFinalize):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::shouldDoFullCollection):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::reportWebAssemblyFastMemoriesAllocated): Deleted.
(JSC::Heap::webAssemblyFastMemoriesThisCycleAtThreshold const): Deleted.
(JSC::Heap::sweepLargeAllocations): Deleted.
(JSC::Heap::didAllocateWebAssemblyFastMemories): Deleted.

  • heap/Heap.h:
  • heap/LargeAllocation.cpp:

(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::destroy):

  • heap/MarkedAllocator.cpp:

(JSC::MarkedAllocator::tryAllocateWithoutCollecting):
(JSC::MarkedAllocator::tryAllocateBlock):

  • heap/MarkedBlock.cpp:

(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::~Handle):
(JSC::MarkedBlock::Handle::didAddToAllocator):
(JSC::MarkedBlock::Handle::subspace const): Deleted.

  • heap/MarkedBlock.h:

(JSC::MarkedBlock::Handle::subspace const):

  • heap/MarkedSpace.cpp:

(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::freeMemory):
(JSC::MarkedSpace::prepareForAllocation):
(JSC::MarkedSpace::addMarkedAllocator):
(JSC::MarkedSpace::findEmptyBlockToSteal): Deleted.

  • heap/MarkedSpace.h:

(JSC::MarkedSpace::firstAllocator const):
(JSC::MarkedSpace::allocatorForEmptyAllocation const): Deleted.

  • heap/Subspace.cpp:

(JSC::Subspace::Subspace):
(JSC::Subspace::canTradeBlocksWith):
(JSC::Subspace::tryAllocateAlignedMemory):
(JSC::Subspace::freeAlignedMemory):
(JSC::Subspace::prepareForAllocation):
(JSC::Subspace::findEmptyBlockToSteal):

  • heap/Subspace.h:

(JSC::Subspace::didCreateFirstAllocator):

  • heap/SubspaceInlines.h:

(JSC::Subspace::forEachAllocator):
(JSC::Subspace::forEachMarkedBlock):
(JSC::Subspace::forEachNotEmptyMarkedBlock):

  • jit/JITPropertyAccess.cpp:

(JSC::JIT::emitDoubleLoad):
(JSC::JIT::emitContiguousLoad):
(JSC::JIT::emitArrayStorageLoad):
(JSC::JIT::emitGenericContiguousPutByVal):
(JSC::JIT::emitArrayStoragePutByVal):
(JSC::JIT::emit_op_get_from_scope):
(JSC::JIT::emit_op_put_to_scope):
(JSC::JIT::emitIntTypedArrayGetByVal):
(JSC::JIT::emitFloatTypedArrayGetByVal):
(JSC::JIT::emitIntTypedArrayPutByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):

  • jsc.cpp:

(fillBufferWithContentsOfFile):
(functionReadFile):
(gigacageDisabled):
(jscmain):

  • llint/LowLevelInterpreter64.asm:
  • runtime/ArrayBuffer.cpp:

(JSC::ArrayBufferContents::tryAllocate):
(JSC::ArrayBuffer::createAdopted):
(JSC::ArrayBuffer::createFromBytes):
(JSC::ArrayBuffer::tryCreate):

  • runtime/IndexingHeader.h:
  • runtime/InitializeThreading.cpp:

(JSC::initializeThreading):

  • runtime/JSArrayBuffer.cpp:
  • runtime/JSArrayBufferView.cpp:

(JSC::JSArrayBufferView::ConstructionContext::ConstructionContext):
(JSC::JSArrayBufferView::finalize):

  • runtime/JSLock.cpp:

(JSC::JSLock::didAcquireLock):

  • runtime/JSObject.h:
  • runtime/Options.cpp:

(JSC::recomputeDependentOptions):

  • runtime/Options.h:
  • runtime/ScopedArgumentsTable.h:
  • runtime/VM.cpp:

(JSC::VM::VM):
(JSC::VM::~VM):
(JSC::VM::gigacageDisabledCallback):
(JSC::VM::gigacageDisabled):

  • runtime/VM.h:

(JSC::VM::fireGigacageEnabledIfNecessary):
(JSC::VM::gigacageEnabled):

  • wasm/WasmB3IRGenerator.cpp:

(JSC::Wasm::B3IRGenerator::B3IRGenerator):
(JSC::Wasm::B3IRGenerator::emitCheckAndPreparePointer):

  • wasm/WasmCodeBlock.cpp:

(JSC::Wasm::CodeBlock::isSafeToRun):

  • wasm/WasmMemory.cpp:

(JSC::Wasm::makeString):
(JSC::Wasm::Memory::create):
(JSC::Wasm::Memory::~Memory):
(JSC::Wasm::Memory::addressIsInActiveFastMemory):
(JSC::Wasm::Memory::grow):
(JSC::Wasm::Memory::initializePreallocations): Deleted.
(JSC::Wasm::Memory::maxFastMemoryCount): Deleted.

  • wasm/WasmMemory.h:
  • wasm/js/JSWebAssemblyInstance.cpp:

(JSC::JSWebAssemblyInstance::create):

  • wasm/js/JSWebAssemblyMemory.cpp:

(JSC::JSWebAssemblyMemory::grow):
(JSC::JSWebAssemblyMemory::finishCreation):

  • wasm/js/JSWebAssemblyMemory.h:

(JSC::JSWebAssemblyMemory::subspaceFor):

Source/WebCore:

Needed to teach Metal how to allocate in the Gigacage.

No new tests because no change in behavior.

  • platform/graphics/cocoa/GPUBufferMetal.mm:

(WebCore::GPUBuffer::GPUBuffer):
(WebCore::GPUBuffer::contents):

Source/WebKit:


The WebProcess should never disable the Gigacage, which would happen if it allocated typed arrays
outside the Gigacage. So we add a disable callback that crashes the process.
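
That enforcement can be sketched like this (illustrative only; the real registration API is Gigacage::addDisableCallback from bmalloc, and the real WebKit callback crashes via the usual WebKit crash machinery):

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <vector>

// Sketch of the disable-callback mechanism: the WebProcess registers a
// callback that never returns, so anything that would disable the Gigacage
// kills the process instead of silently weakening the mitigation.
static std::vector<std::function<void()>> disableCallbacks;

inline void addDisableCallback(std::function<void()> callback)
{
    disableCallbacks.push_back(std::move(callback));
}

inline void disableGigacage()
{
    for (auto& callback : disableCallbacks)
        callback(); // the WebProcess's callback never returns
}

// What the WebProcess would register:
inline void gigacageDisabled()
{
    std::fputs("Gigacage disabled in WebProcess; crashing.\n", stderr);
    std::abort();
}
```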

  • WebProcess/WebProcess.cpp:

(WebKit::gigacageDisabled):
(WebKit::m_webSQLiteDatabaseTracker):

Source/WTF:


For the Gigacage project to have minimal impact, we need to have some abstraction that allows code to
avoid having to guard itself with #if's. This adds a Gigacage abstraction that overlays the Gigacage
namespace from bmalloc, which always lets you call things like Gigacage::caged and Gigacage::tryMalloc.

Because of how many places need to possibly allocate in a gigacage, or possibly perform caged accesses,
it's better to hide the question of whether or not it's enabled inside this API.
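
The shape of that overlay can be sketched like this (GIGACAGE_ENABLED and the fallback bodies are illustrative stand-ins; the real header is wtf/Gigacage.h):

```cpp
#include <cassert>
#include <cstdlib>

// Sketch of the WTF-level overlay: callers always write Gigacage::tryMalloc
// and Gigacage::caged, and whether a real cage backs them is hidden behind
// the API. GIGACAGE_ENABLED stands in for the real platform condition.
#define GIGACAGE_ENABLED 0

namespace Gigacage {

#if GIGACAGE_ENABLED
// Forward to bmalloc's Gigacage namespace (not shown in this sketch).
#else
// No cage in this configuration: allocation falls back to plain malloc and
// caging a pointer is the identity, so callers need no #if guards.
inline void* tryMalloc(std::size_t size) { return std::malloc(size); }
inline void free(void* p) { std::free(p); }
template<typename T> inline T* caged(T* ptr) { return ptr; }
#endif

} // namespace Gigacage
```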

  • WTF.xcodeproj/project.pbxproj:
  • wtf/CMakeLists.txt:
  • wtf/FastMalloc.cpp:
  • wtf/Gigacage.cpp: Added.

(Gigacage::tryMalloc):
(Gigacage::tryAllocateVirtualPages):
(Gigacage::freeVirtualPages):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):

  • wtf/Gigacage.h: Added.

(Gigacage::ensureGigacage):
(Gigacage::disableGigacage):
(Gigacage::addDisableCallback):
(Gigacage::removeDisableCallback):
(Gigacage::caged):
(Gigacage::isCaged):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):

Location: trunk
Files: 14 added, 84 edited

  • trunk/JSTests/wasm/stress/oom.js

    r215662 r220118
    +// We don't need N versions of this simultaneously filling up RAM.
    +//@ runDefault
    +
     const verbose = false;
 
  • trunk/Source/JavaScriptCore/CMakeLists.txt

    r219981 r220118
         dfg/DFGFailedFinalizer.cpp
         dfg/DFGFinalizer.cpp
    +    dfg/DFGFixedButterflyAccessUncagingPhase.cpp
         dfg/DFGFixupPhase.cpp
         dfg/DFGFlowIndexing.cpp

         heap/GCLogging.cpp
         heap/GCRequest.cpp
    +    heap/GigacageSubspace.cpp
         heap/HandleSet.cpp
         heap/HandleStack.cpp
  • trunk/Source/JavaScriptCore/ChangeLog

  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r219981 r220118
     0F5AE2C41DF4F2800066EFE1 /* VMInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = FE90BB3A1B7CF64E006B3F03 /* VMInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
     0F5B4A331C84F0D600F1B17E /* SlowPathReturnType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5B4A321C84F0D600F1B17E /* SlowPathReturnType.h */; settings = {ATTRIBUTES = (Private, ); }; };
    +0F5BF1561F22EB170029D91D /* GigacageSubspace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */; };
    +0F5BF1571F22EB170029D91D /* GigacageSubspace.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1551F22EB170029D91D /* GigacageSubspace.h */; settings = {ATTRIBUTES = (Private, ); }; };
     0F5BF1631F2317120029D91D /* B3HoistLoopInvariantValues.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1611F2317120029D91D /* B3HoistLoopInvariantValues.cpp */; };
     0F5BF1641F2317120029D91D /* B3HoistLoopInvariantValues.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1621F2317120029D91D /* B3HoistLoopInvariantValues.h */; };

     0FD8A32B17D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD8A32317D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp */; };
     0FD8A32C17D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD8A32417D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h */; };
    +0FD9EA881F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */; };
    +0FD9EA891F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */; };
     0FDB2CC9173DA520007B3C1B /* FTLAbbreviatedTypes.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */; settings = {ATTRIBUTES = (Private, ); }; };
     0FDB2CCA173DA523007B3C1B /* FTLValueFromBlock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */; settings = {ATTRIBUTES = (Private, ); }; };

     0F5A6282188C98D40072C9DF /* FTLValueRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLValueRange.h; path = ftl/FTLValueRange.h; sourceTree = "<group>"; };
     0F5B4A321C84F0D600F1B17E /* SlowPathReturnType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SlowPathReturnType.h; sourceTree = "<group>"; };
    +0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = GigacageSubspace.cpp; sourceTree = "<group>"; };
    +0F5BF1551F22EB170029D91D /* GigacageSubspace.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = GigacageSubspace.h; sourceTree = "<group>"; };
     0F5BF1611F2317120029D91D /* B3HoistLoopInvariantValues.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = B3HoistLoopInvariantValues.cpp; path = b3/B3HoistLoopInvariantValues.cpp; sourceTree = "<group>"; };
     0F5BF1621F2317120029D91D /* B3HoistLoopInvariantValues.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = B3HoistLoopInvariantValues.h; path = b3/B3HoistLoopInvariantValues.h; sourceTree = "<group>"; };

     0FD8A32317D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGToFTLForOSREntryDeferredCompilationCallback.cpp; path = dfg/DFGToFTLForOSREntryDeferredCompilationCallback.cpp; sourceTree = "<group>"; };
     0FD8A32417D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGToFTLForOSREntryDeferredCompilationCallback.h; path = dfg/DFGToFTLForOSREntryDeferredCompilationCallback.h; sourceTree = "<group>"; };
    +0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = DFGFixedButterflyAccessUncagingPhase.cpp; path = dfg/DFGFixedButterflyAccessUncagingPhase.cpp; sourceTree = "<group>"; };
    +0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = DFGFixedButterflyAccessUncagingPhase.h; path = dfg/DFGFixedButterflyAccessUncagingPhase.h; sourceTree = "<group>"; };
     0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLAbbreviatedTypes.h; path = ftl/FTLAbbreviatedTypes.h; sourceTree = "<group>"; };
     0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLValueFromBlock.h; path = ftl/FTLValueFromBlock.h; sourceTree = "<group>"; };

     2A343F7718A1749D0039B085 /* GCSegmentedArrayInlines.h */,
     0F86A26E1D6F7B3100CB0C92 /* GCTypeMap.h */,
    +0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */,
    +0F5BF1551F22EB170029D91D /* GigacageSubspace.h */,
     142E312B134FF0A600AFADB5 /* Handle.h */,
     C28318FF16FE4B7D00157BFD /* HandleBlock.h */,

     A78A976E179738B8009DF744 /* DFGFinalizer.cpp */,
     A78A976F179738B8009DF744 /* DFGFinalizer.h */,
    +0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */,
    +0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */,
     0F2BDC12151C5D4A00CD8910 /* DFGFixupPhase.cpp */,
     0F2BDC13151C5D4A00CD8910 /* DFGFixupPhase.h */,

     0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */,
     0FA762051DB9242900B7A2FD /* CollectionScope.h in Headers */,
    +0FD9EA891F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h in Headers */,
     0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */,
     A53243981856A489002ED692 /* CombinedDomains.json in Headers */,

     FE3022D71E42857300BAC493 /* VMInspector.h in Headers */,
     FE6F56DE1E64EAD600D17801 /* VMTraps.h in Headers */,
    +0F5BF1571F22EB170029D91D /* GigacageSubspace.h in Headers */,
     53F40E931D5A4AB30099A1B6 /* WasmB3IRGenerator.h in Headers */,
     53CA730A1EA533D80076049D /* WasmBBQPlan.h in Headers */,

     0F5D085D1B8CF99D001143B4 /* DFGNodeOrigin.cpp in Sources */,
     0F2B9CE619D0BA7D00B1D1B5 /* DFGObjectAllocationSinkingPhase.cpp in Sources */,
    +0FD9EA881F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp in Sources */,
     0F2B9CE819D0BA7D00B1D1B5 /* DFGObjectMaterializationData.cpp in Sources */,
     86EC9DCF1328DF82002B2AD7 /* DFGOperations.cpp in Sources */,

     14469DEC107EC7E700650446 /* StringObject.cpp in Sources */,
     14469DED107EC7E700650446 /* StringPrototype.cpp in Sources */,
    +0F5BF1561F22EB170029D91D /* GigacageSubspace.cpp in Sources */,
     9335F24D12E6765B002B5553 /* StringRecursionChecker.cpp in Sources */,
     BCDE3B430E6C832D001453A7 /* Structure.cpp in Sources */,
  • trunk/Source/JavaScriptCore/b3/B3InsertionSet.cpp

    r213714 → r220118

 void InsertionSet::execute(BasicBlock* block)
 {
+    for (Insertion& insertion : m_insertions)
+        insertion.element()->owner = block;
     bubbleSort(m_insertions.begin(), m_insertions.end());
     executeInsertions(block->m_values, m_insertions);
  • trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h

    r219981 → r220118

         break;
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case AllocatePropertyStorage:
     case ReallocatePropertyStorage:
  • trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp

    r219997 → r220118

                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                     // This barely works. The danger is that the GetButterfly is used by something that
                     // does something escaping to a candidate. Fortunately, the only butterfly-using ops
  • trunk/Source/JavaScriptCore/dfg/DFGClobberize.h

    r219981 → r220118

         return;

+    case GetButterflyWithoutCaging:
+        read(JSObject_butterfly);
+        def(HeapLocation(ButterflyWithoutCagingLoc, JSObject_butterfly, node->child1()), LazyNode(node));
+        return;
+
     case CheckSubClass:
         def(PureValue(node, node->classInfo()));
  • trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp

    r218084 → r220118

     case GetExecutable:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case CheckSubClass:
     case CheckArray:
  • trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp

    r219981 → r220118

         case CheckCell:
         case CreateThis:
-        case GetButterfly: {
+        case GetButterfly:
+        case GetButterflyWithoutCaging: {
             fixEdge<CellUse>(node->child1());
             break;
  • trunk/Source/JavaScriptCore/dfg/DFGHeapLocation.cpp

    r217202 → r220118

         return;

+    case ButterflyWithoutCagingLoc:
+        out.print("ButterflyWithoutCagingLoc");
+        return;
+
     case CheckTypeInfoFlagsLoc:
         out.print("CheckTypeInfoFlagsLoc");
  • trunk/Source/JavaScriptCore/dfg/DFGHeapLocation.h

    r217202 → r220118

     VectorLengthLoc,
     ButterflyLoc,
+    ButterflyWithoutCagingLoc,
     CheckTypeInfoFlagsLoc,
     OverridesHasInstanceLoc,
  • trunk/Source/JavaScriptCore/dfg/DFGNodeType.h

    r218084 → r220118

     macro(ReallocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \
     macro(GetButterfly, NodeResultStorage) \
+    macro(GetButterflyWithoutCaging, NodeResultStorage) \
     macro(NukeStructureAndSetButterfly, NodeMustGenerate) \
     macro(CheckArray, NodeMustGenerate) \
  • trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp

    r216815 → r220118

 #include "DFGDCEPhase.h"
 #include "DFGFailedFinalizer.h"
+#include "DFGFixedButterflyAccessUncagingPhase.h"
 #include "DFGFixupPhase.h"
 #include "DFGGraphSafepoint.h"

         RUN_PHASE(performGlobalStoreBarrierInsertion);
         RUN_PHASE(performStoreBarrierClustering);
+        RUN_PHASE(performFixedButterflyAccessUncaging);
         if (Options::useMovHintRemoval())
             RUN_PHASE(performMovHintRemoval);
  • trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp

    r218084 → r220118

         }
         case GetButterfly:
+        case GetButterflyWithoutCaging:
         case GetIndexedPropertyStorage:
         case AllocatePropertyStorage:
  • trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h

    r218084 → r220118

     case GetExecutable:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case CallDOMGetter:
     case CallDOM:
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp

    r219981 → r220118

     m_jit.loadPtr(JITCompiler::Address(baseGPR, JSObject::butterflyOffset()), resultGPR);
+
+    // FIXME: Implement caging!
+    // https://bugs.webkit.org/show_bug.cgi?id=174918

     storageResult(resultGPR, node);
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp

    r218729 → r220118

     case GetButterfly:
+    case GetButterflyWithoutCaging:
         compileGetButterfly(node);
         break;
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

    r218729 → r220118

     case GetButterfly:
+    case GetButterflyWithoutCaging:
         compileGetButterfly(node);
         break;
  • trunk/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp

    r211237 → r220118

                 case NukeStructureAndSetButterfly:
                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                 case GetByVal:
                 case PutByValDirect:

                 case ReallocatePropertyStorage:
                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                 case GetByVal:
                 case PutByValDirect:
  • trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp

    r218084 → r220118

     case PutStructure:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case NewObject:
     case NewArray:
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp

    r219981 → r220118

 #include <unordered_set>
 #include <wtf/Box.h>
+#include <wtf/Gigacage.h>

 namespace JSC { namespace FTL {

             break;
         case GetButterfly:
+        case GetButterflyWithoutCaging:
             compileGetButterfly();
             break;

     void compileGetButterfly()
     {
-        setStorage(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSObject_butterfly));
+        LValue butterfly = m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSObject_butterfly);
+        if (m_node->op() != GetButterflyWithoutCaging)
+            butterfly = caged(butterfly);
+        setStorage(butterfly);
     }

         DFG_ASSERT(m_graph, m_node, isTypedView(m_node->arrayMode().typedArrayType()));
-        setStorage(m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector));
+        setStorage(caged(m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector)));
     }

                     m_out.load32NonNegative(base, m_heaps.DirectArguments_length)));

+            // FIXME: I guess we need to cage DirectArguments?
+            // https://bugs.webkit.org/show_bug.cgi?id=174920
             TypedPointer address = m_out.baseIndex(
                 m_heaps.DirectArguments_storage, base, m_out.zeroExtPtr(index));

             LValue arguments = m_out.loadPtr(table, m_heaps.ScopedArgumentsTable_arguments);

+            // FIXME: I guess we need to cage ScopedArguments?
+            // https://bugs.webkit.org/show_bug.cgi?id=174921
             TypedPointer address = m_out.baseIndex(
                 m_heaps.scopedArgumentsTableArguments, arguments, m_out.zeroExtPtr(index));

                 m_out.equal(scopeOffset, m_out.constInt32(ScopeOffset::invalidOffset)));

+            // FIXME: I guess we need to cage JSEnvironmentRecord?
+            // https://bugs.webkit.org/show_bug.cgi?id=174922
             address = m_out.baseIndex(
                 m_heaps.JSEnvironmentRecord_variables, scope, m_out.zeroExtPtr(scopeOffset));

             m_out.appendTo(overflowCase, continuation);

+            // FIXME: I guess we need to cage overflow storage?
+            // https://bugs.webkit.org/show_bug.cgi?id=174923
             address = m_out.baseIndex(
                 m_heaps.ScopedArguments_overflowStorage, base,

         m_out.appendTo(is8Bit, is16Bit);

+        // FIXME: Need to cage strings!
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         ValueFromBlock char8Bit = m_out.anchor(
             m_out.load8ZeroExt32(m_out.baseIndex(

         LBasicBlock lastNext = m_out.appendTo(is8Bit, is16Bit);

+        // FIXME: need to cage strings!
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         ValueFromBlock char8Bit = m_out.anchor(
             m_out.load8ZeroExt32(m_out.baseIndex(

         LValue unmaskedIndex = m_out.phi(Int32, indexStart);
         LValue index = m_out.bitAnd(mask, unmaskedIndex);
+        // FIXME: I think these buffers are caged?
+        // https://bugs.webkit.org/show_bug.cgi?id=174925
         LValue hashMapBucket = m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), buffer, m_out.zeroExt(index, Int64), ScaleEight));
         ValueFromBlock bucketResult = m_out.anchor(hashMapBucket);

         int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue);
         ValueFromBlock outOfLineResult = m_out.anchor(
-            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), storage, realIndex, ScaleEight, offsetOfFirstProperty)));
+            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), caged(storage), realIndex, ScaleEight, offsetOfFirstProperty)));
         m_out.jump(continuation);

         m_out.appendTo(loopBody, slowPath);

+        // FIXME: Strings needs to be caged.
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         LValue byte = m_out.load8ZeroExt32(m_out.baseIndex(m_heaps.characters8, buffer, m_out.zeroExtPtr(index)));
         LValue isInvalidAsciiRange = m_out.bitAnd(byte, m_out.constInt32(~0x7F));

             m_out.appendTo(performStore, lastNext);
         }
+    }
+
+    LValue caged(LValue ptr)
+    {
+        if (vm().gigacageEnabled().isStillValid()) {
+            m_graph.watchpoints().addLazily(vm().gigacageEnabled());
+
+            LValue basePtr = m_out.constIntPtr(g_gigacageBasePtr);
+            LValue mask = m_out.constIntPtr(GIGACAGE_MASK);
+
+            // We don't have to worry about B3 messing up the bitAnd. Also, we want to get B3's excellent
+            // codegen for 2-operand andq on x86-64.
+            LValue masked = m_out.bitAnd(ptr, mask);
+
+            // But B3 will currently mess up the code generation of this add. Basically, any offset from what we
+            // compute here will get reassociated and folded with g_gigacageBasePtr. There's a world in which
+            // moveConstants() observes that it needs to reassociate in order to hoist the big constants. But
+            // it's much easier to just block B3's badness here. That's what we do for now.
+            PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
+            patchpoint->appendSomeRegister(basePtr);
+            patchpoint->appendSomeRegister(masked);
+            patchpoint->setGenerator(
+                [] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+                    jit.addPtr(params[1].gpr(), params[2].gpr(), params[0].gpr());
+                });
+            patchpoint->effects = Effects::none();
+            return patchpoint;
+        }
+
+        return ptr;
     }
  • trunk/Source/JavaScriptCore/heap/Heap.cpp

    r220069 → r220118

     , m_sizeBeforeLastEdenCollect(0)
     , m_bytesAllocatedThisCycle(0)
-    , m_webAssemblyFastMemoriesAllocatedThisCycle(0)
     , m_bytesAbandonedSinceLastFullCollect(0)
     , m_maxEdenSize(m_minBytesPerCycle)

     sweepAllLogicallyEmptyWeakBlocks();

+    m_objectSpace.freeMemory();
+
     if (Options::logGC())
         dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");

     m_deprecatedExtraMemorySize = UNLIKELY(checkedNewSize.hasOverflowed()) ? std::numeric_limits<size_t>::max() : checkedNewSize.unsafeGet();
     reportExtraMemoryAllocatedSlowCase(size);
-}
-
-void Heap::reportWebAssemblyFastMemoriesAllocated(size_t count)
-{
-    didAllocateWebAssemblyFastMemories(count);
-    collectIfNecessaryOrDefer();
-}
-
-bool Heap::webAssemblyFastMemoriesThisCycleAtThreshold() const
-{
-    // WebAssembly fast memories use large amounts of virtual memory and we
-    // don't know how many can exist in this process. We keep track of the most
-    // fast memories that have existed at any point in time. The GC uses this
-    // top watermark as an indication of whether recent allocations should cause
-    // a collection: get too close and we may be close to the actual limit.
-    size_t fastMemoryThreshold = std::max<size_t>(1, Wasm::Memory::maxFastMemoryCount() / 2);
-    return m_webAssemblyFastMemoriesAllocatedThisCycle > fastMemoryThreshold;
-}
 }

     {
-        SweepingScope helpingGCScope(*this);
+        SweepingScope sweepingScope(*this);
         deleteUnmarkedCompiledCode();
         deleteSourceProviderCaches();
-        sweepLargeAllocations();
+        sweepInFinalize();
     }

 }

-void Heap::sweepLargeAllocations()
+void Heap::sweepInFinalize()
 {
     m_objectSpace.sweepLargeAllocations();
+
+    auto sweepBlock = [&] (MarkedBlock::Handle* handle) {
+        handle->sweep(nullptr);
+    };
+
+    vm()->eagerlySweptDestructibleObjectSpace.forEachMarkedBlock(sweepBlock);
 }

         dataLog("\n");
         dataLog("bytesAllocatedThisCycle = ", m_bytesAllocatedThisCycle, "\n");
-        dataLog("webAssemblyFastMemoriesAllocatedThisCycle = ", m_webAssemblyFastMemoriesAllocatedThisCycle, "\n");
     }

         dataLog("sizeAfterLastCollect = ", m_sizeAfterLastCollect, "\n");
     m_bytesAllocatedThisCycle = 0;
-    m_webAssemblyFastMemoriesAllocatedThisCycle = 0;

     if (Options::logGC())

 }

-void Heap::didAllocateWebAssemblyFastMemories(size_t count)
-{
-    m_webAssemblyFastMemoriesAllocatedThisCycle += count;
-}
-
 bool Heap::isValidAllocation(size_t)
 {

     if (!m_currentRequest.scope)
-        return m_shouldDoFullCollection || webAssemblyFastMemoriesThisCycleAtThreshold() || overCriticalMemoryThreshold();
+        return m_shouldDoFullCollection || overCriticalMemoryThreshold();
     return *m_currentRequest.scope == CollectionScope::Full;
 }

 #endif

-        if (!webAssemblyFastMemoriesThisCycleAtThreshold()
-            && m_bytesAllocatedThisCycle <= bytesAllowedThisCycle)
+        if (m_bytesAllocatedThisCycle <= bytesAllowedThisCycle)
             return;
     }
  • trunk/Source/JavaScriptCore/heap/Heap.h

    r218794 → r220118

     JS_EXPORT_PRIVATE void reportExtraMemoryVisited(size_t);

-    // Same as above, but for uncommitted virtual memory allocations caused by
-    // WebAssembly fast memories. This is counted separately because virtual
-    // memory is logically a different type of resource than committed physical
-    // memory. We can often allocate huge amounts of virtual memory (think
-    // gigabytes) without adversely affecting regular GC'd memory. At some point
-    // though, too much virtual memory becomes prohibitive and we want to
-    // collect GC-able objects which keep this virtual memory alive.
-    // This is counted in number of fast memories, not bytes.
-    void reportWebAssemblyFastMemoriesAllocated(size_t);
-    bool webAssemblyFastMemoriesThisCycleAtThreshold() const;
-
 #if ENABLE(RESOURCE_USAGE)
     // Use this API to report the subset of extra memory that lives outside this process.

     void didAllocate(size_t);
-    void didAllocateWebAssemblyFastMemories(size_t);
     bool isPagedOut(double deadline);

     void removeDeadHeapSnapshotNodes(HeapProfiler&);
     void finalize();
-    void sweepLargeAllocations();
+    void sweepInFinalize();

     void sweepAllLogicallyEmptyWeakBlocks();

     size_t m_bytesAllocatedThisCycle;
-    size_t m_webAssemblyFastMemoriesAllocatedThisCycle;
     size_t m_bytesAbandonedSinceLastFullCollect;
     size_t m_maxEdenSize;
  • trunk/Source/JavaScriptCore/heap/LargeAllocation.cpp

    r210844 → r220118

 LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, Subspace* subspace)
 {
-    void* space = tryFastAlignedMalloc(alignment, headerSize() + size);
+    void* space = subspace->tryAllocateAlignedMemory(alignment, headerSize() + size);
     if (!space)
         return nullptr;

 void LargeAllocation::destroy()
 {
+    Subspace* subspace = m_subspace;
     this->~LargeAllocation();
-    fastAlignedFree(this);
+    subspace->freeAlignedMemory(this);
 }
  • trunk/Source/JavaScriptCore/heap/MarkedAllocator.cpp

    r219897 → r220118

     if (Options::stealEmptyBlocksFromOtherAllocators()) {
-        if (MarkedBlock::Handle* block = markedSpace().findEmptyBlockToSteal()) {
+        if (MarkedBlock::Handle* block = m_subspace->findEmptyBlockToSteal()) {
+            RELEASE_ASSERT(block->subspace()->canTradeBlocksWith(m_subspace));
+            RELEASE_ASSERT(m_subspace->canTradeBlocksWith(block->subspace()));
+
             block->sweep(nullptr);

     SuperSamplerScope superSamplerScope(false);

-    MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap);
+    MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap, subspace());
     if (!handle)
         return nullptr;
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.cpp

    r219897 → r220118

 static size_t balance;

-MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap)
+MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap, Subspace* subspace)
 {
     if (computeBalance) {

             dataLog("MarkedBlock Balance: ", balance, "\n");
     }
-    void* blockSpace = tryFastAlignedMalloc(blockSize, blockSize);
+    void* blockSpace = subspace->tryAllocateAlignedMemory(blockSize, blockSize);
     if (!blockSpace)
         return nullptr;
     if (scribbleFreeCells())
         scribble(blockSpace, blockSize);
-    return new Handle(heap, blockSpace);
-}
-
-MarkedBlock::Handle::Handle(Heap& heap, void* blockSpace)
-    : m_weakSet(heap.vm(), CellContainer())
+    return new Handle(heap, subspace, blockSpace);
+}
+
+MarkedBlock::Handle::Handle(Heap& heap, Subspace* subspace, void* blockSpace)
+    : m_subspace(subspace)
+    , m_weakSet(heap.vm(), CellContainer())
     , m_newlyAllocatedVersion(MarkedSpace::nullVersion)
 {

 {
     Heap& heap = *this->heap();
+    Subspace* subspace = this->subspace();
     if (computeBalance) {
         balance--;

     removeFromAllocator();
     m_block->~MarkedBlock();
-    fastAlignedFree(m_block);
+    subspace->freeAlignedMemory(m_block);
     heap.didFreeBlock(blockSize);
 }

     m_allocator = allocator;

+    RELEASE_ASSERT(m_subspace->canTradeBlocksWith(allocator->subspace()));
+    RELEASE_ASSERT(allocator->subspace()->canTradeBlocksWith(m_subspace));
+
+    m_subspace = allocator->subspace();
+
     size_t cellSize = allocator->cellSize();
     m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;

             out.print(comma, name, ":", bitvector[index()] ? "YES" : "no");
         });
-}
-
-Subspace* MarkedBlock::Handle::subspace() const
-{
-    return allocator()->subspace();
 }
  • trunk/Source/JavaScriptCore/heap/MarkedBlock.h

    r218794 → r220118

     private:
-        Handle(Heap&, void*);
+        Handle(Heap&, Subspace*, void*);

         enum SweepDestructionMode { BlockHasNoDestructors, BlockHasDestructors, BlockHasDestructorsAndCollectorIsRunning };

         void setIsFreeListed();

-        MarkedBlock::Handle* m_prev;
-        MarkedBlock::Handle* m_next;
+        MarkedBlock::Handle* m_prev { nullptr };
+        MarkedBlock::Handle* m_next { nullptr };

         size_t m_atomsPerCell { std::numeric_limits<size_t>::max() };

         AllocatorAttributes m_attributes;
         bool m_isFreeListed { false };

+        Subspace* m_subspace { nullptr };
         MarkedAllocator* m_allocator { nullptr };
         size_t m_index { std::numeric_limits<size_t>::max() };

     };

-    static MarkedBlock::Handle* tryCreate(Heap&);
+    static MarkedBlock::Handle* tryCreate(Heap&, Subspace*);

     Handle& handle();

 }

+inline Subspace* MarkedBlock::Handle::subspace() const
+{
+    return m_subspace;
+}
+
 inline Heap* MarkedBlock::Handle::heap() const
 {
  • trunk/Source/JavaScriptCore/heap/MarkedSpace.cpp

    r219702 → r220118

 MarkedSpace::~MarkedSpace()
 {
+    ASSERT(!m_blocks.set().size());
+}
+
+void MarkedSpace::freeMemory()
+{
     forEachBlock(
         [&] (MarkedBlock::Handle* block) {

     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->destroy();
-    ASSERT(!m_blocks.set().size());
 }

 void MarkedSpace::prepareForAllocation()
 {
-    forEachAllocator(
-        [&] (MarkedAllocator& allocator) -> IterationStatus {
-            allocator.prepareForAllocation();
-            return IterationStatus::Continue;
-        });
+    for (Subspace* subspace : m_subspaces)
+        subspace->prepareForAllocation();

     m_activeWeakSets.takeFrom(m_newActiveWeakSets);

         m_largeAllocationsNurseryOffsetForSweep = 0;
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
-
-    m_allocatorForEmptyAllocation = m_firstAllocator;
 }

         m_newActiveWeakSets.append(&block->weakSet());
     }
-}
-
-MarkedBlock::Handle* MarkedSpace::findEmptyBlockToSteal()
-{
-    for (; m_allocatorForEmptyAllocation; m_allocatorForEmptyAllocation = m_allocatorForEmptyAllocation->nextAllocator()) {
-        if (MarkedBlock::Handle* block = m_allocatorForEmptyAllocation->findEmptyBlockToSteal())
-            return block;
-    }
-    return nullptr;
 }

         m_firstAllocator = allocator;
         m_lastAllocator = allocator;
-        m_allocatorForEmptyAllocation = allocator;
+        for (Subspace* subspace : m_subspaces)
+            subspace->didCreateFirstAllocator(allocator);
     } else {
         m_lastAllocator->setNextAllocator(allocator);
  • trunk/Source/JavaScriptCore/heap/MarkedSpace.h

    r213883 → r220118

     void lastChanceToFinalize(); // You must call stopAllocating before you call this.
+    void freeMemory();

     static size_t optimalSizeFor(size_t);

     MarkedAllocator* firstAllocator() const { return m_firstAllocator; }
-    MarkedAllocator* allocatorForEmptyAllocation() const { return m_allocatorForEmptyAllocation; }
-
-    MarkedBlock::Handle* findEmptyBlockToSteal();

     Lock& allocatorLock() { return m_allocatorLock; }

     MarkedAllocator* m_firstAllocator { nullptr };
     MarkedAllocator* m_lastAllocator { nullptr };
-    MarkedAllocator* m_allocatorForEmptyAllocation { nullptr };

     friend class HeapVerifier;
  • trunk/Source/JavaScriptCore/heap/Subspace.cpp

    r217711 → r220118

     , m_name(name)
     , m_attributes(attributes)
+    , m_allocatorForEmptyAllocation(m_space.firstAllocator())
 {
     // It's remotely possible that we're GCing right now even if the client is careful to only

 }

+bool Subspace::canTradeBlocksWith(Subspace*)
+{
+    return true;
+}
+
+void* Subspace::tryAllocateAlignedMemory(size_t alignment, size_t size)
+{
+    void* result = tryFastAlignedMalloc(alignment, size);
+    return result;
+}
+
+void Subspace::freeAlignedMemory(void* basePtr)
+{
+    fastAlignedFree(basePtr);
+    WTF::compilerFence();
+}
+
 // The reason why we distinguish between allocate and tryAllocate is to minimize the number of
 // checks on the allocation path in both cases. Likewise, the reason why we have overloads with and

     didAllocate(result);
     return result;
+}
+
+void Subspace::prepareForAllocation()
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) {
+            allocator.prepareForAllocation();
+        });
+
+    m_allocatorForEmptyAllocation = m_space.firstAllocator();
+}
+
+MarkedBlock::Handle* Subspace::findEmptyBlockToSteal()
+{
+    for (; m_allocatorForEmptyAllocation; m_allocatorForEmptyAllocation = m_allocatorForEmptyAllocation->nextAllocator()) {
+        Subspace* otherSubspace = m_allocatorForEmptyAllocation->subspace();
+        if (!canTradeBlocksWith(otherSubspace))
+            continue;
+        if (!otherSubspace->canTradeBlocksWith(this))
+            continue;
+
+        if (MarkedBlock::Handle* block = m_allocatorForEmptyAllocation->findEmptyBlockToSteal())
+            return block;
+    }
+    return nullptr;
 }
  • trunk/Source/JavaScriptCore/heap/Subspace.h

    r217711 → r220118

     virtual void destroy(VM&, JSCell*);

+    virtual bool canTradeBlocksWith(Subspace* other);
+    virtual void* tryAllocateAlignedMemory(size_t alignment, size_t size);
+    virtual void freeAlignedMemory(void*);
+
     MarkedAllocator* tryAllocatorFor(size_t);
     MarkedAllocator* allocatorFor(size_t);

     JS_EXPORT_PRIVATE void* tryAllocate(size_t);
     JS_EXPORT_PRIVATE void* tryAllocate(GCDeferralContext*, size_t);
+
+    void prepareForAllocation();
+
+    void didCreateFirstAllocator(MarkedAllocator* allocator) { m_allocatorForEmptyAllocation = allocator; }
+
+    // Finds an empty block from any Subspace that agrees to trade blocks with us.
+    MarkedBlock::Handle* findEmptyBlockToSteal();
+
+    template<typename Func>
+    void forEachAllocator(const Func&);

     template<typename Func>

     std::array<MarkedAllocator*, MarkedSpace::numSizeClasses> m_allocatorForSizeStep;
     MarkedAllocator* m_firstAllocator { nullptr };
+    MarkedAllocator* m_allocatorForEmptyAllocation { nullptr }; // Uses the MarkedSpace linked list of blocks.
     SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
 };
  • trunk/Source/JavaScriptCore/heap/SubspaceInlines.h

    r217711 r220118  
    3535
    3636template<typename Func>
     37void Subspace::forEachAllocator(const Func& func)
     38{
     39    for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
     40        func(*allocator);
     41}
     42
     43template<typename Func>
    3744void Subspace::forEachMarkedBlock(const Func& func)
    3845{
    39     for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
    40         allocator->forEachBlock(func);
     46    forEachAllocator(
     47        [&] (MarkedAllocator& allocator) {
     48            allocator.forEachBlock(func);
     49        });
    4150}
    4251
     
    4453void Subspace::forEachNotEmptyMarkedBlock(const Func& func)
    4554{
    46     for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
    47         allocator->forEachNotEmptyBlock(func);
     55    forEachAllocator(
     56        [&] (MarkedAllocator& allocator) {
     57            allocator.forEachNotEmptyBlock(func);
     58        });
    4859}
    4960
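The SubspaceInlines.h change above rewrites forEachMarkedBlock and forEachNotEmptyMarkedBlock in terms of a single forEachAllocator template, so the allocator linked-list walk lives in one place. A minimal standalone sketch of that pattern (hypothetical Allocator/Subspace types, not the real JSC classes):

```cpp
#include <cassert>
#include <vector>

// Toy stand-ins for MarkedAllocator/Subspace: the point is the refactor,
// where per-block iteration is expressed via a generic forEachAllocator.
struct Allocator {
    Allocator* next { nullptr };
    std::vector<int> blocks; // stand-in for MarkedBlock handles
};

struct Subspace {
    Allocator* firstAllocator { nullptr };

    // The single place that knows how to walk the allocator list.
    template<typename Func>
    void forEachAllocator(const Func& func)
    {
        for (Allocator* allocator = firstAllocator; allocator; allocator = allocator->next)
            func(*allocator);
    }

    // Higher-level iteration is now a lambda over forEachAllocator.
    template<typename Func>
    void forEachBlock(const Func& func)
    {
        forEachAllocator(
            [&] (Allocator& allocator) {
                for (int block : allocator.blocks)
                    func(block);
            });
    }
};

// Small demo: two allocators in a list, sum all their blocks.
inline int sumBlocksDemo()
{
    Allocator b;
    b.blocks = { 3, 4 };
    Allocator a;
    a.next = &b;
    a.blocks = { 1, 2 };
    Subspace subspace;
    subspace.firstAllocator = &a;

    int sum = 0;
    subspace.forEachBlock([&] (int block) { sum += block; });
    return sum;
}
```

The design choice mirrors the diff: adding a new traversal (e.g. forEachNotEmptyMarkedBlock) no longer duplicates the list-walking loop.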
  • trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp

    r218412 r220118  
    173173   
    174174    badType = patchableBranch32(NotEqual, regT2, TrustedImm32(DoubleShape));
     175    // FIXME: Should do caging.
     176    // https://bugs.webkit.org/show_bug.cgi?id=175037
    175177    loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
    176178    slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())));
     
    186188   
    187189    badType = patchableBranch32(NotEqual, regT2, TrustedImm32(expectedShape));
     190    // FIXME: Should do caging.
     191    // https://bugs.webkit.org/show_bug.cgi?id=175037
    188192    loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
    189193    slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())));
     
    201205    badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape));
    202206
     207    // FIXME: Should do caging.
     208    // https://bugs.webkit.org/show_bug.cgi?id=175037
    203209    loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
    204210    slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset())));
     
    348354    badType = patchableBranch32(NotEqual, regT2, TrustedImm32(indexingShape));
    349355   
     356    // FIXME: Should do caging.
     357    // https://bugs.webkit.org/show_bug.cgi?id=175037
    350358    loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
    351359    Jump outOfBounds = branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()));
     
    403411   
    404412    badType = patchableBranch32(NotEqual, regT2, TrustedImm32(ArrayStorageShape));
     413    // FIXME: Should do caging.
     414    // https://bugs.webkit.org/show_bug.cgi?id=175037
    405415    loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
    406416    slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset())));
     
    914924                isOutOfLine.link(this);
    915925            }
     926            // FIXME: Should do caging.
     927            // https://bugs.webkit.org/show_bug.cgi?id=175037
    916928            loadPtr(Address(base, JSObject::butterflyOffset()), scratch);
    917929            neg32(offset);
     
    10551067            emitGetVirtualRegister(value, regT2);
    10561068           
     1069            // FIXME: Should do caging.
     1070            // https://bugs.webkit.org/show_bug.cgi?id=175037
    10571071            loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0);
    10581072            loadPtr(operandSlot, regT1);
     
    15761590    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
    15771591    slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
     1592    // FIXME: Should do caging.
     1593    // https://bugs.webkit.org/show_bug.cgi?id=175037
    15781594    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
    15791595   
     
    16471663    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
    16481664    slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
     1665    // FIXME: Should do caging.
     1666    // https://bugs.webkit.org/show_bug.cgi?id=175037
    16491667    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
    16501668   
     
    17141732    // We would be loading this into base as in get_by_val, except that the slow
    17151733    // path expects the base to be unclobbered.
     1734    // FIXME: Should do caging.
     1735    // https://bugs.webkit.org/show_bug.cgi?id=175037
    17161736    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch);
    17171737   
     
    17971817    // We would be loading this into base as in get_by_val, except that the slow
    17981818    // path expects the base to be unclobbered.
     1819    // FIXME: Should do caging.
     1820    // https://bugs.webkit.org/show_bug.cgi?id=175037
    17991821    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch);
    18001822   
  • trunk/Source/JavaScriptCore/jsc.cpp

    r219981 r220118  
    7979#include "TypeProfiler.h"
    8080#include "TypeProfilerLog.h"
     81#include "TypedArrayInlines.h"
    8182#include "WasmContext.h"
    8283#include "WasmFaultSignalHandler.h"
     
    985986
    986987static bool fillBufferWithContentsOfFile(const String& fileName, Vector<char>& buffer);
     988static RefPtr<Uint8Array> fillBufferWithContentsOfFile(const String& fileName);
    987989
    988990class CommandLine;
     
    17091711}
    17101712
     1713static RefPtr<Uint8Array> fillBufferWithContentsOfFile(FILE* file)
     1714{
     1715    fseek(file, 0, SEEK_END);
     1716    size_t bufferCapacity = ftell(file);
     1717    fseek(file, 0, SEEK_SET);
     1718    RefPtr<Uint8Array> result = Uint8Array::create(bufferCapacity);
     1719    size_t readSize = fread(result->data(), 1, bufferCapacity, file);
     1720    if (readSize != bufferCapacity)
     1721        return nullptr;
     1722    return result;
     1723}
     1724
     1725static RefPtr<Uint8Array> fillBufferWithContentsOfFile(const String& fileName)
     1726{
     1727    FILE* f = fopen(fileName.utf8().data(), "rb");
     1728    if (!f) {
     1729        fprintf(stderr, "Could not open file: %s\n", fileName.utf8().data());
     1730        return nullptr;
     1731    }
     1732
     1733    RefPtr<Uint8Array> result = fillBufferWithContentsOfFile(f);
     1734    fclose(f);
     1735
     1736    return result;
     1737}
     1738
    17111739static bool fillBufferWithContentsOfFile(FILE* file, Vector<char>& buffer)
    17121740{
     
    22772305    }
    22782306
    2279     Vector<char> content;
    2280     if (!fillBufferWithContentsOfFile(fileName, content))
     2307    RefPtr<Uint8Array> content = fillBufferWithContentsOfFile(fileName);
     2308    if (!content)
    22812309        return throwVMError(exec, scope, "Could not open file.");
    22822310
    22832311    if (!isBinary)
    2284         return JSValue::encode(jsString(exec, stringFromUTF(content)));
     2312        return JSValue::encode(jsString(exec, String::fromUTF8WithLatin1Fallback(content->data(), content->length())));
    22852313
    22862314    Structure* structure = exec->lexicalGlobalObject()->typedArrayStructure(TypeUint8);
    2287     auto length = content.size();
    2288     JSObject* result = createUint8TypedArray(exec, structure, ArrayBuffer::createFromBytes(content.releaseBuffer().leakPtr(), length, [] (void* p) { fastFree(p); }), 0, length);
     2315    JSObject* result = JSUint8Array::create(vm, structure, WTFMove(content));
    22892316    RETURN_IF_EXCEPTION(scope, encodedJSValue());
    22902317
     
    37763803}
    37773804
     3805static void gigacageDisabled(void*)
     3806{
     3807    dataLog("Gigacage disabled! Aborting.\n");
     3808    UNREACHABLE_FOR_PLATFORM();
     3809}
     3810
    37783811int jscmain(int argc, char** argv)
    37793812{
     
    37943827    JSC::Wasm::enableFastMemory();
    37953828#endif
     3829    if (GIGACAGE_ENABLED)
     3830        Gigacage::addDisableCallback(gigacageDisabled, nullptr);
    37963831
    37973832    int result;
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

    r220047 r220118  
    11991199macro loadPropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value)
    12001200    bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
     1201    # FIXME: Should do caging
     1202    # https://bugs.webkit.org/show_bug.cgi?id=175036
    12011203    loadp JSObject::m_butterfly[objectAndStorage], objectAndStorage
    12021204    negi propertyOffsetAsInt
     
    12121214macro storePropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value)
    12131215    bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
     1216    # FIXME: Should do caging
     1217    # https://bugs.webkit.org/show_bug.cgi?id=175036
    12141218    loadp JSObject::m_butterfly[objectAndStorage], objectAndStorage
    12151219    negi propertyOffsetAsInt
     
    12881292    btiz t2, IndexingShapeMask, .opGetArrayLengthSlow
    12891293    loadisFromInstruction(1, t1)
     1294    # FIXME: Should do caging
     1295    # https://bugs.webkit.org/show_bug.cgi?id=175036
    12901296    loadp JSObject::m_butterfly[t3], t0
    12911297    loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
     
    14711477    loadConstantOrVariableInt32(t3, t1, .opGetByValSlow)
    14721478    sxi2q t1, t1
     1479    # FIXME: Should do caging
     1480    # https://bugs.webkit.org/show_bug.cgi?id=175036
    14731481    loadp JSObject::m_butterfly[t0], t3
    14741482    andi IndexingShapeMask, t2
     
    15181526   
    15191527    # Sweet, now we know that we have a typed array. Do some basic things now.
     1528    # FIXME: Should do caging
     1529    # https://bugs.webkit.org/show_bug.cgi?id=175036
    15201530    loadp JSArrayBufferView::m_vector[t0], t3
    15211531    biaeq t1, JSArrayBufferView::m_length[t0], .opGetByValSlow
     
    16091619    loadConstantOrVariableInt32(t0, t3, .opPutByValSlow)
    16101620    sxi2q t3, t3
     1621    # FIXME: Should do caging
     1622    # https://bugs.webkit.org/show_bug.cgi?id=175036
    16111623    loadp JSObject::m_butterfly[t1], t0
    16121624    andi IndexingShapeMask, t2
  • trunk/Source/JavaScriptCore/runtime/ArrayBuffer.cpp

    r217052 r220118  
    11/*
    2  * Copyright (C) 2009, 2013, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2009-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3030#include "JSArrayBufferView.h"
    3131#include "JSCInlines.h"
     32#include <wtf/Gigacage.h>
    3233
    3334namespace JSC {
     
    103104        }
    104105    }
    105     bool allocationSucceeded = false;
     106    size_t size = static_cast<size_t>(numElements) * static_cast<size_t>(elementByteSize);
     107    if (!size)
     108        size = 1; // Make sure malloc actually allocates something, but not too much. We use null to mean that the buffer is neutered.
     109    m_data = Gigacage::tryMalloc(size);
     110    if (!m_data) {
     111        reset();
     112        return;
     113    }
     114   
    106115    if (policy == ZeroInitialize)
    107         allocationSucceeded = WTF::tryFastCalloc(numElements, elementByteSize).getValue(m_data);
    108     else {
    109         ASSERT(policy == DontInitialize);
    110         allocationSucceeded = WTF::tryFastMalloc(numElements * elementByteSize).getValue(m_data);
    111     }
    112 
    113     if (allocationSucceeded) {
    114         m_sizeInBytes = numElements * elementByteSize;
    115         m_destructor = [] (void* p) { fastFree(p); };
    116         return;
    117     }
    118     reset();
     116        memset(m_data, 0, size);
     117
     118    m_sizeInBytes = numElements * elementByteSize;
     119    m_destructor = [] (void* p) { Gigacage::free(p); };
    119120}
    120121
     
    181182}
    182183
     184// FIXME: We cannot use this except if the memory comes from the cage.
      185// Currently this is only used from:
     186// - JSGenericTypedArrayView<>::slowDownAndWasteMemory. But in that case, the memory should have already come
     187//   from the cage.
    183188Ref<ArrayBuffer> ArrayBuffer::createAdopted(const void* data, unsigned byteLength)
    184189{
    185     return createFromBytes(data, byteLength, [] (void* p) { fastFree(p); });
    186 }
    187 
     190    return createFromBytes(data, byteLength, [] (void* p) { Gigacage::free(p); });
     191}
     192
     193// FIXME: We cannot use this except if the memory comes from the cage.
     194// Currently this is only used from:
     195// - The C API. We could support that by either having the system switch to a mode where typed arrays are no
     196//   longer caged, or we could introduce a new set of typed array types that are uncaged and get accessed
     197//   differently.
     198// - WebAssembly. Wasm should allocate from the cage.
    188199Ref<ArrayBuffer> ArrayBuffer::createFromBytes(const void* data, unsigned byteLength, ArrayBufferDestructorFunction&& destructor)
    189200{
     201    if (!Gigacage::isCaged(data) && data && byteLength)
     202        Gigacage::disableGigacage();
     203   
    190204    ArrayBufferContents contents(const_cast<void*>(data), byteLength, WTFMove(destructor));
    191205    return create(WTFMove(contents));
     
    205219{
    206220    ArrayBufferContents contents;
    207     contents.tryAllocate(byteLength, 1, ArrayBufferContents::ZeroInitialize);
     221    contents.tryAllocate(byteLength, 1, ArrayBufferContents::DontInitialize);
    208222    if (!contents.m_data)
    209223        return nullptr;
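The ArrayBufferContents::tryAllocate rewrite above switches from tryFastCalloc/tryFastMalloc to a single Gigacage::tryMalloc plus an explicit memset for ZeroInitialize, computing the size as a size_t product of both factors and rounding a zero-byte request up to 1 so null can keep meaning "neutered". A self-contained sketch of that flow, with std::malloc/std::free standing in for Gigacage::tryMalloc/Gigacage::free (which are not available outside bmalloc):

```cpp
#include <cstdlib>
#include <cstring>
#include <cstddef>

// Sketch of the tryAllocate pattern in the diff; Contents is a stand-in for
// ArrayBufferContents, and the cage allocator is stubbed with malloc/free.
struct Contents {
    void* data { nullptr };
    size_t sizeInBytes { 0 };

    bool tryAllocate(unsigned numElements, unsigned elementByteSize, bool zeroInitialize)
    {
        // Widen before multiplying, as the diff does, so a 32-bit product
        // cannot silently overflow.
        size_t size = static_cast<size_t>(numElements) * static_cast<size_t>(elementByteSize);
        if (!size)
            size = 1; // Make sure the allocator returns something; null means neutered.
        data = std::malloc(size); // stand-in for Gigacage::tryMalloc(size)
        if (!data)
            return false; // the real code calls reset() here
        if (zeroInitialize)
            std::memset(data, 0, size);
        sizeInBytes = static_cast<size_t>(numElements) * static_cast<size_t>(elementByteSize);
        return true;
    }

    ~Contents() { std::free(data); } // stand-in for Gigacage::free
};
```

With a single allocation path, zero-initialization becomes an explicit post-step rather than a separate calloc branch, which is what lets every typed-array backing store come from the cage.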
  • trunk/Source/JavaScriptCore/runtime/IndexingHeader.h

    r206525 r220118  
    123123    union {
    124124        struct {
     125            // FIXME: vectorLength should be least significant, so that it's really hard to craft a pointer by
     126            // mucking with the butterfly.
     127            // https://bugs.webkit.org/show_bug.cgi?id=174927
    125128            uint32_t publicLength; // The meaning of this field depends on the array type, but for all JSArrays we rely on this being the publicly visible length (array.length).
    126129            uint32_t vectorLength; // The length of the indexed property storage. The actual size of the storage depends on this, and the type.
  • trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp

    r220069 r220118  
    4141#include "StructureIDTable.h"
    4242#include "SuperSampler.h"
    43 #include "WasmMemory.h"
    4443#include "WasmThunks.h"
    4544#include "WriteBarrier.h"
     
    6160        WTF::initializeThreading();
    6261        Options::initialize();
    63 #if ENABLE(WEBASSEMBLY)
    64         Wasm::Memory::initializePreallocations();
    65 #endif
    6662#if ENABLE(WRITE_BARRIER_PROFILING)
    6763        WriteBarrierCounters::initialize();
  • trunk/Source/JavaScriptCore/runtime/JSArrayBuffer.cpp

    r217108 r220118  
    3030#include "TypeError.h"
    3131#include "TypedArrayController.h"
     32#include <wtf/Gigacage.h>
    3233
    3334namespace JSC {
  • trunk/Source/JavaScriptCore/runtime/JSArrayBufferView.cpp

    r217108 r220118  
    3131#include "TypeError.h"
    3232#include "TypedArrayController.h"
     33#include <wtf/Gigacage.h>
    3334
    3435namespace JSC {
     
    8990        return;
    9091   
    91     if (mode == ZeroFill) {
    92         if (!tryFastCalloc(length, elementSize).getValue(m_vector))
    93             return;
    94     } else {
    95         if (!tryFastMalloc(length * elementSize).getValue(m_vector))
    96             return;
    97     }
     92    size_t size = static_cast<size_t>(length) * static_cast<size_t>(elementSize);
     93    m_vector = Gigacage::tryMalloc(size);
     94    if (!m_vector)
     95        return;
     96    if (mode == ZeroFill)
     97        memset(m_vector, 0, size);
    9898   
    9999    vm.heap.reportExtraMemoryAllocated(static_cast<size_t>(length) * elementSize);
     
    193193    ASSERT(thisObject->m_mode == OversizeTypedArray || thisObject->m_mode == WastefulTypedArray);
    194194    if (thisObject->m_mode == OversizeTypedArray)
    195         fastFree(thisObject->m_vector.get());
     195        Gigacage::free(thisObject->m_vector.get());
    196196}
    197197
  • trunk/Source/JavaScriptCore/runtime/JSLock.cpp

    r220069 r220118  
    157157    // Note: everything below must come after addCurrentThread().
    158158    m_vm->traps().notifyGrabAllLocks();
     159   
     160    m_vm->fireGigacageEnabledIfNecessary();
    159161
    160162#if ENABLE(SAMPLING_PROFILER)
  • trunk/Source/JavaScriptCore/runtime/JSObject.h

    r219981 r220118  
    10461046
    10471047protected:
     1048    // FIXME: This should do caging.
     1049    // https://bugs.webkit.org/show_bug.cgi?id=175039
    10481050    AuxiliaryBarrier<Butterfly*> m_butterfly;
    10491051#if USE(JSVALUE32_64)
  • trunk/Source/JavaScriptCore/runtime/Options.cpp

    r219055 r220118  
    406406        Options::useWebAssembly() = false;
    407407
    408     if (!Options::useWebAssembly()) {
    409         Options::webAssemblyFastMemoryPreallocateCount() = 0;
     408    if (!Options::useWebAssembly())
    410409        Options::useWebAssemblyFastTLS() = false;
    411     }
    412410   
    413411    if (Options::dumpDisassembly()
  • trunk/Source/JavaScriptCore/runtime/Options.h

    r219611 r220118  
    461461    /* FIXME: enable fast memories on iOS and pre-allocate them. https://bugs.webkit.org/show_bug.cgi?id=170774 */ \
    462462    v(bool, useWebAssemblyFastMemory, !isIOS(), Normal, "If true, we will try to use a 32-bit address space with a signal handler to bounds check wasm memory.") \
     463    v(bool, logWebAssemblyMemory, false, Normal, nullptr) \
    463464    v(unsigned, webAssemblyFastMemoryRedzonePages, 128, Normal, "WebAssembly fast memories use 4GiB virtual allocations, plus a redzone (counted as multiple of 64KiB WebAssembly pages) at the end to catch reg+imm accesses which exceed 32-bit, anything beyond the redzone is explicitly bounds-checked") \
    464465    v(bool, crashIfWebAssemblyCantFastMemory, false, Normal, "If true, we will crash if we can't obtain fast memory for wasm.") \
    465     v(unsigned, webAssemblyFastMemoryPreallocateCount, 0, Normal, "WebAssembly fast memories can be pre-allocated at program startup and remain cached to avoid fragmentation leading to bounds-checked memory. This number is an upper bound on initial allocation as well as total count of fast memories. Zero means no pre-allocation, no caching, and no limit to the number of runtime allocations.") \
     466    v(unsigned, maxNumWebAssemblyFastMemories, 10, Normal, nullptr) \
    466467    v(bool, useWebAssemblyFastTLS, true, Normal, "If true, we will try to use fast thread-local storage if available on the current platform.") \
    467468    v(bool, useFastTLSForWasmContext, true, Normal, "If true (and fast TLS is enabled), we will store context in fast TLS. If false, we will pin it to a register.") \
  • trunk/Source/JavaScriptCore/runtime/ScopedArgumentsTable.h

    r206525 r220118  
    8787    uint32_t m_length;
    8888    bool m_locked; // Being locked means that there are multiple references to this object and none of them expect to see the others' modifications. This means that modifications need to make a copy first.
     89    // FIXME: Allocate this in the primitive gigacage
     90    // https://bugs.webkit.org/show_bug.cgi?id=174921
    8991    std::unique_ptr<ScopeOffset[]> m_arguments;
    9092};
  • trunk/Source/JavaScriptCore/runtime/VM.cpp

    r220069 r220118  
    168168    , stringSpace("JSString", heap)
    169169    , destructibleObjectSpace("JSDestructibleObject", heap)
     170    , eagerlySweptDestructibleObjectSpace("Eagerly Swept JSDestructibleObject", heap)
    170171    , segmentedVariableObjectSpace("JSSegmentedVariableObjectSpace", heap)
    171172#if ENABLE(WEBASSEMBLY)
     
    208209    , m_builtinExecutables(std::make_unique<BuiltinExecutables>(*this))
    209210    , m_typeProfilerEnabledCount(0)
     211    , m_gigacageEnabled(IsWatched)
    210212    , m_controlFlowProfilerEnabledCount(0)
    211213    , m_shadowChicken(std::make_unique<ShadowChicken>())
     
    285287    initializeHostCallReturnValue(); // This is needed to convince the linker not to drop host call return support.
    286288#endif
     289   
     290    Gigacage::addDisableCallback(gigacageDisabledCallback, this);
    287291
    288292    heap.notifyIsSafeToCollect();
     
    339343VM::~VM()
    340344{
     345    Gigacage::removeDisableCallback(gigacageDisabledCallback, this);
    341346    promiseDeferredTimer->stopRunningTasks();
    342347#if ENABLE(WEBASSEMBLY)
     
    405410        fastFree(scratchBuffers[i]);
    406411#endif
     412}
     413
     414void VM::gigacageDisabledCallback(void* argument)
     415{
     416    static_cast<VM*>(argument)->gigacageDisabled();
     417}
     418
     419void VM::gigacageDisabled()
     420{
     421    if (m_apiLock->currentThreadIsHoldingLock()) {
     422        m_gigacageEnabled.fireAll(*this, "Gigacage disabled");
     423        return;
     424    }
     425 
     426    // This is totally racy, and that's OK. The point is, it's up to the user to ensure that they pass the
     427    // uncaged buffer in a nicely synchronized manner.
     428    m_needToFireGigacageEnabled = true;
    407429}
    408430
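VM::gigacageDisabled above uses a "fire now or defer" pattern: if the current thread holds the API lock it fires the watchpoint immediately; otherwise it just sets m_needToFireGigacageEnabled, and the next thread to take the lock fires it via fireGigacageEnabledIfNecessary (called from JSLock in this changeset). A minimal sketch of that control flow, with the watchpoint modeled as a plain bool rather than the real InlineWatchpointSet:

```cpp
// Stand-in for the VM's deferred watchpoint-fire logic; names mirror the
// diff but all types here are simplified models, not the real JSC classes.
struct VMLike {
    bool holdingLock { false };
    bool needToFire { false };
    bool watchpointFired { false }; // models m_gigacageEnabled.fireAll(...)

    // Called when the Gigacage is disabled, possibly from another thread.
    void gigacageDisabled()
    {
        if (holdingLock) {
            watchpointFired = true;
            return;
        }
        // Racy by design, as the diff's comment notes: the deferred fire
        // happens later, under the lock.
        needToFire = true;
    }

    // Models JSLock::didGrabAllLocks -> fireGigacageEnabledIfNecessary.
    void didAcquireLock()
    {
        holdingLock = true;
        if (needToFire) {
            needToFire = false;
            watchpointFired = true;
        }
    }
};
```

The point of the split is that firing a watchpoint requires holding the VM's lock, so the off-thread path may only record intent and let the lock-acquisition path do the actual fire.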
  • trunk/Source/JavaScriptCore/runtime/VM.h

    r220069 r220118  
    3737#include "ExecutableAllocator.h"
    3838#include "FunctionHasExecutedCache.h"
     39#include "GigacageSubspace.h"
    3940#include "Heap.h"
    4041#include "Intrinsic.h"
     
    287288    Heap heap;
    288289   
    289     Subspace auxiliarySpace;
     290    GigacageSubspace auxiliarySpace;
    290291   
    291292    // Whenever possible, use subspaceFor<CellType>(vm) to get one of these subspaces.
     
    294295    JSStringSubspace stringSpace;
    295296    JSDestructibleObjectSubspace destructibleObjectSpace;
     297    JSDestructibleObjectSubspace eagerlySweptDestructibleObjectSpace;
    296298    JSSegmentedVariableObjectSubspace segmentedVariableObjectSpace;
    297299#if ENABLE(WEBASSEMBLY)
     
    524526    void* lastStackTop() { return m_lastStackTop; }
    525527    void setLastStackTop(void*);
     528   
     529    void fireGigacageEnabledIfNecessary()
     530    {
     531        if (m_needToFireGigacageEnabled) {
     532            m_needToFireGigacageEnabled = false;
     533            m_gigacageEnabled.fireAll(*this, "Gigacage disabled asynchronously");
     534        }
     535    }
    526536
    527537    JSValue hostCallReturnValue;
     
    625635    // FIXME: Use AtomicString once it got merged with Identifier.
    626636    JS_EXPORT_PRIVATE void addImpureProperty(const String&);
     637   
     638    InlineWatchpointSet& gigacageEnabled() { return m_gigacageEnabled; }
    627639
    628640    BuiltinExecutables* builtinExecutables() { return m_builtinExecutables.get(); }
     
    731743    void verifyExceptionCheckNeedIsSatisfied(unsigned depth, ExceptionEventLocation&);
    732744#endif
     745   
     746    static void gigacageDisabledCallback(void*);
     747    void gigacageDisabled();
    733748
    734749#if ENABLE(ASSEMBLER)
     
    775790    std::unique_ptr<TypeProfilerLog> m_typeProfilerLog;
    776791    unsigned m_typeProfilerEnabledCount;
     792    bool m_needToFireGigacageEnabled { false };
     793    InlineWatchpointSet m_gigacageEnabled;
    777794    FunctionHasExecutedCache m_functionHasExecutedCache;
    778795    std::unique_ptr<ControlFlowProfiler> m_controlFlowProfiler;
  • trunk/Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp

    r219899 r220118  
    359359                ASSERT_UNUSED(pinnedGPR, InvalidGPRReg == pinnedGPR);
    360360                break;
    361             case MemoryMode::NumberOfMemoryModes:
    362                 ASSERT_NOT_REACHED();
    363361            }
    364362            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     
    638636        }
    639637        break;
    640 
    641     case MemoryMode::NumberOfMemoryModes:
    642         RELEASE_ASSERT_NOT_REACHED();
    643638    }
    644639    pointer = m_currentBlock->appendNew<Value>(m_proc, ZExt32, origin(), pointer);
  • trunk/Source/JavaScriptCore/wasm/WasmCodeBlock.cpp

    r217942 r220118  
    126126        // because the page protection detects out-of-bounds accesses.
    127127        return memoryMode == Wasm::MemoryMode::Signaling;
    128     case Wasm::MemoryMode::NumberOfMemoryModes:
    129         break;
    130128    }
    131129    RELEASE_ASSERT_NOT_REACHED();
  • trunk/Source/JavaScriptCore/wasm/WasmMemory.cpp

    r219595 r220118  
    3131#include "VM.h"
    3232#include "WasmThunks.h"
    33 
    34 #include <atomic>
    35 #include <wtf/MonotonicTime.h>
     33#include <wtf/Gigacage.h>
     34#include <wtf/Lock.h>
    3635#include <wtf/Platform.h>
    3736#include <wtf/PrintStream.h>
    38 #include <wtf/VMTags.h>
     37#include <wtf/RAMSize.h>
    3938
    4039namespace JSC { namespace Wasm {
     
    4544
    4645namespace {
     46
    4747constexpr bool verbose = false;
    4848
    4949NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntGetFastMemory() { CRASH(); }
    50 NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntUnmapMemory() { CRASH(); }
    51 NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntUnprotectMemory() { CRASH(); }
    52 
    53 void* mmapBytes(size_t bytes)
    54 {
    55     void* location = mmap(nullptr, bytes, PROT_NONE, MAP_PRIVATE | MAP_ANON, VM_TAG_FOR_WEBASSEMBLY_MEMORY, 0);
    56     return location == MAP_FAILED ? nullptr : location;
    57 }
    58 
    59 void munmapBytes(void* memory, size_t size)
    60 {
    61     if (UNLIKELY(munmap(memory, size)))
    62         webAssemblyCouldntUnmapMemory();
    63 }
    64 
    65 void zeroAndUnprotectBytes(void* start, size_t bytes)
    66 {
    67     if (bytes) {
    68         dataLogLnIf(verbose, "Zeroing and unprotecting ", bytes, " from ", RawPointer(start));
    69         // FIXME: We could be smarter about memset / mmap / madvise. Here, we may not need to act synchronously, or maybe we can memset+unprotect smaller ranges of memory (which would pay off if not all the writable memory was actually physically backed: memset forces physical backing only to unprotect it right after). https://bugs.webkit.org/show_bug.cgi?id=170343
    70         memset(start, 0, bytes);
    71         if (UNLIKELY(mprotect(start, bytes, PROT_NONE)))
    72             webAssemblyCouldntUnprotectMemory();
    73     }
    74 }
    75 
    76 // Allocate fast memories very early at program startup and cache them. The fast memories use significant amounts of virtual uncommitted address space, reducing the likelihood that we'll obtain any if we wait to allocate them.
    77 // We still try to allocate fast memories at runtime, and will cache them when relinquished up to the preallocation limit.
    78 // Note that this state is per-process, not per-VM.
    79 // We use simple static globals which don't allocate to avoid early fragmentation and to keep management to the bare minimum. We avoid locking because fast memories use segfault signal handling to handle out-of-bounds accesses. This requires identifying if the faulting address is in a fast memory range, which should avoid acquiring a lock lest the actual signal was caused by this very code while it already held the lock.
    80 // Speed and contention don't really matter here, but simplicity does. We therefore use straightforward FIFOs for our cache, and linear traversal for the list of currently active fast memories.
    81 constexpr size_t fastMemoryCacheHardLimit { 16 };
    82 constexpr size_t fastMemoryAllocationSoftLimit { 32 }; // Prevents filling up the virtual address space.
    83 static_assert(fastMemoryAllocationSoftLimit >= fastMemoryCacheHardLimit, "The cache shouldn't be bigger than the total number we'll ever allocate");
    84 size_t fastMemoryPreallocateCount { 0 };
    85 std::atomic<void*> fastMemoryCache[fastMemoryCacheHardLimit] = { ATOMIC_VAR_INIT(nullptr) };
    86 std::atomic<void*> currentlyActiveFastMemories[fastMemoryAllocationSoftLimit] = { ATOMIC_VAR_INIT(nullptr) };
    87 std::atomic<size_t> currentlyAllocatedFastMemories = ATOMIC_VAR_INIT(0);
    88 std::atomic<size_t> observedMaximumFastMemory = ATOMIC_VAR_INIT(0);
    89 std::atomic<size_t> currentSlowMemoryCapacity = ATOMIC_VAR_INIT(0);
    90 
    91 size_t fastMemoryAllocatedBytesSoftLimit()
    92 {
    93     return fastMemoryAllocationSoftLimit * Memory::fastMappedBytes();
    94 }
    95 
    96 void* tryGetCachedFastMemory()
    97 {
    98     for (unsigned idx = 0; idx < fastMemoryPreallocateCount; ++idx) {
    99         if (void* previous = fastMemoryCache[idx].exchange(nullptr, std::memory_order_acq_rel))
    100             return previous;
    101     }
    102     return nullptr;
    103 }
    104 
    105 bool tryAddToCachedFastMemory(void* memory)
    106 {
    107     for (unsigned i = 0; i < fastMemoryPreallocateCount; ++i) {
    108         void* expected = nullptr;
    109         if (fastMemoryCache[i].compare_exchange_strong(expected, memory, std::memory_order_acq_rel)) {
    110             dataLogLnIf(verbose, "Cached fast memory ", RawPointer(memory));
    111             return true;
    112         }
    113     }
    114     return false;
    115 }
    116 
    117 bool tryAddToCurrentlyActiveFastMemories(void* memory)
    118 {
    119     for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
    120         void* expected = nullptr;
    121         if (currentlyActiveFastMemories[idx].compare_exchange_strong(expected, memory, std::memory_order_acq_rel))
    122             return true;
    123     }
    124     return false;
    125 }
    126 
    127 void removeFromCurrentlyActiveFastMemories(void* memory)
    128 {
    129     for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
    130         void* expected = memory;
    131         if (currentlyActiveFastMemories[idx].compare_exchange_strong(expected, nullptr, std::memory_order_acq_rel))
    132             return;
    133     }
    134     RELEASE_ASSERT_NOT_REACHED();
    135 }
    136 
    137 void* tryGetFastMemory(VM& vm)
    138 {
    139     void* memory = nullptr;
    140 
    141     if (LIKELY(Options::useWebAssemblyFastMemory())) {
    142         memory = tryGetCachedFastMemory();
    143         if (memory)
    144             dataLogLnIf(verbose, "tryGetFastMemory re-using ", RawPointer(memory));
    145         else if (currentlyAllocatedFastMemories.load(std::memory_order_acquire) >= 1) {
    146             // No memory was available in the cache, but we know there's at least one currently live. Maybe GC will find a free one.
    147             // FIXME collectSync(Full) and custom eager destruction of wasm memories could be better. For now use collectNow. Also, nothing tells us the current VM is holding onto fast memories. https://bugs.webkit.org/show_bug.cgi?id=170748
    148             dataLogLnIf(verbose, "tryGetFastMemory waiting on GC and retrying");
    149             vm.heap.collectNow(Sync, CollectionScope::Full);
    150             memory = tryGetCachedFastMemory();
    151             dataLogLnIf(verbose, "tryGetFastMemory waited on GC and retried ", memory ? "successfully" : "unsuccessfully");
    152         }
    153 
    154         // The soft limit is inherently racy because checking+allocation isn't atomic. Exceeding it slightly is fine.
    155         bool atAllocationSoftLimit = currentlyAllocatedFastMemories.load(std::memory_order_acquire) >= fastMemoryAllocationSoftLimit;
    156         dataLogLnIf(verbose && atAllocationSoftLimit, "tryGetFastMemory reached allocation soft limit of ", fastMemoryAllocationSoftLimit);
    157 
    158         if (!memory && !atAllocationSoftLimit) {
    159             memory = mmapBytes(Memory::fastMappedBytes());
    160             if (memory) {
    161                 size_t currentlyAllocated = 1 + currentlyAllocatedFastMemories.fetch_add(1, std::memory_order_acq_rel);
    162                 size_t currentlyObservedMaximum = observedMaximumFastMemory.load(std::memory_order_acquire);
    163                 if (currentlyAllocated > currentlyObservedMaximum) {
    164                     size_t expected = currentlyObservedMaximum;
    165                     bool success = observedMaximumFastMemory.compare_exchange_strong(expected, currentlyAllocated, std::memory_order_acq_rel);
    166                     if (success)
    167                         dataLogLnIf(verbose, "tryGetFastMemory currently observed maximum is now ", currentlyAllocated);
    168                     else
    169                         // We lost the update race, but the counter is monotonic so the winner must have updated the value to what we were going to update it to, or multiple winners did so.
    170                         ASSERT(expected >= currentlyAllocated);
    171                 }
    172                 dataLogLnIf(verbose, "tryGetFastMemory allocated ", RawPointer(memory), ", currently allocated is ", currentlyAllocated);
    173             }
    174         }
    175     }
    176 
    177     if (memory) {
    178         if (UNLIKELY(!tryAddToCurrentlyActiveFastMemories(memory))) {
    179             // We got a memory, but reached the allocation soft limit *and* all of the allocated memories are active, none are cached. That's a bummer, we have to get rid of our memory. We can't just hold on to it because the list of active fast memories must be precise.
    180             dataLogLnIf(verbose, "tryGetFastMemory found a fast memory but had to give it up");
    181             munmapBytes(memory, Memory::fastMappedBytes());
    182             currentlyAllocatedFastMemories.fetch_sub(1, std::memory_order_acq_rel);
    183             memory = nullptr;
    184         }
    185     }
    186 
    187     if (!memory) {
    188         dataLogLnIf(verbose, "tryGetFastMemory couldn't re-use or allocate a fast memory");
    189         if (UNLIKELY(Options::crashIfWebAssemblyCantFastMemory()))
    190             webAssemblyCouldntGetFastMemory();
    191     }
    192 
    193     return memory;
    194 }
    195 
    196 bool slowMemoryCapacitySoftMaximumExceeded()
    197 {
    198     // The limit on slow memory capacity is arbitrary. Its purpose is to limit
    199     // virtual memory allocation. We choose to set the limit at the same virtual
    200     // memory limit imposed on fast memories.
    201     size_t maximum = fastMemoryAllocatedBytesSoftLimit();
    202     size_t currentCapacity = currentSlowMemoryCapacity.load(std::memory_order_acquire);
    203     if (UNLIKELY(currentCapacity > maximum)) {
    204         dataLogLnIf(verbose, "Slow memory capacity limit reached");
    205         return true;
    206     }
    207     return false;
    208 }
    209 
    210 void* tryGetSlowMemory(size_t bytes)
    211 {
    212     if (slowMemoryCapacitySoftMaximumExceeded())
     50
     51struct MemoryResult {
     52    enum Kind {
     53        Success,
     54        SuccessAndAsyncGC,
     55        SyncGCAndRetry
     56    };
     57   
     58    static const char* toString(Kind kind)
     59    {
     60        switch (kind) {
     61        case Success:
     62            return "Success";
     63        case SuccessAndAsyncGC:
     64            return "SuccessAndAsyncGC";
     65        case SyncGCAndRetry:
     66            return "SyncGCAndRetry";
     67        }
     68        RELEASE_ASSERT_NOT_REACHED();
    21369        return nullptr;
    214     void* memory = mmapBytes(bytes);
    215     if (memory)
    216         currentSlowMemoryCapacity.fetch_add(bytes, std::memory_order_acq_rel);
    217     dataLogLnIf(memory && verbose, "Obtained slow memory ", RawPointer(memory), " with capacity ", bytes);
    218     dataLogLnIf(!memory && verbose, "Failed obtaining slow memory with capacity ", bytes);
    219     return memory;
    220 }
    221 
    222 void relinquishMemory(void* memory, size_t writableSize, size_t mappedCapacity, MemoryMode mode)
    223 {
    224     switch (mode) {
    225     case MemoryMode::Signaling: {
    226         RELEASE_ASSERT(Options::useWebAssemblyFastMemory());
    227         RELEASE_ASSERT(mappedCapacity == Memory::fastMappedBytes());
    228 
    229         // This memory cannot cause a trap anymore.
    230         removeFromCurrentlyActiveFastMemories(memory);
    231 
    232         // We may cache fast memories. Assuming we will, we have to reset them before inserting them into the cache.
    233         zeroAndUnprotectBytes(memory, writableSize);
    234 
    235         if (tryAddToCachedFastMemory(memory))
    236             return;
    237 
    238         dataLogLnIf(verbose, "relinquishMemory unable to cache fast memory, freeing instead ", RawPointer(memory));
    239         munmapBytes(memory, Memory::fastMappedBytes());
    240         currentlyAllocatedFastMemories.fetch_sub(1, std::memory_order_acq_rel);
    241 
    242         return;
    243     }
    244 
    245     case MemoryMode::BoundsChecking:
    246         dataLogLnIf(verbose, "relinquishFastMemory freeing slow memory ", RawPointer(memory));
    247         munmapBytes(memory, mappedCapacity);
    248         currentSlowMemoryCapacity.fetch_sub(mappedCapacity, std::memory_order_acq_rel);
    249         return;
    250 
    251     case MemoryMode::NumberOfMemoryModes:
    252         break;
    253     }
    254 
    255     RELEASE_ASSERT_NOT_REACHED();
    256 }
    257 
    258 bool makeNewMemoryReadWriteOrRelinquish(void* memory, size_t initialBytes, size_t mappedCapacityBytes, MemoryMode mode)
    259 {
    260     ASSERT(memory && initialBytes <= mappedCapacityBytes);
    261     if (initialBytes) {
    262         dataLogLnIf(verbose, "Marking WebAssembly memory's ", RawPointer(memory), "'s initial ", initialBytes, " bytes as read+write");
    263         if (mprotect(memory, initialBytes, PROT_READ | PROT_WRITE)) {
    264             const char* why = strerror(errno);
    265             dataLogLnIf(verbose, "Failed making memory ", RawPointer(memory), " readable and writable: ", why);
    266             relinquishMemory(memory, 0, mappedCapacityBytes, mode);
    267             return false;
    268         }
    269     }
    270     return true;
     70    }
     71   
     72    MemoryResult() { }
     73   
     74    MemoryResult(void* basePtr, Kind kind)
     75        : basePtr(basePtr)
     76        , kind(kind)
     77    {
     78    }
     79   
     80    void dump(PrintStream& out) const
     81    {
     82        out.print("{basePtr = ", RawPointer(basePtr), ", kind = ", toString(kind), "}");
     83    }
     84   
     85    void* basePtr;
     86    Kind kind;
     87};
     88
     89class MemoryManager {
     90public:
     91    MemoryManager()
     92        : m_maxCount(Options::maxNumWebAssemblyFastMemories())
     93    {
     94    }
     95   
     96    MemoryResult tryAllocateVirtualPages()
     97    {
     98        MemoryResult result = [&] {
     99            auto holder = holdLock(m_lock);
     100            if (m_memories.size() >= m_maxCount)
     101                return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
     102           
     103            void* result = Gigacage::tryAllocateVirtualPages(Memory::fastMappedBytes());
     104            if (!result)
     105                return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
     106           
     107            m_memories.append(result);
     108           
     109            return MemoryResult(
     110                result,
     111                m_memories.size() >= m_maxCount / 2 ? MemoryResult::SuccessAndAsyncGC : MemoryResult::Success);
     112        }();
     113       
     114        if (Options::logWebAssemblyMemory())
     115            dataLog("Allocated virtual: ", result, "; state: ", *this, "\n");
     116       
     117        return result;
     118    }
     119   
     120    void freeVirtualPages(void* basePtr)
     121    {
     122        {
     123            auto holder = holdLock(m_lock);
     124            Gigacage::freeVirtualPages(basePtr, Memory::fastMappedBytes());
     125            m_memories.removeFirst(basePtr);
     126        }
     127       
     128        if (Options::logWebAssemblyMemory())
     129            dataLog("Freed virtual; state: ", *this, "\n");
     130    }
     131   
     132    bool containsAddress(void* address)
     133    {
     134        // NOTE: This can be called from a signal handler, but only after we proved that we're in JIT code.
     135        auto holder = holdLock(m_lock);
     136        for (void* memory : m_memories) {
     137            char* start = static_cast<char*>(memory);
     138            if (start <= address && address <= start + Memory::fastMappedBytes())
     139                return true;
     140        }
     141        return false;
     142    }
     143   
     144    // FIXME: Ideally, bmalloc would have this kind of mechanism. Then, we would just forward to that
     145    // mechanism here.
     146    MemoryResult::Kind tryAllocatePhysicalBytes(size_t bytes)
     147    {
     148        MemoryResult::Kind result = [&] {
     149            auto holder = holdLock(m_lock);
     150            if (m_physicalBytes + bytes > ramSize())
     151                return MemoryResult::SyncGCAndRetry;
     152           
     153            m_physicalBytes += bytes;
     154           
     155            if (m_physicalBytes >= ramSize() / 2)
     156                return MemoryResult::SuccessAndAsyncGC;
     157           
     158            return MemoryResult::Success;
     159        }();
     160       
     161        if (Options::logWebAssemblyMemory())
     162            dataLog("Allocated physical: ", bytes, ", ", MemoryResult::toString(result), "; state: ", *this, "\n");
     163       
     164        return result;
     165    }
     166   
     167    void freePhysicalBytes(size_t bytes)
     168    {
     169        {
     170            auto holder = holdLock(m_lock);
     171            m_physicalBytes -= bytes;
     172        }
     173       
     174        if (Options::logWebAssemblyMemory())
     175            dataLog("Freed physical: ", bytes, "; state: ", *this, "\n");
     176    }
     177   
     178    void dump(PrintStream& out) const
     179    {
     180        out.print("memories =  ", m_memories.size(), "/", m_maxCount, ", bytes = ", m_physicalBytes, "/", ramSize());
     181    }
     182   
     183private:
     184    Lock m_lock;
     185    unsigned m_maxCount { 0 };
     186    Vector<void*> m_memories;
     187    size_t m_physicalBytes { 0 };
     188};
     189
     190static MemoryManager& memoryManager()
     191{
     192    static std::once_flag onceFlag;
     193    static MemoryManager* manager;
     194    std::call_once(
     195        onceFlag,
     196        [] {
     197            manager = new MemoryManager();
     198        });
     199    return *manager;
     200}
     201
     202template<typename Func>
     203bool tryAndGC(VM& vm, const Func& allocate)
     204{
     205    unsigned numTries = 2;
     206    bool done = false;
     207    for (unsigned i = 0; i < numTries && !done; ++i) {
     208        switch (allocate()) {
     209        case MemoryResult::Success:
     210            done = true;
     211            break;
     212        case MemoryResult::SuccessAndAsyncGC:
     213            vm.heap.collectAsync(CollectionScope::Full);
     214            done = true;
     215            break;
     216        case MemoryResult::SyncGCAndRetry:
     217            if (i + 1 == numTries)
     218                break;
     219            vm.heap.collectSync(CollectionScope::Full);
     220            break;
     221        }
     222    }
     223    return done;
    271224}
    272225
    273226} // anonymous namespace
    274 
    275227
    276228const char* makeString(MemoryMode mode)
     
    279231    case MemoryMode::BoundsChecking: return "BoundsChecking";
    280232    case MemoryMode::Signaling: return "Signaling";
    281     case MemoryMode::NumberOfMemoryModes: break;
    282233    }
    283234    RELEASE_ASSERT_NOT_REACHED();
    284235    return "";
    285 }
    286 
    287 void Memory::initializePreallocations()
    288 {
    289     if (UNLIKELY(!Options::useWebAssemblyFastMemory()))
    290         return;
    291 
    292     // Races cannot occur in this function: it is only called at program initialization, before WebAssembly can be invoked.
    293 
    294     MonotonicTime startTime;
    295     if (verbose)
    296         startTime = MonotonicTime::now();
    297 
    298     const size_t desiredFastMemories = std::min<size_t>(Options::webAssemblyFastMemoryPreallocateCount(), fastMemoryCacheHardLimit);
    299 
    300     // Start off trying to allocate fast memories contiguously so they don't fragment each other. This can fail if the address space is otherwise fragmented. In that case, go for smaller contiguous allocations. We'll eventually get individual non-contiguous fast memories allocated, or we'll just be unable to fit a single one at which point we give up.
    301     auto allocateContiguousFastMemories = [&] (size_t numContiguous) -> bool {
    302         if (void *memory = mmapBytes(Memory::fastMappedBytes() * numContiguous)) {
    303             for (size_t subMemory = 0; subMemory < numContiguous; ++subMemory) {
    304                 void* startAddress = reinterpret_cast<char*>(memory) + Memory::fastMappedBytes() * subMemory;
    305                 bool inserted = false;
    306                 for (size_t cacheEntry = 0; cacheEntry < fastMemoryCacheHardLimit; ++cacheEntry) {
    307                     if (fastMemoryCache[cacheEntry].load(std::memory_order_relaxed) == nullptr) {
    308                         fastMemoryCache[cacheEntry].store(startAddress, std::memory_order_relaxed);
    309                         inserted = true;
    310                         break;
    311                     }
    312                 }
    313                 RELEASE_ASSERT(inserted);
    314             }
    315             return true;
    316         }
    317         return false;
    318     };
    319 
    320     size_t fragments = 0;
    321     size_t numFastMemories = 0;
    322     size_t contiguousMemoryAllocationAttempt = desiredFastMemories;
    323     while (numFastMemories != desiredFastMemories && contiguousMemoryAllocationAttempt != 0) {
    324         if (allocateContiguousFastMemories(contiguousMemoryAllocationAttempt)) {
    325             numFastMemories += contiguousMemoryAllocationAttempt;
    326             contiguousMemoryAllocationAttempt = std::min(contiguousMemoryAllocationAttempt - 1, desiredFastMemories - numFastMemories);
    327         } else
    328             --contiguousMemoryAllocationAttempt;
    329         ++fragments;
    330     }
    331 
    332     fastMemoryPreallocateCount = numFastMemories;
    333     currentlyAllocatedFastMemories.store(fastMemoryPreallocateCount, std::memory_order_relaxed);
    334     observedMaximumFastMemory.store(fastMemoryPreallocateCount, std::memory_order_relaxed);
    335 
    336     if (verbose) {
    337         MonotonicTime endTime = MonotonicTime::now();
    338 
    339         for (size_t cacheEntry = 0; cacheEntry < fastMemoryPreallocateCount; ++cacheEntry) {
    340             void* startAddress = fastMemoryCache[cacheEntry].load(std::memory_order_relaxed);
    341             ASSERT(startAddress);
    342             dataLogLn("Pre-allocation of WebAssembly fast memory at ", RawPointer(startAddress));
    343         }
    344 
    345         dataLogLn("Pre-allocated ", fastMemoryPreallocateCount, " WebAssembly fast memories in ", fastMemoryPreallocateCount == 0 ? 0 : fragments, fragments == 1 ? " fragment, took " : " fragments, took ", endTime - startTime);
    346     }
    347236}
    348237
     
    374263    const size_t initialBytes = initial.bytes();
    375264    const size_t maximumBytes = maximum ? maximum.bytes() : 0;
    376     size_t mappedCapacityBytes = 0;
    377     MemoryMode mode;
    378265
    379266    // We need to be sure we have a stub prior to running code.
     
    386273        return adoptRef(new Memory(initial, maximum));
    387274    }
    388 
    389     void* memory = nullptr;
    390 
    391     // First try fast memory, because they're fast. Fast memory is suitable for any initial / maximum.
    392     memory = tryGetFastMemory(vm);
    393     if (memory) {
    394         mappedCapacityBytes = Memory::fastMappedBytes();
    395         mode = MemoryMode::Signaling;
    396     }
    397 
    398     // If we can't get a fast memory but the user expressed the intent to grow memory up to a certain maximum then we should try to honor that desire. It'll mean that grow is more likely to succeed, and won't require remapping.
    399     if (!memory && maximum) {
    400         memory = tryGetSlowMemory(maximumBytes);
    401         if (memory) {
    402             mappedCapacityBytes = maximumBytes;
    403             mode = MemoryMode::BoundsChecking;
    404         }
    405     }
    406 
    407     // We're stuck with a slow memory which may be slower or impossible to grow.
    408     if (!memory) {
    409         if (!initialBytes)
    410             return adoptRef(new Memory(initial, maximum));
    411         memory = tryGetSlowMemory(initialBytes);
    412         if (memory) {
    413             mappedCapacityBytes = initialBytes;
    414             mode = MemoryMode::BoundsChecking;
    415         }
    416     }
    417 
    418     if (!memory)
     275   
     276    bool done = tryAndGC(
     277        vm,
     278        [&] () -> MemoryResult::Kind {
     279            return memoryManager().tryAllocatePhysicalBytes(initialBytes);
     280        });
     281    if (!done)
    419282        return nullptr;
    420 
    421     if (!makeNewMemoryReadWriteOrRelinquish(memory, initialBytes, mappedCapacityBytes, mode))
     283       
     284    char* fastMemory = nullptr;
     285    if (Options::useWebAssemblyFastMemory()) {
     286        tryAndGC(
     287            vm,
     288            [&] () -> MemoryResult::Kind {
     289                auto result = memoryManager().tryAllocateVirtualPages();
     290                fastMemory = bitwise_cast<char*>(result.basePtr);
     291                return result.kind;
     292            });
     293    }
     294   
     295    if (fastMemory) {
     296        bool writable = true;
     297        bool executable = false;
     298        OSAllocator::commit(fastMemory, initialBytes, writable, executable);
     299       
     300        if (mprotect(fastMemory + initialBytes, Memory::fastMappedBytes() - initialBytes, PROT_NONE)) {
     301            dataLog("mprotect failed: ", strerror(errno), "\n");
     302            RELEASE_ASSERT_NOT_REACHED();
     303        }
     304       
     305        memset(fastMemory, 0, initialBytes);
     306        return adoptRef(new Memory(fastMemory, initial, maximum, Memory::fastMappedBytes(), MemoryMode::Signaling));
     307    }
     308   
     309    if (UNLIKELY(Options::crashIfWebAssemblyCantFastMemory()))
     310        webAssemblyCouldntGetFastMemory();
     311
     312    if (!initialBytes)
     313        return adoptRef(new Memory(initial, maximum));
     314   
     315    void* slowMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), initialBytes);
     316    if (!slowMemory) {
     317        memoryManager().freePhysicalBytes(initialBytes);
    422318        return nullptr;
    423 
    424     return adoptRef(new Memory(memory, initial, maximum, mappedCapacityBytes, mode));
     319    }
     320    memset(slowMemory, 0, initialBytes);
     321    return adoptRef(new Memory(slowMemory, initial, maximum, initialBytes, MemoryMode::BoundsChecking));
    425322}
    426323
     
    428325{
    429326    if (m_memory) {
    430         dataLogLnIf(verbose, "Memory::~Memory ", *this);
    431         relinquishMemory(m_memory, m_size, m_mappedCapacity, m_mode);
     327        memoryManager().freePhysicalBytes(m_size);
     328        switch (m_mode) {
     329        case MemoryMode::Signaling:
     330            mprotect(m_memory, Memory::fastMappedBytes(), PROT_READ | PROT_WRITE);
     331            memoryManager().freeVirtualPages(m_memory);
     332            break;
     333        case MemoryMode::BoundsChecking:
     334            Gigacage::alignedFree(m_memory);
     335            break;
     336        }
    432337    }
    433338}
     
    444349}
    445350
    446 size_t Memory::maxFastMemoryCount()
    447 {
    448     // The order can be relaxed here because we provide a monotonically-increasing estimate. A concurrent observer could see a slightly out-of-date value but can't tell that they did.
    449     return observedMaximumFastMemory.load(std::memory_order_relaxed);
    450 }
    451 
    452351bool Memory::addressIsInActiveFastMemory(void* address)
    453352{
    454     // This cannot race in any meaningful way: the thread which calls this function wants to know if a fault it received at a particular address is in a fast memory. That fast memory must therefore be active in that thread. It cannot be added or removed from the list of currently active fast memories. Other memories being added / removed concurrently are inconsequential.
    455     for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
    456         char* start = static_cast<char*>(currentlyActiveFastMemories[idx].load(std::memory_order_acquire));
    457         if (start <= address && address <= start + fastMappedBytes())
    458             return true;
    459     }
    460     return false;
    461 }
    462 
    463 bool Memory::grow(PageCount newSize)
     353    return memoryManager().containsAddress(address);
     354}
     355
     356bool Memory::grow(VM& vm, PageCount newSize)
    464357{
    465358    RELEASE_ASSERT(newSize > PageCount::fromBytes(m_size));
     
    471364
    472365    size_t desiredSize = newSize.bytes();
    473 
     366    RELEASE_ASSERT(desiredSize > m_size);
     367    size_t extraBytes = desiredSize - m_size;
     368    RELEASE_ASSERT(extraBytes);
     369    bool success = tryAndGC(
     370        vm,
     371        [&] () -> MemoryResult::Kind {
     372            return memoryManager().tryAllocatePhysicalBytes(extraBytes);
     373        });
     374    if (!success)
     375        return false;
     376       
    474377    switch (mode()) {
    475     case MemoryMode::BoundsChecking:
     378    case MemoryMode::BoundsChecking: {
    476379        RELEASE_ASSERT(maximum().bytes() != 0);
    477         break;
    478     case MemoryMode::Signaling:
     380       
     381        void* newMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), desiredSize);
     382        if (!newMemory)
     383            return false;
     384        memcpy(newMemory, m_memory, m_size);
     385        memset(static_cast<char*>(newMemory) + m_size, 0, desiredSize - m_size);
     386        if (m_memory)
     387            Gigacage::alignedFree(m_memory);
     388        m_memory = newMemory;
     389        m_mappedCapacity = desiredSize;
     390        m_size = desiredSize;
     391        return true;
     392    }
     393    case MemoryMode::Signaling: {
     394        RELEASE_ASSERT(m_memory);
    479395        // Signaling memory must have been pre-allocated virtually.
    480         RELEASE_ASSERT(m_memory);
    481         break;
    482     case MemoryMode::NumberOfMemoryModes:
    483         RELEASE_ASSERT_NOT_REACHED();
    484     }
    485 
    486     if (m_memory && desiredSize <= m_mappedCapacity) {
    487396        uint8_t* startAddress = static_cast<uint8_t*>(m_memory) + m_size;
    488         size_t extraBytes = desiredSize - m_size;
    489         RELEASE_ASSERT(extraBytes);
     397       
    490398        dataLogLnIf(verbose, "Marking WebAssembly memory's ", RawPointer(m_memory), " as read+write in range [", RawPointer(startAddress), ", ", RawPointer(startAddress + extraBytes), ")");
    491399        if (mprotect(startAddress, extraBytes, PROT_READ | PROT_WRITE)) {
     
    493401            return false;
    494402        }
    495 
     403        memset(startAddress, 0, extraBytes);
    496404        m_size = desiredSize;
    497         dataLogLnIf(verbose, "Memory::grow in-place ", *this);
    498405        return true;
    499     }
    500 
    501     // Signaling memory can't grow past its already-mapped size.
    502     RELEASE_ASSERT(mode() != MemoryMode::Signaling);
    503 
    504     // Otherwise, let's try to make some new memory.
    505     // FIXME mremap would be nice https://bugs.webkit.org/show_bug.cgi?id=170557
    506     // FIXME should we over-allocate here? https://bugs.webkit.org/show_bug.cgi?id=170826
    507     void* newMemory = tryGetSlowMemory(desiredSize);
    508     if (!newMemory)
    509         return false;
    510 
    511     if (!makeNewMemoryReadWriteOrRelinquish(newMemory, desiredSize, desiredSize, mode()))
    512         return false;
    513 
    514     if (m_memory) {
    515         memcpy(newMemory, m_memory, m_size);
    516         relinquishMemory(m_memory, m_size, m_size, m_mode);
    517     }
    518 
    519     m_memory = newMemory;
    520     m_mappedCapacity = desiredSize;
    521     m_size = desiredSize;
    522 
    523     dataLogLnIf(verbose, "Memory::grow ", *this);
    524     return true;
     406    } }
     407   
     408    RELEASE_ASSERT_NOT_REACHED();
     409    return false;
    525410}
    526411
  • trunk/Source/JavaScriptCore/wasm/WasmMemory.h

    r215340 r220118  
    4646enum class MemoryMode : uint8_t {
    4747    BoundsChecking,
    48     Signaling,
    49     NumberOfMemoryModes
     48    Signaling
    5049};
    51 static constexpr size_t NumberOfMemoryModes = static_cast<size_t>(MemoryMode::NumberOfMemoryModes);
     50static constexpr size_t NumberOfMemoryModes = 2;
    5251JS_EXPORT_PRIVATE const char* makeString(MemoryMode);
    5352
     
    5958
    6059    explicit operator bool() const { return !!m_memory; }
    61 
    62     static void initializePreallocations();
     60   
    6361    static RefPtr<Memory> create(VM&, PageCount initial, PageCount maximum);
    6462
    65     Memory() = default;
    6663    ~Memory();
    6764
    6865    static size_t fastMappedRedzoneBytes();
    6966    static size_t fastMappedBytes(); // Includes redzone.
    70     static size_t maxFastMemoryCount();
    7167    static bool addressIsInActiveFastMemory(void*);
    7268
     
    8278    // grow() should only be called from the JSWebAssemblyMemory object since that object needs to update internal
    8379    // pointers with the current base and size.
    84     bool grow(PageCount);
     80    bool grow(VM&, PageCount);
    8581
    8682    void check() {  ASSERT(!deletionHasBegun()); }
  • trunk/Source/JavaScriptCore/wasm/js/JSWebAssemblyInstance.cpp

    r218951 r220118  
    349349    if (!instance->memory()) {
    350350        // Make sure we have a dummy memory, so that wasm -> wasm thunks avoid checking for a nullptr Memory when trying to set pinned registers.
    351         instance->m_memory.set(vm, instance, JSWebAssemblyMemory::create(exec, vm, exec->lexicalGlobalObject()->WebAssemblyMemoryStructure(), adoptRef(*(new Wasm::Memory()))));
     351        instance->m_memory.set(vm, instance, JSWebAssemblyMemory::create(exec, vm, exec->lexicalGlobalObject()->WebAssemblyMemoryStructure(), Wasm::Memory::create(vm, 0, 0).releaseNonNull()));
    352352        RETURN_IF_EXCEPTION(throwScope, nullptr);
    353353    }
  • trunk/Source/JavaScriptCore/wasm/js/JSWebAssemblyMemory.cpp

    r218951 r220118  
    107107
    108108    if (delta) {
    109         bool success = memory().grow(newSize);
     109        bool success = memory().grow(vm, newSize);
    110110        if (!success) {
    111111            ASSERT(m_memoryBase == memory().memory());
     
    139139    ASSERT(inherits(vm, info()));
    140140    heap()->reportExtraMemoryAllocated(memory().size());
    141     vm.heap.reportWebAssemblyFastMemoriesAllocated(1);
    142141}
    143142
  • trunk/Source/JavaScriptCore/wasm/js/JSWebAssemblyMemory.h

    r218951 r220118  
    4242    typedef JSDestructibleObject Base;
    4343
     44    template<typename CellType>
     45    static Subspace* subspaceFor(VM& vm)
     46    {
     47        // We hold onto a lot of memory, so it makes a lot of sense to be swept eagerly.
     48        return &vm.eagerlySweptDestructibleObjectSpace;
     49    }
     50
    4451    static JSWebAssemblyMemory* create(ExecState*, VM&, Structure*, Ref<Wasm::Memory>&&);
    4552    static Structure* createStructure(VM&, JSGlobalObject*, JSValue);
  • trunk/Source/WTF/ChangeLog

    r220069 r220118  
     12017-08-01  Filip Pizlo  <fpizlo@apple.com>
     2
     3        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
     4        https://bugs.webkit.org/show_bug.cgi?id=174727
     5
     6        Reviewed by Mark Lam.
     7       
     8        For the Gigacage project to have minimal impact, we need to have some abstraction that allows code to
     9        avoid having to guard itself with #if's. This adds a Gigacage abstraction that overlays the Gigacage
     10        namespace from bmalloc, which always lets you call things like Gigacage::caged and Gigacage::tryMalloc.
     11       
     12        Because of how many places need to possibly allocate in a gigacage, or possibly perform caged accesses,
     13        it's better to hide the question of whether or not it's enabled inside this API.
     14
     15        * WTF.xcodeproj/project.pbxproj:
     16        * wtf/CMakeLists.txt:
     17        * wtf/FastMalloc.cpp:
     18        * wtf/Gigacage.cpp: Added.
     19        (Gigacage::tryMalloc):
     20        (Gigacage::tryAllocateVirtualPages):
     21        (Gigacage::freeVirtualPages):
     22        (Gigacage::tryAlignedMalloc):
     23        (Gigacage::alignedFree):
     24        (Gigacage::free):
     25        * wtf/Gigacage.h: Added.
     26        (Gigacage::ensureGigacage):
     27        (Gigacage::disableGigacage):
     28        (Gigacage::addDisableCallback):
     29        (Gigacage::removeDisableCallback):
     30        (Gigacage::caged):
     31        (Gigacage::isCaged):
     32        (Gigacage::tryAlignedMalloc):
     33        (Gigacage::alignedFree):
     34        (Gigacage::free):
     35
    1362017-07-31  Matt Lewis  <jlewis3@apple.com>
    237
  • trunk/Source/WTF/WTF.xcodeproj/project.pbxproj

    r220069 r220118  
    2424                0F30BA901E78708E002CA847 /* GlobalVersion.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F30BA8A1E78708E002CA847 /* GlobalVersion.cpp */; };
    2525                0F43D8F11DB5ADDC00108FB6 /* AutomaticThread.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F43D8EF1DB5ADDC00108FB6 /* AutomaticThread.cpp */; };
     26                0F5BF1761F23D49A0029D91D /* Gigacage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1741F23D49A0029D91D /* Gigacage.cpp */; };
    2627                0F60F32F1DFCBD1B00416D6C /* LockedPrintStream.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F60F32D1DFCBD1B00416D6C /* LockedPrintStream.cpp */; };
    2728                0F66B28A1DC97BAB004A1D3F /* ClockType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F66B2801DC97BAB004A1D3F /* ClockType.cpp */; };
     
    181182                0F4570421BE5B58F0062A629 /* Dominators.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Dominators.h; sourceTree = "<group>"; };
    182183                0F4570441BE834410062A629 /* BubbleSort.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BubbleSort.h; sourceTree = "<group>"; };
     184                0F5BF1741F23D49A0029D91D /* Gigacage.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = Gigacage.cpp; sourceTree = "<group>"; };
     185                0F5BF1751F23D49A0029D91D /* Gigacage.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = Gigacage.h; sourceTree = "<group>"; };
    183186                0F5BF1651F2317830029D91D /* NaturalLoops.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = NaturalLoops.h; sourceTree = "<group>"; };
    184187                0F60F32D1DFCBD1B00416D6C /* LockedPrintStream.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LockedPrintStream.cpp; sourceTree = "<group>"; };
     
    801804                                1A1D8B9B173186CE00141DA4 /* FunctionDispatcher.h */,
    802805                                A8A472A8151A825A004123FF /* GetPtr.h */,
     806                                0F5BF1741F23D49A0029D91D /* Gigacage.cpp */,
     807                                0F5BF1751F23D49A0029D91D /* Gigacage.h */,
    803808                                0F30BA8A1E78708E002CA847 /* GlobalVersion.cpp */,
    804809                                0F30BA8B1E78708E002CA847 /* GlobalVersion.h */,
     
    13841389                                A5BA15FC182435A600A82E69 /* StringImplCF.cpp in Sources */,
    13851390                                A5BA15F51824348000A82E69 /* StringImplMac.mm in Sources */,
     1391                                0F5BF1761F23D49A0029D91D /* Gigacage.cpp in Sources */,
    13861392                                A5BA15F3182433A900A82E69 /* StringMac.mm in Sources */,
    13871393                                0FDDBFA71666DFA300C55FEF /* StringPrintStream.cpp in Sources */,
  • trunk/Source/WTF/wtf/CMakeLists.txt

    r220069 r220118  
    3939    FunctionDispatcher.h
    4040    GetPtr.h
     41    Gigacage.h
    4142    GlobalVersion.h
    4243    GraphNodeWorklist.h
     
    220221    FilePrintStream.cpp
    221222    FunctionDispatcher.cpp
     223    Gigacage.cpp
    222224    GlobalVersion.cpp
    223225    GregorianDateTime.cpp
  • trunk/Source/WTF/wtf/FastMalloc.cpp

    r218800 r220118  
    11/*
    22 * Copyright (c) 2005, 2007, Google Inc. All rights reserved.
    3  * Copyright (C) 2005-2009, 2011, 2015-2016 Apple Inc. All rights reserved.
     3 * Copyright (C) 2005-2017 Apple Inc. All rights reserved.
    44 * Redistribution and use in source and binary forms, with or without
    55 * modification, are permitted provided that the following conditions
  • trunk/Source/WebCore/ChangeLog

    r220117 r220118  
     12017-08-01  Filip Pizlo  <fpizlo@apple.com>
     2
     3        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
     4        https://bugs.webkit.org/show_bug.cgi?id=174727
     5
     6        Reviewed by Mark Lam.
     7
     8        No new tests because no change in behavior.
     9       
     10        Needed to teach Metal how to allocate in the Gigacage.
     11
     12        * platform/graphics/cocoa/GPUBufferMetal.mm:
     13        (WebCore::GPUBuffer::GPUBuffer):
     14        (WebCore::GPUBuffer::contents):
     15
     1 16   2017-08-01  Fujii Hironori  <Hironori.Fujii@sony.com>
    217
  • trunk/Source/WebCore/platform/graphics/cocoa/GPUBufferMetal.mm

    r213780 r220118  
    3131#import "GPUDevice.h"
    3232#import "Logging.h"
    33 
    3433#import <Metal/Metal.h>
     34#import <wtf/Gigacage.h>
     35#import <wtf/PageBlock.h>
    3536
    3637namespace WebCore {
     
    4243    if (!device || !device->platformDevice() || !data)
    4344        return;
    44 
    45     m_buffer = adoptNS((MTLBuffer *)[device->platformDevice() newBufferWithBytes:data->baseAddress() length:data->byteLength() options:MTLResourceOptionCPUCacheModeDefault]);
     45   
     46    size_t pageSize = WTF::pageSize();
     47    size_t pageAlignedSize = roundUpToMultipleOf(pageSize, data->byteLength());
     48    void* pageAlignedCopy = Gigacage::tryAlignedMalloc(pageSize, pageAlignedSize);
     49    if (!pageAlignedCopy)
     50        return;
     51    memcpy(pageAlignedCopy, data->baseAddress(), data->byteLength());
     52    m_contents = ArrayBuffer::createFromBytes(pageAlignedCopy, data->byteLength(), [] (void* ptr) { Gigacage::alignedFree(ptr); });
     53    m_contents->ref();
     54    ArrayBuffer* capturedContents = m_contents.get();
     55    m_buffer = adoptNS((MTLBuffer *)[device->platformDevice() newBufferWithBytesNoCopy:m_contents->data() length:pageAlignedSize options:MTLResourceOptionCPUCacheModeDefault deallocator:^(void*, NSUInteger) { capturedContents->deref(); }]);
     56    if (!m_buffer) {
     57        m_contents->deref();
     58        m_contents = nullptr;
     59    }
    4660}
    4761
     
    5670RefPtr<ArrayBuffer> GPUBuffer::contents()
    5771{
    58     if (m_contents)
    59         return m_contents;
    60 
    61     if (!m_buffer)
    62         return nullptr;
    63 
    64     m_contents = ArrayBuffer::createFromBytes([m_buffer contents], [m_buffer length], [] (void*) { });
    6572    return m_contents;
    6673}
  • trunk/Source/WebKit/ChangeLog

    r220115 r220118  
     12017-08-01  Filip Pizlo  <fpizlo@apple.com>
     2
     3        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
     4        https://bugs.webkit.org/show_bug.cgi?id=174727
     5
     6        Reviewed by Mark Lam.
     7       
     8        The WebProcess should never disable the Gigacage by allocating typed arrays outside the Gigacage. So,
     9        we add a callback that crashes the process.
     10
     11        * WebProcess/WebProcess.cpp:
     12        (WebKit::gigacageDisabled):
     13        (WebKit::m_webSQLiteDatabaseTracker):
     14
     1 15   2017-08-01  Brian Burg  <bburg@apple.com>
    216
  • trunk/Source/WebKit/WebProcess/WebProcess.cpp

    r220105 r220118  
    147147namespace WebKit {
    148148
     149static void gigacageDisabled(void*)
     150{
     151    UNREACHABLE_FOR_PLATFORM();
     152}
     153
    149154WebProcess& WebProcess::singleton()
    150155{
     
    197202        parentProcessConnection()->send(Messages::WebResourceLoadStatisticsStore::ResourceLoadStatisticsUpdated(WTFMove(statistics)), 0);
    198203    });
     204
     205    if (GIGACAGE_ENABLED)
     206        Gigacage::addDisableCallback(gigacageDisabled, nullptr);
    199207}
    200208
  • trunk/Source/bmalloc/CMakeLists.txt

    r216763 r220118  
    1212    bmalloc/DebugHeap.cpp
    1313    bmalloc/Environment.cpp
     14    bmalloc/Gigacage.cpp
    1415    bmalloc/Heap.cpp
    1516    bmalloc/LargeMap.cpp
    1617    bmalloc/Logging.cpp
    1718    bmalloc/ObjectType.cpp
     19    bmalloc/Scavenger.cpp
    1820    bmalloc/StaticMutex.cpp
    1921    bmalloc/VMHeap.cpp
  • trunk/Source/bmalloc/ChangeLog

    r220097 r220118  
     12017-08-01  Filip Pizlo  <fpizlo@apple.com>
     2
     3        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
     4        https://bugs.webkit.org/show_bug.cgi?id=174727
     5
     6        Reviewed by Mark Lam.
     7       
     8        This adds a mechanism for managing multiple isolated heaps in bmalloc. For now, these isoheaps
     9        (isolated heaps) have a very simple relationship with each other and with the rest of bmalloc:
     10       
     11        - You have to choose how many isoheaps you will have statically. See numHeaps in HeapKind.h.
     12       
     13        - Because numHeaps is static, each isoheap gets fast thread-local allocation. Basically, we have a
     14          Cache for each heap kind.
     15       
     16        - Each isoheap gets its own Heap.
     17       
     18        - Each Heap gets a scavenger thread.
     19       
     20        - Some things, like Zone/VMHeap/Scavenger, are per-process.
     21       
     22        Most of the per-HeapKind functionality is handled by PerHeapKind<>.
     23       
     24        This approach is ideal for supporting special per-HeapKind behaviors. For now we have two heaps:
     25        the Primary heap for normal malloc and the Gigacage. The gigacage is a 64GB-aligned 64GB virtual
     26        region that we now use for variable-length random-access allocations. No Primary allocations will
     27        go into the Gigacage.
     28
     29        * CMakeLists.txt:
     30        * bmalloc.xcodeproj/project.pbxproj:
     31        * bmalloc/AllocationKind.h: Added.
     32        * bmalloc/Allocator.cpp:
     33        (bmalloc::Allocator::Allocator):
     34        (bmalloc::Allocator::tryAllocate):
     35        (bmalloc::Allocator::allocateImpl):
     36        (bmalloc::Allocator::reallocate):
     37        (bmalloc::Allocator::refillAllocatorSlowCase):
     38        (bmalloc::Allocator::allocateLarge):
     39        * bmalloc/Allocator.h:
     40        * bmalloc/BExport.h: Added.
     41        * bmalloc/Cache.cpp:
     42        (bmalloc::Cache::scavenge):
     43        (bmalloc::Cache::Cache):
     44        (bmalloc::Cache::tryAllocateSlowCaseNullCache):
     45        (bmalloc::Cache::allocateSlowCaseNullCache):
     46        (bmalloc::Cache::deallocateSlowCaseNullCache):
     47        (bmalloc::Cache::reallocateSlowCaseNullCache):
     48        (bmalloc::Cache::operator new): Deleted.
     49        (bmalloc::Cache::operator delete): Deleted.
     50        * bmalloc/Cache.h:
     51        (bmalloc::Cache::tryAllocate):
     52        (bmalloc::Cache::allocate):
     53        (bmalloc::Cache::deallocate):
     54        (bmalloc::Cache::reallocate):
     55        * bmalloc/Deallocator.cpp:
     56        (bmalloc::Deallocator::Deallocator):
     57        (bmalloc::Deallocator::scavenge):
     58        (bmalloc::Deallocator::processObjectLog):
     59        (bmalloc::Deallocator::deallocateSlowCase):
     60        * bmalloc/Deallocator.h:
     61        * bmalloc/Gigacage.cpp: Added.
     62        (Gigacage::Callback::Callback):
     63        (Gigacage::Callback::function):
     64        (Gigacage::Callbacks::Callbacks):
     65        (Gigacage::ensureGigacage):
     66        (Gigacage::disableGigacage):
     67        (Gigacage::addDisableCallback):
     68        (Gigacage::removeDisableCallback):
     69        * bmalloc/Gigacage.h: Added.
     70        (Gigacage::caged):
     71        (Gigacage::isCaged):
     72        * bmalloc/Heap.cpp:
     73        (bmalloc::Heap::Heap):
     74        (bmalloc::Heap::usingGigacage):
     75        (bmalloc::Heap::concurrentScavenge):
     76        (bmalloc::Heap::splitAndAllocate):
     77        (bmalloc::Heap::tryAllocateLarge):
     78        (bmalloc::Heap::allocateLarge):
     79        (bmalloc::Heap::shrinkLarge):
     80        (bmalloc::Heap::deallocateLarge):
     81        * bmalloc/Heap.h:
     82        (bmalloc::Heap::mutex):
     83        (bmalloc::Heap::kind const):
     84        (bmalloc::Heap::setScavengerThreadQOSClass): Deleted.
     85        * bmalloc/HeapKind.h: Added.
     86        * bmalloc/ObjectType.cpp:
     87        (bmalloc::objectType):
     88        * bmalloc/ObjectType.h:
     89        * bmalloc/PerHeapKind.h: Added.
     90        (bmalloc::PerHeapKindBase::PerHeapKindBase):
     91        (bmalloc::PerHeapKindBase::size):
     92        (bmalloc::PerHeapKindBase::at):
     93        (bmalloc::PerHeapKindBase::at const):
     94        (bmalloc::PerHeapKindBase::operator[]):
     95        (bmalloc::PerHeapKindBase::operator[] const):
     96        (bmalloc::StaticPerHeapKind::StaticPerHeapKind):
     97        (bmalloc::PerHeapKind::PerHeapKind):
     98        (bmalloc::PerHeapKind::~PerHeapKind):
     99        * bmalloc/PerThread.h:
     100        (bmalloc::PerThread<T>::destructor):
     101        (bmalloc::PerThread<T>::getSlowCase):
     102        (bmalloc::PerThreadStorage<Cache>::get): Deleted.
     103        (bmalloc::PerThreadStorage<Cache>::init): Deleted.
     104        * bmalloc/Scavenger.cpp: Added.
     105        (bmalloc::Scavenger::Scavenger):
     106        (bmalloc::Scavenger::scavenge):
     107        * bmalloc/Scavenger.h: Added.
     108        (bmalloc::Scavenger::setScavengerThreadQOSClass):
     109        (bmalloc::Scavenger::requestedScavengerThreadQOSClass const):
     110        * bmalloc/VMHeap.cpp:
     111        (bmalloc::VMHeap::VMHeap):
     112        (bmalloc::VMHeap::tryAllocateLargeChunk):
     113        * bmalloc/VMHeap.h:
     114        * bmalloc/Zone.cpp:
     115        (bmalloc::Zone::Zone):
     116        * bmalloc/Zone.h:
     117        * bmalloc/bmalloc.h:
     118        (bmalloc::api::tryMalloc):
     119        (bmalloc::api::malloc):
     120        (bmalloc::api::tryMemalign):
     121        (bmalloc::api::memalign):
     122        (bmalloc::api::realloc):
     123        (bmalloc::api::tryLargeMemalignVirtual):
     124        (bmalloc::api::free):
     125        (bmalloc::api::freeLargeVirtual):
     126        (bmalloc::api::scavengeThisThread):
     127        (bmalloc::api::scavenge):
     128        (bmalloc::api::isEnabled):
     129        (bmalloc::api::setScavengerThreadQOSClass):
     130        * bmalloc/mbmalloc.cpp:
     131
     1 132  2017-08-01  Daewoong Jang  <daewoong.jang@navercorp.com>
    2133
  • trunk/Source/bmalloc/bmalloc.xcodeproj/project.pbxproj

    r219009 r220118  
    88
    99/* Begin PBXBuildFile section */
     10                0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3DA0131F267AB800342C08 /* AllocationKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
     11                0F5BF1471F22A8B10029D91D /* HeapKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1461F22A8B10029D91D /* HeapKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
     12                0F5BF1491F22A8D80029D91D /* PerHeapKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1481F22A8D80029D91D /* PerHeapKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
     13                0F5BF14D1F22B0C30029D91D /* Gigacage.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF14C1F22B0C30029D91D /* Gigacage.h */; settings = {ATTRIBUTES = (Private, ); }; };
     14                0F5BF14F1F22DEAF0029D91D /* Gigacage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */; };
     15                0F5BF1521F22E1570029D91D /* Scavenger.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1501F22E1570029D91D /* Scavenger.cpp */; };
     16                0F5BF1531F22E1570029D91D /* Scavenger.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1511F22E1570029D91D /* Scavenger.h */; settings = {ATTRIBUTES = (Private, ); }; };
     17                0F5BF1731F23C5710029D91D /* BExport.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1721F23C5710029D91D /* BExport.h */; settings = {ATTRIBUTES = (Private, ); }; };
    1018                1400274918F89C1300115C97 /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14DA320C18875B09007269E0 /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; };
    1119                1400274A18F89C2300115C97 /* VMHeap.h in Headers */ = {isa = PBXBuildFile; fileRef = 144F7BFC18BFC517003537F3 /* VMHeap.h */; settings = {ATTRIBUTES = (Private, ); }; };
     
    7684
    7785/* Begin PBXFileReference section */
     86                0F3DA0131F267AB800342C08 /* AllocationKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = AllocationKind.h; path = bmalloc/AllocationKind.h; sourceTree = "<group>"; };
     87                0F5BF1461F22A8B10029D91D /* HeapKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = HeapKind.h; path = bmalloc/HeapKind.h; sourceTree = "<group>"; };
     88                0F5BF1481F22A8D80029D91D /* PerHeapKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = PerHeapKind.h; path = bmalloc/PerHeapKind.h; sourceTree = "<group>"; };
     89                0F5BF14C1F22B0C30029D91D /* Gigacage.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = Gigacage.h; path = bmalloc/Gigacage.h; sourceTree = "<group>"; };
     90                0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = Gigacage.cpp; path = bmalloc/Gigacage.cpp; sourceTree = "<group>"; };
     91                0F5BF1501F22E1570029D91D /* Scavenger.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = Scavenger.cpp; path = bmalloc/Scavenger.cpp; sourceTree = "<group>"; };
     92                0F5BF1511F22E1570029D91D /* Scavenger.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = Scavenger.h; path = bmalloc/Scavenger.h; sourceTree = "<group>"; };
     93                0F5BF1721F23C5710029D91D /* BExport.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = BExport.h; path = bmalloc/BExport.h; sourceTree = "<group>"; };
    7894                140FA00219CE429C00FFD3C8 /* BumpRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BumpRange.h; path = bmalloc/BumpRange.h; sourceTree = "<group>"; };
    7995                140FA00419CE4B6800FFD3C8 /* LineMetadata.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LineMetadata.h; path = bmalloc/LineMetadata.h; sourceTree = "<group>"; };
     
    236252                        isa = PBXGroup;
    237253                        children = (
     254                                0F3DA0131F267AB800342C08 /* AllocationKind.h */,
    238255                                140FA00219CE429C00FFD3C8 /* BumpRange.h */,
    239256                                147DC6E21CA5B70B00724E8D /* Chunk.h */,
     
    242259                                14895D8F1A3A319C0006235D /* Environment.cpp */,
    243260                                14895D901A3A319C0006235D /* Environment.h */,
     261                                0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */,
     262                                0F5BF14C1F22B0C30029D91D /* Gigacage.h */,
    244263                                14DA320E18875D9F007269E0 /* Heap.cpp */,
    245264                                14DA320C18875B09007269E0 /* Heap.h */,
     
    248267                                14105E8318E14374003A106E /* ObjectType.cpp */,
    249268                                1485656018A43DBA00ED6942 /* ObjectType.h */,
     269                                0F5BF1501F22E1570029D91D /* Scavenger.cpp */,
     270                                0F5BF1511F22E1570029D91D /* Scavenger.h */,
    250271                                145F6874179DF84100D65598 /* Sizes.h */,
    251272                                144F7BFB18BFC517003537F3 /* VMHeap.cpp */,
     
    266287                                6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */,
    267288                                1413E468189EEDE400546D68 /* BAssert.h */,
     289                                0F5BF1721F23C5710029D91D /* BExport.h */,
    268290                                14C919C818FCC59F0028DB43 /* BPlatform.h */,
    269291                                14D9DB4517F2447100EAAB79 /* FixedVector.h */,
     292                                0F5BF1461F22A8B10029D91D /* HeapKind.h */,
    270293                                1413E460189DCE1E00546D68 /* Inline.h */,
    271294                                141D9AFF1C8E51C0000ABBA0 /* List.h */,
     
    274297                                14C8992A1CC485E70027A057 /* Map.h */,
    275298                                144DCED617A649D90093B2F2 /* Mutex.h */,
     299                                0F5BF1481F22A8D80029D91D /* PerHeapKind.h */,
    276300                                14446A0717A61FA400F9EA1D /* PerProcess.h */,
    277301                                144469FD17A61F1F00F9EA1D /* PerThread.h */,
     
    311335                                14DD78C518F48D7500950702 /* Algorithm.h in Headers */,
    312336                                14DD789818F48D4A00950702 /* Allocator.h in Headers */,
     337                                0F5BF1531F22E1570029D91D /* Scavenger.h in Headers */,
     338                                0F5BF1471F22A8B10029D91D /* HeapKind.h in Headers */,
    313339                                14DD78C618F48D7500950702 /* AsyncTask.h in Headers */,
    314340                                6599C5CD1EC3F15900A2F7BB /* AvailableMemory.h in Headers */,
     
    321347                                14DD789918F48D4A00950702 /* Cache.h in Headers */,
    322348                                147DC6E31CA5B70B00724E8D /* Chunk.h in Headers */,
     349                                0F5BF1731F23C5710029D91D /* BExport.h in Headers */,
    323350                                14DD789A18F48D4A00950702 /* Deallocator.h in Headers */,
    324351                                142B44371E2839E7001DA6E9 /* DebugHeap.h in Headers */,
     
    326353                                14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
    327354                                1400274918F89C1300115C97 /* Heap.h in Headers */,
     355                                0F5BF1491F22A8D80029D91D /* PerHeapKind.h in Headers */,
    328356                                14DD78C918F48D7500950702 /* Inline.h in Headers */,
    329357                                144C07F51C7B70260051BB6A /* LargeMap.h in Headers */,
     
    337365                                14DD789318F48D0F00950702 /* ObjectType.h in Headers */,
    338366                                14DD78CB18F48D7500950702 /* PerProcess.h in Headers */,
     367                                0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */,
    339368                                14DD78CC18F48D7500950702 /* PerThread.h in Headers */,
    340369                                14DD78CD18F48D7500950702 /* Range.h in Headers */,
     
    345374                                143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */,
    346375                                14DD78CE18F48D7500950702 /* Syscall.h in Headers */,
     376                                0F5BF14D1F22B0C30029D91D /* Gigacage.h in Headers */,
    347377                                14DD78CF18F48D7500950702 /* Vector.h in Headers */,
    348378                                14DD78D018F48D7500950702 /* VMAllocate.h in Headers */,
     
    430460                        buildActionMask = 2147483647;
    431461                        files = (
     462                                0F5BF1521F22E1570029D91D /* Scavenger.cpp in Sources */,
    432463                                14F271C318EA3978008C152F /* Allocator.cpp in Sources */,
    433464                                6599C5CC1EC3F15900A2F7BB /* AvailableMemory.cpp in Sources */,
     
    437468                                14895D911A3A319C0006235D /* Environment.cpp in Sources */,
    438469                                14F271C718EA3990008C152F /* Heap.cpp in Sources */,
     470                                0F5BF14F1F22DEAF0029D91D /* Gigacage.cpp in Sources */,
    439471                                144C07F41C7B70260051BB6A /* LargeMap.cpp in Sources */,
    440472                                4426E2801C838EE0008EB042 /* Logging.cpp in Sources */,
  • trunk/Source/bmalloc/bmalloc/Allocator.cpp

    r218788 r220118  
    3939namespace bmalloc {
    4040
    41 Allocator::Allocator(Heap* heap, Deallocator& deallocator)
    42     : m_debugHeap(heap->debugHeap())
     41Allocator::Allocator(Heap& heap, Deallocator& deallocator)
     42    : m_heap(heap)
     43    , m_debugHeap(heap.debugHeap())
    4344    , m_deallocator(deallocator)
    4445{
     
    6061        return allocate(size);
    6162
    62     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    63     return PerProcess<Heap>::getFastCase()->tryAllocateLarge(lock, alignment, size);
     63    std::lock_guard<StaticMutex> lock(Heap::mutex());
     64    return m_heap.tryAllocateLarge(lock, alignment, size);
    6465}
    6566
     
    8990        return allocate(roundUpToMultipleOf(alignment, size));
    9091
    91     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    92     Heap* heap = PerProcess<Heap>::getFastCase();
     92    std::lock_guard<StaticMutex> lock(Heap::mutex());
    9393    if (crashOnFailure)
    94         return heap->allocateLarge(lock, alignment, size);
    95     return heap->tryAllocateLarge(lock, alignment, size);
     94        return m_heap.allocateLarge(lock, alignment, size);
     95    return m_heap.tryAllocateLarge(lock, alignment, size);
    9696}
    9797
     
    102102
    103103    size_t oldSize = 0;
    104     switch (objectType(object)) {
     104    switch (objectType(m_heap.kind(), object)) {
    105105    case ObjectType::Small: {
    106         BASSERT(objectType(nullptr) == ObjectType::Small);
     106        BASSERT(objectType(m_heap.kind(), nullptr) == ObjectType::Small);
    107107        if (!object)
    108108            break;
     
    113113    }
    114114    case ObjectType::Large: {
    115         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    116         oldSize = PerProcess<Heap>::getFastCase()->largeSize(lock, object);
     115        std::lock_guard<StaticMutex> lock(Heap::mutex());
     116        oldSize = m_heap.largeSize(lock, object);
    117117
    118118        if (newSize < oldSize && newSize > smallMax) {
    119             PerProcess<Heap>::getFastCase()->shrinkLarge(lock, Range(object, oldSize), newSize);
     119            m_heap.shrinkLarge(lock, Range(object, oldSize), newSize);
    120120            return object;
    121121        }
     
    154154    BumpRangeCache& bumpRangeCache = m_bumpRangeCaches[sizeClass];
    155155
    156     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
     156    std::lock_guard<StaticMutex> lock(Heap::mutex());
    157157    m_deallocator.processObjectLog(lock);
    158     PerProcess<Heap>::getFastCase()->allocateSmallBumpRanges(
    159         lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
     158    m_heap.allocateSmallBumpRanges(lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
    160159}
    161160
     
    170169NO_INLINE void* Allocator::allocateLarge(size_t size)
    171170{
    172     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    173     return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size);
     171    std::lock_guard<StaticMutex> lock(Heap::mutex());
     172    return m_heap.allocateLarge(lock, alignment, size);
    174173}
    175174
  • trunk/Source/bmalloc/bmalloc/Allocator.h

    r210746 r220118  
    2727#define Allocator_h
    2828
     29#include "BExport.h"
    2930#include "BumpAllocator.h"
    3031#include <array>
     
    4041class Allocator {
    4142public:
    42     Allocator(Heap*, Deallocator&);
     43    Allocator(Heap&, Deallocator&);
    4344    ~Allocator();
    4445
     
    5556   
    5657    bool allocateFastCase(size_t, void*&);
    57     void* allocateSlowCase(size_t);
     58    BEXPORT void* allocateSlowCase(size_t);
    5859   
    5960    void* allocateLogSizeClass(size_t);
     
    6667    std::array<BumpRangeCache, sizeClassCount> m_bumpRangeCaches;
    6768
     69    Heap& m_heap;
    6870    DebugHeap* m_debugHeap;
    6971    Deallocator& m_deallocator;
  • trunk/Source/bmalloc/bmalloc/Cache.cpp

    r181329 r220118  
    11/*
    2  * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3131namespace bmalloc {
    3232
    33 void* Cache::operator new(size_t size)
     33void Cache::scavenge(HeapKind heapKind)
    3434{
    35     return vmAllocate(vmSize(size));
     35    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     36    if (!caches)
     37        return;
     38
     39    caches->at(heapKind).allocator().scavenge();
     40    caches->at(heapKind).deallocator().scavenge();
    3641}
    3742
    38 void Cache::operator delete(void* p, size_t size)
    39 {
    40     vmDeallocate(p, vmSize(size));
    41 }
    42 
    43 void Cache::scavenge()
    44 {
    45     Cache* cache = PerThread<Cache>::getFastCase();
    46     if (!cache)
    47         return;
    48 
    49     cache->allocator().scavenge();
    50     cache->deallocator().scavenge();
    51 }
    52 
    53 Cache::Cache()
    54     : m_deallocator(PerProcess<Heap>::get())
    55     , m_allocator(PerProcess<Heap>::get(), m_deallocator)
     43Cache::Cache(HeapKind heapKind)
     44    : m_deallocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind))
     45    , m_allocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind), m_deallocator)
    5646{
    5747}
    5848
    59 NO_INLINE void* Cache::tryAllocateSlowCaseNullCache(size_t size)
     49NO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
    6050{
    61     return PerThread<Cache>::getSlowCase()->allocator().tryAllocate(size);
     51    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().tryAllocate(size);
    6252}
    6353
    64 NO_INLINE void* Cache::allocateSlowCaseNullCache(size_t size)
     54NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
    6555{
    66     return PerThread<Cache>::getSlowCase()->allocator().allocate(size);
     56    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(size);
    6757}
    6858
    69 NO_INLINE void* Cache::allocateSlowCaseNullCache(size_t alignment, size_t size)
     59NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
    7060{
    71     return PerThread<Cache>::getSlowCase()->allocator().allocate(alignment, size);
     61    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(alignment, size);
    7262}
    7363
    74 NO_INLINE void Cache::deallocateSlowCaseNullCache(void* object)
     64NO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
    7565{
    76     PerThread<Cache>::getSlowCase()->deallocator().deallocate(object);
     66    PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).deallocator().deallocate(object);
    7767}
    7868
    79 NO_INLINE void* Cache::reallocateSlowCaseNullCache(void* object, size_t newSize)
     69NO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
    8070{
    81     return PerThread<Cache>::getSlowCase()->allocator().reallocate(object, newSize);
     71    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().reallocate(object, newSize);
    8272}
    8373
  • trunk/Source/bmalloc/bmalloc/Cache.h

    r205462 r220118  
    11/*
    2  * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2828
    2929#include "Allocator.h"
     30#include "BExport.h"
    3031#include "Deallocator.h"
     32#include "HeapKind.h"
    3133#include "PerThread.h"
    3234
     
    3739class Cache {
    3840public:
    39     void* operator new(size_t);
    40     void operator delete(void*, size_t);
     41    static void* tryAllocate(HeapKind, size_t);
     42    static void* allocate(HeapKind, size_t);
     43    static void* tryAllocate(HeapKind, size_t alignment, size_t);
     44    static void* allocate(HeapKind, size_t alignment, size_t);
     45    static void deallocate(HeapKind, void*);
     46    static void* reallocate(HeapKind, void*, size_t);
    4147
    42     static void* tryAllocate(size_t);
    43     static void* allocate(size_t);
    44     static void* tryAllocate(size_t alignment, size_t);
    45     static void* allocate(size_t alignment, size_t);
    46     static void deallocate(void*);
    47     static void* reallocate(void*, size_t);
     48    static void scavenge(HeapKind);
    4849
    49     static void scavenge();
    50 
    51     Cache();
     50    Cache(HeapKind);
    5251
    5352    Allocator& allocator() { return m_allocator; }
     
    5554
    5655private:
    57     static void* tryAllocateSlowCaseNullCache(size_t);
    58     static void* allocateSlowCaseNullCache(size_t);
    59     static void* allocateSlowCaseNullCache(size_t alignment, size_t);
    60     static void deallocateSlowCaseNullCache(void*);
    61     static void* reallocateSlowCaseNullCache(void*, size_t);
     56    BEXPORT static void* tryAllocateSlowCaseNullCache(HeapKind, size_t);
     57    BEXPORT static void* allocateSlowCaseNullCache(HeapKind, size_t);
     58    BEXPORT static void* allocateSlowCaseNullCache(HeapKind, size_t alignment, size_t);
     59    BEXPORT static void deallocateSlowCaseNullCache(HeapKind, void*);
     60    BEXPORT static void* reallocateSlowCaseNullCache(HeapKind, void*, size_t);
    6261
    6362    Deallocator m_deallocator;
     
    6564};
    6665
    67 inline void* Cache::tryAllocate(size_t size)
     66inline void* Cache::tryAllocate(HeapKind heapKind, size_t size)
    6867{
    69     Cache* cache = PerThread<Cache>::getFastCase();
    70     if (!cache)
    71         return tryAllocateSlowCaseNullCache(size);
    72     return cache->allocator().tryAllocate(size);
     68    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     69    if (!caches)
     70        return tryAllocateSlowCaseNullCache(heapKind, size);
     71    return caches->at(heapKind).allocator().tryAllocate(size);
    7372}
    7473
    75 inline void* Cache::allocate(size_t size)
     74inline void* Cache::allocate(HeapKind heapKind, size_t size)
    7675{
    77     Cache* cache = PerThread<Cache>::getFastCase();
    78     if (!cache)
    79         return allocateSlowCaseNullCache(size);
    80     return cache->allocator().allocate(size);
     76    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     77    if (!caches)
     78        return allocateSlowCaseNullCache(heapKind, size);
     79    return caches->at(heapKind).allocator().allocate(size);
    8180}
    8281
    83 inline void* Cache::tryAllocate(size_t alignment, size_t size)
     82inline void* Cache::tryAllocate(HeapKind heapKind, size_t alignment, size_t size)
    8483{
    85     Cache* cache = PerThread<Cache>::getFastCase();
    86     if (!cache)
    87         return allocateSlowCaseNullCache(alignment, size);
    88     return cache->allocator().tryAllocate(alignment, size);
     84    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     85    if (!caches)
     86        return allocateSlowCaseNullCache(heapKind, alignment, size);
     87    return caches->at(heapKind).allocator().tryAllocate(alignment, size);
    8988}
    9089
    91 inline void* Cache::allocate(size_t alignment, size_t size)
     90inline void* Cache::allocate(HeapKind heapKind, size_t alignment, size_t size)
    9291{
    93     Cache* cache = PerThread<Cache>::getFastCase();
    94     if (!cache)
    95         return allocateSlowCaseNullCache(alignment, size);
    96     return cache->allocator().allocate(alignment, size);
     92    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     93    if (!caches)
     94        return allocateSlowCaseNullCache(heapKind, alignment, size);
     95    return caches->at(heapKind).allocator().allocate(alignment, size);
    9796}
    9897
    99 inline void Cache::deallocate(void* object)
     98inline void Cache::deallocate(HeapKind heapKind, void* object)
    10099{
    101     Cache* cache = PerThread<Cache>::getFastCase();
    102     if (!cache)
    103         return deallocateSlowCaseNullCache(object);
    104     return cache->deallocator().deallocate(object);
     100    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     101    if (!caches)
     102        return deallocateSlowCaseNullCache(heapKind, object);
     103    return caches->at(heapKind).deallocator().deallocate(object);
    105104}
    106105
    107 inline void* Cache::reallocate(void* object, size_t newSize)
     106inline void* Cache::reallocate(HeapKind heapKind, void* object, size_t newSize)
    108107{
    109     Cache* cache = PerThread<Cache>::getFastCase();
    110     if (!cache)
    111         return reallocateSlowCaseNullCache(object, newSize);
    112     return cache->allocator().reallocate(object, newSize);
     108    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
     109    if (!caches)
     110        return reallocateSlowCaseNullCache(heapKind, object, newSize);
     111    return caches->at(heapKind).allocator().reallocate(object, newSize);
    113112}
    114113
  • trunk/Source/bmalloc/bmalloc/Deallocator.cpp

    r218788 r220118  
    4040namespace bmalloc {
    4141
    42 Deallocator::Deallocator(Heap* heap)
    43     : m_debugHeap(heap->debugHeap())
     42Deallocator::Deallocator(Heap& heap)
     43    : m_heap(heap)
     44    , m_debugHeap(heap.debugHeap())
    4445{
    4546    if (m_debugHeap) {
     
    6061        return;
    6162
    62     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
     63    std::lock_guard<StaticMutex> lock(Heap::mutex());
    6364
    6465    processObjectLog(lock);
    65     PerProcess<Heap>::getFastCase()->deallocateLineCache(lock, lineCache(lock));
     66    m_heap.deallocateLineCache(lock, lineCache(lock));
    6667}
    6768
    6869void Deallocator::processObjectLog(std::lock_guard<StaticMutex>& lock)
    6970{
    70     Heap* heap = PerProcess<Heap>::getFastCase();
    71    
    7271    for (Object object : m_objectLog)
    73         heap->derefSmallLine(lock, object, lineCache(lock));
     72        m_heap.derefSmallLine(lock, object, lineCache(lock));
    7473    m_objectLog.clear();
    7574}
     
    8382        return;
    8483
    85     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    86     if (PerProcess<Heap>::getFastCase()->isLarge(lock, object)) {
    87         PerProcess<Heap>::getFastCase()->deallocateLarge(lock, object);
     84    std::lock_guard<StaticMutex> lock(Heap::mutex());
     85    if (m_heap.isLarge(lock, object)) {
     86        m_heap.deallocateLarge(lock, object);
    8887        return;
    8988    }
  • trunk/Source/bmalloc/bmalloc/Deallocator.h

    r218788 r220118  
    2727#define Deallocator_h
    2828
     29#include "BExport.h"
    2930#include "FixedVector.h"
    3031#include "SmallPage.h"
     
    4142class Deallocator {
    4243public:
    43     Deallocator(Heap*);
     44    Deallocator(Heap&);
    4445    ~Deallocator();
    4546
     
    5354private:
    5455    bool deallocateFastCase(void*);
    55     void deallocateSlowCase(void*);
     56    BEXPORT void deallocateSlowCase(void*);
    5657
     58    Heap& m_heap;
    5759    FixedVector<void*, deallocatorLogCapacity> m_objectLog;
    5860    LineCache m_lineCache; // The Heap removes items from this cache.
  • trunk/Source/bmalloc/bmalloc/Heap.cpp

    r218788 r220118  
    11/*
    2  * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2929#include "BumpAllocator.h"
    3030#include "Chunk.h"
     31#include "Gigacage.h"
    3132#include "DebugHeap.h"
    3233#include "PerProcess.h"
     34#include "Scavenger.h"
    3335#include "SmallLine.h"
    3436#include "SmallPage.h"
     37#include "VMHeap.h"
     38#include "bmalloc.h"
    3539#include <thread>
    3640
    3741namespace bmalloc {
    3842
    39 Heap::Heap(std::lock_guard<StaticMutex>&)
    40     : m_vmPageSizePhysical(vmPageSizePhysical())
     43Heap::Heap(HeapKind kind, std::lock_guard<StaticMutex>&)
     44    : m_kind(kind)
     45    , m_vmPageSizePhysical(vmPageSizePhysical())
    4146    , m_scavenger(*this, &Heap::concurrentScavenge)
    4247    , m_debugHeap(nullptr)
     
    5055    if (m_environment.isDebugHeapEnabled())
    5156        m_debugHeap = PerProcess<DebugHeap>::get();
    52 
    53 #if BOS(DARWIN)
    54     auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);
    55     m_pressureHandlerDispatchSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0, DISPATCH_MEMORYPRESSURE_CRITICAL, queue);
    56     dispatch_source_set_event_handler(m_pressureHandlerDispatchSource, ^{
    57         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    58         scavenge(lock);
    59     });
    60     dispatch_resume(m_pressureHandlerDispatchSource);
    61     dispatch_release(queue);
     57    else {
     58        Gigacage::ensureGigacage();
     59#if GIGACAGE_ENABLED
     60        if (usingGigacage()) {
     61            RELEASE_BASSERT(g_gigacageBasePtr);
     62            m_largeFree.add(LargeRange(g_gigacageBasePtr, GIGACAGE_SIZE, 0));
     63        }
    6264#endif
     65    }
     66   
     67    PerProcess<Scavenger>::get();
     68}
     69
     70bool Heap::usingGigacage()
     71{
     72    return m_kind == HeapKind::Gigacage && g_gigacageBasePtr;
    6373}
    6474
     
    121131void Heap::concurrentScavenge()
    122132{
    123     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
     133    std::lock_guard<StaticMutex> lock(mutex());
    124134
    125135#if BOS(DARWIN)
    126     pthread_set_qos_class_self_np(m_requestedScavengerThreadQOSClass, 0);
     136    pthread_set_qos_class_self_np(PerProcess<Scavenger>::getFastCase()->requestedScavengerThreadQOSClass(), 0);
    127137#endif
    128138
     
    439449}
    440450
    441 LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size)
     451LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size, AllocationKind allocationKind)
    442452{
    443453    LargeRange prev;
     
    458468    }
    459469   
    460     if (range.physicalSize() < range.size()) {
    461         scheduleScavengerIfUnderMemoryPressure(range.size());
     470    switch (allocationKind) {
     471    case AllocationKind::Virtual:
     472        if (range.physicalSize())
     473            vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
     474        break;
    462475       
    463         vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
    464         range.setPhysicalSize(range.size());
     476    case AllocationKind::Physical:
     477        if (range.physicalSize() < range.size()) {
     478            scheduleScavengerIfUnderMemoryPressure(range.size());
     479           
     480            vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
     481            range.setPhysicalSize(range.size());
     482        }
     483        break;
    465484    }
    466485   
     
    477496}
    478497
    479 void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
     498void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size, AllocationKind allocationKind)
    480499{
    481500    BASSERT(isPowerOfTwo(alignment));
     
    495514    LargeRange range = m_largeFree.remove(alignment, size);
    496515    if (!range) {
    497         range = m_vmHeap.tryAllocateLargeChunk(alignment, size);
     516        if (usingGigacage())
     517            return nullptr;
     518
     519        range = PerProcess<VMHeap>::get()->tryAllocateLargeChunk(alignment, size, allocationKind);
    498520        if (!range)
    499521            return nullptr;
    500 
     522       
    501523        m_largeFree.add(range);
    502524
     
    504526    }
    505527
    506     return splitAndAllocate(range, alignment, size).begin();
    507 }
    508 
    509 void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
    510 {
    511     void* result = tryAllocateLarge(lock, alignment, size);
     528    return splitAndAllocate(range, alignment, size, allocationKind).begin();
     529}
     530
     531void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, AllocationKind allocationKind)
     532{
     533    void* result = tryAllocateLarge(lock, alignment, size, allocationKind);
    512534    RELEASE_BASSERT(result);
    513535    return result;
     
    530552    size_t size = m_largeAllocated.remove(object.begin());
    531553    LargeRange range = LargeRange(object, size);
    532     splitAndAllocate(range, alignment, newSize);
     554    splitAndAllocate(range, alignment, newSize, AllocationKind::Physical);
    533555
    534556    scheduleScavenger(size);
    535557}
    536558
    537 void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
     559void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object, AllocationKind allocationKind)
    538560{
    539561    size_t size = m_largeAllocated.remove(object);
    540     m_largeFree.add(LargeRange(object, size, size));
    541    
     562    m_largeFree.add(LargeRange(object, size, allocationKind == AllocationKind::Physical ? size : 0));
    542563    scheduleScavenger(size);
    543564}
  • trunk/Source/bmalloc/bmalloc/Heap.h

    r218788 r220118  
    11/*
    2  * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2727#define Heap_h
    2828
     29#include "AllocationKind.h"
    2930#include "AsyncTask.h"
    3031#include "BumpRange.h"
     32#include "Chunk.h"
    3133#include "Environment.h"
     34#include "HeapKind.h"
    3235#include "LargeMap.h"
    3336#include "LineMetadata.h"
     
    3639#include "Mutex.h"
    3740#include "Object.h"
     41#include "PerHeapKind.h"
     42#include "PerProcess.h"
    3843#include "SmallLine.h"
    3944#include "SmallPage.h"
    40 #include "VMHeap.h"
    4145#include "Vector.h"
    4246#include <array>
    4347#include <mutex>
    44 
    45 #if BOS(DARWIN)
    46 #include <dispatch/dispatch.h>
    47 #endif
    4848
    4949namespace bmalloc {
     
    5656class Heap {
    5757public:
    58     Heap(std::lock_guard<StaticMutex>&);
     58    Heap(HeapKind, std::lock_guard<StaticMutex>&);
     59   
     60    static StaticMutex& mutex() { return PerProcess<PerHeapKind<Heap>>::mutex(); }
     61   
     62    HeapKind kind() const { return m_kind; }
    5963   
    6064    DebugHeap* debugHeap() { return m_debugHeap; }
     
    6569    void deallocateLineCache(std::lock_guard<StaticMutex>&, LineCache&);
    6670
    67     void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
    68     void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
    69     void deallocateLarge(std::lock_guard<StaticMutex>&, void*);
     71    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
     72    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
     73    void deallocateLarge(std::lock_guard<StaticMutex>&, void*, AllocationKind = AllocationKind::Physical);
    7074
    7175    bool isLarge(std::lock_guard<StaticMutex>&, void*);
     
    7478
    7579    void scavenge(std::lock_guard<StaticMutex>&);
    76 
    77 #if BOS(DARWIN)
    78     void setScavengerThreadQOSClass(qos_class_t overrideClass) { m_requestedScavengerThreadQOSClass = overrideClass; }
    79 #endif
    8080
    8181private:
     
    8989
    9090    ~Heap() = delete;
     91   
     92    bool usingGigacage();
    9193   
    9294    void initializeLineMetadata();
     
    108110    void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
    109111
    110     LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t);
     112    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t, AllocationKind);
    111113
    112114    void scheduleScavenger(size_t);
     
    114116   
    115117    void concurrentScavenge();
     118   
     119    HeapKind m_kind;
    116120   
    117121    size_t m_vmPageSizePhysical;
     
    135139    Environment m_environment;
    136140    DebugHeap* m_debugHeap;
    137 
    138     VMHeap m_vmHeap;
    139 
    140 #if BOS(DARWIN)
    141     dispatch_source_t m_pressureHandlerDispatchSource;
    142     qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
    143 #endif
    144141};
    145142
  • trunk/Source/bmalloc/bmalloc/ObjectType.cpp

    r199746 r220118  
    11/*
    2  * Copyright (C) 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3333namespace bmalloc {
    3434
    35 ObjectType objectType(void* object)
     35ObjectType objectType(HeapKind kind, void* object)
    3636{
    3737    if (mightBeLarge(object)) {
     
    3939            return ObjectType::Small;
    4040
    41         std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    42         if (PerProcess<Heap>::getFastCase()->isLarge(lock, object))
     41        std::lock_guard<StaticMutex> lock(Heap::mutex());
     42        if (PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).isLarge(lock, object))
    4343            return ObjectType::Large;
    4444    }
  • trunk/Source/bmalloc/bmalloc/ObjectType.h

    r199746 r220118  
    11/*
    2  * Copyright (C) 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2828
    2929#include "BAssert.h"
     30#include "HeapKind.h"
    3031#include "Sizes.h"
    3132
     
    3435enum class ObjectType : unsigned char { Small, Large };
    3536
    36 ObjectType objectType(void*);
     37ObjectType objectType(HeapKind, void*);
    3738
    3839inline bool mightBeLarge(void* object)
  • trunk/Source/bmalloc/bmalloc/PerThread.h

    r209590 r220118  
    11/*
    2  * Copyright (C) 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2929#include "BPlatform.h"
    3030#include "Inline.h"
     31#include "PerHeapKind.h"
     32#include "VMAllocate.h"
    3133#include <mutex>
    3234#include <pthread.h>
     
    6466template<typename T> struct PerThreadStorage;
    6567
    66 // For now, we only support PerThread<Cache>. We can expand to other types by
     68// For now, we only support PerThread<PerHeapKind<Cache>>. We can expand to other types by
    6769// using more keys.
    68 template<> struct PerThreadStorage<Cache> {
     70template<> struct PerThreadStorage<PerHeapKind<Cache>> {
    6971    static const pthread_key_t key = __PTK_FRAMEWORK_JAVASCRIPTCORE_KEY0;
    7072
     
    132134{
    133135    T* t = static_cast<T*>(p);
    134     delete t;
     136    t->~T();
     137    vmDeallocate(t, vmSize(sizeof(T)));
    135138}
    136139
     
    139142{
    140143    BASSERT(!getFastCase());
    141     T* t = new T;
     144    T* t = static_cast<T*>(vmAllocate(vmSize(sizeof(T))));
     145    new (t) T();
    142146    PerThreadStorage<T>::init(t, destructor);
    143147    return t;
  • trunk/Source/bmalloc/bmalloc/VMHeap.cpp

    r217811 r220118  
    11/*
    2  * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3030namespace bmalloc {
    3131
    32 LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
     32VMHeap::VMHeap(std::lock_guard<StaticMutex>&)
     33{
     34}
     35
     36LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size, AllocationKind allocationKind)
    3337{
    3438    // We allocate VM in aligned multiples to increase the chances that
     
    4751    if (!memory)
    4852        return LargeRange();
     53   
     54    if (allocationKind == AllocationKind::Virtual)
     55        vmDeallocatePhysicalPagesSloppy(memory, size);
    4956
    5057    Chunk* chunk = static_cast<Chunk*>(memory);
    5158   
    5259#if BOS(DARWIN)
    53     m_zone.addRange(Range(chunk->bytes(), size));
     60    PerProcess<Zone>::get()->addRange(Range(chunk->bytes(), size));
    5461#endif
    5562
  • trunk/Source/bmalloc/bmalloc/VMHeap.h

    r217811 r220118  
    11/*
    2  * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2727#define VMHeap_h
    2828
     29#include "AllocationKind.h"
    2930#include "Chunk.h"
    3031#include "FixedVector.h"
     32#include "HeapKind.h"
    3133#include "LargeRange.h"
    3234#include "Map.h"
     
    4648class VMHeap {
    4749public:
    48     LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
     50    VMHeap(std::lock_guard<StaticMutex>&);
    4951   
    50 private:
    51 #if BOS(DARWIN)
    52     Zone m_zone;
    53 #endif
     52    LargeRange tryAllocateLargeChunk(size_t alignment, size_t, AllocationKind);
    5453};
    5554
  • trunk/Source/bmalloc/bmalloc/Zone.cpp

    r200983 r220118  
    116116};
    117117
    118 Zone::Zone()
     118Zone::Zone(std::lock_guard<StaticMutex>&)
    119119{
    120120    malloc_zone_t::size = &bmalloc::zoneSize;
  • trunk/Source/bmalloc/bmalloc/Zone.h

    r200983 r220118  
    2929#include "FixedVector.h"
    3030#include "Range.h"
     31#include "StaticMutex.h"
    3132#include <malloc/malloc.h>
     33#include <mutex>
    3234
    3335namespace bmalloc {
     
    4042    static const size_t capacity = 2048;
    4143
    42     Zone();
     44    Zone(std::lock_guard<StaticMutex>&);
    4345    Zone(task_t, memory_reader_t, vm_address_t);
    4446
  • trunk/Source/bmalloc/bmalloc/bmalloc.h

    r217918 r220118  
    11/*
    2  * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2626#include "AvailableMemory.h"
    2727#include "Cache.h"
     28#include "Gigacage.h"
    2829#include "Heap.h"
     30#include "PerHeapKind.h"
    2931#include "PerProcess.h"
     32#include "Scavenger.h"
    3033#include "StaticMutex.h"
    3134
     
    3437
    3538// Returns null on failure.
    36 inline void* tryMalloc(size_t size)
     39inline void* tryMalloc(size_t size, HeapKind kind = HeapKind::Primary)
    3740{
    38     return Cache::tryAllocate(size);
     41    return Cache::tryAllocate(kind, size);
    3942}
    4043
    4144// Crashes on failure.
    42 inline void* malloc(size_t size)
     45inline void* malloc(size_t size, HeapKind kind = HeapKind::Primary)
    4346{
    44     return Cache::allocate(size);
     47    return Cache::allocate(kind, size);
    4548}
    4649
    4750// Returns null on failure.
    48 inline void* tryMemalign(size_t alignment, size_t size)
     51inline void* tryMemalign(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
    4952{
    50     return Cache::tryAllocate(alignment, size);
     53    return Cache::tryAllocate(kind, alignment, size);
    5154}
    5255
    5356// Crashes on failure.
    54 inline void* memalign(size_t alignment, size_t size)
     57inline void* memalign(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
    5558{
    56     return Cache::allocate(alignment, size);
     59    return Cache::allocate(kind, alignment, size);
    5760}
    5861
    5962// Crashes on failure.
    60 inline void* realloc(void* object, size_t newSize)
     63inline void* realloc(void* object, size_t newSize, HeapKind kind = HeapKind::Primary)
    6164{
    62     return Cache::reallocate(object, newSize);
     65    return Cache::reallocate(kind, object, newSize);
    6366}
    6467
    65 inline void free(void* object)
     68// Returns null for failure
     69inline void* tryLargeMemalignVirtual(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
    6670{
    67     Cache::deallocate(object);
     71    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
     72    std::lock_guard<StaticMutex> lock(Heap::mutex());
     73    return heap.allocateLarge(lock, alignment, size, AllocationKind::Virtual);
     74}
     75
     76inline void free(void* object, HeapKind kind = HeapKind::Primary)
     77{
     78    Cache::deallocate(kind, object);
     79}
     80
     81inline void freeLargeVirtual(void* object, HeapKind kind = HeapKind::Primary)
     82{
     83    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
     84    std::lock_guard<StaticMutex> lock(Heap::mutex());
     85    heap.deallocateLarge(lock, object, AllocationKind::Virtual);
    6886}
    6987
    7088inline void scavengeThisThread()
    7189{
    72     Cache::scavenge();
     90    for (unsigned i = numHeaps; i--;)
     91        Cache::scavenge(static_cast<HeapKind>(i));
    7392}
    7493
     
    7796    scavengeThisThread();
    7897
    79     std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
    80     PerProcess<Heap>::get()->scavenge(lock);
     98    PerProcess<Scavenger>::get()->scavenge();
    8199}
    82100
    83 inline bool isEnabled()
     101inline bool isEnabled(HeapKind kind = HeapKind::Primary)
    84102{
    85     std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
    86     return !PerProcess<Heap>::getFastCase()->debugHeap();
     103    std::unique_lock<StaticMutex> lock(Heap::mutex());
     104    return !PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).debugHeap();
    87105}
    88106   
     
    107125inline void setScavengerThreadQOSClass(qos_class_t overrideClass)
    108126{
    109     std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
    110     PerProcess<Heap>::getFastCase()->setScavengerThreadQOSClass(overrideClass);
     127    std::unique_lock<StaticMutex> lock(Heap::mutex());
     128    PerProcess<Scavenger>::get()->setScavengerThreadQOSClass(overrideClass);
    111129}
    112130#endif
  • trunk/Source/bmalloc/bmalloc/mbmalloc.cpp

    r178609 r220118  
    2626#include "bmalloc.h"
    2727
    28 #define EXPORT __attribute__((visibility("default")))
     28#include "BExport.h"
    2929
    3030extern "C" {
    3131
    32 EXPORT void* mbmalloc(size_t);
    33 EXPORT void* mbmemalign(size_t, size_t);
    34 EXPORT void mbfree(void*, size_t);
    35 EXPORT void* mbrealloc(void*, size_t, size_t);
    36 EXPORT void mbscavenge();
     32BEXPORT void* mbmalloc(size_t);
     33BEXPORT void* mbmemalign(size_t, size_t);
     34BEXPORT void mbfree(void*, size_t);
     35BEXPORT void* mbrealloc(void*, size_t, size_t);
     36BEXPORT void mbscavenge();
    3737   
    3838void* mbmalloc(size_t size)
  • trunk/Tools/Scripts/run-jsc-stress-tests

    r219187 r220118  
    12141214        run("wasm-no-call-ic", "-m", "--useCallICsForWebAssemblyToJSCalls=false", *FTL_OPTIONS)
    12151215        run("wasm-no-tls-context", "-m", "--useFastTLSForWasmContext=false", *FTL_OPTIONS)
     1216        run("wasm-slow-memory", "-m", "--useWebAssemblyFastMemory=false", *FTL_OPTIONS)
    12161217    end
    12171218end