Changeset 186795 in webkit


Timestamp:
Jul 13, 2015 4:27:30 PM
Author:
basile_clement@apple.com
Message:

Object cycles should not prevent allocation elimination/sinking
https://bugs.webkit.org/show_bug.cgi?id=143073

Reviewed by Filip Pizlo.

Source/JavaScriptCore:

This patch introduces a new allocation sinking phase that is able to
sink cycles, in DFGAllocationCycleSinkingPhase.cpp. This phase
supersedes the old allocation sinking phase in
DFGObjectAllocationSinkingPhase.cpp, as that previous phase was never
able to sink allocation cycles while the new phase sometimes can; see
DFGAllocationCycleSinkingPhase.cpp for details.
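
As a concrete illustration (a hypothetical sketch, not code from this patch or its test suite), the kind of allocation the new phase can handle is one that participates in a pointer cycle, such as a self-referential temporary that dies on every iteration of a hot loop:

```javascript
// Hypothetical sketch of a sinkable allocation cycle (illustrative only).
// Each iteration allocates an object that points to itself and dies at the
// end of the iteration; a cycle-aware sinking phase can elide the allocation,
// whereas the old ObjectAllocationSinkingPhase had to give up on the cycle.
function foo(n) {
    var sum = 0;
    for (var i = 0; i < n; ++i) {
        var o = { self: null, value: i };
        o.self = o;              // creates the object cycle
        sum += o.self.value;     // read through the cycle
    }
    return sum;
}
```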

For now, the new sinking phase is kept behind a
JSC_enableAllocationCycleSinking flag that reverts to the old sinking
phase when false (i.e., by default). This also removes the old
JSC_enableObjectAllocationSinking flag. run-javascriptcore-tests
defaults to using the new sinking phase.

  • dfg/DFGGraph.h:

(JSC::DFG::Graph::addStructureSet): Allow empty structure sets

  • dfg/DFGLazyNode.cpp:

(JSC::DFG::LazyNode::dump): Prettier dump

  • dfg/DFGNode.h:

(JSC::DFG::Node::cellOperand): Move to opInfo for MaterializeCreateActivation
(JSC::DFG::Node::hasStructureSet): Add MaterializeNewObject
(JSC::DFG::Node::objectMaterializationData): Move to opInfo2

  • dfg/DFGOSRAvailabilityAnalysisPhase.cpp: Remove unused header
  • dfg/DFGObjectAllocationSinkingPhase.cpp:

(JSC::DFG::ObjectAllocationSinkingPhase::ObjectAllocationSinkingPhase): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::run): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::performSinking): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::determineMaterializationPoints): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::placeMaterializationPoints): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::lowerNonReadingOperationsOnPhantomAllocations): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::promoteSunkenFields): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::resolve): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::handleNode): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::createMaterialize): Deleted.
(JSC::DFG::ObjectAllocationSinkingPhase::populateMaterialize): Deleted.

  • dfg/DFGObjectAllocationSinkingPhase.h:
  • dfg/DFGPromotedHeapLocation.h: Add a hash and a helper function to PromotedLocationDescriptor

(JSC::DFG::PromotedLocationDescriptor::PromotedLocationDescriptor):
(JSC::DFG::PromotedLocationDescriptor::operator bool):
(JSC::DFG::PromotedLocationDescriptor::neededForMaterialization):
(JSC::DFG::PromotedLocationDescriptorHash::hash):
(JSC::DFG::PromotedLocationDescriptorHash::equal):

  • dfg/DFGValidate.cpp:

(JSC::DFG::Validate::validateSSA): Assert that most nodes never see a phantom allocation

  • ftl/FTLLowerDFGToLLVM.cpp:

(JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeNewObject): Use the new structureSet() operand
(JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeCreateActivation): Node has a new child

  • ftl/FTLOSRExitCompiler.cpp: Handle materialization cycles

(JSC::FTL::compileStub):

  • ftl/FTLOperations.cpp: Handle materialization cycles

(JSC::FTL::operationPopulateObjectInOSR):
(JSC::FTL::operationMaterializeObjectInOSR):

  • ftl/FTLOperations.h: Handle materialization cycles
  • tests/stress/correctly-sink-object-even-though-it-dies.js: Added.

(clobber):
(foo):

  • tests/stress/eliminate-object-read-over-call.js: Added.

(clobber):
(foo):

  • tests/stress/materialize-object-on-edge.js: Added.

(call):
(foo):

  • tests/stress/object-sinking-stress.js: Added.

(foo):

  • tests/stress/sink-object-cycle.js: Added.

(clobber):
(foo):

  • tests/stress/sink-object-past-put.js: Added.

(clobber):
(foo):

  • tests/stress/sinkable-new-object-in-loop.js: Added.

(foo):

LayoutTests:

Add a few microbenchmarks that show performance improvement when
sinking or eliminating object cycles.
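
The shape these microbenchmarks exercise is roughly the following (an illustrative sketch, not the actual benchmark source; the helper names mirror the script-tests listed here, but the bodies are hypothetical):

```javascript
// Illustrative sketch: two objects that point at each other are allocated
// and die on every loop iteration, so a cycle-aware sinking phase can
// eliminate both allocations.
function sumOfArithSeries(limit) {
    return limit * (limit + 1) / 2;
}
function foo(n) {
    var sum = 0;
    for (var i = 0; i < n; ++i) {
        var a = { partner: null, value: i };
        var b = { partner: null, value: i + 1 };
        a.partner = b;             // a -> b
        b.partner = a;             // b -> a completes the cycle
        sum += a.partner.value;    // reads i + 1 through the cycle
    }
    return sum;
}
// foo(n) sums (i + 1) for i in [0, n), i.e. sumOfArithSeries(n).
```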

  • js/regress/elidable-new-object-cycle-expected.txt: Added.
  • js/regress/elidable-new-object-cycle.html: Added.
  • js/regress/script-tests/elidable-new-object-cycle.js: Added.

(sumOfArithSeries):
(foo):

  • js/regress/script-tests/sinkable-closure-cycle.js: Added.

(factorial.f):
(factorial):

  • js/regress/script-tests/sinkable-new-object-cycle.js: Added.

(sumOfArithSeries):
(verify):
(foo):

  • js/regress/sinkable-closure-cycle-expected.txt: Added.
  • js/regress/sinkable-closure-cycle.html: Added.
  • js/regress/sinkable-new-object-cycle-expected.txt: Added.
  • js/regress/sinkable-new-object-cycle.html: Added.
Location:
trunk
Files:
14 edited

  • trunk/LayoutTests/ChangeLog

    r186765 r186795  
  • trunk/Source/JavaScriptCore/ChangeLog

    r186794 r186795  
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.h

r186691 r186795

     StructureSet* addStructureSet(const StructureSet& structureSet)
     {
-        ASSERT(structureSet.size());
         m_structureSet.append(structureSet);
         return &m_structureSet.last();
  • trunk/Source/JavaScriptCore/dfg/DFGLazyNode.cpp

r184776 r186795

             out.print("LazyNode:@", asNode()->index());
         else
-            out.print("LazyNode:FrozenValue:", Graph::opName(op()), ", ", pointerDump(asValue()));
-        out.print(")");
+            out.print("LazyNode:FrozenValue(", Graph::opName(op()), ", ", pointerDump(asValue()), ")");
     }
 }
  • trunk/Source/JavaScriptCore/dfg/DFGNode.h

r186136 r186795

     {
         ASSERT(hasCellOperand());
-        switch (op()) {
-        case MaterializeCreateActivation:
-            return reinterpret_cast<FrozenValue*>(m_opInfo2);
-        default:
-            return reinterpret_cast<FrozenValue*>(m_opInfo);
-        }
-        RELEASE_ASSERT_NOT_REACHED();
+        return reinterpret_cast<FrozenValue*>(m_opInfo);
     }

         case CheckStructure:
         case CheckStructureImmediate:
+        case MaterializeNewObject:
             return true;
         default:

     {
         ASSERT(hasObjectMaterializationData());
-        return *reinterpret_cast<ObjectMaterializationData*>(m_opInfo);
+        return *reinterpret_cast<ObjectMaterializationData*>(m_opInfo2);
     }
  • trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp

r184206 r186795

 #include "DFGInsertionSet.h"
 #include "DFGPhase.h"
-#include "DFGPromoteHeapAccess.h"
 #include "JSCInlines.h"
  • trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp

    r186136 r186795  
    11/*
    2  * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    2121 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    2222 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    23  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
     23 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    2424 */
    2525
     
    2929#if ENABLE(DFG_JIT)
    3030
    31 #include "DFGAbstractHeap.h"
    3231#include "DFGBlockMapInlines.h"
    33 #include "DFGClobberize.h"
    3432#include "DFGCombinedLiveness.h"
    3533#include "DFGGraph.h"
    36 #include "DFGInsertOSRHintsForUpdate.h"
    3734#include "DFGInsertionSet.h"
     35#include "DFGLazyNode.h"
    3836#include "DFGLivenessAnalysisPhase.h"
    3937#include "DFGOSRAvailabilityAnalysisPhase.h"
    4038#include "DFGPhase.h"
    41 #include "DFGPromoteHeapAccess.h"
     39#include "DFGPromotedHeapLocation.h"
    4240#include "DFGSSACalculator.h"
    4341#include "DFGValidate.h"
    4442#include "JSCInlines.h"
    4543
     44#include <list>
     45
    4646namespace JSC { namespace DFG {
    4747
    48 static bool verbose = false;
     48namespace {
     49
     50bool verbose = false;
     51
     52// In order to sink object cycles, we use a points-to analysis coupled
     53// with an escape analysis. This analysis is actually similar to an
     54// abstract interpreter focused on local allocations and ignoring
     55// everything else.
     56//
     57// We represent the local heap using two mappings:
     58//
     59// - A set of the local allocations present in the function, where
     60//   each of those have a further mapping from
     61//   PromotedLocationDescriptor to local allocations they must point
     62//   to.
     63//
     64// - A "pointer" mapping from nodes to local allocations, if they must
     65//   be equal to said local allocation and are currently live. This
     66//   can be because the node is the actual node that created the
     67//   allocation, or any other node that must currently point to it -
     68//   we don't make a difference.
     69//
     70// The following graph is a motivation for why we separate allocations
     71// from pointers:
     72//
     73// Block #0
     74//  0: NewObject({})
     75//  1: NewObject({})
     76//  -: PutByOffset(@0, @1, x)
     77//  -: PutStructure(@0, {x:0})
     78//  2: GetByOffset(@0, x)
     79//  -: Jump(#1)
     80//
     81// Block #1
     82//  -: Return(@2)
     83//
     84// Here, we need to remember in block #1 that @2 points to a local
     85// allocation with appropriate fields and structures information
     86// (because we should be able to place a materialization on top of
     87// block #1 here), even though @1 is dead. We *could* just keep @1
     88// artificially alive here, but there is no real reason to do it:
     89// after all, by the end of block #0, @1 and @2 should be completely
     90// interchangeable, and there is no reason for us to artificially make
     91// @1 more important.
     92//
     93// An important point to consider to understand this separation is
     94// that we should think of the local heap as follow: we have a
     95// bunch of nodes that are pointers to "allocations" that live
     96// someplace on the heap, and those allocations can have pointers in
     97// between themselves as well. We shouldn't care about whatever
     98// names we give to the allocations ; what matters when
     99// comparing/merging two heaps is the isomorphism/comparison between
     100// the allocation graphs as seen by the nodes.
     101//
     102// For instance, in the following graph:
     103//
     104// Block #0
     105//  0: NewObject({})
     106//  -: Branch(#1, #2)
     107//
     108// Block #1
     109//  1: NewObject({})
     110//  -: PutByOffset(@0, @1, x)
     111//  -: PutStructure(@0, {x:0})
     112//  -: Jump(#3)
     113//
     114// Block #2
     115//  2: NewObject({})
     116//  -: PutByOffset(@2, undefined, x)
     117//  -: PutStructure(@2, {x:0})
     118//  -: PutByOffset(@0, @2, x)
     119//  -: PutStructure(@0, {x:0})
     120//  -: Jump(#3)
     121//
     122// Block #3
     123//  -: Return(@0)
     124//
     125// we should think of the heaps at tail of blocks #1 and #2 as being
     126// exactly the same, even though one has @0.x pointing to @1 and the
     127// other has @0.x pointing to @2, because in essence this should not
     128// be different from the graph where we hoisted @1 and @2 into a
     129// single allocation in block #0. We currently will not handle this
     130// case, because we merge allocations based on the node they are
     131// coming from, but this is only a technicality for the sake of
     132// simplicity that shouldn't hide the deeper idea outlined here.
     133
     134class Allocation {
     135public:
     136    // We use Escaped as a special allocation kind because when we
     137    // decide to sink an allocation, we still need to keep track of it
     138    // once it is escaped if it still has pointers to it in order to
     139    // replace any use of those pointers by the corresponding
     140    // materialization
     141    enum class Kind { Escaped, Object, Activation, Function };
     142
     143    explicit Allocation(Node* identifier = nullptr, Kind kind = Kind::Escaped)
     144        : m_identifier(identifier)
     145        , m_kind(kind)
     146    {
     147    }
     148
     149
     150    const HashMap<PromotedLocationDescriptor, Node*>& fields() const
     151    {
     152        return m_fields;
     153    }
     154
     155    Node* get(PromotedLocationDescriptor descriptor)
     156    {
     157        return m_fields.get(descriptor);
     158    }
     159
     160    Allocation& set(PromotedLocationDescriptor descriptor, Node* value)
     161    {
     162        // Pointing to anything else than an unescaped local
     163        // allocation is represented by simply not having the
     164        // field
     165        if (value)
     166            m_fields.set(descriptor, value);
     167        else
     168            m_fields.remove(descriptor);
     169        return *this;
     170    }
     171
     172    void remove(PromotedLocationDescriptor descriptor)
     173    {
     174        set(descriptor, nullptr);
     175    }
     176
     177    bool hasStructures() const
     178    {
     179        switch (kind()) {
     180        case Kind::Object:
     181            return true;
     182
     183        default:
     184            return false;
     185        }
     186    }
     187
     188    Allocation& setStructures(const StructureSet& structures)
     189    {
     190        ASSERT(hasStructures() && !structures.isEmpty());
     191        m_structures = structures;
     192        return *this;
     193    }
     194
     195    Allocation& mergeStructures(const StructureSet& structures)
     196    {
     197        ASSERT(hasStructures() || structures.isEmpty());
     198        m_structures.merge(structures);
     199        return *this;
     200    }
     201
     202    Allocation& filterStructures(const StructureSet& structures)
     203    {
     204        ASSERT(hasStructures());
     205        m_structures.filter(structures);
     206        return *this;
     207    }
     208
     209    const StructureSet& structures() const
     210    {
     211        return m_structures;
     212    }
     213
     214    Node* identifier() const { return m_identifier; }
     215
     216    Kind kind() const { return m_kind; }
     217
     218    bool isEscapedAllocation() const
     219    {
     220        return kind() == Kind::Escaped;
     221    }
     222
     223    bool isObjectAllocation() const
     224    {
     225        return m_kind == Kind::Object;
     226    }
     227
     228    bool isActivationAllocation() const
     229    {
     230        return m_kind == Kind::Activation;
     231    }
     232
     233    bool isFunctionAllocation() const
     234    {
     235        return m_kind == Kind::Function;
     236    }
     237
     238    bool operator==(const Allocation& other) const
     239    {
     240        return m_identifier == other.m_identifier
     241            && m_kind == other.m_kind
     242            && m_fields == other.m_fields
     243            && m_structures == other.m_structures;
     244    }
     245
     246    bool operator!=(const Allocation& other) const
     247    {
     248        return !(*this == other);
     249    }
     250
     251    void dump(PrintStream& out) const
     252    {
     253        dumpInContext(out, nullptr);
     254    }
     255
     256    void dumpInContext(PrintStream& out, DumpContext* context) const
     257    {
     258        switch (m_kind) {
     259        case Kind::Escaped:
     260            out.print("Escaped");
     261            break;
     262
     263        case Kind::Object:
     264            out.print("Object");
     265            break;
     266
     267        case Kind::Function:
     268            out.print("Function");
     269            break;
     270
     271        case Kind::Activation:
     272            out.print("Activation");
     273            break;
     274        }
     275        out.print("Allocation(");
     276        if (!m_structures.isEmpty())
     277            out.print(inContext(m_structures, context));
     278        if (!m_fields.isEmpty()) {
     279            if (!m_structures.isEmpty())
     280                out.print(", ");
     281            out.print(mapDump(m_fields, " => #", ", "));
     282        }
     283        out.print(")");
     284    }
     285
     286private:
     287    Node* m_identifier; // This is the actual node that created the allocation
     288    Kind m_kind;
     289    HashMap<PromotedLocationDescriptor, Node*> m_fields;
     290    StructureSet m_structures;
     291};
     292
     293class LocalHeap {
     294public:
     295    Allocation& newAllocation(Node* node, Allocation::Kind kind)
     296    {
     297        ASSERT(!m_pointers.contains(node) && !isAllocation(node));
     298        m_pointers.add(node, node);
     299        return m_allocations.set(node, Allocation(node, kind)).iterator->value;
     300    }
     301
     302    bool isAllocation(Node* identifier) const
     303    {
     304        return m_allocations.contains(identifier);
     305    }
     306
     307    // Note that this is fundamentally different from
     308    // onlyLocalAllocation() below. getAllocation() takes as argument
     309    // a node-as-identifier, that is, an allocation node. This
     310    // allocation node doesn't have to be alive; it may only be
     311    // pointed to by other nodes or allocation fields.
     312    // For instance, in the following graph:
     313    //
     314    // Block #0
     315    //  0: NewObject({})
     316    //  1: NewObject({})
     317    //  -: PutByOffset(@0, @1, x)
     318    //  -: PutStructure(@0, {x:0})
     319    //  2: GetByOffset(@0, x)
     320    //  -: Jump(#1)
     321    //
     322    // Block #1
     323    //  -: Return(@2)
     324    //
     325    // At head of block #1, the only reachable allocation is #@1,
     326    // which can be reached through node @2. Thus, getAllocation(#@1)
     327    // contains the appropriate metadata for this allocation, but
     328    // onlyLocalAllocation(@1) is null, as @1 is no longer a pointer
     329    // to #@1 (since it is dead). Conversely, onlyLocalAllocation(@2)
     330    // is the same as getAllocation(#@1), while getAllocation(#@2)
     331    // does not make sense since @2 is not an allocation node.
     332    //
     333    // This is meant to be used when the node is already known to be
     334    // an identifier (i.e. an allocation) - probably because it was
     335    // found as value of a field or pointer in the current heap, or
     336    // was the result of a call to follow(). In any other cases (such
     337    // as when doing anything while traversing the graph), the
     338    // appropriate function to call is probably onlyLocalAllocation.
     339    Allocation& getAllocation(Node* identifier)
     340    {
     341        auto iter = m_allocations.find(identifier);
     342        ASSERT(iter != m_allocations.end());
     343        return iter->value;
     344    }
     345
     346    void newPointer(Node* node, Node* identifier)
     347    {
     348        ASSERT(!m_allocations.contains(node) && !m_pointers.contains(node));
     349        ASSERT(isAllocation(identifier));
     350        m_pointers.add(node, identifier);
     351    }
     352
     353    // follow solves the points-to problem. Given a live node, which
     354    // may be either an allocation itself or a heap read (e.g. a
     355    // GetByOffset node), it returns the corresponding allocation
     356    // node, if there is one. If the argument node is neither an
     357    // allocation or a heap read, or may point to different nodes,
     358    // nullptr will be returned. Note that a node that points to
     359    // different nodes can never point to an unescaped local
     360    // allocation.
     361    Node* follow(Node* node) const
     362    {
     363        auto iter = m_pointers.find(node);
     364        ASSERT(iter == m_pointers.end() || m_allocations.contains(iter->value));
     365        return iter == m_pointers.end() ? nullptr : iter->value;
     366    }
     367
     368    Node* follow(PromotedHeapLocation location) const
     369    {
     370        const Allocation& base = m_allocations.find(location.base())->value;
     371        auto iter = base.fields().find(location.descriptor());
     372
     373        if (iter == base.fields().end())
     374            return nullptr;
     375
     376        return iter->value;
     377    }
     378
     379    // onlyLocalAllocation find the corresponding allocation metadata
     380    // for any live node. onlyLocalAllocation(node) is essentially
     381    // getAllocation(follow(node)), with appropriate null handling.
     382    Allocation* onlyLocalAllocation(Node* node)
     383    {
     384        Node* identifier = follow(node);
     385        if (!identifier)
     386            return nullptr;
     387
     388        return &getAllocation(identifier);
     389    }
     390
     391    Allocation* onlyLocalAllocation(PromotedHeapLocation location)
     392    {
     393        Node* identifier = follow(location);
     394        if (!identifier)
     395            return nullptr;
     396
     397        return &getAllocation(identifier);
     398    }
     399
     400    // This allows us to store the escapees only when necessary. If
     401    // set, the current escapees can be retrieved at any time using
     402    // takeEscapees(), which will clear the cached set of escapees;
     403    // otherwise the heap won't remember escaping allocations.
     404    void setWantEscapees()
     405    {
     406        m_wantEscapees = true;
     407    }
     408
     409    HashMap<Node*, Allocation> takeEscapees()
     410    {
     411        return WTF::move(m_escapees);
     412    }
     413
     414    void escape(Node* node)
     415    {
     416        Node* identifier = follow(node);
     417        if (!identifier)
     418            return;
     419
     420        escapeAllocation(identifier);
     421    }
     422
     423    void merge(const LocalHeap& other)
     424    {
     425        assertIsValid();
     426        other.assertIsValid();
     427        ASSERT(!m_wantEscapees);
     428
     429        if (!reached()) {
     430            ASSERT(other.reached());
     431            *this = other;
     432            return;
     433        }
     434
     435        HashSet<Node*> toEscape;
     436
     437        for (auto& allocationEntry : other.m_allocations)
     438            m_allocations.add(allocationEntry.key, allocationEntry.value);
     439        for (auto& allocationEntry : m_allocations) {
     440            auto allocationIter = other.m_allocations.find(allocationEntry.key);
     441
     442            // If we have it and they don't, it died for them but we
     443            // are keeping it alive from another field somewhere.
     444            // There is nothing to do - we will be escaped
     445            // automatically when we handle that other field.
     446            // This will also happen for allocation that we have and
     447            // they don't, and all of those will get pruned.
     448            if (allocationIter == other.m_allocations.end())
     449                continue;
     450
     451            if (allocationEntry.value.kind() != allocationIter->value.kind()) {
     452                toEscape.add(allocationEntry.key);
     453                for (const auto& fieldEntry : allocationIter->value.fields())
     454                    toEscape.add(fieldEntry.value);
     455            } else {
     456                mergePointerSets(
     457                    allocationEntry.value.fields(), allocationIter->value.fields(),
     458                    [&] (Node* identifier) {
     459                        toEscape.add(identifier);
     460                    },
     461                    [&] (PromotedLocationDescriptor field) {
     462                        allocationEntry.value.remove(field);
     463                    });
     464                allocationEntry.value.mergeStructures(allocationIter->value.structures());
     465            }
     466        }
     467
     468        mergePointerSets(m_pointers, other.m_pointers,
     469            [&] (Node* identifier) {
     470                toEscape.add(identifier);
     471            },
     472            [&] (Node* field) {
     473                m_pointers.remove(field);
     474            });
     475
     476        for (Node* identifier : toEscape)
     477            escapeAllocation(identifier);
     478
     479        if (!ASSERT_DISABLED) {
     480            for (const auto& entry : m_allocations)
     481                ASSERT_UNUSED(entry, entry.value.isEscapedAllocation() || other.m_allocations.contains(entry.key));
     482        }
     483
     484        // If there is no remaining pointer to an allocation, we can
     485        // remove it. This should only happen for escaped allocations,
     486        // because we only merge liveness-pruned heaps in the first
     487        // place.
     488        prune();
     489
     490        assertIsValid();
     491    }
     492
     493    void pruneByLiveness(const HashSet<Node*>& live)
     494    {
     495        Vector<Node*> toRemove;
     496        for (const auto& entry : m_pointers) {
     497            if (!live.contains(entry.key))
     498                toRemove.append(entry.key);
     499        }
     500        for (Node* node : toRemove)
     501            m_pointers.remove(node);
     502
     503        prune();
     504    }
     505
     506    void assertIsValid() const
     507    {
     508        if (ASSERT_DISABLED)
     509            return;
     510
     511        // Pointers should point to an actual allocation
     512        for (const auto& entry : m_pointers) {
     513            ASSERT_UNUSED(entry, entry.value);
     514            ASSERT(m_allocations.contains(entry.value));
     515        }
     516
     517        for (const auto& allocationEntry : m_allocations) {
     518            // Fields should point to an actual allocation
     519            for (const auto& fieldEntry : allocationEntry.value.fields()) {
     520                ASSERT_UNUSED(fieldEntry, fieldEntry.value);
     521                ASSERT(m_allocations.contains(fieldEntry.value));
     522            }
     523        }
     524    }
     525
     526    bool operator==(const LocalHeap& other) const
     527    {
     528        assertIsValid();
     529        other.assertIsValid();
     530        return m_allocations == other.m_allocations
     531            && m_pointers == other.m_pointers;
     532    }
     533
     534    bool operator!=(const LocalHeap& other) const
     535    {
     536        return !(*this == other);
     537    }
     538
      539    const HashMap<Node*, Allocation>& allocations() const
      540    {
      541        return m_allocations;
      542    }
      543
      544    const HashMap<Node*, Node*>& pointers() const
      545    {
      546        return m_pointers;
      547    }
     548
     549    void dump(PrintStream& out) const
     550    {
     551        out.print("  Allocations:\n");
     552        for (const auto& entry : m_allocations)
     553            out.print("    #", entry.key, ": ", entry.value, "\n");
     554        out.print("  Pointers:\n");
     555        for (const auto& entry : m_pointers)
     556            out.print("    ", entry.key, " => #", entry.value, "\n");
     557    }
     558
     559    bool reached() const
     560    {
     561        return m_reached;
     562    }
     563
     564    void setReached()
     565    {
     566        m_reached = true;
     567    }
     568
     569private:
     570    // When we merge two heaps, we escape all fields of allocations,
     571    // unless they point to the same thing in both heaps.
      572    // This spares us extra work on diamond graphs, where we
      573    // would otherwise have to check whether a field has a
      574    // single reaching definition, which would be
      575    // cumbersome.
     576    //
     577    // Note that we should try to unify nodes even when they are not
     578    // from the same allocation; for instance we should be able to
     579    // completely eliminate all allocations from the following graph:
     580    //
     581    // Block #0
     582    //  0: NewObject({})
     583    //  -: Branch(#1, #2)
     584    //
     585    // Block #1
     586    //  1: NewObject({})
     587    //  -: PutByOffset(@1, "left", val)
     588    //  -: PutStructure(@1, {val:0})
     589    //  -: PutByOffset(@0, @1, x)
     590    //  -: PutStructure(@0, {x:0})
     591    //  -: Jump(#3)
     592    //
     593    // Block #2
     594    //  2: NewObject({})
     595    //  -: PutByOffset(@2, "right", val)
     596    //  -: PutStructure(@2, {val:0})
     597    //  -: PutByOffset(@0, @2, x)
     598    //  -: PutStructure(@0, {x:0})
     599    //  -: Jump(#3)
     600    //
     601    // Block #3:
     602    //  3: GetByOffset(@0, x)
     603    //  4: GetByOffset(@3, val)
     604    //  -: Return(@4)
     605    template<typename Key, typename EscapeFunctor, typename RemoveFunctor>
     606    void mergePointerSets(
     607        const HashMap<Key, Node*>& my, const HashMap<Key, Node*>& their,
     608        const EscapeFunctor& escape, const RemoveFunctor& remove)
     609    {
     610        Vector<Key> toRemove;
     611        for (const auto& entry : my) {
     612            auto iter = their.find(entry.key);
     613            if (iter == their.end()) {
     614                toRemove.append(entry.key);
     615                escape(entry.value);
     616            } else if (iter->value != entry.value) {
     617                toRemove.append(entry.key);
     618                escape(entry.value);
     619                escape(iter->value);
     620            }
     621        }
     622        for (const auto& entry : their) {
     623            if (my.contains(entry.key))
     624                continue;
     625            escape(entry.value);
     626        }
     627        for (Key key : toRemove)
     628            remove(key);
     629    }
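The merge rule above (escape on one-sided or conflicting bindings) can be sketched with standard containers; the names and types below are illustrative stand-ins for the WTF HashMap-based original, not the committed code:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Illustrative stand-in for LocalHeap::mergePointerSets: keys present in
// only one map, or bound to different allocations, are escaped; one-sided
// or conflicting keys on our side are also removed.
using PointerMap = std::unordered_map<std::string, std::string>;

void mergePointerSets(PointerMap& mine, const PointerMap& theirs,
                      std::unordered_set<std::string>& escaped)
{
    std::vector<std::string> toRemove;
    for (const auto& entry : mine) {
        auto iter = theirs.find(entry.first);
        if (iter == theirs.end()) {
            // Key only on our side: its target escapes.
            toRemove.push_back(entry.first);
            escaped.insert(entry.second);
        } else if (iter->second != entry.second) {
            // The two heaps disagree: both targets escape.
            toRemove.push_back(entry.first);
            escaped.insert(entry.second);
            escaped.insert(iter->second);
        }
    }
    // Keys only on their side escape their targets too.
    for (const auto& entry : theirs) {
        if (!mine.count(entry.first))
            escaped.insert(entry.second);
    }
    for (const auto& key : toRemove)
        mine.erase(key);
}
```

For example, merging {p: #1, q: #2} with {p: #1, q: #3, r: #4} keeps only p, and escapes #2, #3, and #4.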
     630
     631    void escapeAllocation(Node* identifier)
     632    {
     633        Allocation& allocation = getAllocation(identifier);
     634        if (allocation.isEscapedAllocation())
     635            return;
     636
     637        Allocation unescaped = WTF::move(allocation);
     638        allocation = Allocation(unescaped.identifier(), Allocation::Kind::Escaped);
     639
     640        for (const auto& entry : unescaped.fields())
     641            escapeAllocation(entry.value);
     642
     643        if (m_wantEscapees)
     644            m_escapees.add(unescaped.identifier(), WTF::move(unescaped));
     645    }
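escapeAllocation() above escapes an allocation together with everything its fields reference; because an allocation is marked escaped before its fields are visited, the walk terminates even on object cycles, which is the case this patch cares about. A minimal sketch with illustrative STL stand-in types (not the committed code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Each allocation's fields name other allocation identifiers.
using FieldMap = std::unordered_map<std::string, std::vector<std::string>>;

// Transitively escape an allocation, mirroring the recursion in
// LocalHeap::escapeAllocation(). Marking before recursing makes the
// walk terminate on cyclic field graphs.
void escapeAllocation(const FieldMap& fields, const std::string& identifier,
                      std::unordered_set<std::string>& escaped)
{
    if (!escaped.insert(identifier).second)
        return; // Already escaped; the recursion stops here.
    auto iter = fields.find(identifier);
    if (iter == fields.end())
        return;
    for (const auto& field : iter->second)
        escapeAllocation(fields, field, escaped);
}
```

Escaping #0 in the two-object cycle #0 -> #1 -> #0 escapes both and returns, rather than recursing forever.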
     646
     647    void prune()
     648    {
     649        HashSet<Node*> reachable;
     650        for (const auto& entry : m_pointers)
     651            reachable.add(entry.value);
     652
      653        // Repeatedly mark as reachable the allocations stored in
      654        // fields of other reachable allocations
     655        {
     656            Vector<Node*> worklist;
     657            worklist.appendRange(reachable.begin(), reachable.end());
     658
     659            while (!worklist.isEmpty()) {
     660                Node* identifier = worklist.takeLast();
     661                Allocation& allocation = m_allocations.find(identifier)->value;
     662                for (const auto& entry : allocation.fields()) {
     663                    if (reachable.add(entry.value).isNewEntry)
     664                        worklist.append(entry.value);
     665                }
     666            }
     667        }
     668
     669        // Remove unreachable allocations
     670        {
     671            Vector<Node*> toRemove;
     672            for (const auto& entry : m_allocations) {
     673                if (!reachable.contains(entry.key))
     674                    toRemove.append(entry.key);
     675            }
     676            for (Node* identifier : toRemove)
     677                m_allocations.remove(identifier);
     678        }
     679    }
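The reachability walk in prune() is a standard mark phase over the field graph, rooted at the allocations that live pointers reference; anything not marked is then swept from the heap. A minimal sketch with illustrative STL stand-in types (not the committed code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Each allocation's fields name other allocation identifiers.
using FieldMap = std::unordered_map<std::string, std::vector<std::string>>;

// Returns the set of allocations reachable from the roots through
// fields, mirroring the worklist loop in LocalHeap::prune().
std::unordered_set<std::string> reachableAllocations(
    const FieldMap& allocations, const std::vector<std::string>& roots)
{
    std::unordered_set<std::string> reachable(roots.begin(), roots.end());
    std::vector<std::string> worklist(roots.begin(), roots.end());
    while (!worklist.empty()) {
        std::string identifier = worklist.back();
        worklist.pop_back();
        auto iter = allocations.find(identifier);
        if (iter == allocations.end())
            continue;
        for (const auto& field : iter->second) {
            // Only newly-marked allocations go back on the worklist.
            if (reachable.insert(field).second)
                worklist.push_back(field);
        }
    }
    return reachable;
}
```

With allocations #0 -> {#1}, #1 -> {}, #2 -> {#0} and the single root #0, only #0 and #1 are reachable; #2 would be removed in the sweep.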
     680
     681    bool m_reached = false;
     682    HashMap<Node*, Node*> m_pointers;
     683    HashMap<Node*, Allocation> m_allocations;
     684
     685    bool m_wantEscapees = false;
     686    HashMap<Node*, Allocation> m_escapees;
     687};
    49688
    50689class ObjectAllocationSinkingPhase : public Phase {
    51690public:
    52691    ObjectAllocationSinkingPhase(Graph& graph)
    53         : Phase(graph, "object allocation sinking")
    54         , m_ssaCalculator(graph)
     692        : Phase(graph, "object allocation elimination")
     693        , m_pointerSSA(graph)
     694        , m_allocationSSA(graph)
    55695        , m_insertionSet(graph)
    56696    {
    57697    }
    58    
     698
    59699    bool run()
    60700    {
     701        ASSERT(m_graph.m_form == SSA);
    61702        ASSERT(m_graph.m_fixpointState == FixpointNotConverged);
    62        
    63         m_graph.m_dominators.computeIfNecessary(m_graph);
    64        
    65         // Logically we wish to consider every NewObject and sink it. However it's probably not
    66         // profitable to sink a NewObject that will always escape. So, first we do a very simple
    67         // forward flow analysis that determines the set of NewObject nodes that have any chance
    68         // of benefiting from object allocation sinking. Then we fixpoint the following rules:
    69         //
    70         // - For each NewObject, we turn the original NewObject into a PhantomNewObject and then
    71         //   we insert MaterializeNewObject just before those escaping sites that come before any
    72         //   other escaping sites - that is, there is no path between the allocation and those sites
    73         //   that would see any other escape. Note that Upsilons constitute escaping sites. Then we
    74         //   insert additional MaterializeNewObject nodes on Upsilons that feed into Phis that mix
    75         //   materializations and the original PhantomNewObject. We then turn each PutByOffset over a
    76         //   PhantomNewObject into a PutHint.
    77         //
    78         // - We perform the same optimization for MaterializeNewObject. This allows us to cover
    79         //   cases where we had MaterializeNewObject flowing into a PutHint.
    80         //
    81         // We could also add this rule:
    82         //
    83         // - If all of the Upsilons of a Phi have a MaterializeNewObject that isn't used by anyone
    84         //   else, then replace the Phi with the MaterializeNewObject.
    85         //
    86         //   FIXME: Implement this. Note that this totally doable but it requires some gnarly
    87         //   code, and to be effective the pruner needs to be aware of it. Currently any Upsilon
    88         //   is considered to be an escape even by the pruner, so it's unlikely that we'll see
    89         //   many cases of Phi over Materializations.
    90         //   https://bugs.webkit.org/show_bug.cgi?id=136927
    91        
     703
    92704        if (!performSinking())
    93705            return false;
    94        
    95         while (performSinking()) { }
    96        
     706
    97707        if (verbose) {
    98             dataLog("Graph after sinking:\n");
     708            dataLog("Graph after elimination:\n");
    99709            m_graph.dump();
    100710        }
    101        
     711
    102712        return true;
    103713    }
     
    107717    {
    108718        m_graph.computeRefCounts();
     719        m_graph.initializeNodeOwners();
    109720        performLivenessAnalysis(m_graph);
    110721        performOSRAvailabilityAnalysis(m_graph);
    111722        m_combinedLiveness = CombinedLiveness(m_graph);
    112        
     723
    113724        CString graphBeforeSinking;
    114725        if (Options::verboseValidationFailure() && Options::validateGraphAtEachPhase()) {
     
    117728            graphBeforeSinking = out.toCString();
    118729        }
    119        
     730
    120731        if (verbose) {
    121             dataLog("Graph before sinking:\n");
     732            dataLog("Graph before elimination:\n");
    122733            m_graph.dump();
    123734        }
    124        
    125         determineMaterializationPoints();
    126         if (m_sinkCandidates.isEmpty())
     735
     736        performAnalysis();
     737
     738        if (!determineSinkCandidates())
    127739            return false;
    128        
    129         // At this point we are committed to sinking the sinking candidates.
    130         placeMaterializationPoints();
    131         lowerNonReadingOperationsOnPhantomAllocations();
    132         promoteSunkenFields();
    133        
     740
     741        if (verbose) {
     742            for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
     743                dataLog("Heap at head of ", *block, ": \n", m_heapAtHead[block]);
     744                dataLog("Heap at tail of ", *block, ": \n", m_heapAtTail[block]);
     745            }
     746        }
     747
     748        promoteLocalHeap();
     749
    134750        if (Options::validateGraphAtEachPhase())
    135751            validate(m_graph, DumpGraph, graphBeforeSinking);
    136        
    137         if (verbose)
    138             dataLog("Sinking iteration changed the graph.\n");
    139752        return true;
    140753    }
    141    
    142     void determineMaterializationPoints()
    143     {
    144         // The premise of this pass is that if there exists a point in the program where some
    145         // path from a phantom allocation site to that point causes materialization, then *all*
    146         // paths cause materialization. This should mean that there are never any redundant
    147         // materializations.
    148        
    149         m_sinkCandidates.clear();
    150         m_materializationToEscapee.clear();
    151         m_materializationSiteToMaterializations.clear();
    152        
    153         BlockMap<HashMap<Node*, bool>> materializedAtHead(m_graph);
    154         BlockMap<HashMap<Node*, bool>> materializedAtTail(m_graph);
    155        
     754
     755    void performAnalysis()
     756    {
     757        m_heapAtHead = BlockMap<LocalHeap>(m_graph);
     758        m_heapAtTail = BlockMap<LocalHeap>(m_graph);
     759
    156760        bool changed;
    157761        do {
    158762            if (verbose)
    159                 dataLog("Doing iteration of materialization point placement.\n");
     763                dataLog("Doing iteration of escape analysis.\n");
    160764            changed = false;
    161             for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    162                 HashMap<Node*, bool> materialized = materializedAtHead[block];
    163                 if (verbose)
    164                     dataLog("    Looking at block ", pointerDump(block), "\n");
     765
     766            for (BasicBlock* block : m_graph.blocksInPreOrder()) {
     767                m_heapAtHead[block].setReached();
     768                m_heap = m_heapAtHead[block];
     769
    165770                for (Node* node : *block) {
    166771                    handleNode(
    167772                        node,
    168                         [&] () {
    169                             materialized.add(node, false);
    170                         },
    171                         [&] (Node* escapee) {
    172                             auto iter = materialized.find(escapee);
    173                             if (iter != materialized.end()) {
    174                                 if (verbose)
    175                                     dataLog("    ", escapee, " escapes at ", node, "\n");
    176                                 iter->value = true;
    177                             }
     773                        [] (PromotedHeapLocation, LazyNode) { },
     774                        [&] (PromotedHeapLocation) -> Node* {
     775                            return nullptr;
    178776                        });
    179777                }
    180                
    181                 if (verbose)
    182                     dataLog("    Materialized at tail of ", pointerDump(block), ": ", mapDump(materialized), "\n");
    183                
    184                 if (materialized == materializedAtTail[block])
     778
     779                if (m_heap == m_heapAtTail[block])
    185780                    continue;
    186                
    187                 materializedAtTail[block] = materialized;
     781
     782                m_heapAtTail[block] = m_heap;
    188783                changed = true;
    189                
    190                 // Only propagate things to our successors if they are alive in all successors.
    191                 // So, we prune materialized-at-tail to only include things that are live.
    192                 Vector<Node*> toRemove;
    193                 for (auto pair : materialized) {
    194                     if (!m_combinedLiveness.liveAtTail[block].contains(pair.key))
    195                         toRemove.append(pair.key);
    196                 }
    197                 for (Node* key : toRemove)
    198                     materialized.remove(key);
    199                
    200                 for (BasicBlock* successorBlock : block->successors()) {
    201                     for (auto pair : materialized) {
    202                         materializedAtHead[successorBlock].add(
    203                             pair.key, false).iterator->value |= pair.value;
    204                     }
    205                 }
     784
     785                m_heap.assertIsValid();
     786
     787                // We keep only pointers that are live, and only
     788                // allocations that are either live, pointed to by a
     789                // live pointer, or (recursively) stored in a field of
     790                // a live allocation.
     791                //
      792                // This means we can accidentally leak non-dominating
     793                // nodes into the successor. However, due to the
     794                // non-dominance property, we are guaranteed that the
     795                // successor has at least one predecessor that is not
     796                // dominated either: this means any reference to a
     797                // non-dominating allocation in the successor will
     798                // trigger an escape and get pruned during the merge.
     799                m_heap.pruneByLiveness(m_combinedLiveness.liveAtTail[block]);
     800
     801                for (BasicBlock* successorBlock : block->successors())
     802                    m_heapAtHead[successorBlock].merge(m_heap);
    206803            }
    207804        } while (changed);
    208        
    209         // Determine the sink candidates. Broadly, a sink candidate is a node that handleNode()
    210         // believes is sinkable, and one of the following is true:
    211         //
    212         // 1) There exists a basic block with only backward outgoing edges (or no outgoing edges)
    213         //    in which the node wasn't materialized. This is meant to catch effectively-infinite
    214         //    loops in which we don't need to have allocated the object.
    215         //
    216         // 2) There exists a basic block at the tail of which the node is not materialized and the
    217         //    node is dead.
    218         //
    219         // 3) The sum of execution counts of the materializations is less than the sum of
    220         //    execution counts of the original node.
    221         //
    222         // We currently implement only rule #2.
    223         // FIXME: Implement the two other rules.
    224         // https://bugs.webkit.org/show_bug.cgi?id=137073 (rule #1)
    225         // https://bugs.webkit.org/show_bug.cgi?id=137074 (rule #3)
    226        
    227         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    228             for (auto pair : materializedAtTail[block]) {
    229                 if (pair.value)
    230                     continue; // It was materialized.
    231                
    232                 if (m_combinedLiveness.liveAtTail[block].contains(pair.key))
    233                     continue; // It might still get materialized in all of the successors.
    234                
    235                 // We know that it died in this block and it wasn't materialized. That means that
    236                 // if we sink this allocation, then *this* will be a path along which we never
    237                 // have to allocate. Profit!
    238                 m_sinkCandidates.add(pair.key);
    239             }
    240         }
    241        
    242         if (m_sinkCandidates.isEmpty())
    243             return;
    244        
    245         // A materialization edge exists at any point where a node escapes but hasn't been
    246         // materialized yet. We do this in two parts. First we find all of the nodes that cause
    247         // escaping to happen, where the escapee had not yet been materialized. This catches
    248         // everything but loops. We then catch loops - as well as weirder control flow constructs -
    249         // in a subsequent pass that looks at places in the CFG where an edge exists from a block
    250         // that hasn't materialized to a block that has. We insert the materialization along such an
    251         // edge, and we rely on the fact that critical edges were already broken so that we actually
    252         // either insert the materialization at the head of the successor or the tail of the
    253         // predecessor.
    254         //
    255         // FIXME: This can create duplicate allocations when we really only needed to perform one.
    256         // For example:
    257         //
    258         //     var o = new Object();
    259         //     if (rare) {
    260         //         if (stuff)
    261         //             call(o); // o escapes here.
    262         //         return;
    263         //     }
    264         //     // o doesn't escape down here.
    265         //
    266         // In this example, we would place a materialization point at call(o) and then we would find
    267         // ourselves having to insert another one at the implicit else case of that if statement
    268         // ('cause we've broken critical edges). We would instead really like to just have one
    269         // materialization point right at the top of the then case of "if (rare)". To do this, we can
    270         // find the LCA of the various materializations in the dom tree.
    271         // https://bugs.webkit.org/show_bug.cgi?id=137124
    272        
    273         // First pass: find intra-block materialization points.
    274         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    275             HashSet<Node*> materialized;
    276             for (auto pair : materializedAtHead[block]) {
    277                 if (pair.value && m_sinkCandidates.contains(pair.key))
    278                     materialized.add(pair.key);
    279             }
    280            
    281             if (verbose)
    282                 dataLog("Placing materialization points in ", pointerDump(block), " with materialization set ", listDump(materialized), "\n");
    283            
    284             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
    285                 Node* node = block->at(nodeIndex);
    286                
    287                 handleNode(
    288                     node,
    289                     [&] () { },
    290                     [&] (Node* escapee) {
    291                         if (verbose)
    292                             dataLog("Seeing ", escapee, " escape at ", node, "\n");
    293                        
    294                         if (!m_sinkCandidates.contains(escapee)) {
    295                             if (verbose)
    296                                 dataLog("    It's not a sink candidate.\n");
    297                             return;
    298                         }
    299                        
    300                         if (!materialized.add(escapee).isNewEntry) {
    301                             if (verbose)
    302                                 dataLog("   It's already materialized.\n");
    303                             return;
    304                         }
    305                        
    306                         createMaterialize(escapee, node);
    307                     });
    308             }
    309         }
    310        
    311         // Second pass: find CFG edge materialization points.
    312         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    313             for (BasicBlock* successorBlock : block->successors()) {
    314                 for (auto pair : materializedAtHead[successorBlock]) {
    315                     Node* allocation = pair.key;
    316                    
    317                     // We only care if it is materialized in the successor.
    318                     if (!pair.value)
    319                         continue;
    320                    
    321                     // We only care about sinking candidates.
    322                     if (!m_sinkCandidates.contains(allocation))
    323                         continue;
    324                    
    325                     // We only care if it isn't materialized in the predecessor.
    326                     if (materializedAtTail[block].get(allocation))
    327                         continue;
    328                    
    329                     // We need to decide if we put the materialization at the head of the successor,
    330                     // or the tail of the predecessor. It needs to be placed so that the allocation
    331                     // is never materialized before, and always materialized after.
    332                    
    333                     // Is it never materialized in any of successor's predecessors? I like to think
    334                     // of "successors' predecessors" and "predecessor's successors" as the "shadow",
    335                     // because of what it looks like when you draw it.
    336                     bool neverMaterializedInShadow = true;
    337                     for (BasicBlock* shadowBlock : successorBlock->predecessors) {
    338                         if (materializedAtTail[shadowBlock].get(allocation)) {
    339                             neverMaterializedInShadow = false;
    340                             break;
    341                         }
    342                     }
    343                    
    344                     if (neverMaterializedInShadow) {
    345                         createMaterialize(allocation, successorBlock->firstOriginNode());
    346                         continue;
    347                     }
    348                    
    349                     // Because we had broken critical edges, it must be the case that the
    350                     // predecessor's successors all materialize the object. This is because the
    351                     // previous case is guaranteed to catch the case where the successor only has
    352                     // one predecessor. When critical edges are broken, this is also the only case
    353                     // where the predecessor could have had multiple successors. Therefore we have
    354                     // already handled the case where the predecessor has multiple successors.
    355                     DFG_ASSERT(m_graph, block, block->numSuccessors() == 1);
    356                    
    357                     createMaterialize(allocation, block->terminal());
    358                 }
    359             }
    360         }
    361     }
    362    
    363     void placeMaterializationPoints()
    364     {
    365         m_ssaCalculator.reset();
    366        
    367         // The "variables" are the object allocations that we are sinking. So, nodeToVariable maps
    368         // sink candidates (aka escapees) to the SSACalculator's notion of Variable, and indexToNode
    369         // maps in the opposite direction using the SSACalculator::Variable::index() as the key.
    370         HashMap<Node*, SSACalculator::Variable*> nodeToVariable;
    371         Vector<Node*> indexToNode;
    372        
    373         for (Node* node : m_sinkCandidates) {
    374             SSACalculator::Variable* variable = m_ssaCalculator.newVariable();
    375             nodeToVariable.add(node, variable);
    376             ASSERT(indexToNode.size() == variable->index());
    377             indexToNode.append(node);
    378         }
    379        
    380         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    381             for (Node* node : *block) {
    382                 if (SSACalculator::Variable* variable = nodeToVariable.get(node))
    383                     m_ssaCalculator.newDef(variable, block, node);
    384                
    385                 for (Node* materialize : m_materializationSiteToMaterializations.get(node)) {
    386                     m_ssaCalculator.newDef(
    387                         nodeToVariable.get(m_materializationToEscapee.get(materialize)),
    388                         block, materialize);
    389                 }
    390             }
    391         }
    392        
    393         m_ssaCalculator.computePhis(
    394             [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
    395                 Node* allocation = indexToNode[variable->index()];
    396                 if (!m_combinedLiveness.liveAtHead[block].contains(allocation))
    397                     return nullptr;
    398                
    399                 Node* phiNode = m_graph.addNode(allocation->prediction(), Phi, NodeOrigin());
    400                 phiNode->mergeFlags(NodeResultJS);
    401                 return phiNode;
    402             });
    403        
    404         // Place Phis in the right places. Replace all uses of any allocation with the appropriate
    405         // materialization. Create the appropriate Upsilon nodes.
    406         LocalOSRAvailabilityCalculator availabilityCalculator;
    407         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    408             HashMap<Node*, Node*> mapping;
    409            
    410             for (Node* candidate : m_combinedLiveness.liveAtHead[block]) {
    411                 SSACalculator::Variable* variable = nodeToVariable.get(candidate);
    412                 if (!variable)
    413                     continue;
    414                
    415                 SSACalculator::Def* def = m_ssaCalculator.reachingDefAtHead(block, variable);
    416                 if (!def)
    417                     continue;
    418                
    419                 mapping.set(indexToNode[variable->index()], def->value());
    420             }
    421            
    422             availabilityCalculator.beginBlock(block);
    423             for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(block)) {
    424                 m_insertionSet.insert(0, phiDef->value());
    425                
    426                 Node* originalNode = indexToNode[phiDef->variable()->index()];
    427                 insertOSRHintsForUpdate(
    428                     m_insertionSet, 0, NodeOrigin(), availabilityCalculator.m_availability,
    429                     originalNode, phiDef->value());
    430 
    431                 mapping.set(originalNode, phiDef->value());
    432             }
    433            
    434             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
    435                 Node* node = block->at(nodeIndex);
    436 
    437                 for (Node* materialize : m_materializationSiteToMaterializations.get(node)) {
    438                     Node* escapee = m_materializationToEscapee.get(materialize);
    439                     m_insertionSet.insert(nodeIndex, materialize);
    440                     insertOSRHintsForUpdate(
    441                         m_insertionSet, nodeIndex, node->origin,
    442                         availabilityCalculator.m_availability, escapee, materialize);
    443                     mapping.set(escapee, materialize);
    444                 }
    445                    
    446                 availabilityCalculator.executeNode(node);
    447                
    448                 m_graph.doToChildren(
    449                     node,
    450                     [&] (Edge& edge) {
    451                         if (Node* materialize = mapping.get(edge.node()))
    452                             edge.setNode(materialize);
    453                     });
    454                
    455                 // If you cause an escape, you shouldn't see the original escapee.
    456                 if (validationEnabled()) {
    457                     handleNode(
    458                         node,
    459                         [&] () { },
    460                         [&] (Node* escapee) {
    461                             DFG_ASSERT(m_graph, node, !m_sinkCandidates.contains(escapee));
    462                         });
    463                 }
    464             }
    465            
    466             NodeAndIndex terminal = block->findTerminal();
    467             size_t upsilonInsertionPoint = terminal.index;
    468             Node* upsilonWhere = terminal.node;
    469             NodeOrigin upsilonOrigin = upsilonWhere->origin;
    470             for (BasicBlock* successorBlock : block->successors()) {
    471                 for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(successorBlock)) {
    472                     Node* phiNode = phiDef->value();
    473                     SSACalculator::Variable* variable = phiDef->variable();
    474                     Node* allocation = indexToNode[variable->index()];
    475                    
    476                     Node* incoming = mapping.get(allocation);
    477                     DFG_ASSERT(m_graph, incoming, incoming != allocation);
    478                    
    479                     m_insertionSet.insertNode(
    480                         upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
    481                         OpInfo(phiNode), incoming->defaultEdge());
    482                 }
    483             }
    484            
    485             m_insertionSet.execute(block);
    486         }
    487        
    488         // At this point we have dummy materialization nodes along with edges to them. This means
    489         // that the part of the control flow graph that prefers to see actual object allocations
    490         // is completely fixed up, except for the materializations themselves.
    491     }
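(Illustrative aside, not part of the patch: the function above places materializations at escape sites and rewires edges so that escaping code sees the materialized object while sunk code sees only promoted values. A minimal JavaScript sketch of the situation it handles — `maybeEscape` and `sink` are hypothetical names, not from this changeset:)

```javascript
// Hedged sketch: on the path where `o` escapes into `sink`, the phase must
// insert a materialization at the escape site; on the other path the
// allocation is eliminated and the field read resolves to the promoted value.
let sink = null; // hypothetical escape target

function maybeEscape(flag) {
    let o = { x: 42 };    // sink candidate
    if (flag)
        sink = o;         // escape: a materialization would be placed here
    return o.x;           // read of the promoted field; no heap load needed
}
```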
    492    
    493     void lowerNonReadingOperationsOnPhantomAllocations()
    494     {
    495         // Lower everything but reading operations on phantom allocations. We absolutely have to
    496         // lower all writes so as to reveal them to the SSA calculator. We cannot lower reads
    497         // because the whole point is that those go away completely.
    498        
    499         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    500             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
    501                 Node* node = block->at(nodeIndex);
    502                 switch (node->op()) {
    503                 case PutByOffset: {
    504                     Node* target = node->child2().node();
    505                     if (m_sinkCandidates.contains(target)) {
    506                         ASSERT(target->isPhantomObjectAllocation());
    507                         node->convertToPutByOffsetHint();
    508                     }
    509                     break;
    510                 }
    511 
    512                 case PutClosureVar: {
    513                     Node* target = node->child1().node();
    514                     if (m_sinkCandidates.contains(target)) {
    515                         ASSERT(target->isPhantomActivationAllocation());
    516                         node->convertToPutClosureVarHint();
    517                     }
    518                     break;
    519                 }
    520                    
    521                 case PutStructure: {
    522                     Node* target = node->child1().node();
    523                     if (m_sinkCandidates.contains(target)) {
    524                         ASSERT(target->isPhantomObjectAllocation());
    525                         Node* structure = m_insertionSet.insertConstant(
    526                             nodeIndex, node->origin, JSValue(node->transition()->next));
    527                         node->convertToPutStructureHint(structure);
    528                     }
    529                     break;
    530                 }
    531                    
    532                 case NewObject: {
    533                     if (m_sinkCandidates.contains(node)) {
    534                         Node* structure = m_insertionSet.insertConstant(
    535                             nodeIndex + 1, node->origin, JSValue(node->structure()));
    536                         m_insertionSet.insert(
    537                             nodeIndex + 1,
    538                             PromotedHeapLocation(StructurePLoc, node).createHint(
    539                                 m_graph, node->origin, structure));
    540                         node->convertToPhantomNewObject();
    541                     }
    542                     break;
    543                 }
    544                    
    545                 case MaterializeNewObject: {
    546                     if (m_sinkCandidates.contains(node)) {
    547                         m_insertionSet.insert(
    548                             nodeIndex + 1,
    549                             PromotedHeapLocation(StructurePLoc, node).createHint(
    550                                 m_graph, node->origin, m_graph.varArgChild(node, 0).node()));
    551                         for (unsigned i = 0; i < node->objectMaterializationData().m_properties.size(); ++i) {
    552                             unsigned identifierNumber =
    553                                 node->objectMaterializationData().m_properties[i].m_identifierNumber;
    554                             m_insertionSet.insert(
    555                                 nodeIndex + 1,
    556                                 PromotedHeapLocation(
    557                                     NamedPropertyPLoc, node, identifierNumber).createHint(
    558                                     m_graph, node->origin,
    559                                     m_graph.varArgChild(node, i + 1).node()));
    560                         }
    561                         node->convertToPhantomNewObject();
    562                     }
    563                     break;
    564                 }
    565 
    566                 case NewFunction: {
    567                     if (m_sinkCandidates.contains(node)) {
    568                         Node* executable = m_insertionSet.insertConstant(
    569                             nodeIndex + 1, node->origin, node->cellOperand());
    570                         m_insertionSet.insert(
    571                             nodeIndex + 1,
    572                             PromotedHeapLocation(FunctionExecutablePLoc, node).createHint(
    573                                 m_graph, node->origin, executable));
    574                         m_insertionSet.insert(
    575                             nodeIndex + 1,
    576                             PromotedHeapLocation(FunctionActivationPLoc, node).createHint(
    577                                 m_graph, node->origin, node->child1().node()));
    578                         node->convertToPhantomNewFunction();
    579                     }
    580                     break;
    581                 }
    582 
    583                 case CreateActivation: {
    584                     if (m_sinkCandidates.contains(node)) {
    585                         m_insertionSet.insert(
    586                             nodeIndex + 1,
    587                             PromotedHeapLocation(ActivationScopePLoc, node).createHint(
    588                                 m_graph, node->origin, node->child1().node()));
    589                         Node* symbolTableNode = m_insertionSet.insertConstant(
    590                             nodeIndex + 1, node->origin, node->cellOperand());
    591                         m_insertionSet.insert(
    592                             nodeIndex + 1,
    593                             PromotedHeapLocation(ActivationSymbolTablePLoc, node).createHint(
    594                                 m_graph, node->origin, symbolTableNode));
    595 
    596                         {
    597                             SymbolTable* symbolTable = node->castOperand<SymbolTable*>();
    598                             Node* undefined = m_insertionSet.insertConstant(
    599                                 nodeIndex + 1, node->origin, jsUndefined());
    600                             ConcurrentJITLocker locker(symbolTable->m_lock);
    601                             for (auto iter = symbolTable->begin(locker), end = symbolTable->end(locker); iter != end; ++iter) {
    602                                 m_insertionSet.insert(
    603                                     nodeIndex + 1,
    604                                     PromotedHeapLocation(
    605                                         ClosureVarPLoc, node, iter->value.scopeOffset().offset()).createHint(
    606                                         m_graph, node->origin, undefined));
    607                             }
    608                         }
    609 
    610                         node->convertToPhantomCreateActivation();
    611                     }
    612                     break;
    613                 }
    614 
    615                 case MaterializeCreateActivation: {
    616                     if (m_sinkCandidates.contains(node)) {
    617                         m_insertionSet.insert(
    618                             nodeIndex + 1,
    619                             PromotedHeapLocation(ActivationScopePLoc, node).createHint(
    620                                 m_graph, node->origin, m_graph.varArgChild(node, 0).node()));
    621                         Node* symbolTableNode = m_insertionSet.insertConstant(
    622                             nodeIndex + 1, node->origin, node->cellOperand());
    623                         m_insertionSet.insert(
    624                             nodeIndex + 1,
    625                             PromotedHeapLocation(ActivationSymbolTablePLoc, node).createHint(
    626                                 m_graph, node->origin, symbolTableNode));
    627                         ObjectMaterializationData& data = node->objectMaterializationData();
    628                         for (unsigned i = 0; i < data.m_properties.size(); ++i) {
    629                             unsigned identifierNumber = data.m_properties[i].m_identifierNumber;
    630                             m_insertionSet.insert(
    631                                 nodeIndex + 1,
    632                                 PromotedHeapLocation(
    633                                     ClosureVarPLoc, node, identifierNumber).createHint(
    634                                     m_graph, node->origin,
    635                                     m_graph.varArgChild(node, i + 1).node()));
    636                         }
    637                         node->convertToPhantomCreateActivation();
    638                     }
    639                     break;
    640                 }
    641 
    642                 case StoreBarrier: {
    643                     DFG_CRASH(m_graph, node, "Unexpected store barrier during sinking.");
    644                     break;
    645                 }
    646                    
    647                 default:
    648                     break;
    649                 }
    650                
    651                 m_graph.doToChildren(
    652                     node,
    653                     [&] (Edge& edge) {
    654                         if (m_sinkCandidates.contains(edge.node()))
    655                             edge.setUseKind(KnownCellUse);
    656                     });
    657             }
    658             m_insertionSet.execute(block);
    659         }
    660     }
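(Illustrative aside, not part of the patch: the lowering above rewrites writes such as PutClosureVar into hints on phantom allocations. A JavaScript sketch of the kind of activation these opcodes describe — `makeCounter` is a hypothetical name; whether the activation is actually sunk depends on the inner function not escaping analysis, e.g. after inlining:)

```javascript
// Hedged sketch: `count` lives in the activation created for makeCounter
// (a ClosureVarPLoc slot). If the activation is a sink candidate, the write
// `++count` becomes a PutClosureVarHint once CreateActivation is converted
// to PhantomCreateActivation.
function makeCounter() {
    let count = 0;        // closure variable stored in the activation
    return function () {  // NewFunction capturing the activation
        return ++count;   // PutClosureVar / GetClosureVar on the activation
    };
}
```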
    661    
    662     void promoteSunkenFields()
    663     {
    664         // Collect the set of heap locations that we will be operating over.
    665         HashSet<PromotedHeapLocation> locations;
    666         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    667             for (Node* node : *block) {
    668                 promoteHeapAccess(
    669                     node,
    670                     [&] (PromotedHeapLocation location, Edge) {
    671                         if (m_sinkCandidates.contains(location.base()))
    672                             locations.add(location);
    673                     },
    674                     [&] (PromotedHeapLocation location) {
    675                         if (m_sinkCandidates.contains(location.base()))
    676                             locations.add(location);
    677                     });
    678             }
    679         }
    680        
    681         // Figure out which locations belong to which allocations.
    682         m_locationsForAllocation.clear();
    683         for (PromotedHeapLocation location : locations) {
    684             auto result = m_locationsForAllocation.add(location.base(), Vector<PromotedHeapLocation>());
    685             ASSERT(!result.iterator->value.contains(location));
    686             result.iterator->value.append(location);
    687         }
    688        
    689         // For each sunken thingy, make sure we create Bottom values for all of its fields.
    690         // Note that this has the hilarious slight inefficiency of creating redundant hints for
    691         // things that were previously materializations. This should only impact compile times and
    692         // not code quality, and it's necessary for soundness without some data structure hackage.
    693         // For example, a MaterializeNewObject that we choose to sink may have new fields added to
    694         // it conditionally. That would necessitate Bottoms.
    695         Node* bottom = nullptr;
    696         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    697             if (block == m_graph.block(0))
    698                 bottom = m_insertionSet.insertConstant(0, NodeOrigin(), jsUndefined());
    699            
    700             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
    701                 Node* node = block->at(nodeIndex);
    702                 for (PromotedHeapLocation location : m_locationsForAllocation.get(node)) {
    703                     m_insertionSet.insert(
    704                         nodeIndex + 1, location.createHint(m_graph, node->origin, bottom));
    705                 }
    706             }
    707             m_insertionSet.execute(block);
    708         }
    709 
    710         m_ssaCalculator.reset();
    711 
    712         // Collect the set of "variables" that we will be sinking.
    713         m_locationToVariable.clear();
    714         m_indexToLocation.clear();
    715         for (PromotedHeapLocation location : locations) {
    716             SSACalculator::Variable* variable = m_ssaCalculator.newVariable();
    717             m_locationToVariable.add(location, variable);
    718             ASSERT(m_indexToLocation.size() == variable->index());
    719             m_indexToLocation.append(location);
    720         }
    721        
    722         // Create Defs from the existing hints.
    723         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
    724             for (Node* node : *block) {
    725                 promoteHeapAccess(
    726                     node,
    727                     [&] (PromotedHeapLocation location, Edge value) {
    728                         if (!m_sinkCandidates.contains(location.base()))
    729                             return;
    730                         SSACalculator::Variable* variable = m_locationToVariable.get(location);
    731                         m_ssaCalculator.newDef(variable, block, value.node());
    732                     },
    733                     [&] (PromotedHeapLocation) { });
    734             }
    735         }
    736        
    737         // OMG run the SSA calculator to create Phis!
    738         m_ssaCalculator.computePhis(
    739             [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
    740                 PromotedHeapLocation location = m_indexToLocation[variable->index()];
    741                 if (!m_combinedLiveness.liveAtHead[block].contains(location.base()))
    742                     return nullptr;
    743                
    744                 Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, NodeOrigin());
    745                 phiNode->mergeFlags(NodeResultJS);
    746                 return phiNode;
    747             });
    748        
    749         // Place Phis in the right places, replace all uses of any load with the appropriate
    750         // value, and create the appropriate Upsilon nodes.
    751         m_graph.clearReplacements();
    752         for (BasicBlock* block : m_graph.blocksInPreOrder()) {
    753             // This mapping table is intended to be lazy. If something is omitted from the table,
    754             // it means that there haven't been any local stores to that promoted heap location.
    755             m_localMapping.clear();
    756            
    757             // Insert the Phi functions that we had previously created.
    758             for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(block)) {
    759                 PromotedHeapLocation location = m_indexToLocation[phiDef->variable()->index()];
    760                
    761                 m_insertionSet.insert(
    762                     0, phiDef->value());
    763                 m_insertionSet.insert(
    764                     0, location.createHint(m_graph, NodeOrigin(), phiDef->value()));
    765                 m_localMapping.add(location, phiDef->value());
    766             }
    767            
    768             if (verbose)
    769                 dataLog("Local mapping at ", pointerDump(block), ": ", mapDump(m_localMapping), "\n");
    770            
    771             // Process the block and replace all uses of loads with the promoted value.
    772             for (Node* node : *block) {
    773                 m_graph.performSubstitution(node);
    774                
    775                 if (Node* escapee = m_materializationToEscapee.get(node))
    776                     populateMaterialize(block, node, escapee);
    777                
    778                 promoteHeapAccess(
    779                     node,
    780                     [&] (PromotedHeapLocation location, Edge value) {
    781                         if (m_sinkCandidates.contains(location.base()))
    782                             m_localMapping.set(location, value.node());
    783                     },
    784                     [&] (PromotedHeapLocation location) {
    785                         if (m_sinkCandidates.contains(location.base())) {
    786                             switch (node->op()) {
    787                             case CheckStructure:
    788                                 node->convertToCheckStructureImmediate(resolve(block, location));
    789                                 break;
    790 
    791                             default:
    792                                 node->replaceWith(resolve(block, location));
    793                                 break;
    794                             }
    795                         }
    796                     });
    797             }
    798            
    799             // Gotta drop some Upsilons.
    800             NodeAndIndex terminal = block->findTerminal();
    801             size_t upsilonInsertionPoint = terminal.index;
    802             NodeOrigin upsilonOrigin = terminal.node->origin;
    803             for (BasicBlock* successorBlock : block->successors()) {
    804                 for (SSACalculator::Def* phiDef : m_ssaCalculator.phisForBlock(successorBlock)) {
    805                     Node* phiNode = phiDef->value();
    806                     SSACalculator::Variable* variable = phiDef->variable();
    807                     PromotedHeapLocation location = m_indexToLocation[variable->index()];
    808                     Node* incoming = resolve(block, location);
    809                    
    810                     m_insertionSet.insertNode(
    811                         upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
    812                         OpInfo(phiNode), incoming->defaultEdge());
    813                 }
    814             }
    815            
    816             m_insertionSet.execute(block);
    817         }
    818     }
    819    
    820     Node* resolve(BasicBlock* block, PromotedHeapLocation location)
    821     {
    822         if (Node* result = m_localMapping.get(location))
    823             return result;
    824        
    825         // This implies that there is no local mapping. Find a non-local mapping.
    826         SSACalculator::Def* def = m_ssaCalculator.nonLocalReachingDef(
    827             block, m_locationToVariable.get(location));
    828         ASSERT(def);
    829         ASSERT(def->value());
    830         m_localMapping.add(location, def->value());
    831         return def->value();
    832     }
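(Illustrative aside, not part of the patch: per the changelog, the headline change is that object cycles no longer block sinking. A JavaScript sketch of such a cycle — `cycleSum` is a hypothetical name; the old phase gave up when one sink candidate was stored into another, while the new DFGAllocationCycleSinkingPhase can sometimes sink both:)

```javascript
// Hedged sketch: two allocations that point at each other. Each PutByOffset
// stores one sink candidate into a field of the other, forming a cycle.
function cycleSum(n) {
    let a = { val: 1, other: null };
    let b = { val: 2, other: null };
    a.other = b;  // PutByOffset of candidate b into candidate a
    b.other = a;  // PutByOffset of candidate a into candidate b
    return a.val + a.other.val + n; // reads resolve to promoted values
}
```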
    833 
    834     template<typename SinkCandidateFunctor, typename EscapeFunctor>
     805    }
     806
     807    template<typename WriteFunctor, typename ResolveFunctor>
    835 808    void handleNode(
    836 809        Node* node,
    837         const SinkCandidateFunctor& sinkCandidate,
    838         const EscapeFunctor& escape)
    839     {
     810        const WriteFunctor& heapWrite,
     811        const ResolveFunctor& heapResolve)
     812    {
     813        m_heap.assertIsValid();
     814        ASSERT(m_heap.takeEscapees().isEmpty());
     815
     816        Allocation* target = nullptr;
     817        HashMap<PromotedLocationDescriptor, LazyNode> writes;
     818        PromotedLocationDescriptor exactRead;
     819
    840 820        switch (node->op()) {
    841 821        case NewObject:
    842         case MaterializeNewObject:
    843         case MaterializeCreateActivation:
    844             sinkCandidate();
    845             m_graph.doToChildren(
    846                 node,
    847                 [&] (Edge edge) {
    848                     escape(edge.node());
    849                 });
    850             break;
    851 
    852         case NewFunction:
    853             if (!node->castOperand<FunctionExecutable*>()->singletonFunction()->isStillValid())
    854                 sinkCandidate();
    855             escape(node->child1().node());
    856             break;
    857 
    858         case CreateActivation:
    859             if (!node->castOperand<SymbolTable*>()->singletonScope()->isStillValid())
    860                 sinkCandidate();
    861             escape(node->child1().node());
     822            target = &m_heap.newAllocation(node, Allocation::Kind::Object);
     823            target->setStructures(node->structure());
     824            writes.add(
     825                StructurePLoc, LazyNode(m_graph.freeze(node->structure())));
     826            break;
     827
     828        case MaterializeNewObject: {
     829            target = &m_heap.newAllocation(node, Allocation::Kind::Object);
     830            target->setStructures(node->structureSet());
     831            writes.add(
     832                StructurePLoc, LazyNode(m_graph.varArgChild(node, 0).node()));
     833            for (unsigned i = 0; i < node->objectMaterializationData().m_properties.size(); ++i) {
     834                writes.add(
     835                    PromotedLocationDescriptor(
     836                        NamedPropertyPLoc,
     837                        node->objectMaterializationData().m_properties[i].m_identifierNumber),
     838                    LazyNode(m_graph.varArgChild(node, i + 1).node()));
     839            }
     840            break;
     841        }
     842
     843        case NewFunction: {
     844            if (node->castOperand<FunctionExecutable*>()->singletonFunction()->isStillValid()) {
     845                m_heap.escape(node->child1().node());
     846                break;
     847            }
     848            target = &m_heap.newAllocation(node, Allocation::Kind::Function);
     849            writes.add(FunctionExecutablePLoc, LazyNode(node->cellOperand()));
     850            writes.add(FunctionActivationPLoc, LazyNode(node->child1().node()));
     851            break;
     852        }
     853
     854        case CreateActivation: {
     855            if (node->castOperand<SymbolTable*>()->singletonScope()->isStillValid()) {
     856                m_heap.escape(node->child1().node());
     857                break;
     858            }
     859            target = &m_heap.newAllocation(node, Allocation::Kind::Activation);
     860            writes.add(ActivationSymbolTablePLoc, LazyNode(node->cellOperand()));
     861            writes.add(ActivationScopePLoc, LazyNode(node->child1().node()));
     862            {
     863                SymbolTable* symbolTable = node->castOperand<SymbolTable*>();
     864                ConcurrentJITLocker locker(symbolTable->m_lock);
     865                LazyNode undefined(m_graph.freeze(jsUndefined()));
     866                for (auto iter = symbolTable->begin(locker), end = symbolTable->end(locker); iter != end; ++iter) {
     867                    writes.add(
     868                        PromotedLocationDescriptor(ClosureVarPLoc, iter->value.scopeOffset().offset()),
     869                        undefined);
     870                }
     871            }
     872            break;
     873        }
     874
     875        case MaterializeCreateActivation: {
     876            // We have sunk this once already - there is no way the
     877            // watchpoint is still valid.
     878            ASSERT(!node->castOperand<SymbolTable*>()->singletonScope()->isStillValid());
     879            target = &m_heap.newAllocation(node, Allocation::Kind::Activation);
     880            writes.add(ActivationSymbolTablePLoc, LazyNode(m_graph.varArgChild(node, 0).node()));
     881            writes.add(ActivationScopePLoc, LazyNode(m_graph.varArgChild(node, 1).node()));
     882            for (unsigned i = 0; i < node->objectMaterializationData().m_properties.size(); ++i) {
     883                writes.add(
     884                    PromotedLocationDescriptor(
     885                        ClosureVarPLoc,
     886                        node->objectMaterializationData().m_properties[i].m_identifierNumber),
     887                    LazyNode(m_graph.varArgChild(node, i + 2).node()));
     888            }
     889            break;
     890        }
     891
     892        case PutStructure:
     893            target = m_heap.onlyLocalAllocation(node->child1().node());
     894            if (target && target->isObjectAllocation()) {
     895                writes.add(StructurePLoc, LazyNode(m_graph.freeze(JSValue(node->transition()->next))));
     896                target->setStructures(node->transition()->next);
     897            } else
     898                m_heap.escape(node->child1().node());
     899            break;
     900
     901        case CheckStructure: {
     902            Allocation* allocation = m_heap.onlyLocalAllocation(node->child1().node());
     903            if (allocation && allocation->isObjectAllocation()) {
     904                allocation->filterStructures(node->structureSet());
     905                if (Node* value = heapResolve(PromotedHeapLocation(allocation->identifier(), StructurePLoc)))
     906                    node->convertToCheckStructureImmediate(value);
     907            } else
     908                m_heap.escape(node->child1().node());
     909            break;
     910        }
     911
     912        case GetByOffset:
     913        case GetGetterSetterByOffset:
     914            target = m_heap.onlyLocalAllocation(node->child2().node());
     915            if (target && target->isObjectAllocation()) {
     916                unsigned identifierNumber = node->storageAccessData().identifierNumber;
     917                exactRead = PromotedLocationDescriptor(NamedPropertyPLoc, identifierNumber);
     918            } else {
     919                m_heap.escape(node->child1().node());
     920                m_heap.escape(node->child2().node());
     921            }
     922            break;
     923
     924        case MultiGetByOffset:
     925            target = m_heap.onlyLocalAllocation(node->child1().node());
     926            if (target && target->isObjectAllocation()) {
     927                unsigned identifierNumber = node->multiGetByOffsetData().identifierNumber;
     928                exactRead = PromotedLocationDescriptor(NamedPropertyPLoc, identifierNumber);
     929            } else
     930                m_heap.escape(node->child1().node());
     931            break;
     932
     933        case PutByOffset:
     934            target = m_heap.onlyLocalAllocation(node->child2().node());
     935            if (target && target->isObjectAllocation()) {
     936                unsigned identifierNumber = node->storageAccessData().identifierNumber;
     937                writes.add(
     938                    PromotedLocationDescriptor(NamedPropertyPLoc, identifierNumber),
     939                    LazyNode(node->child3().node()));
     940            } else {
     941                m_heap.escape(node->child1().node());
     942                m_heap.escape(node->child2().node());
     943                m_heap.escape(node->child3().node());
     944            }
     945            break;
     946
     947        case GetClosureVar:
     948            target = m_heap.onlyLocalAllocation(node->child1().node());
     949            if (target && target->isActivationAllocation()) {
     950                exactRead =
     951                    PromotedLocationDescriptor(ClosureVarPLoc, node->scopeOffset().offset());
     952            } else
     953                m_heap.escape(node->child1().node());
     954            break;
     955
     956        case PutClosureVar:
     957            target = m_heap.onlyLocalAllocation(node->child1().node());
     958            if (target && target->isActivationAllocation()) {
     959                writes.add(
     960                    PromotedLocationDescriptor(ClosureVarPLoc, node->scopeOffset().offset()),
     961                    LazyNode(node->child2().node()));
     962            } else {
     963                m_heap.escape(node->child1().node());
     964                m_heap.escape(node->child2().node());
     965            }
     966            break;
     967
     968        case SkipScope:
     969            target = m_heap.onlyLocalAllocation(node->child1().node());
     970            if (target && target->isActivationAllocation())
     971                exactRead = ActivationScopePLoc;
     972            else
     973                m_heap.escape(node->child1().node());
     974            break;
     975
     976        case GetExecutable:
     977            target = m_heap.onlyLocalAllocation(node->child1().node());
     978            if (target && target->isFunctionAllocation())
     979                exactRead = FunctionExecutablePLoc;
     980            else
     981                m_heap.escape(node->child1().node());
     982            break;
     983
     984        case GetScope:
     985            target = m_heap.onlyLocalAllocation(node->child1().node());
     986            if (target && target->isFunctionAllocation())
     987                exactRead = FunctionActivationPLoc;
     988            else
     989                m_heap.escape(node->child1().node());
     862 990            break;
     863 991
     
    868996                    if (edge.willNotHaveCheck())
    869997                        return;
    870                    
     998
     871 999                    if (alreadyChecked(edge.useKind(), SpecObject))
     872 1000                        return;
    873                    
    874                     escape(edge.node());
     1001
     1002                    m_heap.escape(edge.node());
     875 1003                });
     876 1004            break;
     
     878 1006        case MovHint:
     879 1007        case PutHint:
    880             break;
    881 
    882         case PutStructure:
    883         case CheckStructure:
    884         case GetByOffset:
    885         case MultiGetByOffset:
    886         case GetGetterSetterByOffset: {
    887             Node* target = node->child1().node();
    888             if (!target->isObjectAllocation())
    889                 escape(target);
    890             break;
    891         }
    892            
    893         case PutByOffset: {
    894             Node* target = node->child2().node();
    895             if (!target->isObjectAllocation()) {
    896                 escape(target);
    897                 escape(node->child1().node());
    898             }
    899             escape(node->child3().node());
    900             break;
    901         }
    902 
    903         case GetClosureVar: {
    904             Node* target = node->child1().node();
    905             if (!target->isActivationAllocation())
    906                 escape(target);
    907             break;
    908         }
    909 
    910         case PutClosureVar: {
    911             Node* target = node->child1().node();
    912             if (!target->isActivationAllocation())
    913                 escape(target);
    914             escape(node->child2().node());
    915             break;
    916         }
    917 
    918         case GetScope: {
    919             Node* target = node->child1().node();
    920             if (!target->isFunctionAllocation())
    921                 escape(target);
    922             break;
    923         }
    924 
    925         case GetExecutable: {
    926             Node* target = node->child1().node();
    927             if (!target->isFunctionAllocation())
    928                 escape(target);
    929             break;
    930         }
    931 
    932         case SkipScope: {
    933             Node* target = node->child1().node();
    934             if (!target->isActivationAllocation())
    935                 escape(target);
    936             break;
    937         }
    938            
    939         case MultiPutByOffset:
    940             // FIXME: In the future we should be able to handle this. It's just a matter of
    941             // building the appropriate *Hint variant of this instruction, along with a
    942             // PhantomStructureSelect node - since this transforms the Structure in a conditional
    943             // way.
    944             // https://bugs.webkit.org/show_bug.cgi?id=136924
    945             escape(node->child1().node());
    946             escape(node->child2().node());
     1008            // Handled by OSR availability analysis
     947 1009            break;
     948 1010
     
     951 1013                node,
     952 1014                [&] (Edge edge) {
    953                     escape(edge.node());
     1015                    m_heap.escape(edge.node());
     954 1016                });
     955 1017            break;
     956 1018        }
    957     }
    958    
    959     Node* createMaterialize(Node* escapee, Node* where)
    960     {
    961         Node* result = nullptr;
    962        
    963         switch (escapee->op()) {
    964         case NewObject:
    965         case MaterializeNewObject: {
     1019
     1020        if (exactRead) {
     1021            ASSERT(target);
     1022            ASSERT(writes.isEmpty());
     1023            if (Node* value = heapResolve(PromotedHeapLocation(target->identifier(), exactRead))) {
     1024                ASSERT(!value->replacement());
     1025                node->replaceWith(value);
     1026            }
     1027            Node* identifier = target->get(exactRead);
     1028            if (identifier)
     1029                m_heap.newPointer(node, identifier);
     1030        }
     1031
     1032        for (auto entry : writes) {
     1033            ASSERT(target);
     1034            if (entry.value.isNode())
     1035                target->set(entry.key, m_heap.follow(entry.value.asNode()));
     1036            else
     1037                target->remove(entry.key);
     1038            heapWrite(PromotedHeapLocation(target->identifier(), entry.key), entry.value);
     1039        }
     1040
     1041        m_heap.assertIsValid();
     1042    }
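The `exactRead`/`writes` handling above is essentially load/store forwarding over the localized heap: a write records the locally known value of a field, and an exact read that resolves against that record can replace the load node outright. A minimal standalone sketch of the idea, with a flat map standing in for the localized heap (the names here are illustrative only, not the phase's actual API):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <utility>

// Toy stand-in for the phase's localized heap:
// (allocation identifier, field name) -> locally known value.
using Field = std::pair<int, std::string>;
using LocalHeap = std::map<Field, int>;

// A write to a sinkable allocation just updates the local record,
// playing the role of the `writes` map filled in by handleNode().
void heapWrite(LocalHeap& heap, int allocation, const std::string& field, int value)
{
    heap[{allocation, field}] = value;
}

// An exact read resolves against the local record; on a hit, the load
// node could be replaced with the known value (cf. node->replaceWith()).
std::optional<int> heapResolve(const LocalHeap& heap, int allocation, const std::string& field)
{
    auto it = heap.find({allocation, field});
    if (it == heap.end())
        return std::nullopt;
    return it->second;
}
```

On a miss, the real phase keeps the load and tracks the result as a new pointer into the allocation; the sketch simply returns `std::nullopt`.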
     1043
     1044    bool determineSinkCandidates()
     1045    {
     1046        m_sinkCandidates.clear();
     1047        m_materializationToEscapee.clear();
     1048        m_materializationSiteToMaterializations.clear();
     1049        m_materializationSiteToRecoveries.clear();
     1050
     1051        // Logically we wish to consider every allocation and sink
     1052        // it. However, it is probably not profitable to sink an
     1053        // allocation that will always escape. So, we only sink an
     1054        // allocation if one of the following is true:
     1055        //
     1056        // 1) There exists a basic block with only backwards outgoing
     1057        //    edges (or no outgoing edges) in which the node wasn't
     1058        //    materialized. This is meant to catch
     1059        //    effectively-infinite loops in which we don't need to
     1060        //    have allocated the object.
     1061        //
     1062        // 2) There exists a basic block at the tail of which the node
     1063        //    is dead and not materialized.
     1064        //
     1065        // 3) The sum of execution counts of the materializations is
     1066        //    less than the sum of execution counts of the original
     1067        //    node.
     1068        //
     1069        // We currently implement only rule #2.
     1070        // FIXME: Implement the two other rules.
     1071        // https://bugs.webkit.org/show_bug.cgi?id=137073 (rule #1)
     1072        // https://bugs.webkit.org/show_bug.cgi?id=137074 (rule #3)
     1073        //
     1074        // However, these rules allow for a sunk object to be put into
     1075        // a non-sunk one, which we don't support. We could solve this
     1076        // by supporting PutHints on local allocations, making these
     1077        // objects only partially correct, and we would need to adapt
     1078        // the OSR availability analysis and OSR exit to handle
     1079        // this. This would be totally doable, but would create a
     1080        // super rare, and thus bug-prone, code path.
     1081        // So, instead, we need to implement one of the following
     1082        // closure rules:
     1083        //
     1084        // 1) If we put a sink candidate into a local allocation that
     1085        //    is not a sink candidate, change our minds and don't
     1086        //    actually sink the sink candidate.
     1087        //
     1088        // 2) If we put a sink candidate into a local allocation, that
     1089        //    allocation becomes a sink candidate as well.
     1090        //
     1091        // We currently choose to implement closure rule #2.
     1092        HashMap<Node*, Vector<Node*>> dependencies;
     1093        bool hasUnescapedReads = false;
     1094        for (BasicBlock* block : m_graph.blocksInPreOrder()) {
     1095            m_heap = m_heapAtHead[block];
     1096
     1097            for (Node* node : *block) {
     1098                handleNode(
     1099                    node,
     1100                    [&] (PromotedHeapLocation location, LazyNode value) {
     1101                        if (!value.isNode())
     1102                            return;
     1103
     1104                        Allocation* allocation = m_heap.onlyLocalAllocation(value.asNode());
     1105                        if (allocation && !allocation->isEscapedAllocation())
     1106                            dependencies.add(allocation->identifier(), Vector<Node*>()).iterator->value.append(location.base());
     1107                    },
     1108                    [&] (PromotedHeapLocation) -> Node* {
     1109                        hasUnescapedReads = true;
     1110                        return nullptr;
     1111                    });
     1112            }
     1113
      1114            // The sink candidates are initially the unescaped
      1115            // allocations dying at the tail of blocks
     1116            HashSet<Node*> allocations;
     1117            for (const auto& entry : m_heap.allocations()) {
     1118                if (!entry.value.isEscapedAllocation())
     1119                    allocations.add(entry.key);
     1120            }
     1121
     1122            m_heap.pruneByLiveness(m_combinedLiveness.liveAtTail[block]);
     1123
     1124            for (Node* identifier : allocations) {
     1125                if (!m_heap.isAllocation(identifier))
     1126                    m_sinkCandidates.add(identifier);
     1127            }
     1128        }
     1129
     1130        // Ensure that the set of sink candidates is closed for put operations
     1131        Vector<Node*> worklist;
     1132        worklist.appendRange(m_sinkCandidates.begin(), m_sinkCandidates.end());
     1133
     1134        while (!worklist.isEmpty()) {
     1135            for (Node* identifier : dependencies.get(worklist.takeLast())) {
     1136                if (m_sinkCandidates.add(identifier).isNewEntry)
     1137                    worklist.append(identifier);
     1138            }
     1139        }
     1140
     1141        if (m_sinkCandidates.isEmpty())
     1142            return hasUnescapedReads;
     1143
     1144        if (verbose)
     1145            dataLog("Candidates: ", listDump(m_sinkCandidates), "\n");
     1146
     1147        // Create the materialization nodes
     1148        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
     1149            m_heap = m_heapAtHead[block];
     1150            m_heap.setWantEscapees();
     1151
     1152            for (Node* node : *block) {
     1153                handleNode(
     1154                    node,
     1155                    [] (PromotedHeapLocation, LazyNode) { },
     1156                    [] (PromotedHeapLocation) -> Node* {
     1157                        return nullptr;
     1158                    });
     1159                auto escapees = m_heap.takeEscapees();
     1160                if (!escapees.isEmpty())
     1161                    placeMaterializations(escapees, node);
     1162            }
     1163
     1164            m_heap.pruneByLiveness(m_combinedLiveness.liveAtTail[block]);
     1165
     1166            {
     1167                HashMap<Node*, Allocation> escapingOnEdge;
     1168                for (const auto& entry : m_heap.allocations()) {
     1169                    if (entry.value.isEscapedAllocation())
     1170                        continue;
     1171
     1172                    bool mustEscape = false;
     1173                    for (BasicBlock* successorBlock : block->successors()) {
     1174                        if (!m_heapAtHead[successorBlock].isAllocation(entry.key)
     1175                            || m_heapAtHead[successorBlock].getAllocation(entry.key).isEscapedAllocation())
     1176                            mustEscape = true;
     1177                    }
     1178
     1179                    if (mustEscape)
     1180                        escapingOnEdge.add(entry.key, entry.value);
     1181                }
     1182                placeMaterializations(WTF::move(escapingOnEdge), block->terminal());
     1183            }
     1184        }
     1185
     1186        return hasUnescapedReads || !m_sinkCandidates.isEmpty();
     1187    }
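Closure rule #2 above is a transitive fixpoint: whenever a sink candidate is stored into another unescaped local allocation, that allocation is promoted to a candidate as well, and the promotion propagates through further containers. The worklist at the end of determineSinkCandidates() can be sketched in isolation like this (plain ints stand in for `Node*` identifiers; the names are illustrative, not the phase's actual data structures):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// dependencies maps an allocation to the local allocations it is stored
// into. Starting from the initial candidates (allocations dead and
// unescaped at some block tail), promote every containing allocation
// transitively, mirroring closure rule #2.
std::set<int> closeOverPuts(
    const std::map<int, std::vector<int>>& dependencies,
    std::set<int> candidates)
{
    std::vector<int> worklist(candidates.begin(), candidates.end());
    while (!worklist.empty()) {
        int current = worklist.back();
        worklist.pop_back();
        auto it = dependencies.find(current);
        if (it == dependencies.end())
            continue;
        for (int container : it->second) {
            // Newly promoted containers must be processed too.
            if (candidates.insert(container).second)
                worklist.push_back(container);
        }
    }
    return candidates;
}
```

With a chain "1 stored into 2, 2 stored into 3", seeding the closure with {1} promotes 2 and then 3, which is exactly why rule #2 keeps sunk objects out of non-sunk ones.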
     1188
     1189    void placeMaterializations(HashMap<Node*, Allocation> escapees, Node* where)
     1190    {
     1191        // We don't create materializations if the escapee is not a
     1192        // sink candidate
     1193        Vector<Node*> toRemove;
     1194        for (const auto& entry : escapees) {
     1195            if (!m_sinkCandidates.contains(entry.key))
     1196                toRemove.append(entry.key);
     1197        }
     1198        for (Node* identifier : toRemove)
     1199            escapees.remove(identifier);
     1200
     1201        if (escapees.isEmpty())
     1202            return;
     1203
     1204        // First collect the hints that will be needed when the node
     1205        // we materialize is still stored into other unescaped sink candidates
     1206        Vector<PromotedHeapLocation> hints;
     1207        for (const auto& entry : m_heap.allocations()) {
     1208            if (escapees.contains(entry.key))
     1209                continue;
     1210
     1211            for (const auto& field : entry.value.fields()) {
     1212                ASSERT(m_sinkCandidates.contains(entry.key) || !escapees.contains(field.value));
     1213                if (escapees.contains(field.value) && !field.key.neededForMaterialization())
     1214                    hints.append(PromotedHeapLocation(entry.key, field.key));
     1215            }
     1216        }
     1217
      1218        // Now we need to order the materializations. Any order is
     1219        // valid (as long as we materialize a node first if it is
     1220        // needed for the materialization of another node, e.g. a
     1221        // function's activation must be materialized before the
     1222        // function itself), but we want to try minimizing the number
     1223        // of times we have to place Puts to close cycles after a
     1224        // materialization. In other words, we are trying to find the
     1225        // minimum number of materializations to remove from the
     1226        // materialization graph to make it a DAG, known as the
      1227        // feedback vertex set problem. Unfortunately, this is an
      1228        // NP-hard problem, which we don't want to solve exactly.
     1229        //
      1230        // Instead, we use a simple greedy procedure that proceeds as
      1231        // follows:
     1232        //  - While there is at least one node with no outgoing edge
     1233        //    amongst the remaining materializations, materialize it
     1234        //    first
     1235        //
      1236        //  - Similarly, while there is at least one node with no
     1237        //    incoming edge amongst the remaining materializations,
     1238        //    materialize it last.
     1239        //
     1240        //  - When both previous conditions are false, we have an
     1241        //    actual cycle, and we need to pick a node to
     1242        //    materialize. We try greedily to remove the "pressure" on
     1243        //    the remaining nodes by choosing the node with maximum
     1244        //    |incoming edges| * |outgoing edges| as a measure of how
     1245        //    "central" to the graph it is. We materialize it first,
     1246        //    so that all the recoveries will be Puts of things into
     1247        //    it (rather than Puts of the materialization into other
     1248        //    objects), which means we will have a single
     1249        //    StoreBarrier.
     1250
     1251
     1252        // Compute dependencies between materializations
     1253        HashMap<Node*, HashSet<Node*>> dependencies;
     1254        HashMap<Node*, HashSet<Node*>> reverseDependencies;
     1255        HashMap<Node*, HashSet<Node*>> forMaterialization;
     1256        for (const auto& entry : escapees) {
     1257            auto& myDependencies = dependencies.add(entry.key, HashSet<Node*>()).iterator->value;
     1258            auto& myDependenciesForMaterialization = forMaterialization.add(entry.key, HashSet<Node*>()).iterator->value;
     1259            reverseDependencies.add(entry.key, HashSet<Node*>());
     1260            for (const auto& field : entry.value.fields()) {
     1261                if (escapees.contains(field.value) && field.value != entry.key) {
     1262                    myDependencies.add(field.value);
     1263                    reverseDependencies.add(field.value, HashSet<Node*>()).iterator->value.add(entry.key);
     1264                    if (field.key.neededForMaterialization())
     1265                        myDependenciesForMaterialization.add(field.value);
     1266                }
     1267            }
     1268        }
     1269
     1270        // Helper function to update the materialized set and the
     1271        // dependencies
     1272        HashSet<Node*> materialized;
     1273        auto materialize = [&] (Node* identifier) {
     1274            materialized.add(identifier);
     1275            for (Node* dep : dependencies.get(identifier))
     1276                reverseDependencies.find(dep)->value.remove(identifier);
     1277            for (Node* rdep : reverseDependencies.get(identifier)) {
     1278                dependencies.find(rdep)->value.remove(identifier);
     1279                forMaterialization.find(rdep)->value.remove(identifier);
     1280            }
     1281            dependencies.remove(identifier);
     1282            reverseDependencies.remove(identifier);
     1283            forMaterialization.remove(identifier);
     1284        };
     1285
     1286        // Nodes without remaining unmaterialized fields will be
     1287        // materialized first - amongst the remaining unmaterialized
     1288        // nodes
     1289        std::list<Allocation> toMaterialize;
     1290        auto firstPos = toMaterialize.begin();
     1291        auto materializeFirst = [&] (Allocation&& allocation) {
     1292            materialize(allocation.identifier());
     1293            // We need to insert *after* the current position
     1294            if (firstPos != toMaterialize.end())
     1295                ++firstPos;
     1296            firstPos = toMaterialize.insert(firstPos, WTF::move(allocation));
     1297        };
     1298
     1299        // Nodes that no other unmaterialized node points to will be
     1300        // materialized last - amongst the remaining unmaterialized
     1301        // nodes
     1302        auto lastPos = toMaterialize.end();
     1303        auto materializeLast = [&] (Allocation&& allocation) {
     1304            materialize(allocation.identifier());
     1305            lastPos = toMaterialize.insert(lastPos, WTF::move(allocation));
     1306        };
     1307
      1308        // These are the promoted locations that contain some of the
     1309        // allocations we are currently escaping. If they are a location on
     1310        // some other allocation we are currently materializing, we will need
     1311        // to "recover" their value with a real put once the corresponding
     1312        // allocation is materialized; if they are a location on some other
     1313        // not-yet-materialized allocation, we will need a PutHint.
     1314        Vector<PromotedHeapLocation> toRecover;
     1315
     1316        // This loop does the actual cycle breaking
     1317        while (!escapees.isEmpty()) {
     1318            materialized.clear();
     1319
     1320            // Materialize nodes that won't require recoveries if we can
     1321            for (auto& entry : escapees) {
     1322                if (!forMaterialization.find(entry.key)->value.isEmpty())
     1323                    continue;
     1324
     1325                if (dependencies.find(entry.key)->value.isEmpty()) {
     1326                    materializeFirst(WTF::move(entry.value));
     1327                    continue;
     1328                }
     1329
     1330                if (reverseDependencies.find(entry.key)->value.isEmpty()) {
     1331                    materializeLast(WTF::move(entry.value));
     1332                    continue;
     1333                }
     1334            }
     1335
     1336            // We reach this only if there is an actual cycle that needs
      1337            // breaking. Because we do not want to solve an NP-hard problem
     1338            // here, we just heuristically pick a node and materialize it
     1339            // first.
     1340            if (materialized.isEmpty()) {
     1341                uint64_t maxEvaluation = 0;
      1342                Allocation* bestAllocation = nullptr;
     1343                for (auto& entry : escapees) {
     1344                    if (!forMaterialization.find(entry.key)->value.isEmpty())
     1345                        continue;
     1346
     1347                    uint64_t evaluation =
     1348                        static_cast<uint64_t>(dependencies.get(entry.key).size()) * reverseDependencies.get(entry.key).size();
     1349                    if (evaluation > maxEvaluation) {
     1350                        maxEvaluation = evaluation;
     1351                        bestAllocation = &entry.value;
     1352                    }
     1353                }
     1354                RELEASE_ASSERT(maxEvaluation > 0);
     1355
     1356                materializeFirst(WTF::move(*bestAllocation));
     1357            }
     1358            RELEASE_ASSERT(!materialized.isEmpty());
     1359
     1360            for (Node* identifier : materialized)
     1361                escapees.remove(identifier);
     1362        }
     1363
     1364        materialized.clear();
     1365
     1366        HashSet<Node*> escaped;
     1367        for (const Allocation& allocation : toMaterialize)
     1368            escaped.add(allocation.identifier());
     1369        for (const Allocation& allocation : toMaterialize) {
     1370            for (const auto& field : allocation.fields()) {
     1371                if (escaped.contains(field.value) && !materialized.contains(field.value))
     1372                    toRecover.append(PromotedHeapLocation(allocation.identifier(), field.key));
     1373            }
     1374            materialized.add(allocation.identifier());
     1375        }
     1376
     1377        Vector<Node*>& materializations = m_materializationSiteToMaterializations.add(
     1378            where, Vector<Node*>()).iterator->value;
     1379
     1380        for (const Allocation& allocation : toMaterialize) {
     1381            Node* materialization = createMaterialization(allocation, where);
     1382            materializations.append(materialization);
     1383            m_materializationToEscapee.add(materialization, allocation.identifier());
     1384        }
     1385
     1386        if (!toRecover.isEmpty()) {
     1387            m_materializationSiteToRecoveries.add(
     1388                where, Vector<PromotedHeapLocation>()).iterator->value.appendVector(toRecover);
     1389        }
     1390
     1391        // The hints need to be after the "real" recoveries so that we
     1392        // don't hint not-yet-complete objects
     1393        if (!hints.isEmpty()) {
     1394            m_materializationSiteToRecoveries.add(
     1395                where, Vector<PromotedHeapLocation>()).iterator->value.appendVector(hints);
     1396        }
     1397    }
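The cycle-breaking order described above can be exercised standalone. Given an edge a → b whenever allocation a has a field pointing to b, the procedure front-loads nodes with no outgoing edges, back-loads nodes with no incoming edges, and, on a genuine cycle, materializes first the node maximizing |incoming edges| * |outgoing edges|. A simplified sketch under those rules (plain ints for identifiers; this is not the phase's actual implementation, which also tracks the neededForMaterialization subset):

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <map>
#include <set>
#include <vector>

// deps[a] = the allocations a's fields point to. Returns a valid
// materialization order following the greedy heuristic.
std::vector<int> orderMaterializations(std::map<int, std::set<int>> deps)
{
    // Reverse edges: rdeps[b] = every allocation with a field pointing at b.
    std::map<int, std::set<int>> rdeps;
    for (auto& entry : deps) {
        rdeps[entry.first];
        for (int target : entry.second)
            rdeps[target].insert(entry.first);
    }

    std::list<int> order;
    auto firstPos = order.begin();
    auto lastPos = order.end();
    auto erase = [&](int node) {
        for (int d : deps[node])
            rdeps[d].erase(node);
        for (int r : rdeps[node])
            deps[r].erase(node);
        deps.erase(node);
        rdeps.erase(node);
    };

    while (!deps.empty()) {
        std::vector<int> picked;
        for (auto& entry : deps) {
            if (entry.second.empty()) {
                // No outgoing edges: everything it needs exists, go first.
                if (firstPos != order.end())
                    ++firstPos;
                firstPos = order.insert(firstPos, entry.first);
                picked.push_back(entry.first);
            } else if (rdeps[entry.first].empty()) {
                // No incoming edges: nothing stores it, go last.
                lastPos = order.insert(lastPos, entry.first);
                picked.push_back(entry.first);
            }
        }
        if (picked.empty()) {
            // Genuine cycle: break it at the most "central" node.
            int best = -1;
            uint64_t bestScore = 0;
            for (auto& entry : deps) {
                uint64_t score = static_cast<uint64_t>(entry.second.size())
                    * rdeps[entry.first].size();
                if (score > bestScore) {
                    bestScore = score;
                    best = entry.first;
                }
            }
            assert(best != -1);
            if (firstPos != order.end())
                ++firstPos;
            firstPos = order.insert(firstPos, best);
            picked.push_back(best);
        }
        for (int node : picked)
            erase(node);
    }
    return { order.begin(), order.end() };
}
```

For a two-node cycle 1 ⇄ 2, neither node qualifies as first or last, so the heuristic picks one to materialize up front and the other follows once its edge is gone; the real phase then closes the cycle with a recovery Put.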
     1398
     1399    Node* createMaterialization(const Allocation& allocation, Node* where)
     1400    {
     1401        // FIXME: This is the only place where we actually use the
     1402        // fact that an allocation's identifier is indeed the node
     1403        // that created the allocation.
     1404        switch (allocation.kind()) {
     1405        case Allocation::Kind::Object: {
     966 1406            ObjectMaterializationData* data = m_graph.m_objectMaterializationData.add();
    967            
    968             result = m_graph.addNode(
    969                 escapee->prediction(), Node::VarArg, MaterializeNewObject,
     1407            StructureSet* set = m_graph.addStructureSet(allocation.structures());
     1408
     1409            return m_graph.addNode(
     1410                allocation.identifier()->prediction(), Node::VarArg, MaterializeNewObject,
     970 1411                NodeOrigin(
    971                     escapee->origin.semantic,
     1412                    allocation.identifier()->origin.semantic,
     972 1413                    where->origin.forExit),
    973                 OpInfo(data), OpInfo(), 0, 0);
    974             break;
    975         }
    976 
    977         case NewFunction:
    978             result = m_graph.addNode(
    979                 escapee->prediction(), NewFunction,
     1414                OpInfo(set), OpInfo(data), 0, 0);
     1415        }
     1416
     1417        case Allocation::Kind::Function: {
     1418            FrozenValue* executable = allocation.identifier()->cellOperand();
     1419
     1420            return m_graph.addNode(
     1421                allocation.identifier()->prediction(), NewFunction,
     980 1422                NodeOrigin(
    981                     escapee->origin.semantic,
     1423                    allocation.identifier()->origin.semantic,
     982 1424                    where->origin.forExit),
    983                 OpInfo(escapee->cellOperand()),
    984                 escapee->child1());
    985             break;
    986 
    987         case CreateActivation:
    988         case MaterializeCreateActivation: {
     1425                OpInfo(executable));
     1426            break;
     1427        }
     1428
     1429        case Allocation::Kind::Activation: {
     989 1430            ObjectMaterializationData* data = m_graph.m_objectMaterializationData.add();
    990             FrozenValue* symbolTable = escapee->cellOperand();
    991             result = m_graph.addNode(
    992                 escapee->prediction(), Node::VarArg, MaterializeCreateActivation,
     1431            FrozenValue* symbolTable = allocation.identifier()->cellOperand();
     1432
     1433            return m_graph.addNode(
     1434                allocation.identifier()->prediction(), Node::VarArg, MaterializeCreateActivation,
     993 1435                NodeOrigin(
    994                     escapee->origin.semantic,
     1436                    allocation.identifier()->origin.semantic,
     995 1437                    where->origin.forExit),
    996                 OpInfo(data), OpInfo(symbolTable), 0, 0);
    997             break;
     1438                OpInfo(symbolTable), OpInfo(data), 0, 0);
     998 1439        }
     999 1440
    1000 1441        default:
    1001             DFG_CRASH(m_graph, escapee, "Bad escapee op");
    1002             break;
    1003         }
    1004        
    1005         if (verbose)
    1006             dataLog("Creating materialization point at ", where, " for ", escapee, ": ", result, "\n");
    1007        
    1008         m_materializationToEscapee.add(result, escapee);
    1009         m_materializationSiteToMaterializations.add(
    1010             where, Vector<Node*>()).iterator->value.append(result);
    1011        
     1442            DFG_CRASH(m_graph, allocation.identifier(), "Bad allocation kind");
     1443        }
     1444    }
     1445
     1446    void promoteLocalHeap()
     1447    {
     1448        // Collect the set of heap locations that we will be operating
     1449        // over.
     1450        HashSet<PromotedHeapLocation> locations;
     1451        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
     1452            m_heap = m_heapAtHead[block];
     1453
     1454            for (Node* node : *block) {
     1455                handleNode(
     1456                    node,
     1457                    [&] (PromotedHeapLocation location, LazyNode) {
     1458                        // If the location is not on a sink candidate,
     1459                        // we only sink it if it is read
     1460                        if (m_sinkCandidates.contains(location.base()))
     1461                            locations.add(location);
     1462                    },
     1463                    [&] (PromotedHeapLocation location) -> Node* {
     1464                        locations.add(location);
     1465                        return nullptr;
     1466                    });
     1467            }
     1468        }
     1469
     1470        // Figure out which locations belong to which allocations.
     1471        m_locationsForAllocation.clear();
     1472        for (PromotedHeapLocation location : locations) {
     1473            auto result = m_locationsForAllocation.add(
     1474                location.base(),
     1475                Vector<PromotedHeapLocation>());
     1476            ASSERT(!result.iterator->value.contains(location));
     1477            result.iterator->value.append(location);
     1478        }
     1479
     1480        m_pointerSSA.reset();
     1481        m_allocationSSA.reset();
     1482
     1483        // Collect the set of "variables" that we will be sinking.
     1484        m_locationToVariable.clear();
     1485        m_nodeToVariable.clear();
     1486        Vector<Node*> indexToNode;
     1487        Vector<PromotedHeapLocation> indexToLocation;
     1488
     1489        for (Node* index : m_sinkCandidates) {
     1490            SSACalculator::Variable* variable = m_allocationSSA.newVariable();
     1491            m_nodeToVariable.add(index, variable);
     1492            ASSERT(indexToNode.size() == variable->index());
     1493            indexToNode.append(index);
     1494        }
     1495
     1496        for (PromotedHeapLocation location : locations) {
     1497            SSACalculator::Variable* variable = m_pointerSSA.newVariable();
     1498            m_locationToVariable.add(location, variable);
     1499            ASSERT(indexToLocation.size() == variable->index());
     1500            indexToLocation.append(location);
     1501        }
     1502
      1503        // We insert all required constants at the top of block 0 so that
     1504        // they are inserted only once and we don't clutter the graph
     1505        // with useless constants everywhere
     1506        HashMap<FrozenValue*, Node*> lazyMapping;
     1507        if (!m_bottom)
     1508            m_bottom = m_insertionSet.insertConstant(0, NodeOrigin(), jsNumber(1927));
     1509        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
     1510            m_heap = m_heapAtHead[block];
     1511
     1512            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
     1513                Node* node = block->at(nodeIndex);
     1514
     1515                // Some named properties can be added conditionally,
     1516                // and that would necessitate bottoms.
     1517                for (PromotedHeapLocation location : m_locationsForAllocation.get(node)) {
     1518                    if (location.kind() != NamedPropertyPLoc)
     1519                        continue;
     1520
     1521                    SSACalculator::Variable* variable = m_locationToVariable.get(location);
     1522                    m_pointerSSA.newDef(variable, block, m_bottom);
     1523                }
     1524
     1525                for (Node* materialization : m_materializationSiteToMaterializations.get(node)) {
     1526                    Node* escapee = m_materializationToEscapee.get(materialization);
     1527                    m_allocationSSA.newDef(m_nodeToVariable.get(escapee), block, materialization);
     1528                }
     1529
     1530                if (m_sinkCandidates.contains(node))
     1531                    m_allocationSSA.newDef(m_nodeToVariable.get(node), block, node);
     1532
     1533                handleNode(
     1534                    node,
     1535                    [&] (PromotedHeapLocation location, LazyNode value) {
     1536                        if (!locations.contains(location))
     1537                            return;
     1538
     1539                        Node* nodeValue;
     1540                        if (value.isNode())
     1541                            nodeValue = value.asNode();
     1542                        else {
     1543                            auto iter = lazyMapping.find(value.asValue());
     1544                            if (iter != lazyMapping.end())
     1545                                nodeValue = iter->value;
     1546                            else {
     1547                                nodeValue = value.ensureIsNode(
     1548                                    m_insertionSet, m_graph.block(0), 0);
     1549                                lazyMapping.add(value.asValue(), nodeValue);
     1550                            }
     1551                        }
     1552
     1553                        SSACalculator::Variable* variable = m_locationToVariable.get(location);
     1554                        m_pointerSSA.newDef(variable, block, nodeValue);
     1555                    },
     1556                    [] (PromotedHeapLocation) -> Node* {
     1557                        return nullptr;
     1558                    });
     1559            }
     1560        }
     1561        m_insertionSet.execute(m_graph.block(0));
     1562
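The lazyMapping table above deduplicates lazily-created constants: each FrozenValue gets at most one node, inserted at the top of block 0 and reused everywhere else. A minimal standalone sketch of that memoization pattern (the types here are illustrative stand-ins, not the DFG API):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Hypothetical stand-ins for FrozenValue* and Node*.
using Value = int;
struct Node { Value value; };

// Memoized "insert constant once" helper: the first request for a value
// creates a node; later requests reuse it, mirroring lazyMapping above.
struct ConstantPool {
    std::unordered_map<Value, Node*> lazyMapping;
    std::vector<Node> storage;

    ConstantPool() { storage.reserve(64); } // keep Node* stable for this sketch

    Node* ensureConstant(Value v)
    {
        auto iter = lazyMapping.find(v);
        if (iter != lazyMapping.end())
            return iter->second;
        storage.push_back(Node { v });
        Node* node = &storage.back();
        lazyMapping.emplace(v, node);
        return node;
    }
};
```

Requesting the same value twice returns the same node, so the graph carries one constant per distinct value.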
     1563        // Run the SSA calculators to create Phis
     1564        m_pointerSSA.computePhis(
     1565            [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
     1566                PromotedHeapLocation location = indexToLocation[variable->index()];
     1567
     1568                // Don't create Phi nodes for fields of dead allocations
     1569                if (!m_heapAtHead[block].isAllocation(location.base()))
     1570                    return nullptr;
     1571
      1572                // Don't create Phi nodes once the allocation has escaped
     1573                if (m_heapAtHead[block].getAllocation(location.base()).isEscapedAllocation())
     1574                    return nullptr;
     1575
     1576                // If we point to a single allocation, we will
     1577                // directly use its materialization
     1578                if (m_heapAtHead[block].follow(location))
     1579                    return nullptr;
     1580
     1581                Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, NodeOrigin());
     1582                phiNode->mergeFlags(NodeResultJS);
     1583                return phiNode;
     1584            });
     1585
     1586        m_allocationSSA.computePhis(
     1587            [&] (SSACalculator::Variable* variable, BasicBlock* block) -> Node* {
     1588                Node* identifier = indexToNode[variable->index()];
     1589
     1590                // Don't create Phi nodes for dead allocations
     1591                if (!m_heapAtHead[block].isAllocation(identifier))
     1592                    return nullptr;
     1593
      1594                // Don't create Phi nodes until the allocation has escaped
     1595                if (!m_heapAtHead[block].getAllocation(identifier).isEscapedAllocation())
     1596                    return nullptr;
     1597
     1598                Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, NodeOrigin());
     1599                phiNode->mergeFlags(NodeResultJS);
     1600                return phiNode;
     1601            });
     1602
     1603        // Place Phis in the right places, replace all uses of any load with the appropriate
     1604        // value, and create the materialization nodes.
     1605        LocalOSRAvailabilityCalculator availabilityCalculator;
     1606        m_graph.clearReplacements();
     1607        for (BasicBlock* block : m_graph.blocksInPreOrder()) {
     1608            m_heap = m_heapAtHead[block];
     1609            availabilityCalculator.beginBlock(block);
     1610
     1611            // These mapping tables are intended to be lazy. If
     1612            // something is omitted from the table, it means that
     1613            // there haven't been any local stores to the promoted
     1614            // heap location (or any local materialization).
     1615            m_localMapping.clear();
     1616            m_escapeeToMaterialization.clear();
     1617
     1618            // Insert the Phi functions that we had previously
     1619            // created.
     1620            for (SSACalculator::Def* phiDef : m_pointerSSA.phisForBlock(block)) {
     1621                SSACalculator::Variable* variable = phiDef->variable();
     1622                m_insertionSet.insert(0, phiDef->value());
     1623
     1624                PromotedHeapLocation location = indexToLocation[variable->index()];
     1625                m_localMapping.set(location, phiDef->value());
     1626
     1627                if (m_sinkCandidates.contains(location.base())) {
     1628                    m_insertionSet.insert(
     1629                        0, location.createHint(m_graph, NodeOrigin(), phiDef->value()));
     1630                }
     1631            }
     1632
     1633            for (SSACalculator::Def* phiDef : m_allocationSSA.phisForBlock(block)) {
     1634                SSACalculator::Variable* variable = phiDef->variable();
     1635                m_insertionSet.insert(0, phiDef->value());
     1636
     1637                Node* identifier = indexToNode[variable->index()];
     1638                m_escapeeToMaterialization.add(identifier, phiDef->value());
     1639                insertOSRHintsForUpdate(0, NodeOrigin(), availabilityCalculator.m_availability, identifier, phiDef->value());
     1640            }
     1641
     1642            if (verbose) {
     1643                dataLog("Local mapping at ", pointerDump(block), ": ", mapDump(m_localMapping), "\n");
     1644                dataLog("Local materializations at ", pointerDump(block), ": ", mapDump(m_escapeeToMaterialization), "\n");
     1645            }
     1646
     1647            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
     1648                Node* node = block->at(nodeIndex);
     1649                for (PromotedHeapLocation location : m_locationsForAllocation.get(node)) {
     1650                    if (location.kind() != NamedPropertyPLoc)
     1651                        continue;
     1652
     1653                    m_localMapping.set(location, m_bottom);
     1654
     1655                    if (m_sinkCandidates.contains(node)) {
     1656                        m_insertionSet.insert(
     1657                            nodeIndex + 1,
     1658                            location.createHint(m_graph, node->origin, m_bottom));
     1659                    }
     1660                }
     1661
     1662                for (Node* materialization : m_materializationSiteToMaterializations.get(node)) {
     1663                    Node* escapee = m_materializationToEscapee.get(materialization);
     1664                    populateMaterialization(block, materialization, escapee);
     1665                    m_escapeeToMaterialization.set(escapee, materialization);
     1666                    m_insertionSet.insert(nodeIndex, materialization);
     1667                    if (verbose)
     1668                        dataLog("Materializing ", escapee, " => ", materialization, " at ", node, "\n");
     1669                }
     1670
     1671                for (PromotedHeapLocation location : m_materializationSiteToRecoveries.get(node))
     1672                    m_insertionSet.insert(nodeIndex, createRecovery(block, location, node));
     1673
      1674                // We need to put the OSR hints after the recoveries,
      1675                // because we only want the hints once the object is
      1676                // complete.
     1677                for (Node* materialization : m_materializationSiteToMaterializations.get(node)) {
     1678                    Node* escapee = m_materializationToEscapee.get(materialization);
     1679                    insertOSRHintsForUpdate(
     1680                        nodeIndex, node->origin,
     1681                        availabilityCalculator.m_availability, escapee, materialization);
     1682                }
     1683
     1684                if (m_sinkCandidates.contains(node))
     1685                    m_escapeeToMaterialization.set(node, node);
     1686
     1687                availabilityCalculator.executeNode(node);
     1688
     1689                bool doLower = false;
     1690                handleNode(
     1691                    node,
     1692                    [&] (PromotedHeapLocation location, LazyNode value) {
     1693                        if (!locations.contains(location))
     1694                            return;
     1695
     1696                        Node* nodeValue;
     1697                        if (value.isNode())
     1698                            nodeValue = value.asNode();
     1699                        else
     1700                            nodeValue = lazyMapping.get(value.asValue());
     1701
     1702                        nodeValue = resolve(block, nodeValue);
     1703
     1704                        m_localMapping.set(location, nodeValue);
     1705
     1706                        if (!m_sinkCandidates.contains(location.base()))
     1707                            return;
     1708
     1709                        doLower = true;
     1710
     1711                        m_insertionSet.insert(nodeIndex + 1,
     1712                            location.createHint(m_graph, node->origin, nodeValue));
     1713                    },
     1714                    [&] (PromotedHeapLocation location) -> Node* {
     1715                        return resolve(block, location);
     1716                    });
     1717
     1718                if (m_sinkCandidates.contains(node) || doLower) {
     1719                    switch (node->op()) {
     1720                    case NewObject:
     1721                    case MaterializeNewObject:
     1722                        node->convertToPhantomNewObject();
     1723                        break;
     1724
     1725                    case NewFunction:
     1726                        node->convertToPhantomNewFunction();
     1727                        break;
     1728
     1729                    case CreateActivation:
     1730                    case MaterializeCreateActivation:
     1731                        node->convertToPhantomCreateActivation();
     1732                        break;
     1733
     1734                    default:
     1735                        node->remove();
     1736                        break;
     1737                    }
     1738                }
     1739
     1740                m_graph.doToChildren(
     1741                    node,
     1742                    [&] (Edge& edge) {
     1743                        edge.setNode(resolve(block, edge.node()));
     1744                    });
     1745            }
     1746
     1747            // Gotta drop some Upsilons.
     1748            NodeAndIndex terminal = block->findTerminal();
     1749            size_t upsilonInsertionPoint = terminal.index;
     1750            NodeOrigin upsilonOrigin = terminal.node->origin;
     1751            for (BasicBlock* successorBlock : block->successors()) {
     1752                for (SSACalculator::Def* phiDef : m_pointerSSA.phisForBlock(successorBlock)) {
     1753                    Node* phiNode = phiDef->value();
     1754                    SSACalculator::Variable* variable = phiDef->variable();
     1755                    PromotedHeapLocation location = indexToLocation[variable->index()];
     1756                    Node* incoming = resolve(block, location);
     1757
     1758                    m_insertionSet.insertNode(
     1759                        upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
     1760                        OpInfo(phiNode), incoming->defaultEdge());
     1761                }
     1762
     1763                for (SSACalculator::Def* phiDef : m_allocationSSA.phisForBlock(successorBlock)) {
     1764                    Node* phiNode = phiDef->value();
     1765                    SSACalculator::Variable* variable = phiDef->variable();
     1766                    Node* incoming = getMaterialization(block, indexToNode[variable->index()]);
     1767
     1768                    m_insertionSet.insertNode(
     1769                        upsilonInsertionPoint, SpecNone, Upsilon, upsilonOrigin,
     1770                        OpInfo(phiNode), incoming->defaultEdge());
     1771                }
     1772            }
     1773
     1774            m_insertionSet.execute(block);
     1775        }
     1776    }
     1777
     1778    Node* resolve(BasicBlock* block, PromotedHeapLocation location)
     1779    {
     1780        // If we are currently pointing to a single local allocation,
     1781        // simply return the associated materialization.
     1782        if (Node* identifier = m_heap.follow(location))
     1783            return getMaterialization(block, identifier);
     1784
     1785        if (Node* result = m_localMapping.get(location))
     1786            return result;
     1787
     1788        // This implies that there is no local mapping. Find a non-local mapping.
     1789        SSACalculator::Def* def = m_pointerSSA.nonLocalReachingDef(
     1790            block, m_locationToVariable.get(location));
     1791        ASSERT(def);
     1792        ASSERT(def->value());
     1793
     1794        Node* result = def->value();
     1795
     1796        ASSERT(!result->replacement());
     1797
     1798        m_localMapping.add(location, result);
    10121799        return result;
    10131800    }
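resolve() above consults the block-local mapping first and only falls back to the SSA calculator's non-local reaching def, caching the answer so later lookups in the same block stay cheap. A sketch of that two-level lookup, with illustrative types in place of PromotedHeapLocation and the SSACalculator:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Illustrative stand-ins for PromotedHeapLocation and Node*.
using Location = std::string;
using NodeId = int;

// Two-level lookup mirroring resolve(): hit the block-local mapping first,
// otherwise ask the (expensive) non-local reaching def and cache the result.
struct Resolver {
    std::unordered_map<Location, NodeId> localMapping;
    std::function<NodeId(const Location&)> nonLocalReachingDef;
    int nonLocalQueries = 0;

    NodeId resolve(const Location& location)
    {
        auto iter = localMapping.find(location);
        if (iter != localMapping.end())
            return iter->second;
        ++nonLocalQueries;
        NodeId result = nonLocalReachingDef(location);
        localMapping.emplace(location, result);
        return result;
    }
};
```

The cache is per-block in the phase (m_localMapping is cleared at each block head), which this sketch does not model.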
    1014    
    1015     void populateMaterialize(BasicBlock* block, Node* node, Node* escapee)
    1016     {
     1801
     1802    Node* resolve(BasicBlock* block, Node* node)
     1803    {
     1804        // If we are currently pointing to a single local allocation,
     1805        // simply return the associated materialization.
     1806        if (Node* identifier = m_heap.follow(node))
     1807            return getMaterialization(block, identifier);
     1808
     1809        if (node->replacement())
     1810            node = node->replacement();
     1811        ASSERT(!node->replacement());
     1812
     1813        return node;
     1814    }
     1815
     1816    Node* getMaterialization(BasicBlock* block, Node* identifier)
     1817    {
     1818        ASSERT(m_heap.isAllocation(identifier));
     1819        if (!m_sinkCandidates.contains(identifier))
     1820            return identifier;
     1821
     1822        if (Node* materialization = m_escapeeToMaterialization.get(identifier))
     1823            return materialization;
     1824
     1825        SSACalculator::Def* def = m_allocationSSA.nonLocalReachingDef(
     1826            block, m_nodeToVariable.get(identifier));
     1827        ASSERT(def && def->value());
     1828        m_escapeeToMaterialization.add(identifier, def->value());
     1829        ASSERT(!def->value()->replacement());
     1830        return def->value();
     1831    }
     1832
     1833    void insertOSRHintsForUpdate(unsigned nodeIndex, NodeOrigin origin, AvailabilityMap& availability, Node* escapee, Node* materialization)
     1834    {
     1835        // We need to follow() the value in the heap.
     1836        // Consider the following graph:
     1837        //
     1838        // Block #0
     1839        //   0: NewObject({})
     1840        //   1: NewObject({})
     1841        //   -: PutByOffset(@0, @1, x:0)
     1842        //   -: PutStructure(@0, {x:0})
     1843        //   2: GetByOffset(@0, x:0)
     1844        //   -: MovHint(@2, loc1)
     1845        //   -: Branch(#1, #2)
     1846        //
     1847        // Block #1
     1848        //   3: Call(f, @1)
     1849        //   4: Return(@0)
     1850        //
     1851        // Block #2
     1852        //   -: Return(undefined)
     1853        //
     1854        // We need to materialize @1 at @3, and when doing so we need
     1855        // to insert a MovHint for the materialization into loc1 as
     1856        // well.
     1857        // In order to do this, we say that we need to insert an
     1858        // update hint for any availability whose node resolve()s to
     1859        // the materialization.
     1860        for (auto entry : availability.m_heap) {
     1861            if (!entry.value.hasNode())
     1862                continue;
     1863            if (m_heap.follow(entry.value.node()) != escapee)
     1864                continue;
     1865
     1866            m_insertionSet.insert(
     1867                nodeIndex, entry.key.createHint(m_graph, origin, materialization));
     1868        }
     1869
     1870        for (unsigned i = availability.m_locals.size(); i--;) {
     1871            if (!availability.m_locals[i].hasNode())
     1872                continue;
     1873            if (m_heap.follow(availability.m_locals[i].node()) != escapee)
     1874                continue;
     1875
     1876            int operand = availability.m_locals.operandForIndex(i);
     1877            m_insertionSet.insertNode(
     1878                nodeIndex, SpecNone, MovHint, origin, OpInfo(operand),
     1879                materialization->defaultEdge());
     1880        }
     1881    }
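insertOSRHintsForUpdate walks every availability entry and, for each one whose node follow()s to the escapee, emits a hint naming the fresh materialization. A standalone sketch of that scan, with a plain map standing in for AvailabilityMap and a callback standing in for m_heap.follow():

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative stand-ins for OSR availability entries and Node*.
using NodeId = int;
struct Hint { std::string location; NodeId node; };

// Emit one update hint per availability entry that resolves to the escapee,
// mirroring the loops in insertOSRHintsForUpdate above.
std::vector<Hint> hintsForUpdate(
    const std::map<std::string, NodeId>& availability,
    const std::function<NodeId(NodeId)>& follow,
    NodeId escapee, NodeId materialization)
{
    std::vector<Hint> hints;
    for (const auto& entry : availability) {
        if (follow(entry.second) != escapee)
            continue;
        hints.push_back(Hint { entry.first, materialization });
    }
    return hints;
}
```

In the graph example above, the availability entry for loc1 holds @2, which follows to @1; so materializing @1 at @3 emits a hint updating loc1 to the materialization.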
     1882
     1883    void populateMaterialization(BasicBlock* block, Node* node, Node* escapee)
     1884    {
     1885        Allocation& allocation = m_heap.getAllocation(escapee);
    10171886        switch (node->op()) {
    10181887        case MaterializeNewObject: {
    10191888            ObjectMaterializationData& data = node->objectMaterializationData();
    10201889            unsigned firstChild = m_graph.m_varArgChildren.size();
    1021            
     1890
    10221891            Vector<PromotedHeapLocation> locations = m_locationsForAllocation.get(escapee);
    1023            
    1024             PromotedHeapLocation structure(StructurePLoc, escapee);
     1892
     1893            PromotedHeapLocation structure(StructurePLoc, allocation.identifier());
    10251894            ASSERT(locations.contains(structure));
    1026            
     1895
    10271896            m_graph.m_varArgChildren.append(Edge(resolve(block, structure), KnownCellUse));
    1028            
    1029             for (unsigned i = 0; i < locations.size(); ++i) {
    1030                 switch (locations[i].kind()) {
    1031                 case StructurePLoc: {
    1032                     ASSERT(locations[i] == structure);
     1897
     1898            for (PromotedHeapLocation location : locations) {
     1899                switch (location.kind()) {
     1900                case StructurePLoc:
     1901                    ASSERT(location == structure);
    10331902                    break;
    1034                 }
    1035                    
     1903
    10361904                case NamedPropertyPLoc: {
    1037                     Node* value = resolve(block, locations[i]);
    1038                     if (value->op() == BottomValue) {
    1039                         // We can skip Bottoms entirely.
    1040                         break;
    1041                     }
    1042                    
    1043                     data.m_properties.append(PhantomPropertyValue(locations[i].info()));
    1044                     m_graph.m_varArgChildren.append(value);
     1905                    ASSERT(location.base() == allocation.identifier());
     1906                    data.m_properties.append(PhantomPropertyValue(location.info()));
     1907                    Node* value = resolve(block, location);
     1908                    if (m_sinkCandidates.contains(value))
     1909                        m_graph.m_varArgChildren.append(m_bottom);
     1910                    else
     1911                        m_graph.m_varArgChildren.append(value);
    10451912                    break;
    10461913                }
    1047                    
     1914
    10481915                default:
    10491916                    DFG_CRASH(m_graph, node, "Bad location kind");
    10501917                }
    10511918            }
    1052            
     1919
    10531920            node->children = AdjacencyList(
    10541921                AdjacencyList::Variable,
     
    10641931            Vector<PromotedHeapLocation> locations = m_locationsForAllocation.get(escapee);
    10651932
    1066             PromotedHeapLocation scope(ActivationScopePLoc, escapee);
    1067             PromotedHeapLocation symbolTable(ActivationSymbolTablePLoc, escapee);
     1933            PromotedHeapLocation symbolTable(ActivationSymbolTablePLoc, allocation.identifier());
     1934            ASSERT(locations.contains(symbolTable));
     1935            ASSERT(node->cellOperand() == resolve(block, symbolTable)->constant());
     1936            m_graph.m_varArgChildren.append(Edge(resolve(block, symbolTable), KnownCellUse));
     1937
     1938            PromotedHeapLocation scope(ActivationScopePLoc, allocation.identifier());
    10681939            ASSERT(locations.contains(scope));
    1069 
    10701940            m_graph.m_varArgChildren.append(Edge(resolve(block, scope), KnownCellUse));
    10711941
    1072             for (unsigned i = 0; i < locations.size(); ++i) {
    1073                 switch (locations[i].kind()) {
     1942            for (PromotedHeapLocation location : locations) {
     1943                switch (location.kind()) {
    10741944                case ActivationScopePLoc: {
    1075                     ASSERT(locations[i] == scope);
     1945                    ASSERT(location == scope);
    10761946                    break;
    10771947                }
    10781948
    10791949                case ActivationSymbolTablePLoc: {
    1080                     ASSERT(locations[i] == symbolTable);
     1950                    ASSERT(location == symbolTable);
    10811951                    break;
    10821952                }
    10831953
    10841954                case ClosureVarPLoc: {
    1085                     Node* value = resolve(block, locations[i]);
    1086                     if (value->op() == BottomValue)
    1087                         break;
    1088 
    1089                     data.m_properties.append(PhantomPropertyValue(locations[i].info()));
    1090                     m_graph.m_varArgChildren.append(value);
     1955                    ASSERT(location.base() == allocation.identifier());
     1956                    data.m_properties.append(PhantomPropertyValue(location.info()));
     1957                    Node* value = resolve(block, location);
     1958                    if (m_sinkCandidates.contains(value))
     1959                        m_graph.m_varArgChildren.append(m_bottom);
     1960                    else
     1961                        m_graph.m_varArgChildren.append(value);
    10911962                    break;
    10921963                }
     
    11041975
    11051976        case NewFunction: {
    1106             if (!ASSERT_DISABLED) {
    1107                 Vector<PromotedHeapLocation> locations = m_locationsForAllocation.get(escapee);
    1108 
    1109                 ASSERT(locations.size() == 2);
    1110 
    1111                 PromotedHeapLocation executable(FunctionExecutablePLoc, escapee);
    1112                 ASSERT(locations.contains(executable));
    1113 
    1114                 PromotedHeapLocation activation(FunctionActivationPLoc, escapee);
    1115                 ASSERT(locations.contains(activation));
    1116 
    1117                 for (unsigned i = 0; i < locations.size(); ++i) {
    1118                     switch (locations[i].kind()) {
    1119                     case FunctionExecutablePLoc: {
    1120                         ASSERT(locations[i] == executable);
    1121                         break;
    1122                     }
    1123 
    1124                     case FunctionActivationPLoc: {
    1125                         ASSERT(locations[i] == activation);
    1126                         break;
    1127                     }
    1128 
    1129                     default:
    1130                         DFG_CRASH(m_graph, node, "Bad location kind");
    1131                     }
    1132                 }
    1133             }
    1134 
     1977            Vector<PromotedHeapLocation> locations = m_locationsForAllocation.get(escapee);
     1978            ASSERT(locations.size() == 2);
     1979
     1980            PromotedHeapLocation executable(FunctionExecutablePLoc, allocation.identifier());
     1981            ASSERT_UNUSED(executable, locations.contains(executable));
     1982
     1983            PromotedHeapLocation activation(FunctionActivationPLoc, allocation.identifier());
     1984            ASSERT(locations.contains(activation));
     1985
     1986            node->child1() = Edge(resolve(block, activation), KnownCellUse);
    11351987            break;
    11361988        }
     
    11381990        default:
    11391991            DFG_CRASH(m_graph, node, "Bad materialize op");
    1140             break;
    1141         }
    1142     }
    1143    
     1992        }
     1993    }
     1994
     1995    Node* createRecovery(BasicBlock* block, PromotedHeapLocation location, Node* where)
     1996    {
     1997        if (verbose)
     1998            dataLog("Recovering ", location, " at ", where, "\n");
     1999        ASSERT(location.base()->isPhantomAllocation());
     2000        Node* base = getMaterialization(block, location.base());
     2001        Node* value = resolve(block, location);
     2002
     2003        if (verbose)
     2004            dataLog("Base is ", base, " and value is ", value, "\n");
     2005
     2006        if (base->isPhantomAllocation()) {
     2007            return PromotedHeapLocation(base, location.descriptor()).createHint(
     2008                m_graph,
     2009                NodeOrigin(
     2010                    base->origin.semantic,
     2011                    where->origin.forExit),
     2012                value);
     2013        }
     2014
     2015        switch (location.kind()) {
     2016        case NamedPropertyPLoc: {
     2017            Allocation& allocation = m_heap.getAllocation(location.base());
     2018
     2019            Vector<Structure*> structures;
     2020            structures.appendRange(allocation.structures().begin(), allocation.structures().end());
     2021            unsigned identifierNumber = location.info();
     2022            UniquedStringImpl* uid = m_graph.identifiers()[identifierNumber];
     2023
     2024            std::sort(
     2025                structures.begin(),
     2026                structures.end(),
     2027                [uid] (Structure *a, Structure* b) -> bool {
     2028                    return a->getConcurrently(uid) < b->getConcurrently(uid);
     2029                });
     2030
     2031            PropertyOffset firstOffset = structures[0]->getConcurrently(uid);
     2032
     2033            if (firstOffset == structures.last()->getConcurrently(uid)) {
     2034                Node* storage = base;
     2035                // FIXME: When we decide to sink objects with a
     2036                // property storage, we should handle non-inline offsets.
     2037                RELEASE_ASSERT(isInlineOffset(firstOffset));
     2038
     2039                StorageAccessData* data = m_graph.m_storageAccessData.add();
     2040                data->offset = firstOffset;
     2041                data->identifierNumber = identifierNumber;
     2042
     2043                return m_graph.addNode(
     2044                    SpecNone,
     2045                    PutByOffset,
     2046                    where->origin,
     2047                    OpInfo(data),
     2048                    Edge(storage, KnownCellUse),
     2049                    Edge(base, KnownCellUse),
     2050                    value->defaultEdge());
     2051            }
     2052
     2053            MultiPutByOffsetData* data = m_graph.m_multiPutByOffsetData.add();
     2054            data->identifierNumber = identifierNumber;
     2055
     2056            {
     2057                PropertyOffset currentOffset = firstOffset;
     2058                StructureSet currentSet;
     2059                for (Structure* structure : structures) {
     2060                    PropertyOffset offset = structure->getConcurrently(uid);
     2061                    if (offset != currentOffset) {
     2062                        data->variants.append(
     2063                            PutByIdVariant::replace(currentSet, currentOffset));
     2064                        currentOffset = offset;
     2065                        currentSet.clear();
     2066                    }
     2067                    currentSet.add(structure);
     2068                }
     2069                data->variants.append(PutByIdVariant::replace(currentSet, currentOffset));
     2070            }
     2071
     2072            return m_graph.addNode(
     2073                SpecNone,
     2074                MultiPutByOffset,
     2075                NodeOrigin(
     2076                    base->origin.semantic,
     2077                    where->origin.forExit),
     2078                OpInfo(data),
     2079                Edge(base, KnownCellUse),
     2080                value->defaultEdge());
     2081            break;
     2082        }
     2083
     2084        case ClosureVarPLoc: {
     2085            return m_graph.addNode(
     2086                SpecNone,
     2087                PutClosureVar,
     2088                NodeOrigin(
     2089                    base->origin.semantic,
     2090                    where->origin.forExit),
     2091                OpInfo(location.info()),
     2092                Edge(base, KnownCellUse),
     2093                value->defaultEdge());
     2094            break;
     2095        }
     2096
     2097        default:
     2098            DFG_CRASH(m_graph, base, "Bad location kind");
     2099            break;
     2100        }
     2101    }
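For the NamedPropertyPLoc recovery, createRecovery sorts the allocation's structure set by the offset each structure stores the property at, then groups runs of equal offsets into one PutByIdVariant each (collapsing to a single PutByOffset when all offsets agree). A standalone sketch of that grouping step, with pairs of (structure id, offset) standing in for Structure* and getConcurrently():

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative stand-ins for Structure* and PutByIdVariant.
using Offset = int;
struct Variant { std::vector<int> structures; Offset offset; };

// Sort structures by the offset they store the property at, then group
// consecutive equal offsets into one variant, as createRecovery does when
// building MultiPutByOffsetData. Assumes at least one structure, as the
// phase does.
std::vector<Variant> groupByOffset(std::vector<std::pair<int, Offset>> structures)
{
    std::sort(structures.begin(), structures.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });

    std::vector<Variant> variants;
    Offset currentOffset = structures[0].second;
    std::vector<int> currentSet;
    for (const auto& [structure, offset] : structures) {
        if (offset != currentOffset) {
            variants.push_back(Variant { std::move(currentSet), currentOffset });
            currentOffset = offset;
            currentSet.clear();
        }
        currentSet.push_back(structure);
    }
    variants.push_back(Variant { std::move(currentSet), currentOffset });
    return variants;
}
```

When groupByOffset returns a single variant, the fast path applies and a plain PutByOffset suffices; otherwise each variant becomes one case of the MultiPutByOffset.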
     2102
     2103    SSACalculator m_pointerSSA;
     2104    SSACalculator m_allocationSSA;
     2105    HashSet<Node*> m_sinkCandidates;
     2106    HashMap<PromotedHeapLocation, SSACalculator::Variable*> m_locationToVariable;
     2107    HashMap<Node*, SSACalculator::Variable*> m_nodeToVariable;
     2108    HashMap<PromotedHeapLocation, Node*> m_localMapping;
     2109    HashMap<Node*, Node*> m_escapeeToMaterialization;
     2110    InsertionSet m_insertionSet;
    11442111    CombinedLiveness m_combinedLiveness;
    1145     SSACalculator m_ssaCalculator;
    1146     HashSet<Node*> m_sinkCandidates;
     2112
    11472113    HashMap<Node*, Node*> m_materializationToEscapee;
    11482114    HashMap<Node*, Vector<Node*>> m_materializationSiteToMaterializations;
     2115    HashMap<Node*, Vector<PromotedHeapLocation>> m_materializationSiteToRecoveries;
     2116
    11492117    HashMap<Node*, Vector<PromotedHeapLocation>> m_locationsForAllocation;
    1150     HashMap<PromotedHeapLocation, SSACalculator::Variable*> m_locationToVariable;
    1151     Vector<PromotedHeapLocation> m_indexToLocation;
    1152     HashMap<PromotedHeapLocation, Node*> m_localMapping;
    1153     InsertionSet m_insertionSet;
     2118
     2119    BlockMap<LocalHeap> m_heapAtHead;
     2120    BlockMap<LocalHeap> m_heapAtTail;
     2121    LocalHeap m_heap;
     2122
     2123    Node* m_bottom = nullptr;
    11542124};
    1155    
     2125
     2126}
     2127
    11562128bool performObjectAllocationSinking(Graph& graph)
    11572129{
     
    11632135
    11642136#endif // ENABLE(DFG_JIT)
    1165 
  • trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.h

    r173993 r186795  
    11/*
    2  * Copyright (C) 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2015 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3333class Graph;
    3434
    35 // Sinks object allocations down to their uses. This will sink the allocations over OSR exits, by
    36 // replacing all stores to those objects with store hints so that OSR exit can materialize the
    37 // object. This may sink allocations past returns, creating control flow paths along which the
    38 // objects are not allocated at all. Replaces all uses of the objects' fields with SSA data flow.
     35// Eliminates allocations that are never used except
     36// locally. This will insert phantom allocations and store hints so
     37// that OSR exit can materialize the objects. Replaces all uses of the
     38// objects' fields with SSA data flow. This phase is able to handle cyclic allocation graphs.
    3939
    4040bool performObjectAllocationSinking(Graph&);
     
    4545
    4646#endif // DFGObjectAllocationSinkingPhase_h
    47 
  • trunk/Source/JavaScriptCore/dfg/DFGPromotedHeapLocation.h

    r184747 r186795  
    5858    {
    5959    }
    60    
     60
     61    PromotedLocationDescriptor(WTF::HashTableDeletedValueType)
     62        : m_kind(InvalidPromotedLocationKind)
     63        , m_info(1)
     64    {
     65    }
     66
    6167    bool operator!() const { return m_kind == InvalidPromotedLocationKind; }
     68
     69    explicit operator bool() const { return !!*this; }
    6270   
    6371    PromotedLocationKind kind() const { return m_kind; }
     
    8694    {
    8795        return m_kind == InvalidPromotedLocationKind && m_info;
     96    }
     97
     98    bool neededForMaterialization() const
     99    {
     100        switch (kind()) {
     101        case NamedPropertyPLoc:
     102        case ClosureVarPLoc:
     103            return false;
     104
     105        default:
     106            return true;
     107        }
    88108    }
    89109   
     
    93113    PromotedLocationKind m_kind;
    94114    unsigned m_info;
     115};
     116
     117struct PromotedLocationDescriptorHash {
     118    static unsigned hash(const PromotedLocationDescriptor& key) { return key.hash(); }
     119    static bool equal(const PromotedLocationDescriptor& a, const PromotedLocationDescriptor& b) { return a == b; }
     120    static const bool safeToCompareToEmptyOrDeleted = true;
    95121};
    96122
     
    177203};
    178204
     205template<typename T> struct DefaultHash;
     206template<> struct DefaultHash<JSC::DFG::PromotedLocationDescriptor> {
     207    typedef JSC::DFG::PromotedLocationDescriptorHash Hash;
     208};
     209
     210template<typename T> struct HashTraits;
     211template<> struct HashTraits<JSC::DFG::PromotedLocationDescriptor> : SimpleClassHashTraits<JSC::DFG::PromotedLocationDescriptor> {
     212    static const bool emptyValueIsZero = false;
     213};
     214
    179215} // namespace WTF
    180216
  • trunk/Source/JavaScriptCore/dfg/DFGValidate.cpp

    r186691 r186795  
    529529                    VALIDATE((node), !"bad node type for SSA");
    530530                    break;
    531                    
     531
    532532                default:
    533533                    // FIXME: Add more things here.
    534534                    // https://bugs.webkit.org/show_bug.cgi?id=123471
     535                    break;
     536                }
     537                switch (node->op()) {
     538                case PhantomNewObject:
     539                case PhantomNewFunction:
     540                case PhantomCreateActivation:
     541                case PhantomDirectArguments:
     542                case PhantomClonedArguments:
     543                case MovHint:
     544                case Upsilon:
     545                case ForwardVarargs:
     546                case CallForwardVarargs:
     547                case ConstructForwardVarargs:
     548                case GetMyArgumentByVal:
     549                    break;
     550
     551                case Check:
     552                    // FIXME: This is probably not correct.
     553                    break;
     554
     555                case PutHint:
     556                    VALIDATE((node), node->child1()->isPhantomAllocation());
     557                    break;
     558
     559                default:
     560                    m_graph.doToChildren(
     561                        node,
     562                        [&] (const Edge& edge) {
     563                            VALIDATE((node), !edge->isPhantomAllocation());
     564                        });
    535565                    break;
    536566                }
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp

    r186605 r186795  
    52885288            values.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
    52895289       
    5290         StructureSet set;
    5291         m_interpreter.phiChildren()->forAllTransitiveIncomingValues(
    5292             m_graph.varArgChild(m_node, 0).node(),
    5293             [&] (Node* incoming) {
    5294                 set.add(incoming->castConstant<Structure*>());
    5295             });
     5290        const StructureSet& set = m_node->structureSet();
    52965291       
    52975292        Vector<LBasicBlock, 1> blocks(set.size());
     
    53925387        Vector<LValue, 8> values;
    53935388        for (unsigned i = 0; i < data.m_properties.size(); ++i)
    5394             values.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
    5395 
    5396         LValue scope = lowCell(m_graph.varArgChild(m_node, 0));
     5389            values.append(lowJSValue(m_graph.varArgChild(m_node, 2 + i)));
     5390
     5391        LValue scope = lowCell(m_graph.varArgChild(m_node, 1));
    53975392        SymbolTable* table = m_node->castOperand<SymbolTable*>();
     5393        ASSERT(table == m_graph.varArgChild(m_node, 0)->castConstant<SymbolTable*>());
    53985394        Structure* structure = m_graph.globalObjectFor(m_node->origin.semantic)->activationStructure();
    53995395
  • trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp

    r184260 r186795  
    221221            }
    222222        }
    223        
     223
    224224        if (!!exit.m_valueProfile)
    225225            jit.store64(GPRInfo::regT0, exit.m_valueProfile.getSpecFailBucket(0));
    226226    }
    227    
    228     // Materialize all objects. Don't materialize an object until all of the objects it needs
    229     // have been materialized. Curiously, this is the only place that we have an algorithm that prevents
    230     // OSR exit from handling cyclic object materializations. Of course, object allocation sinking
    231     // currently wouldn't recognize a cycle as being sinkable - but if it did then the only thing that
     232     // would have to change is this fixpoint. Instead we would allocate the objects first and populate
    233     // them with data later.
     227
     228    // Materialize all objects. Don't materialize an object until all
     229    // of the objects it needs have been materialized. We break cycles
     230    // by populating objects late - we only consider an object as
     231    // needing another object if the later is needed for the
     232    // allocation of the former.
     233
    234234    HashSet<ExitTimeObjectMaterialization*> toMaterialize;
    235235    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations)
    236236        toMaterialize.add(materialization);
    237    
     237
    238238    while (!toMaterialize.isEmpty()) {
    239239        unsigned previousToMaterializeSize = toMaterialize.size();
    240        
     240
    241241        Vector<ExitTimeObjectMaterialization*> worklist;
    242242        worklist.appendRange(toMaterialize.begin(), toMaterialize.end());
     
    247247                if (!value.value().isObjectMaterialization())
    248248                    continue;
     249                if (!value.location().neededForMaterialization())
     250                    continue;
    249251                if (toMaterialize.contains(value.value().objectMaterialization())) {
    250                     // Gotta skip this one, since one of its fields points to a materialization
    251                     // that hasn't been materialized.
     252                    // Gotta skip this one, since it needs a
     253                    // materialization that hasn't been materialized.
    252254                    allGood = false;
    253255                    break;
     
    256258            if (!allGood)
    257259                continue;
    258            
    259             // All systems go for materializing the object. First we recover the values of all of
    260             // its fields and then we call a function to actually allocate the beast.
     260
     261            // All systems go for materializing the object. First we
     262            // recover the values of all of its fields and then we
     263            // call a function to actually allocate the beast.
     264            // We only recover the fields that are needed for the allocation.
    261265            for (unsigned propertyIndex = materialization->properties().size(); propertyIndex--;) {
    262                 const ExitValue& value = materialization->properties()[propertyIndex].value();
     266                const ExitPropertyValue& property = materialization->properties()[propertyIndex];
     267                const ExitValue& value = property.value();
     268                if (!property.location().neededForMaterialization())
     269                    continue;
     270
    263271                compileRecovery(
    264272                    jit, value, record, jitCode->stackmaps, registerScratch,
     
    274282            jit.call(GPRInfo::nonArgGPR0);
    275283            jit.storePtr(GPRInfo::returnValueGPR, materializationToPointer.get(materialization));
    276            
     284
    277285            // Let everyone know that we're done.
    278286            toMaterialize.remove(materialization);
     
    283291        // "materializations form a DAG" rule.
    284292        RELEASE_ASSERT(toMaterialize.size() < previousToMaterializeSize);
     293    }
     294
     295    // Now that all the objects have been allocated, we populate them
     296    // with the correct values. This time we can recover all the
     297    // fields, including those that are only needed for the allocation.
     298    for (ExitTimeObjectMaterialization* materialization : exit.m_materializations) {
     299        for (unsigned propertyIndex = materialization->properties().size(); propertyIndex--;) {
     300            const ExitValue& value = materialization->properties()[propertyIndex].value();
     301            compileRecovery(
     302                jit, value, record, jitCode->stackmaps, registerScratch,
     303                materializationToPointer);
     304            jit.storePtr(GPRInfo::regT0, materializationArguments + propertyIndex);
     305        }
     306
     307        // This call assumes that we don't pass arguments on the stack
     308        jit.setupArgumentsWithExecState(
     309            CCallHelpers::TrustedImmPtr(materialization),
     310            CCallHelpers::TrustedImmPtr(materializationToPointer.get(materialization)),
     311            CCallHelpers::TrustedImmPtr(materializationArguments));
     312        jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(operationPopulateObjectInOSR)), GPRInfo::nonArgGPR0);
     313        jit.call(GPRInfo::nonArgGPR0);
    285314    }
    286315
  • trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp

    r184828 r186795  
    4949}
    5050
     51extern "C" void JIT_OPERATION operationPopulateObjectInOSR(
     52    ExecState* exec, ExitTimeObjectMaterialization* materialization,
     53    EncodedJSValue* encodedValue, EncodedJSValue* values)
     54{
     55    VM& vm = exec->vm();
     56    CodeBlock* codeBlock = exec->codeBlock();
     57
     58    // We cannot GC. We've got pointers in evil places.
     59    // FIXME: We are not doing anything that can GC here, and this is
     60    // probably unnecessary.
     61    DeferGCForAWhile deferGC(vm.heap);
     62
     63    switch (materialization->type()) {
     64    case PhantomNewObject: {
     65        JSFinalObject* object = jsCast<JSFinalObject*>(JSValue::decode(*encodedValue));
     66        Structure* structure = object->structure();
     67
     68        // Figure out what the heck to populate the object with. Use
     69        // getPropertiesConcurrently() because that happens to be
     70        // lower-level and more convenient. It doesn't change the
     71        // materialization of the property table. We want to have
     72        // minimal visible effects on the system. Also, don't mind
     73        // that this is O(n^2). It doesn't matter. We only get here
     74        // from OSR exit.
     75        for (PropertyMapEntry entry : structure->getPropertiesConcurrently()) {
     76            for (unsigned i = materialization->properties().size(); i--;) {
     77                const ExitPropertyValue& property = materialization->properties()[i];
     78                if (property.location().kind() != NamedPropertyPLoc)
     79                    continue;
     80                if (codeBlock->identifier(property.location().info()).impl() != entry.key)
     81                    continue;
     82
     83                object->putDirect(vm, entry.offset, JSValue::decode(values[i]));
     84            }
     85        }
     86        break;
     87    }
     88
     89    case PhantomNewFunction:
     90    case PhantomDirectArguments:
     91    case PhantomClonedArguments:
     92        // Those are completely handled by operationMaterializeObjectInOSR
     93        break;
     94
     95    case PhantomCreateActivation: {
     96        JSLexicalEnvironment* activation = jsCast<JSLexicalEnvironment*>(JSValue::decode(*encodedValue));
     97
     98        // Figure out what to populate the activation with
     99        for (unsigned i = materialization->properties().size(); i--;) {
     100            const ExitPropertyValue& property = materialization->properties()[i];
     101            if (property.location().kind() != ClosureVarPLoc)
     102                continue;
     103
     104            activation->variableAt(ScopeOffset(property.location().info())).set(exec->vm(), activation, JSValue::decode(values[i]));
     105        }
     106
     107        break;
     108    }
     109
     110
     111    default:
     112        RELEASE_ASSERT_NOT_REACHED();
     113        break;
     114
     115    }
     116}
     117
    51118extern "C" JSCell* JIT_OPERATION operationMaterializeObjectInOSR(
    52119    ExecState* exec, ExitTimeObjectMaterialization* materialization, EncodedJSValue* values)
    53120{
    54121    VM& vm = exec->vm();
    55     CodeBlock* codeBlock = exec->codeBlock();
    56122
    57123    // We cannot GC. We've got pointers in evil places.
     
    60126    switch (materialization->type()) {
    61127    case PhantomNewObject: {
    62         // First figure out what the structure is.
     128        // Figure out what the structure is
    63129        Structure* structure = nullptr;
    64130        for (unsigned i = materialization->properties().size(); i--;) {
     
    66132            if (property.location() != PromotedLocationDescriptor(StructurePLoc))
    67133                continue;
    68        
     134
    69135            structure = jsCast<Structure*>(JSValue::decode(values[i]));
    70136            break;
    71137        }
    72138        RELEASE_ASSERT(structure);
    73    
    74         // Let's create that object!
     139
    75140        JSFinalObject* result = JSFinalObject::create(vm, structure);
    76    
    77         // Now figure out what the heck to populate the object with. Use getPropertiesConcurrently()
    78         // because that happens to be lower-level and more convenient. It doesn't change the
    79         // materialization of the property table. We want to have minimal visible effects on the
    80         // system. Also, don't mind that this is O(n^2). It doesn't matter. We only get here from OSR
    81         // exit.
    82         for (PropertyMapEntry entry : structure->getPropertiesConcurrently()) {
    83             for (unsigned i = materialization->properties().size(); i--;) {
    84                 const ExitPropertyValue& property = materialization->properties()[i];
    85                 if (property.location().kind() != NamedPropertyPLoc)
    86                     continue;
    87                 if (codeBlock->identifier(property.location().info()).impl() != entry.key)
    88                     continue;
    89            
    90                 result->putDirect(vm, entry.offset, JSValue::decode(values[i]));
    91             }
    92         }
    93    
     141
     142        // The real values will be put subsequently by
      143        // operationPopulateObjectInOSR. We can't fill them in
     144        // now, because they may not be available yet (typically
     145        // because we have a cyclic dependency graph).
     146
     147        // We put a dummy value here in order to avoid super-subtle
     148        // GC-and-OSR-exit crashes in case we have a bug and some
     149        // field is, for any reason, not filled later.
     150        // We use a random-ish number instead of a sensible value like
     151        // undefined to make possible bugs easier to track.
     152        for (PropertyMapEntry entry : structure->getPropertiesConcurrently())
     153            result->putDirect(vm, entry.offset, jsNumber(19723));
     154
    94155        return result;
    95156    }
     
    114175
    115176    case PhantomCreateActivation: {
    116         // Figure out where the scope is
     177        // Figure out what the scope and symbol table are
    117178        JSScope* scope = nullptr;
    118179        SymbolTable* table = nullptr;
     
    134195
    135196        RELEASE_ASSERT(materialization->properties().size() - 2 == table->scopeSize());
    136         // Figure out what to populate the activation with
     197
     198        // The real values will be put subsequently by
      199        // operationPopulateObjectInOSR. See the PhantomNewObject
     200        // case for details.
    137201        for (unsigned i = materialization->properties().size(); i--;) {
    138202            const ExitPropertyValue& property = materialization->properties()[i];
     
    140204                continue;
    141205
    142             result->variableAt(ScopeOffset(property.location().info())).set(exec->vm(), result, JSValue::decode(values[i]));
     206            result->variableAt(ScopeOffset(property.location().info())).set(
     207                exec->vm(), result, jsNumber(29834));
    143208        }
    144209
  • trunk/Source/JavaScriptCore/ftl/FTLOperations.h

    r173993 r186795  
    4141    ExecState*, ExitTimeObjectMaterialization*, EncodedJSValue*) WTF_INTERNAL;
    4242
     43void JIT_OPERATION operationPopulateObjectInOSR(
     44    ExecState*, ExitTimeObjectMaterialization*, EncodedJSValue*, EncodedJSValue*) WTF_INTERNAL;
     45
    4346} // extern "C"
    4447