Changeset 222871 in webkit


Timestamp: Oct 4, 2017 1:00:01 PM
Author: mark.lam@apple.com
Message:

Add support for using Probe DFG OSR Exit behind a runtime flag.
https://bugs.webkit.org/show_bug.cgi?id=177844
<rdar://problem/34801425>

Reviewed by Saam Barati.

Source/JavaScriptCore:

This is based on the code originally posted in https://bugs.webkit.org/show_bug.cgi?id=175144
(landed in r221774 and r221832), with some optimizations and bug fixes added. The
probe-based DFG OSR exit is only enabled if Options::useProbeOSRExit() is true. We're
landing this behind an option switch to make it easier to tune performance using
the probe-based OSR exit.
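As a quick usage note (a hypothetical invocation; any script that provokes DFG OSR exits will do): since the option defaults to false, the new exit ramp can be exercised from the jsc shell with jsc --useProbeOSRExit=true test.js. The option itself lives in runtime/Options.h (listed below), and the Tools change enables it for two stress-test configurations.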

  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/MacroAssembler.cpp:

(JSC::stdFunctionCallback):

  • assembler/MacroAssemblerPrinter.cpp:

(JSC::Printer::printCallback):

  • assembler/ProbeContext.cpp:

(JSC::Probe::executeProbe):
(JSC::Probe::flushDirtyStackPages):

  • assembler/ProbeContext.h:

(JSC::Probe::Context::Context):
(JSC::Probe::Context::arg):

  • assembler/ProbeFrame.h: Added.

(JSC::Probe::Frame::Frame):
(JSC::Probe::Frame::argument):
(JSC::Probe::Frame::operand):
(JSC::Probe::Frame::setArgument):
(JSC::Probe::Frame::setOperand):
(JSC::Probe::Frame::get):
(JSC::Probe::Frame::set):

  • assembler/ProbeStack.cpp:

(JSC::Probe::Page::lowWatermarkFromVisitingDirtyChunks):
(JSC::Probe::Stack::Stack):
(JSC::Probe::Stack::lowWatermarkFromVisitingDirtyPages):

  • assembler/ProbeStack.h:

(JSC::Probe::Stack::Stack):
(JSC::Probe::Stack::lowWatermark):
(JSC::Probe::Stack::set):
(JSC::Probe::Stack::savedStackPointer const):
(JSC::Probe::Stack::setSavedStackPointer):
(JSC::Probe::Stack::newStackPointer const): Deleted.
(JSC::Probe::Stack::setNewStackPointer): Deleted.

  • bytecode/ArrayProfile.h:

(JSC::ArrayProfile::observeArrayMode):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):

  • bytecode/CodeBlock.h:

(JSC::CodeBlock::addressOfOSRExitCounter): Deleted.

  • bytecode/ExecutionCounter.h:

(JSC::ExecutionCounter::hasCrossedThreshold const):
(JSC::ExecutionCounter::setNewThresholdForOSRExit):

  • bytecode/MethodOfGettingAValueProfile.cpp:

(JSC::MethodOfGettingAValueProfile::reportValue):

  • bytecode/MethodOfGettingAValueProfile.h:
  • dfg/DFGDriver.cpp:

(JSC::DFG::compileImpl):

  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::link):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::jsValueFor):
(JSC::DFG::restoreCalleeSavesFor):
(JSC::DFG::saveCalleeSavesFor):
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::saveOrCopyCalleeSavesFor):
(JSC::DFG::createDirectArgumentsDuringExit):
(JSC::DFG::createClonedArgumentsDuringExit):
(JSC::DFG::emitRestoreArguments):
(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::printOSRExit):

  • dfg/DFGOSRExit.h:

(JSC::DFG::OSRExitState::OSRExitState):

  • dfg/DFGThunks.cpp:

(JSC::DFG::osrExitThunkGenerator):

  • dfg/DFGThunks.h:
  • dfg/DFGVariableEventStream.cpp:

(JSC::DFG::tryToSetConstantRecovery):
(JSC::DFG::VariableEventStream::reconstruct const):
(JSC::DFG::VariableEventStream::tryToSetConstantRecovery const): Deleted.

  • dfg/DFGVariableEventStream.h:
  • profiler/ProfilerOSRExit.h:

(JSC::Profiler::OSRExit::incCount):

  • runtime/JSCJSValue.h:
  • runtime/JSCJSValueInlines.h:
  • runtime/Options.h:

Tools:

Enable --useProbeOSRExit=true for the dfg-eager and ftl-no-cjit-validate-sampling-profiler
test configurations.

  • Scripts/run-jsc-stress-tests:
Location: trunk
Files: 1 added, 28 edited

  • trunk/Source/JavaScriptCore/ChangeLog

(diff: r222870 → r222871; adds the ChangeLog entry for this change. Its text duplicates the commit message above verbatim, so it is not repeated here.)
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

(diff: r222870 → r222871)
     FEA0C4011CDD7D0E00481991 /* FunctionWhitelist.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FunctionWhitelist.h; sourceTree = "<group>"; };
     FEB137561BB11EEE00CD5100 /* MacroAssemblerARM64.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MacroAssemblerARM64.cpp; sourceTree = "<group>"; };
+    FEB41CCB1F73284200C5481E /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
     FEB51F6A1A97B688001F921C /* Regress141809.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Regress141809.h; path = API/tests/Regress141809.h; sourceTree = "<group>"; };
     FEB51F6B1A97B688001F921C /* Regress141809.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; name = Regress141809.mm; path = API/tests/Regress141809.mm; sourceTree = "<group>"; };
...
                                 FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
                                 FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
+                                FEB41CCB1F73284200C5481E /* ProbeFrame.h */,
                                 FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
                                 FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,
  • trunk/Source/JavaScriptCore/assembler/MacroAssembler.cpp

(diff: r222009 → r222871)
 static void stdFunctionCallback(Probe::Context& context)
 {
-    auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
+    auto func = context.arg<const std::function<void(Probe::Context&)>*>();
     (*func)(context);
 }
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp

(diff: r222009 → r222871)
 {
     auto& out = WTF::dataFile();
-    PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
+    PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
     for (size_t i = 0; i < list.size(); i++) {
         auto& record = list[i];
  • trunk/Source/JavaScriptCore/assembler/ProbeContext.cpp

(diff: r220960 → r222871)
 
     if (context.hasWritesToFlush()) {
-        context.stack().setNewStackPointer(state->cpu.sp());
-        state->cpu.sp() = std::min(context.stack().lowWatermark(), state->cpu.sp());
+        context.stack().setSavedStackPointer(state->cpu.sp());
+        void* lowWatermark = context.stack().lowWatermark(state->cpu.sp());
+        state->cpu.sp() = std::min(lowWatermark, state->cpu.sp());
 
         state->initializeStackFunction = flushDirtyStackPages;
...
     std::unique_ptr<Stack> stack(reinterpret_cast<Probe::Stack*>(state->initializeStackArg));
     stack->flushWrites();
-    state->cpu.sp() = stack->newStackPointer();
+    state->cpu.sp() = stack->savedStackPointer();
 }
 
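The net effect of this hunk: before flushing, the probe now remembers the stack pointer that the probe function requested (setSavedStackPointer), temporarily lowers sp below the lowest dirty chunk so the flush writes stay beneath live stack, and flushDirtyStackPages restores the requested pointer once the writes have landed.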
  • trunk/Source/JavaScriptCore/assembler/ProbeContext.h

(diff: r222058 → r222871)
 
     Context(State* state)
-        : m_state(state)
-        , arg(state->arg)
-        , cpu(state->cpu)
+        : cpu(state->cpu)
+        , m_state(state)
     { }
+
+    template<typename T>
+    T arg() { return reinterpret_cast<T>(m_state->arg); }
 
     uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
...
     Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }
 
+    CPUState& cpu;
+
 private:
     State* m_state;
-public:
-    void* arg;
-    CPUState& cpu;
-
-private:
     Stack m_stack;
 
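The net effect for probe callbacks is that the probe argument is now fetched through a typed accessor instead of a public void* member. A minimal sketch of the new calling convention (the callback name and the printf-style logging are illustrative, not from the patch; the VM* argument type matches what the OSR exit code below passes):

    static void myProbeCallback(Probe::Context& context)
    {
        // arg<T>() reinterpret_casts the probe's stashed void* argument to T,
        // replacing direct reads of the old public 'arg' member.
        VM& vm = *context.arg<VM*>();
        dataLogF("probe hit in VM %p, sp = %p\n", &vm, context.sp());
    }

The two call sites updated above (stdFunctionCallback and Printer::printCallback) follow exactly this pattern.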
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.cpp

(diff: r222058 → r222871)
 namespace Probe {
 
+static void* const maxLowWatermark = reinterpret_cast<void*>(std::numeric_limits<uintptr_t>::max());
+
 #if ASAN_ENABLED
 // FIXME: we should consider using the copy function for both ASan and non-ASan builds.
...
 }
 #else
-#define copyStackPage(dst, src, size) std::memcpy(dst, src, size);
+#define copyStackPage(dst, src, size) std::memcpy(dst, src, size)
 #endif
 
...
 }
 
+void* Page::lowWatermarkFromVisitingDirtyChunks()
+{
+    uint64_t dirtyBits = m_dirtyBits;
+    size_t offset = 0;
+    while (dirtyBits) {
+        if (dirtyBits & 1)
+            return reinterpret_cast<uint8_t*>(m_baseLogicalAddress) + offset;
+        dirtyBits = dirtyBits >> 1;
+        offset += s_chunkSize;
+    }
+    return maxLowWatermark;
+}
+
 Stack::Stack(Stack&& other)
-    : m_newStackPointer(other.m_newStackPointer)
-    , m_lowWatermark(other.m_lowWatermark)
-    , m_stackBounds(WTFMove(other.m_stackBounds))
+    : m_stackBounds(WTFMove(other.m_stackBounds))
     , m_pages(WTFMove(other.m_pages))
 {
+    m_savedStackPointer = other.m_savedStackPointer;
 #if !ASSERT_DISABLED
     other.m_isValid = false;
...
 }
 
+void* Stack::lowWatermarkFromVisitingDirtyPages()
+{
+    void* low = maxLowWatermark;
+    for (auto it = m_pages.begin(); it != m_pages.end(); ++it) {
+        Page& page = *it->value;
+        if (!page.hasWritesToFlush() || low < page.baseAddress())
+            continue;
+        low = std::min(low, page.lowWatermarkFromVisitingDirtyChunks());
+    }
+    return low;
+}
+
 } // namespace Probe
 } // namespace JSC
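To make the chunk scan concrete (assumed numbers, since s_chunkSize isn't shown in this hunk): if m_dirtyBits were 0b0100 and s_chunkSize were 64 bytes, the loop would shift past two clean chunks and return m_baseLogicalAddress + 128, the lowest dirty address in the page. A page with no dirty chunks returns the maxLowWatermark sentinel, which the std::min() in lowWatermarkFromVisitingDirtyPages() then effectively ignores.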
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.h

(diff: r222058 → r222871)
     }
 
+    void* lowWatermarkFromVisitingDirtyChunks();
+
 private:
     uint64_t dirtyBitFor(void* logicalAddress)
...
 public:
     Stack()
-        : m_lowWatermark(reinterpret_cast<void*>(-1))
-        , m_stackBounds(Thread::current().stack())
+        : m_stackBounds(Thread::current().stack())
     { }
     Stack(Stack&& other);
 
-    void* lowWatermark()
-    {
-        // We use the chunkAddress for the low watermark because we'll be doing write backs
-        // to the stack in increments of chunks. Hence, we'll treat the lowest address of
-        // the chunk as the low watermark of any given set address.
-        return Page::chunkAddressFor(m_lowWatermark);
+    void* lowWatermarkFromVisitingDirtyPages();
+    void* lowWatermark(void* stackPointer)
+    {
+        ASSERT(Page::chunkAddressFor(stackPointer) == lowWatermarkFromVisitingDirtyPages());
+        return Page::chunkAddressFor(stackPointer);
     }
 
...
         Page* page = pageFor(address);
         page->set<T>(address, value);
-
-        if (address < m_lowWatermark)
-            m_lowWatermark = address;
     }
 
...
     JS_EXPORT_PRIVATE Page* ensurePageFor(void* address);
 
-    void* newStackPointer() const { return m_newStackPointer; };
-    void setNewStackPointer(void* sp) { m_newStackPointer = sp; };
+    void* savedStackPointer() const { return m_savedStackPointer; }
+    void setSavedStackPointer(void* sp) { m_savedStackPointer = sp; }
 
     bool hasWritesToFlush();
...
     }
 
-    void* m_newStackPointer { nullptr };
-    void* m_lowWatermark;
+    void* m_savedStackPointer { nullptr };
 
     // A cache of the last accessed page details for quick access.
  • trunk/Source/JavaScriptCore/bytecode/ArrayProfile.h

(diff: r222009 → r222871)
 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
...
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);
 
+    void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
     ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
     bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

(diff: r222827 → r222871)
 }
 
+auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
+{
+    DFG::OSRExitBase& exit = exitState.exit;
+    if (!exitKindMayJettison(exit.m_kind)) {
+        // FIXME: We may want to notice that we're frequently exiting
+        // at an op_catch that we didn't compile an entrypoint for, and
+        // then trigger a reoptimization of this CodeBlock:
+        // https://bugs.webkit.org/show_bug.cgi?id=175842
+        return OptimizeAction::None;
+    }
+
+    exit.m_count++;
+    m_osrExitCounter++;
+
+    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
+    ASSERT(baselineCodeBlock == baselineAlternative());
+    if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
+        return OptimizeAction::ReoptimizeNow;
+
+    // We want to figure out if there's a possibility that we're in a loop. For the outermost
+    // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
+    // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
+    // problem is the inlined functions, which might also have loops, but whose baseline versions
+    // don't know where to look for the exit count. Figure out if those loops are severe enough
+    // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
+    // Otherwise, we should use the normal reoptimization trigger.
+
+    bool didTryToEnterInLoop = false;
+    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+        if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
+            didTryToEnterInLoop = true;
+            break;
+        }
+    }
+
+    uint32_t exitCountThreshold = didTryToEnterInLoop
+        ? exitCountThresholdForReoptimizationFromLoop()
+        : exitCountThresholdForReoptimization();
+
+    if (m_osrExitCounter > exitCountThreshold)
+        return OptimizeAction::ReoptimizeNow;
+
+    // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
+    baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
+    return OptimizeAction::None;
+}
+
 void CodeBlock::optimizeNextInvocation()
 {
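For orientation, a sketch of how the probe exit ramp is expected to consume the returned OptimizeAction (a hypothetical call site, since the actual caller is in the DFGOSRExit.cpp changes below; triggerReoptimizationNow names JSC's existing reoptimization helper):

    // Hypothetical consumer in the exit ramp:
    if (codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow)
        triggerReoptimizationNow(baselineCodeBlock, &exit);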
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.h

(diff: r222009 → r222871)
 namespace JSC {
 
+namespace DFG {
+struct OSRExitState;
+} // namespace DFG
+
 class BytecodeLivenessAnalysis;
 class CodeBlockSet;
...
     void countOSRExit() { m_osrExitCounter++; }
 
-    uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
+    enum class OptimizeAction { None, ReoptimizeNow };
+    OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
 
     static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
  • trunk/Source/JavaScriptCore/bytecode/ExecutionCounter.h

(diff: r222009 → r222871)
 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
...
     return u.i;
 }
-    
+
 template<CountingVariant countingVariant>
 class ExecutionCounter {
...
     void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
     bool checkIfThresholdCrossedAndSet(CodeBlock*);
+    bool hasCrossedThreshold() const { return m_counter >= 0; }
     void setNewThreshold(int32_t threshold, CodeBlock*);
     void deferIndefinitely();
...
     void dump(PrintStream&) const;
 
+    void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
+    {
+        m_activeThreshold = activeThreshold;
+        m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
+        m_totalCount = memoryUsageAdjustedThreshold;
+    }
+
     static int32_t maximumExecutionCountsBetweenCheckpoints()
     {
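Reading this together with the hasCrossedThreshold() accessor above: the counter is biased negative by the memory-usage-adjusted threshold, so roughly that many further executions of the baseline code must elapse before m_counter climbs back to zero and hasCrossedThreshold() tells the exit ramp it is time to reoptimize.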
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp

(diff: r222009 → r222871)
 /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
...
 }
 
+void MethodOfGettingAValueProfile::reportValue(JSValue value)
+{
+    switch (m_kind) {
+    case None:
+        return;
+
+    case Ready:
+        *u.profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+
+    case LazyOperand: {
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+
+        ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
+        LazyOperandValueProfile* profile =
+            u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
+        *profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+    }
+
+    case ArithProfileReady: {
+        u.arithProfile->observeResult(value);
+        return;
+    } }
+
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
 
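reportValue() is the C++-side counterpart of the JIT-emitting emitReportValue(): it performs the same value-profile update directly from the probe exit ramp (see profile.reportValue(profiledValue) in the DFGOSRExit.cpp changes below).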
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h

(diff: r222009 → r222871)
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
...
 
     explicit operator bool() const { return m_kind != None; }
-    
+
     void emitReportValue(CCallHelpers&, JSValueRegs) const;
-    
+    void reportValue(JSValue);
+
 private:
     enum Kind {
  • trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp

(diff: r222009 → r222871)
 /*
- * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
...
     // Make sure that any stubs that the DFG is going to use are initialized. We want to
     // make sure that all JIT code generation does finalization on the main thread.
+    vm.getCTIStub(osrExitThunkGenerator);
     vm.getCTIStub(osrExitGenerationThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

(diff: r222791 → r222871)
     }
 
+    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
+    CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
...
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm()->osrExitIndex);
-        exit.setPatchableCodeOffset(patchableJump());
+        if (Options::useProbeOSRExit()) {
+            Jump target = jump();
+            addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
+                linkBuffer.link(target, osrExitThunkLabel);
+            });
+        } else {
+            OSRExit& exit = m_jitCode->osrExit[i];
+            exit.setPatchableCodeOffset(patchableJump());
+        }
     }
 }
...
     CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
-        linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
-        exit.correctJump(linkBuffer);
+        if (!Options::useProbeOSRExit()) {
+            OSRExit& exit = m_jitCode->osrExit[i];
+            linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
+            exit.correctJump(linkBuffer);
+        }
         if (info.m_replacementSource.isSet()) {
             m_jitCode->common.jumpReplacements.append(JumpReplacement(
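In short, there are now two linking paths: with useProbeOSRExit enabled, each exit site becomes a plain jump linked straight to the new probe-based osrExitThunkGenerator thunk at link time; with it disabled, each exit keeps the old patchable jump that link() later points at the pre-existing exit-generation thunk (the osrExitGenerationThunkGenerator also visible in the DFGDriver.cpp hunk above).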
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp

    r222791 r222871  
    3030
    3131#include "AssemblyHelpers.h"
     32#include "ClonedArguments.h"
    3233#include "DFGGraph.h"
    3334#include "DFGMayExit.h"
     
    3637#include "DFGOperations.h"
    3738#include "DFGSpeculativeJIT.h"
     39#include "DirectArguments.h"
    3840#include "FrameTracers.h"
     41#include "InlineCallFrame.h"
    3942#include "JSCInlines.h"
     43#include "JSCJSValue.h"
    4044#include "OperandsInlines.h"
     45#include "ProbeContext.h"
     46#include "ProbeFrame.h"
    4147
    4248namespace JSC { namespace DFG {
     49
     50// Probe based OSR Exit.
     51
     52using CPUState = Probe::CPUState;
     53using Context = Probe::Context;
     54using Frame = Probe::Frame;
     55
     56static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
     57static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
     58static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
     59
     60static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
     61{
     62    if (source.isAddress()) {
     63        JSValue result;
     64        std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
     65        return result;
     66    }
     67#if USE(JSVALUE64)
     68    return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
     69#else
     70    if (source.hasKnownTag())
     71        return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
     72    return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
     73#endif
     74}
     75
     76#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
     77
     78static_assert(is64Bit(), "we only support callee save registers on 64-bit");
     79
     80// Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
     81static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
     82{
     83    ASSERT(codeBlock);
     84
     85    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
     86    RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
     87    unsigned registerCount = calleeSaves->size();
     88
     89    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
     90    for (unsigned i = 0; i < registerCount; i++) {
     91        RegisterAtOffset entry = calleeSaves->at(i);
     92        if (dontRestoreRegisters.get(entry.reg()))
     93            continue;
     94        // The callee saved values come from the original stack, not the recovered stack.
     95        // Hence, we read the values directly from the physical stack memory instead of
     96        // going through context.stack().
     97        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
     98        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
     99    }
     100}
     101
     102// Based on AssemblyHelpers::emitSaveCalleeSavesFor().
     103static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
     104{
     105    auto& stack = context.stack();
     106    ASSERT(codeBlock);
     107
     108    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
     109    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
     110    unsigned registerCount = calleeSaves->size();
     111
     112    for (unsigned i = 0; i < registerCount; i++) {
     113        RegisterAtOffset entry = calleeSaves->at(i);
     114        if (dontSaveRegisters.get(entry.reg()))
     115            continue;
     116        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
     117    }
     118}
     119
     120// Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
     121static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
     122{
     123    VM& vm = *context.arg<VM*>();
     124
     125    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
     126    RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
     127    unsigned registerCount = allCalleeSaves->size();
     128
     129    VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
     130    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
     131
     132    // Restore all callee saves.
     133    for (unsigned i = 0; i < registerCount; i++) {
     134        RegisterAtOffset entry = allCalleeSaves->at(i);
     135        if (dontRestoreRegisters.get(entry.reg()))
     136            continue;
     137        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
     138        if (entry.reg().isGPR())
     139            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
     140        else
     141            context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
     142    }
     143}
     144
     145// Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
     146static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
     147{
     148    VM& vm = *context.arg<VM*>();
     149    auto& stack = context.stack();
     150
     151    VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
     152    void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
     153
     154    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
     155    RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
     156    unsigned registerCount = allCalleeSaves->size();
     157
     158    for (unsigned i = 0; i < registerCount; i++) {
     159        RegisterAtOffset entry = allCalleeSaves->at(i);
     160        if (dontCopyRegisters.get(entry.reg()))
     161            continue;
     162        if (entry.reg().isGPR())
     163            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
     164        else
     165            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
     166    }
     167}
     168
     169// Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
     170static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
     171{
     172    Frame frame(context.fp(), context.stack());
     173    ASSERT(codeBlock);
     174
     175    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
     176    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
     177    unsigned registerCount = calleeSaves->size();
     178
     179    RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
     180
     181    for (unsigned i = 0; i < registerCount; i++) {
     182        RegisterAtOffset entry = calleeSaves->at(i);
     183        if (dontSaveRegisters.get(entry.reg()))
     184            continue;
     185
     186        uintptr_t savedRegisterValue;
     187
     188        if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
     189            savedRegisterValue = frame.get<uintptr_t>(entry.offset());
     190        else
     191            savedRegisterValue = context.gpr(entry.reg().gpr());
     192
     193        frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
     194    }
     195}
     196#else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
     197
     198static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
     199static void saveCalleeSavesFor(Context&, CodeBlock*) { }
     200static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
     201static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
     202static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
     203
     204#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
     205
     206static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
     207{
     208    VM& vm = *context.arg<VM*>();
     209
     210    ASSERT(vm.heap.isDeferred());
     211
     212    if (inlineCallFrame)
     213        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
     214
     215    unsigned length = argumentCount - 1;
     216    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
     217    DirectArguments* result = DirectArguments::create(
     218        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
     219
     220    result->callee().set(vm, result, callee);
     221
     222    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
     223    Frame frame(frameBase, context.stack());
     224    for (unsigned i = length; i--;)
     225        result->setIndexQuickly(vm, i, frame.argument(i));
     226
     227    return result;
     228}
     229
     230static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
     231{
     232    VM& vm = *context.arg<VM*>();
     233    ExecState* exec = context.fp<ExecState*>();
     234
     235    ASSERT(vm.heap.isDeferred());
     236
     237    if (inlineCallFrame)
     238        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
     239
     240    unsigned length = argumentCount - 1;
     241    ClonedArguments* result = ClonedArguments::createEmpty(
     242        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
     243
     244    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
     245    Frame frame(frameBase, context.stack());
     246    for (unsigned i = length; i--;)
     247        result->putDirectIndex(exec, i, frame.argument(i));
     248    return result;
     249}
     250
     251static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
     252{
     253    Frame frame(context.fp(), context.stack());
     254
     255    HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
     256    for (size_t index = 0; index < operands.size(); ++index) {
     257        const ValueRecovery& recovery = operands[index];
     258        int operand = operands.operandForIndex(index);
     259
     260        if (recovery.technique() != DirectArgumentsThatWereNotCreated
     261            && recovery.technique() != ClonedArgumentsThatWereNotCreated)
     262            continue;
     263
     264        MinifiedID id = recovery.nodeID();
     265        auto iter = alreadyAllocatedArguments.find(id);
     266        if (iter != alreadyAllocatedArguments.end()) {
     267            frame.setOperand(operand, frame.operand(iter->value));
     268            continue;
     269        }
     270
     271        InlineCallFrame* inlineCallFrame =
     272            dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();
     273
     274        int stackOffset;
     275        if (inlineCallFrame)
     276            stackOffset = inlineCallFrame->stackOffset;
     277        else
     278            stackOffset = 0;
     279
     280        JSFunction* callee;
     281        if (!inlineCallFrame || inlineCallFrame->isClosureCall)
     282            callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
     283        else
     284            callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
     285
     286        int32_t argumentCount;
     287        if (!inlineCallFrame || inlineCallFrame->isVarargs())
     288            argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset);
     289        else
     290            argumentCount = inlineCallFrame->argumentCountIncludingThis;
     291
     292        JSCell* argumentsObject;
     293        switch (recovery.technique()) {
     294        case DirectArgumentsThatWereNotCreated:
     295            argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
     296            break;
     297        case ClonedArgumentsThatWereNotCreated:
     298            argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
     299            break;
     300        default:
     301            RELEASE_ASSERT_NOT_REACHED();
     302            break;
     303        }
     304        frame.setOperand(operand, JSValue(argumentsObject));
     305
     306        alreadyAllocatedArguments.add(id, operand);
     307    }
     308}
     309
     310// The following is a list of extra initializations that need to be done in order
     311// of most likely needed (lower enum value) to least likely needed (higher enum value).
     312// Each level initialization includes the previous lower enum value (see use of the
     313// extraInitializationLevel value below).
     314enum class ExtraInitializationLevel {
     315    None,
     316    SpeculationRecovery,
     317    ValueProfileUpdate,
     318    ArrayProfileUpdate,
     319    Other
     320};
     321
     322void OSRExit::executeOSRExit(Context& context)
     323{
     324    VM& vm = *context.arg<VM*>();
     325    auto scope = DECLARE_THROW_SCOPE(vm);
     326
     327    ExecState* exec = context.fp<ExecState*>();
     328    ASSERT(&exec->vm() == &vm);
     329    auto& cpu = context.cpu;
     330
     331    if (vm.callFrameForCatch) {
     332        exec = vm.callFrameForCatch;
     333        context.fp() = exec;
     334    }
     335
     336    CodeBlock* codeBlock = exec->codeBlock();
     337    ASSERT(codeBlock);
     338    ASSERT(codeBlock->jitType() == JITCode::DFGJIT);
     339
     340    // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't
     341    // really be profitable.
     342    DeferGCForAWhile deferGC(vm.heap);
     343
     344    uint32_t exitIndex = vm.osrExitIndex;
     345    DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
     346    OSRExit& exit = dfgJITCode->osrExit[exitIndex];
     347
     348    ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
     349    EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());
     350
     351    if (UNLIKELY(!exit.exitState)) {
     352        ExtraInitializationLevel extraInitializationLevel = ExtraInitializationLevel::None;
     353
     354        // We only need to execute this block once for each OSRExit record. The computed
     355        // results will be cached in the OSRExitState record for use of the rest of the
     356        // exit ramp code.
     357
     358        // Ensure we have baseline codeBlocks to OSR exit to.
     359        prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
     360
     361        CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
     362        ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
     363
     364        SpeculationRecovery* recovery = nullptr;
     365        if (exit.m_recoveryIndex != UINT_MAX) {
     366            recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
     367            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::SpeculationRecovery);
     368        }
     369
     370        if (UNLIKELY(exit.m_kind == GenericUnwind))
     371            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
     372
     373        ArrayProfile* arrayProfile = nullptr;
     374        if (!!exit.m_jsValueSource) {
     375            if (exit.m_valueProfile)
     376                extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ValueProfileUpdate);
     377            if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
     378                CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
     379                CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
     380                arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex);
     381                if (arrayProfile)
     382                    extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ArrayProfileUpdate);
     383            }
     384        }
     385
     386        int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
     387        double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
     388        ASSERT(adjustedThreshold > 0);
     389        adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
     390
     391        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
     392        Vector<BytecodeAndMachineOffset> decodedCodeMap;
     393        codeBlockForExit->jitCodeMap()->decode(decodedCodeMap);
     394
     395        BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
     396
     397        ASSERT(mapping);
     398        ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex);
     399
     400        void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
     401
     402        // Compute the value recoveries.
     403        Operands<ValueRecovery> operands;
     404        Vector<UndefinedOperandSpan> undefinedOperandSpans;
     405        unsigned numVariables = dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands, &undefinedOperandSpans);
     406        ptrdiff_t stackPointerOffset = -static_cast<ptrdiff_t>(numVariables) * sizeof(Register);
     407
     408        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile));
     409
     410        if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
     411            Profiler::Database& database = *vm.m_perBytecodeProfiler;
     412            Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
     413
     414            Profiler::OSRExit* profilerExit = compilation->addOSRExit(
     415                exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
     416                exit.m_kind, exit.m_kind == UncountableInvalidation);
     417            exit.exitState->profilerExit = profilerExit;
     418            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
     419        }
     420
     421        if (UNLIKELY(Options::printEachOSRExit()))
     422            extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::Other);
     423
     424        exit.exitState->extraInitializationLevel = extraInitializationLevel;
     425
     426        if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
     427            dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
     428                exitIndex, toCString(exit.m_codeOrigin).data(),
     429                exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
     430                toCString(ignoringContext<DumpContext>(operands)).data());
     431        }
     432    }
     433
     434    OSRExitState& exitState = *exit.exitState.get();
     435    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
     436    ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
     437
     438    Operands<ValueRecovery>& operands = exitState.operands;
     439    Vector<UndefinedOperandSpan>& undefinedOperandSpans = exitState.undefinedOperandSpans;
     440
     441    context.sp() = context.fp<uint8_t*>() + exitState.stackPointerOffset;
     442
     443    // The only reason for using this do while look is so we can break out midway when appropriate.
     444    do {
     445        auto extraInitializationLevel = static_cast<ExtraInitializationLevel>(exitState.extraInitializationLevel);
     446
     447        if (extraInitializationLevel == ExtraInitializationLevel::None) {
     448            context.sp() = context.fp<uint8_t*>() + exitState.stackPointerOffset;
     449            break;
     450        }
     451
     452        // Begin extra initilization level: SpeculationRecovery
     453
     454        // We need to do speculation recovery first because array profiling and value profiling
     455        // may rely on a value that it recovers. However, that doesn't mean that it is likely
     456        // to have a recovery value. So, we'll decorate it as UNLIKELY.
     457        SpeculationRecovery* recovery = exitState.recovery;
     458        if (UNLIKELY(recovery)) {
     459            switch (recovery->type()) {
     460            case SpeculativeAdd:
     461                cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
     462#if USE(JSVALUE64)
     463                ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
     464                cpu.gpr(recovery->dest()) |= TagTypeNumber;
     465#endif
     466                break;
     467
     468            case SpeculativeAddImmediate:
     469                cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
     470#if USE(JSVALUE64)
     471                ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
     472                cpu.gpr(recovery->dest()) |= TagTypeNumber;
     473#endif
     474                break;
     475
     476            case BooleanSpeculationCheck:
     477#if USE(JSVALUE64)
     478                cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse;
     479#endif
     480                break;
     481
     482            default:
     483                break;
     484            }
     485        }
     486        if (extraInitializationLevel <= ExtraInitializationLevel::SpeculationRecovery)
     487            break;
     488
     489        // Begin extra initilization level: ValueProfileUpdate
     490        JSValue profiledValue;
     491        if (!!exit.m_jsValueSource) {
     492            profiledValue = jsValueFor(cpu, exit.m_jsValueSource);
     493            if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
     494                profile.reportValue(profiledValue);
     495        }
     496        if (extraInitializationLevel <= ExtraInitializationLevel::ValueProfileUpdate)
     497            break;
     498
     499        // Begin extra initilization level: ArrayProfileUpdate
     500        ArrayProfile* arrayProfile = exitState.arrayProfile;
     501        if (arrayProfile) {
     502            ASSERT(!!exit.m_jsValueSource);
     503            ASSERT(exit.m_kind == BadCache || exit.m_kind == BadIndexingType);
     504            Structure* structure = profiledValue.asCell()->structure(vm);
     505            arrayProfile->observeStructure(structure);
     506            arrayProfile->observeArrayMode(asArrayModes(structure->indexingType()));
     507        }
     508        if (extraInitializationLevel <= ExtraInitializationLevel::ArrayProfileUpdate)
     509            break;
     510
     511        // Begin Extra initilization level: Other
     512        if (UNLIKELY(exit.m_kind == GenericUnwind)) {
     513            // We are acting as a defacto op_catch because we arrive here from genericUnwind().
     514            // So, we must restore our call frame and stack pointer.
     515            restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
     516            ASSERT(context.fp() == vm.callFrameForCatch);
     517        }
     518
     519        if (exitState.profilerExit)
     520            exitState.profilerExit->incCount();
     521
     522        if (UNLIKELY(Options::printEachOSRExit()))
     523            printOSRExit(context, vm.osrExitIndex, exit);
     524
     525    } while (false); // End extra initialization.
     526
     527    Frame frame(cpu.fp(), context.stack());
     528    ASSERT(!(context.fp<uintptr_t>() & 0x7));
     529
     530#if USE(JSVALUE64)
     531    ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber);
     532    ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask);
     533#endif
     534
     535    // Do all data format conversions and store the results into the stack.
     536    // Note: we need to recover values before restoring callee save registers below
     537    // because the recovery may rely on values in some of callee save registers.
     538
     539    int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
     540    size_t numberOfOperands = operands.size();
     541    size_t numUndefinedOperandSpans = undefinedOperandSpans.size();
     542
     543    size_t nextUndefinedSpanIndex = 0;
     544    size_t nextUndefinedOperandIndex = numberOfOperands;
     545    if (numUndefinedOperandSpans)
     546        nextUndefinedOperandIndex = undefinedOperandSpans[nextUndefinedSpanIndex].firstIndex;
     547
     548    JSValue undefined = jsUndefined();
     549    for (size_t spanIndex = 0; spanIndex < numUndefinedOperandSpans; ++spanIndex) {
     550        auto& span = undefinedOperandSpans[spanIndex];
     551        int firstOffset = span.minOffset;
     552        int lastOffset = firstOffset + span.numberOfRegisters;
     553
     554        for (int offset = firstOffset; offset < lastOffset; ++offset)
     555            frame.setOperand(offset, undefined);
     556    }
     557
     558    for (size_t index = 0; index < numberOfOperands; ++index) {
     559        const ValueRecovery& recovery = operands[index];
     560        VirtualRegister reg = operands.virtualRegisterForIndex(index);
     561
     562        if (UNLIKELY(index == nextUndefinedOperandIndex)) {
     563            index += undefinedOperandSpans[nextUndefinedSpanIndex++].numberOfRegisters - 1;
     564            if (nextUndefinedSpanIndex < numUndefinedOperandSpans)
     565                nextUndefinedOperandIndex = undefinedOperandSpans[nextUndefinedSpanIndex].firstIndex;
     566            else
     567                nextUndefinedOperandIndex = numberOfOperands;
     568            continue;
     569        }
     570
     571        if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
     572            continue;
     573
     574        int operand = reg.offset();
     575
     576        switch (recovery.technique()) {
     577        case DisplacedInJSStack:
     578            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
     579            break;
     580
     581        case InFPR:
     582            frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
     583            break;
     584
     585#if USE(JSVALUE64)
     586        case InGPR:
     587            frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
     588            break;
     589#else
     590        case InPair:
     591            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
     592            break;
     593#endif
     594
     595        case UnboxedCellInGPR:
     596            frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
     597            break;
     598
     599        case CellDisplacedInJSStack:
     600            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell()));
     601            break;
     602
     603#if USE(JSVALUE32_64)
     604        case UnboxedBooleanInGPR:
     605            frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
     606            break;
     607#endif
     608
     609        case BooleanDisplacedInJSStack:
     610#if USE(JSVALUE64)
     611            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
     612#else
     613            frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload()));
     614#endif
     615            break;
     616
     617        case UnboxedInt32InGPR:
     618            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
     619            break;
     620
     621        case Int32DisplacedInJSStack:
     622            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32()));
     623            break;
     624
     625#if USE(JSVALUE64)
     626        case UnboxedInt52InGPR:
     627            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
     628            break;
     629
     630        case Int52DisplacedInJSStack:
     631            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52()));
     632            break;
     633
     634        case UnboxedStrictInt52InGPR:
     635            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
     636            break;
     637
     638        case StrictInt52DisplacedInJSStack:
     639            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52()));
     640            break;
     641#endif
     642
     643        case UnboxedDoubleInFPR:
     644            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
     645            break;
     646
     647        case DoubleDisplacedInJSStack:
     648            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble())));
     649            break;
     650
     651        case Constant:
     652            frame.setOperand(operand, recovery.constant());
     653            break;
     654
     655        case DirectArgumentsThatWereNotCreated:
     656        case ClonedArgumentsThatWereNotCreated:
     657            // Don't do this, yet.
     658            break;
     659
     660        default:
     661            RELEASE_ASSERT_NOT_REACHED();
     662            break;
     663        }
     664    }
     665
     666    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
     667    restoreCalleeSavesFor(context, codeBlock);
     668    saveCalleeSavesFor(context, baselineCodeBlock);
     669
     670#if USE(JSVALUE64)
     671    cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber;
     672    cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther;
     673#endif
     674
     675    if (exit.isExceptionHandler())
     676        copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
     677
      678    // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
      679    // recoveries don't recursively refer to each other. But we don't try to assume that they only
      680    // refer to certain ranges of locals. Hence, we need to do this here, once the stack is sensible.
      681    // Note that we also roughly assume that the arguments might still be materialized outside of their
      682    // inline call frame scope - but for now the DFG wouldn't do that.
     683
     684    DFG::emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
     685
     686    // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
     687    // that all new calls into this code will go to the new JIT, so the execute
     688    // counter only affects call frames that performed OSR exit and call frames
     689    // that were still executing the old JIT at the time of another call frame's
     690    // OSR exit. We want to ensure that the following is true:
     691    //
      692    // (a) Code that performs an OSR exit gets a chance to reenter optimized
      693    //     code eventually, since optimized code is faster. But we don't
      694    //     want to do such reentry too aggressively (see (c) below).
     695    //
     696    // (b) If there is code on the call stack that is still running the old
     697    //     JIT's code and has never OSR'd, then it should get a chance to
     698    //     perform OSR entry despite the fact that we've exited.
     699    //
      700    // (c) Code that performs an OSR exit should not immediately retry OSR
     701    //     entry, since both forms of OSR are expensive. OSR entry is
     702    //     particularly expensive.
     703    //
     704    // (d) Frequent OSR failures, even those that do not result in the code
     705    //     running in a hot loop, result in recompilation getting triggered.
     706    //
     707    // To ensure (c), we'd like to set the execute counter to
     708    // counterValueForOptimizeAfterWarmUp(). This seems like it would endanger
     709    // (a) and (b), since then every OSR exit would delay the opportunity for
     710    // every call frame to perform OSR entry. Essentially, if OSR exit happens
     711    // frequently and the function has few loops, then the counter will never
     712    // become non-negative and OSR entry will never be triggered. OSR entry
     713    // will only happen if a loop gets hot in the old JIT, which does a pretty
     714    // good job of ensuring (a) and (b). But that doesn't take care of (d),
     715    // since each speculation failure would reset the execute counter.
     716    // So we check here if the number of speculation failures is significantly
      717    // larger than the number of successes (we want a 90% success rate), and if
     718    // there have been a large enough number of failures. If so, we set the
     719    // counter to 0; otherwise we set the counter to
     720    // counterValueForOptimizeAfterWarmUp().
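    // Illustration with assumed numbers (not the shipped thresholds): at a
    // 90% target success rate, 50 exits against 1000 successful executions
    // would leave the counter at counterValueForOptimizeAfterWarmUp(), while
    // 300 exits against the same 1000 successes would zero the counter and
    // let the check below consider reoptimizing.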
     721
     722    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
     723        triggerReoptimizationNow(baselineCodeBlock, &exit);
     724
     725    reifyInlinedCallFrames(context, baselineCodeBlock, exit);
     726    adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
     727}
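
Everything the recovery loop above stores goes through a Probe::Frame keyed off the frame pointer. Here is a minimal sketch of that addressing scheme, with simplified types and an invented name (the real Frame in ProbeFrame.h routes stores through the Probe::Stack page cache, so values reach the machine stack only when the probe returns):

    #include <cstdint>

    struct SketchFrame {
        explicit SketchFrame(uint8_t* framePointer) : m_fp(framePointer) { }

        // "operand" is a VirtualRegister offset in units of 8-byte slots
        // relative to the call frame pointer, as in frame.setOperand() above.
        template<typename T>
        void setOperand(int operand, T value)
        {
            static_assert(sizeof(T) <= 8, "an operand is one 64-bit slot");
            *reinterpret_cast<T*>(m_fp + static_cast<intptr_t>(operand) * 8) = value;
        }

    private:
        uint8_t* m_fp; // the callee's frame pointer
    };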
     728
     729static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
     730{
     731    auto& cpu = context.cpu;
     732    Frame frame(cpu.fp(), context.stack());
     733
     734    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
     735    // in presence of inlined tail calls.
     736    // https://bugs.webkit.org/show_bug.cgi?id=147511
     737    ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT);
     738    frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
     739
     740    const CodeOrigin* codeOrigin;
     741    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
     742        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     743        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
     744        InlineCallFrame::Kind trueCallerCallKind;
     745        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
     746        void* callerFrame = cpu.fp();
     747
     748        if (!trueCaller) {
     749            ASSERT(inlineCallFrame->isTail());
     750            void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
     751            frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
     752            callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
     753        } else {
     754            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
     755            unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
     756            void* jumpTarget = nullptr;
     757
     758            switch (trueCallerCallKind) {
     759            case InlineCallFrame::Call:
     760            case InlineCallFrame::Construct:
     761            case InlineCallFrame::CallVarargs:
     762            case InlineCallFrame::ConstructVarargs:
     763            case InlineCallFrame::TailCall:
     764            case InlineCallFrame::TailCallVarargs: {
     765                CallLinkInfo* callLinkInfo =
     766                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
     767                RELEASE_ASSERT(callLinkInfo);
     768
     769                jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
     770                break;
     771            }
     772
     773            case InlineCallFrame::GetterCall:
     774            case InlineCallFrame::SetterCall: {
     775                StructureStubInfo* stubInfo =
     776                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
     777                RELEASE_ASSERT(stubInfo);
     778
     779                jumpTarget = stubInfo->doneLocation().executableAddress();
     780                break;
     781            }
     782
     783            default:
     784                RELEASE_ASSERT_NOT_REACHED();
     785            }
     786
     787            if (trueCaller->inlineCallFrame)
     788                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     789
     790            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
     791        }
     792
     793        frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
     794
     795        // Restore the inline call frame's callee save registers.
      796        // If this inlined frame is a tail call that will return to the original caller, we need to
     797        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
     798        saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
     799
     800        if (!inlineCallFrame->isVarargs())
     801            frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
     802        ASSERT(callerFrame);
     803        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
     804#if USE(JSVALUE64)
     805        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     806        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
     807        if (!inlineCallFrame->isClosureCall)
     808            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
     809#else // USE(JSVALUE64) // so this is the 32-bit part
     810        Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
     811        uint32_t locationBits = CallSiteIndex(instruction).bits();
     812        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
     813        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
     814        if (!inlineCallFrame->isClosureCall)
     815            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
     816#endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
     817    }
     818
      819    // We don't need to set the top-level code origin if we only did inline tail calls.
     820    if (codeOrigin) {
     821#if USE(JSVALUE64)
     822        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     823#else
     824        Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
     825        uint32_t locationBits = CallSiteIndex(instruction).bits();
     826#endif
     827        frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits);
     828    }
     829}
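
Both places above that compute a caller or inlinee frame address use the same arithmetic, sketched here as a hypothetical helper:

    #include <cstdint>

    // An inlined call frame occupies a fixed slice of the physical frame,
    // stackOffset virtual registers (8-byte EncodedJSValues) away from the
    // machine frame pointer.
    static uint8_t* inlinedFrameAddress(uint8_t* machineFramePointer, int stackOffset)
    {
        return machineFramePointer + static_cast<intptr_t>(stackOffset) * sizeof(uint64_t);
    }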
     830
     831static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
     832{
     833    OSRExitState* exitState = exit.exitState.get();
     834
     835    WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
     836    vm.heap.writeBarrier(baselineCodeBlock);
     837
     838    // We barrier all inlined frames -- and not just the current inline stack --
     839    // because we don't know which inlined function owns the value profile that
     840    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
     841    // a and b are inlined, we might exit inside b due to a bad value loaded
     842    // from a.
     843    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
     844    // the value profile.
     845    InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
     846    if (inlineCallFrames) {
     847        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
     848            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
     849    }
     850
     851    if (exit.m_codeOrigin.inlineCallFrame)
     852        context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     853
     854    void* jumpTarget = exitState->jumpTarget;
     855    ASSERT(jumpTarget);
     856
     857    if (exit.isExceptionHandler()) {
     858        // Since we're jumping to op_catch, we need to set callFrameForCatch.
     859        vm.callFrameForCatch = context.fp<ExecState*>();
     860    }
     861
     862    vm.topCallFrame = context.fp<ExecState*>();
     863    context.pc() = jumpTarget;
     864}
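
Note that adjustAndJumpToTarget never executes a branch itself: redirecting execution means editing the register state saved in the probe context, which the probe trampoline restores on return. The idea in miniature, with assumed simplified types:

    // Sketch only; the real code assigns through context.fp() and context.pc().
    struct SketchCPUState {
        void* pc;
        void* fp;
    };

    static void redirectOnProbeReturn(SketchCPUState& saved, void* jumpTarget, void* newFramePointer)
    {
        saved.fp = newFramePointer; // possibly shifted into an inline call frame
        saved.pc = jumpTarget;      // baseline code resumes here, not at the exit site
    }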
     865
     866static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
     867{
     868    ExecState* exec = context.fp<ExecState*>();
     869    CodeBlock* codeBlock = exec->codeBlock();
     870    CodeBlock* alternative = codeBlock->alternative();
     871    ExitKind kind = exit.m_kind;
     872    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
     873
     874    dataLog("Speculation failure in ", *codeBlock);
     875    dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
     876    if (alternative) {
     877        dataLog(
     878            "executeCounter = ", alternative->jitExecuteCounter(),
     879            ", reoptimizationRetryCounter = ", alternative->reoptimizationRetryCounter(),
     880            ", optimizationDelayCounter = ", alternative->optimizationDelayCounter());
     881    } else
     882        dataLog("no alternative code block (i.e. we've been jettisoned)");
     883    dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
     884    dataLog("    GPRs at time of exit:");
     885    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
     886        GPRReg gpr = GPRInfo::toRegister(i);
     887        dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
     888    }
     889    dataLog("\n");
     890    dataLog("    FPRs at time of exit:");
     891    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
     892        FPRReg fpr = FPRInfo::toRegister(i);
     893        dataLog(" ", context.fprName(fpr), ":");
     894        uint64_t bits = context.fpr<uint64_t>(fpr);
     895        double value = context.fpr(fpr);
      896        dataLogF("%llx:%lf", static_cast<unsigned long long>(bits), value);
     897    }
     898    dataLog("\n");
     899}
     900
     901// JIT based OSR Exit.
    43902
    44903OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h

    r222009 r222871  
    2929
    3030#include "DFGOSRExitBase.h"
     31#include "DFGVariableEventStream.h"
    3132#include "GPRInfo.h"
    3233#include "MacroAssembler.h"
     
    3435#include "Operands.h"
    3536#include "ValueRecovery.h"
     37#include <wtf/RefPtr.h>
    3638
    3739namespace JSC {
    3840
     41class ArrayProfile;
    3942class CCallHelpers;
     43
     44namespace Probe {
     45class Context;
     46} // namespace Probe
     47
     48namespace Profiler {
     49class OSRExit;
     50} // namespace Profiler
    4051
    4152namespace DFG {
     
    92103};
    93104
     105enum class ExtraInitializationLevel;
     106
     107struct OSRExitState : RefCounted<OSRExitState> {
     108    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile)
     109        : exit(exit)
     110        , codeBlock(codeBlock)
     111        , baselineCodeBlock(baselineCodeBlock)
     112        , operands(operands)
     113        , undefinedOperandSpans(undefinedOperandSpans)
     114        , recovery(recovery)
     115        , stackPointerOffset(stackPointerOffset)
     116        , activeThreshold(activeThreshold)
     117        , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
     118        , jumpTarget(jumpTarget)
     119        , arrayProfile(arrayProfile)
     120    { }
     121
     122    OSRExitBase& exit;
     123    CodeBlock* codeBlock;
     124    CodeBlock* baselineCodeBlock;
     125    Operands<ValueRecovery> operands;
     126    Vector<UndefinedOperandSpan> undefinedOperandSpans;
     127    SpeculationRecovery* recovery;
     128    ptrdiff_t stackPointerOffset;
     129    uint32_t activeThreshold;
     130    double memoryUsageAdjustedThreshold;
     131    void* jumpTarget;
     132    ArrayProfile* arrayProfile;
     133
     134    ExtraInitializationLevel extraInitializationLevel;
     135    Profiler::OSRExit* profilerExit { nullptr };
     136};
     137
    94138// === OSRExit ===
    95139//
     
    100144
    101145    static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;
     146    static void executeOSRExit(Probe::Context&);
    102147
    103148    unsigned m_patchableCodeOffset { 0 };
    104149   
    105150    MacroAssemblerCodeRef m_code;
     151
     152    RefPtr<OSRExitState> exitState;
    106153   
    107154    JSValueSource m_jsValueSource;
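
OSRExitState exists so a given exit pays the decode cost only once: the first firing builds the state, and later firings reuse it through the RefPtr. Here is that lazy-caching shape as a self-contained sketch, with std::shared_ptr and invented names standing in for the WTF types:

    #include <memory>

    struct SketchExitState {
        void* jumpTarget { nullptr };
    };

    struct SketchExit {
        std::shared_ptr<SketchExitState> exitState; // stands in for RefPtr<OSRExitState>
    };

    static void* slowlyComputeJumpTarget() { return nullptr; } // placeholder for the decode work

    static void* jumpTargetFor(SketchExit& exit)
    {
        if (!exit.exitState) {
            // First firing of this exit: build and cache the state.
            exit.exitState = std::make_shared<SketchExitState>();
            exit.exitState->jumpTarget = slowlyComputeJumpTarget();
        }
        return exit.exitState->jumpTarget;
    }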
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp

    r222791 r222871  
    4040
    4141namespace JSC { namespace DFG {
     42
     43MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
     44{
     45    MacroAssembler jit;
     46    jit.probe(OSRExit::executeOSRExit, vm);
     47    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
     48    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
     49}
    4250
    4351MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
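
The new thunk is nothing but a probe whose handler is OSRExit::executeOSRExit. The same mechanism in a usage sketch, assuming only the Probe API this patch builds on (the handler name is invented):

    // Any function of type void(Probe::Context&) can be installed as a probe.
    // The emitted code spills all registers into the Context, calls the
    // handler, then restores whatever (possibly modified) state it left.
    static void sketchProbeHandler(JSC::Probe::Context& context)
    {
        UNUSED_PARAM(context); // mutate context.cpu here to change the resumed state
    }

    // Inside a thunk generator, mirroring osrExitThunkGenerator() above:
    //     jit.probe(sketchProbeHandler, vm);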
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.h

    r222009 r222871  
    11/*
    2  * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3636namespace DFG {
    3737
     38MacroAssemblerCodeRef osrExitThunkGenerator(VM*);
    3839MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
    3940MacroAssemblerCodeRef osrEntryThunkGenerator(VM*);
  • trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp

    r209764 r222871  
    11/*
    2  * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    9292} // namespace
    9393
    94 bool VariableEventStream::tryToSetConstantRecovery(ValueRecovery& recovery, MinifiedNode* node) const
     94static bool tryToSetConstantRecovery(ValueRecovery& recovery, MinifiedNode* node)
    9595{
    9696    if (!node)
     
    115115}
    116116
    117 void VariableEventStream::reconstruct(
     117template<VariableEventStream::ReconstructionStyle style>
     118unsigned VariableEventStream::reconstruct(
    118119    CodeBlock* codeBlock, CodeOrigin codeOrigin, MinifiedGraph& graph,
    119     unsigned index, Operands<ValueRecovery>& valueRecoveries) const
     120    unsigned index, Operands<ValueRecovery>& valueRecoveries, Vector<UndefinedOperandSpan>* undefinedOperandSpans) const
    120121{
    121122    ASSERT(codeBlock->jitType() == JITCode::DFGJIT);
    122123    CodeBlock* baselineCodeBlock = codeBlock->baselineVersion();
    123    
     124
    124125    unsigned numVariables;
     126    static const unsigned invalidIndex = std::numeric_limits<unsigned>::max();
     127    unsigned firstUndefined = invalidIndex;
     128    bool firstUndefinedIsArgument = false;
     129
     130    auto flushUndefinedOperandSpan = [&] (unsigned i) {
     131        if (firstUndefined == invalidIndex)
     132            return;
     133        int firstOffset = valueRecoveries.virtualRegisterForIndex(firstUndefined).offset();
     134        int lastOffset = valueRecoveries.virtualRegisterForIndex(i - 1).offset();
     135        int minOffset = std::min(firstOffset, lastOffset);
     136        undefinedOperandSpans->append({ firstUndefined, minOffset, i - firstUndefined });
     137        firstUndefined = invalidIndex;
     138    };
     139    auto recordUndefinedOperand = [&] (unsigned i) {
      140        // We want to separate the span of arguments from the span of locals even if they have adjacent operand indexes.
     141        if (firstUndefined != invalidIndex && firstUndefinedIsArgument != valueRecoveries.isArgument(i))
     142            flushUndefinedOperandSpan(i);
     143
     144        if (firstUndefined == invalidIndex) {
     145            firstUndefined = i;
     146            firstUndefinedIsArgument = valueRecoveries.isArgument(i);
     147        }
     148    };
     149
    125150    if (codeOrigin.inlineCallFrame)
    126151        numVariables = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame)->m_numCalleeLocals + VirtualRegister(codeOrigin.inlineCallFrame->stackOffset).toLocal() + 1;
     
    137162                VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
    138163        }
    139         return;
     164        return numVariables;
    140165    }
    141166   
     
    192217        if (source.isTriviallyRecoverable()) {
    193218            valueRecoveries[i] = source.valueRecovery();
     219            if (style == ReconstructionStyle::Separated) {
     220                if (valueRecoveries[i].isConstant() && valueRecoveries[i].constant() == jsUndefined())
     221                    recordUndefinedOperand(i);
     222                else
     223                    flushUndefinedOperandSpan(i);
     224            }
    194225            continue;
    195226        }
     
    200231        if (!info.alive) {
    201232            valueRecoveries[i] = ValueRecovery::constant(jsUndefined());
    202             continue;
    203         }
    204 
    205         if (tryToSetConstantRecovery(valueRecoveries[i], node))
    206             continue;
     233            if (style == ReconstructionStyle::Separated)
     234                recordUndefinedOperand(i);
     235            continue;
     236        }
     237
     238        if (tryToSetConstantRecovery(valueRecoveries[i], node)) {
     239            if (style == ReconstructionStyle::Separated) {
     240                if (node->hasConstant() && node->constant() == jsUndefined())
     241                    recordUndefinedOperand(i);
     242                else
     243                    flushUndefinedOperandSpan(i);
     244            }
     245            continue;
     246        }
    207247       
    208248        ASSERT(info.format != DataFormatNone);
    209        
     249        if (style == ReconstructionStyle::Separated)
     250            flushUndefinedOperandSpan(i);
     251
    210252        if (info.filled) {
    211253            if (info.format == DataFormatDouble) {
     
    226268            ValueRecovery::displacedInJSStack(static_cast<VirtualRegister>(info.u.virtualReg), info.format);
    227269    }
     270    if (style == ReconstructionStyle::Separated)
     271        flushUndefinedOperandSpan(operandSources.size());
     272
     273    return numVariables;
     274}
     275
     276unsigned VariableEventStream::reconstruct(
     277    CodeBlock* codeBlock, CodeOrigin codeOrigin, MinifiedGraph& graph,
     278    unsigned index, Operands<ValueRecovery>& valueRecoveries) const
     279{
     280    return reconstruct<ReconstructionStyle::Combined>(codeBlock, codeOrigin, graph, index, valueRecoveries, nullptr);
     281}
     282
     283unsigned VariableEventStream::reconstruct(
     284    CodeBlock* codeBlock, CodeOrigin codeOrigin, MinifiedGraph& graph,
     285    unsigned index, Operands<ValueRecovery>& valueRecoveries, Vector<UndefinedOperandSpan>* undefinedOperandSpans) const
     286{
     287    return reconstruct<ReconstructionStyle::Separated>(codeBlock, codeOrigin, graph, index, valueRecoveries, undefinedOperandSpans);
    228288}
    229289
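
The Separated reconstruction style exists so executeOSRExit can bulk-fill runs of undefined operands instead of dispatching a ValueRecovery per slot. The coalescing idea as a minimal, self-contained sketch (the real code also records minOffset and keeps argument spans separate from local spans):

    #include <vector>

    struct SketchSpan {
        unsigned firstIndex;
        unsigned numberOfRegisters;
    };

    static std::vector<SketchSpan> coalesceUndefined(const std::vector<bool>& isUndefined)
    {
        std::vector<SketchSpan> spans;
        for (unsigned i = 0; i < isUndefined.size(); ++i) {
            if (!isUndefined[i])
                continue;
            unsigned first = i;
            while (i < isUndefined.size() && isUndefined[i])
                ++i; // extend the span over the whole undefined run
            spans.push_back({ first, i - first });
        }
        return spans;
    }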
  • trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.h

    r218794 r222871  
    11/*
    2  * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3636namespace JSC { namespace DFG {
    3737
     38struct UndefinedOperandSpan {
     39    unsigned firstIndex;
     40    int minOffset;
     41    unsigned numberOfRegisters;
     42};
     43
    3844class VariableEventStream : public Vector<VariableEvent> {
    3945public:
     
    4349    }
    4450   
    45     void reconstruct(
    46         CodeBlock*, CodeOrigin, MinifiedGraph&,
    47         unsigned index, Operands<ValueRecovery>&) const;
     51    unsigned reconstruct(CodeBlock*, CodeOrigin, MinifiedGraph&, unsigned index, Operands<ValueRecovery>&) const;
     52    unsigned reconstruct(CodeBlock*, CodeOrigin, MinifiedGraph&, unsigned index, Operands<ValueRecovery>&, Vector<UndefinedOperandSpan>*) const;
    4853
    4954private:
    50     bool tryToSetConstantRecovery(ValueRecovery&, MinifiedNode*) const;
    51    
     55    enum class ReconstructionStyle {
     56        Combined,
     57        Separated
     58    };
     59    template<ReconstructionStyle style>
     60    unsigned reconstruct(
     61        CodeBlock*, CodeOrigin, MinifiedGraph&,
     62        unsigned index, Operands<ValueRecovery>&, Vector<UndefinedOperandSpan>*) const;
     63
    5264    void logEvent(const VariableEvent&);
    5365};
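
The header follows a common shape: one private template does the work, and two public overloads pin the template parameter. In miniature, with invented names:

    #include <vector>

    enum class SketchStyle { Combined, Separated };

    template<SketchStyle style>
    static unsigned reconstructImpl(std::vector<unsigned>* spans)
    {
        if (style == SketchStyle::Separated)
            spans->push_back(0); // only the Separated flavor records spans
        return 0;                // stands in for the returned numVariables
    }

    static unsigned sketchReconstruct()
    {
        return reconstructImpl<SketchStyle::Combined>(nullptr);
    }

    static unsigned sketchReconstruct(std::vector<unsigned>* spans)
    {
        return reconstructImpl<SketchStyle::Separated>(spans);
    }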
  • trunk/Source/JavaScriptCore/profiler/ProfilerOSRExit.h

    r222009 r222871  
    11/*
    2  * Copyright (C) 2012 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4444    uint64_t* counterAddress() { return &m_counter; }
    4545    uint64_t count() const { return m_counter; }
    46    
     46    void incCount() { m_counter++; }
     47
    4748    JSValue toJS(ExecState*) const;
    4849
  • trunk/Source/JavaScriptCore/runtime/JSCJSValue.h

    r222009 r222871  
    22 *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
    33 *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
    4  *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
     4 *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
    55 *
    66 *  This library is free software; you can redistribute it and/or
     
    345345    int32_t payload() const;
    346346
    347 #if !ENABLE(JIT)
    348     // This should only be used by the LLInt C Loop interpreter who needs
    349     // synthesize JSValue from its "register"s holding tag and payload
    350     // values.
      347    // This should only be used by the LLInt C Loop interpreter and OSRExit code, which need
      348    // to synthesize a JSValue from "register"s holding tag and payload values.
    351349    explicit JSValue(int32_t tag, int32_t payload);
    352 #endif
    353350
    354351#elif USE(JSVALUE64)
  • trunk/Source/JavaScriptCore/runtime/JSCJSValueInlines.h

    r222009 r222871  
    342342}
    343343
    344 #if !ENABLE(JIT)
     344#if USE(JSVALUE32_64)
    345345inline JSValue::JSValue(int32_t tag, int32_t payload)
    346346{
  • trunk/Source/JavaScriptCore/runtime/Options.h

    r222827 r222871  
    173173    v(bool, verboseFTLCompilation, false, Normal, nullptr) \
    174174    v(bool, logCompilationChanges, false, Normal, nullptr) \
     175    v(bool, useProbeOSRExit, false, Normal, nullptr) \
    175176    v(bool, printEachOSRExit, false, Normal, nullptr) \
    176177    v(bool, validateGraph, false, Normal, nullptr) \
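
Because the probe path is opt-in, callers branch on the option at runtime. A sketch of that gating check (the two linker functions are invented placeholders):

    // Invented placeholders standing in for the real thunk-selection code.
    static void linkProbeBasedExitThunk() { }
    static void linkJITGeneratedExitThunk() { }

    static void chooseOSRExitMechanism()
    {
        if (JSC::Options::useProbeOSRExit()) // the new option; defaults to false
            linkProbeBasedExitThunk();       // probe path: OSRExit::executeOSRExit()
        else
            linkJITGeneratedExitThunk();     // existing path: OSRExit::compileOSRExit()
    }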
  • trunk/Tools/ChangeLog

    r222856 r222871  
     12017-10-04  Mark Lam  <mark.lam@apple.com>
     2
     3        Add support for using Probe DFG OSR Exit behind a runtime flag.
     4        https://bugs.webkit.org/show_bug.cgi?id=177844
     5        <rdar://problem/34801425>
     6
     7        Reviewed by Saam Barati.
     8
      9        Enable --useProbeOSRExit=true for dfg-eager and ftl-no-cjit-validate-sampling-profiler
     10        test configurations.
     11
     12        * Scripts/run-jsc-stress-tests:
     13
    1142017-10-04  Jonathan Bedard  <jbedard@apple.com>
    215
  • trunk/Tools/Scripts/run-jsc-stress-tests

    r222827 r222871  
    458458B3O1_OPTIONS = ["--defaultB3OptLevel=1"]
    459459FTL_OPTIONS = ["--useFTLJIT=true"]
     460PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"]
    460461
    461462require_relative "webkitruby/jsc-stress-test-writer-#{$testWriter}"
     
    624625
    625626def runFTLNoCJITValidate(*optionalTestSpecificOptions)
    626     run("ftl-no-cjit-validate-sampling-profiler", "--validateGraph=true", "--useSamplingProfiler=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + optionalTestSpecificOptions))
     627    run("ftl-no-cjit-validate-sampling-profiler", "--validateGraph=true", "--useSamplingProfiler=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions))
    627628end
    628629
     
    640641
    641642def runDFGEager(*optionalTestSpecificOptions)
    642     run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + optionalTestSpecificOptions))
     643    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions))
    643644end
    644645