Changeset 222009 in webkit


Timestamp: Sep 13, 2017 9:21:05 PM
Author: mark.lam@apple.com
Message:

Rolling out r221832: Regresses Speedometer by ~4% and Dromaeo CSS YUI by ~20%.
https://bugs.webkit.org/show_bug.cgi?id=176888
<rdar://problem/34381832>

Not reviewed.

JSTests:

  • stress/op_mod-ConstVar.js:
  • stress/op_mod-VarConst.js:
  • stress/op_mod-VarVar.js:

Source/JavaScriptCore:

  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/MacroAssembler.cpp:

(JSC::stdFunctionCallback):

  • assembler/MacroAssemblerPrinter.cpp:

(JSC::Printer::printCallback):

  • assembler/ProbeContext.h:

(JSC::Probe:: const):
(JSC::Probe::Context::Context):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):
(JSC::Probe::Context::fpr):
(JSC::Probe::Context::gprName):
(JSC::Probe::Context::sprName):
(JSC::Probe::Context::fprName):
(JSC::Probe::Context::pc):
(JSC::Probe::Context::fp):
(JSC::Probe::Context::sp):
(JSC::Probe::CPUState::gpr const): Deleted.
(JSC::Probe::CPUState::spr const): Deleted.
(JSC::Probe::Context::arg): Deleted.
(JSC::Probe::Context::gpr const): Deleted.
(JSC::Probe::Context::spr const): Deleted.
(JSC::Probe::Context::fpr const): Deleted.

  • assembler/ProbeFrame.h: Removed.
  • assembler/ProbeStack.cpp:

(JSC::Probe::Page::Page):

  • assembler/ProbeStack.h:

(JSC::Probe::Page::get):
(JSC::Probe::Page::set):
(JSC::Probe::Page::physicalAddressFor):
(JSC::Probe::Stack::lowWatermark):
(JSC::Probe::Stack::get):
(JSC::Probe::Stack::set):

  • bytecode/ArithProfile.cpp:
  • bytecode/ArithProfile.h:
  • bytecode/ArrayProfile.h:

(JSC::ArrayProfile::observeArrayMode): Deleted.

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize): Deleted.

  • bytecode/CodeBlock.h:

(JSC::CodeBlock::addressOfOSRExitCounter):

  • bytecode/ExecutionCounter.h:

(JSC::ExecutionCounter::hasCrossedThreshold const): Deleted.
(JSC::ExecutionCounter::setNewThresholdForOSRExit): Deleted.

  • bytecode/MethodOfGettingAValueProfile.cpp:

(JSC::MethodOfGettingAValueProfile::reportValue): Deleted.

  • bytecode/MethodOfGettingAValueProfile.h:
  • dfg/DFGDriver.cpp:

(JSC::DFG::compileImpl):

  • dfg/DFGJITCode.cpp:

(JSC::DFG::JITCode::findPC):

  • dfg/DFGJITCode.h:
  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::link):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::OSRExit::setPatchableCodeOffset):
(JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const):
(JSC::DFG::OSRExit::codeLocationForRepatch const):
(JSC::DFG::OSRExit::correctJump):
(JSC::DFG::OSRExit::emitRestoreArguments):
(JSC::DFG::OSRExit::compileOSRExit):
(JSC::DFG::OSRExit::compileExit):
(JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure):
(JSC::DFG::jsValueFor): Deleted.
(JSC::DFG::restoreCalleeSavesFor): Deleted.
(JSC::DFG::saveCalleeSavesFor): Deleted.
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
(JSC::DFG::createDirectArgumentsDuringExit): Deleted.
(JSC::DFG::createClonedArgumentsDuringExit): Deleted.
(JSC::DFG::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::executeOSRExit): Deleted.
(JSC::DFG::reifyInlinedCallFrames): Deleted.
(JSC::DFG::adjustAndJumpToTarget): Deleted.
(JSC::DFG::printOSRExit): Deleted.

  • dfg/DFGOSRExit.h:

(JSC::DFG::OSRExitState::OSRExitState): Deleted.

  • dfg/DFGOSRExitCompilerCommon.cpp:
  • dfg/DFGOSRExitCompilerCommon.h:
  • dfg/DFGOperations.cpp:
  • dfg/DFGOperations.h:
  • dfg/DFGThunks.cpp:

(JSC::DFG::osrExitGenerationThunkGenerator):
(JSC::DFG::osrExitThunkGenerator): Deleted.

  • dfg/DFGThunks.h:
  • jit/AssemblyHelpers.cpp:

(JSC::AssemblyHelpers::debugCall):

  • jit/AssemblyHelpers.h:
  • jit/JITOperations.cpp:
  • jit/JITOperations.h:
  • profiler/ProfilerOSRExit.h:

(JSC::Profiler::OSRExit::incCount): Deleted.

  • runtime/JSCJSValue.h:
  • runtime/JSCJSValueInlines.h:
  • runtime/VM.h:
Location: trunk
Files: 1 deleted, 39 edited

  • trunk/JSTests/ChangeLog

    r221981 → r222009
    + 2017-09-13  Mark Lam  <mark.lam@apple.com>
    +
    +         Rolling out r221832: Regresses Speedometer by ~4% and Dromaeo CSS YUI by ~20%.
    +         https://bugs.webkit.org/show_bug.cgi?id=176888
    +         <rdar://problem/34381832>
    +
    +         Not reviewed.
    +
    +         * stress/op_mod-ConstVar.js:
    +         * stress/op_mod-VarConst.js:
    +         * stress/op_mod-VarVar.js:
    +
      2017-09-13  Ryan Haddad  <ryanhaddad@apple.com>
  • trunk/JSTests/stress/op_mod-ConstVar.js

    r221981 → r222009
    - //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
    + //@ runFTLNoCJIT("--timeoutMultiplier=1.5")

      // If all goes well, this test module will terminate silently. If not, it will print
  • trunk/JSTests/stress/op_mod-VarConst.js

    r221981 → r222009
    - //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
    + //@ runFTLNoCJIT("--timeoutMultiplier=1.5")

      // If all goes well, this test module will terminate silently. If not, it will print
  • trunk/JSTests/stress/op_mod-VarVar.js

    r221981 → r222009
    - //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
    + //@ runFTLNoCJIT("--timeoutMultiplier=1.5")

      // If all goes well, this test module will terminate silently. If not, it will print
  • trunk/Source/JavaScriptCore/ChangeLog

    r222007 → r222009
    + 2017-09-13  Mark Lam  <mark.lam@apple.com>
    + (new entry; identical to the full changeset message above, including the per-file and per-function list)
      2017-09-13  Yusuke Suzuki  <utatane.tea@gmail.com>
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r221832 → r222009
      FE10AAEE1F44D954009DEDC5 /* ProbeContext.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAED1F44D946009DEDC5 /* ProbeContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
      FE10AAF41F468396009DEDC5 /* ProbeContext.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */; };
    - FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */; };
      FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */; };
      FE1220281BE7F5910039E6F2 /* JITAddGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */; };

      FE10AAED1F44D946009DEDC5 /* ProbeContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeContext.h; sourceTree = "<group>"; };
      FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProbeContext.cpp; sourceTree = "<group>"; };
    - FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
      FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITAddGenerator.cpp; sourceTree = "<group>"; };
      FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITAddGenerator.h; sourceTree = "<group>"; };

      FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
      FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
    - FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */,
      FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
      FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,

      AD4937C81DDD0AAE0077C807 /* WebAssemblyModuleRecord.h in Headers */,
      AD2FCC2D1DB838FD00B3E736 /* WebAssemblyPrototype.h in Headers */,
    - FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */,
      AD2FCBF91DB58DAD00B3E736 /* WebAssemblyRuntimeErrorConstructor.h in Headers */,
      AD2FCC1E1DB59CB200B3E736 /* WebAssemblyRuntimeErrorConstructor.lut.h in Headers */,
  • trunk/Source/JavaScriptCore/assembler/MacroAssembler.cpp

    r221832 → r222009
      static void stdFunctionCallback(Probe::Context& context)
      {
    -     auto func = context.arg<const std::function<void(Probe::Context&)>*>();
    +     auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
          (*func)(context);
      }
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp

    r221832 → r222009
      {
          auto& out = WTF::dataFile();
    -     PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
    +     PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
          for (size_t i = 0; i < list.size(); i++) {
              auto& record = list[i];
  • trunk/Source/JavaScriptCore/assembler/ProbeContext.h

    r221832 → r222009
          inline double& fpr(FPRegisterID);

    -     template<typename T> T gpr(RegisterID) const;
    -     template<typename T> T spr(SPRegisterID) const;
    +     template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
    +     T gpr(RegisterID) const;
    +     template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
    +     T gpr(RegisterID) const;
    +     template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
    +     T spr(SPRegisterID) const;
    +     template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
    +     T spr(SPRegisterID) const;
          template<typename T> T fpr(FPRegisterID) const;

      }

    - template<typename T>
    + template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
      T CPUState::gpr(RegisterID id) const
      {
          CPUState* cpu = const_cast<CPUState*>(this);
    -     auto& from = cpu->gpr(id);
    -     typename std::remove_const<T>::type to { };
    -     std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
    -     return to;
    - }
    -
    - template<typename T>
    +     return static_cast<T>(cpu->gpr(id));
    + }
    +
    + template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
    + T CPUState::gpr(RegisterID id) const
    + {
    +     CPUState* cpu = const_cast<CPUState*>(this);
    +     return reinterpret_cast<T>(cpu->gpr(id));
    + }
    +
    + template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
      T CPUState::spr(SPRegisterID id) const
      {
          CPUState* cpu = const_cast<CPUState*>(this);
    -     auto& from = cpu->spr(id);
    -     typename std::remove_const<T>::type to { };
    -     std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
    -     return to;
    +     return static_cast<T>(cpu->spr(id));
    + }
    +
    + template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
    + T CPUState::spr(SPRegisterID id) const
    + {
    +     CPUState* cpu = const_cast<CPUState*>(this);
    +     return reinterpret_cast<T>(cpu->spr(id));
      }

          Context(State* state)
    -         : cpu(state->cpu)
    -         , m_state(state)
    +         : m_state(state)
    +         , arg(state->arg)
    +         , cpu(state->cpu)
          { }

    -     template<typename T>
    -     T arg() { return reinterpret_cast<T>(m_state->arg); }
    -
    -     uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
    -     uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
    -     double& fpr(FPRegisterID id) { return cpu.fpr(id); }
    -     const char* gprName(RegisterID id) { return cpu.gprName(id); }
    -     const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
    -     const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
    -
    -     template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
    -     template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
    -     template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
    -
    -     void*& pc() { return cpu.pc(); }
    -     void*& fp() { return cpu.fp(); }
    -     void*& sp() { return cpu.sp(); }
    -
    -     template<typename T> T pc() { return cpu.pc<T>(); }
    -     template<typename T> T fp() { return cpu.fp<T>(); }
    -     template<typename T> T sp() { return cpu.sp<T>(); }
    +     uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
    +     uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
    +     double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
    +     const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
    +     const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
    +     const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
    +
    +     void*& pc() { return m_state->cpu.pc(); }
    +     void*& fp() { return m_state->cpu.fp(); }
    +     void*& sp() { return m_state->cpu.sp(); }
    +
    +     template<typename T> T pc() { return m_state->cpu.pc<T>(); }
    +     template<typename T> T fp() { return m_state->cpu.fp<T>(); }
    +     template<typename T> T sp() { return m_state->cpu.sp<T>(); }

          Stack& stack()

          Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }

    -     CPUState& cpu;
    -
      private:
          State* m_state;
    + public:
    +     void* arg;
    +     CPUState& cpu;
    +
    + private:
          Stack m_stack;
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.cpp

    r221832 → r222009
      Page::Page(void* baseAddress)
          : m_baseLogicalAddress(baseAddress)
    -     , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
      {
          memcpy(&m_buffer, baseAddress, s_pageSize);
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.h

    r221832 → r222009
      T get(void* logicalAddress)
      {
    -     void* from = physicalAddressFor(logicalAddress);
    -     typename std::remove_const<T>::type to { };
    -     std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
    -     return to;
    - }
    - template<typename T>
    - T get(void* logicalBaseAddress, ptrdiff_t offset)
    - {
    -     return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
    +     return *physicalAddressFor<T*>(logicalAddress);
      }

      {
          m_dirtyBits |= dirtyBitFor(logicalAddress);
    -     void* to = physicalAddressFor(logicalAddress);
    -     std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
    - }
    - template<typename T>
    - void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
    - {
    -     set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
    +     *physicalAddressFor<T*>(logicalAddress) = value;
      }

      }

    - void* physicalAddressFor(void* logicalAddress)
    + template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
    + T physicalAddressFor(void* logicalAddress)
      {
    -     return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
    +     uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
    +     void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
    +     return reinterpret_cast<T>(physicalAddress);
      }

      void* m_baseLogicalAddress { nullptr };
      uintptr_t m_dirtyBits { 0 };
    - ptrdiff_t m_physicalAddressOffset;

      static constexpr size_t s_pageSize = 1024;

      Stack(Stack&& other);

    - void* lowWatermark()
    - {
    -     // We use the chunkAddress for the low watermark because we'll be doing write backs
    -     // to the stack in increments of chunks. Hence, we'll treat the lowest address of
    -     // the chunk as the low watermark of any given set address.
    -     return Page::chunkAddressFor(m_lowWatermark);
    - }
    + void* lowWatermark() { return m_lowWatermark; }

      template<typename T>
    - T get(void* address)
    + typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
      {
          Page* page = pageFor(address);
          return page->get<T>(address);
      }
    - template<typename T>
    - T get(void* logicalBaseAddress, ptrdiff_t offset)
    - {
    -     return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
    - }

    - template<typename T>
    + template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
      void set(void* address, T value)
      {

          page->set<T>(address, value);

    -     if (address < m_lowWatermark)
    -         m_lowWatermark = address;
    +     // We use the chunkAddress for the low watermark because we'll be doing write backs
    +     // to the stack in increments of chunks. Hence, we'll treat the lowest address of
    +     // the chunk as the low watermark of any given set address.
    +     void* chunkAddress = Page::chunkAddressFor(address);
    +     if (chunkAddress < m_lowWatermark)
    +         m_lowWatermark = chunkAddress;
      }
    +
      template<typename T>
    - void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
    + typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
      {
    -     set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
    +     Page* page = pageFor(address);
    +     return bitwise_cast<double>(page->get<uint64_t>(address));
    + }
    +
    + template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
    + void set(void* address, double value)
    + {
    +     set<uint64_t>(address, bitwise_cast<uint64_t>(value));
      }
  • trunk/Source/JavaScriptCore/bytecode/ArithProfile.cpp

    r221832 → r222009
      /*
    -  * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
    +  * Copyright (C) 2016 Apple Inc. All rights reserved.
       *
       * Redistribution and use in source and binary forms, with or without

      #if ENABLE(JIT)
    - // FIXME: This is being supplanted by observeResult(). Remove this one
    - // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
      void ArithProfile::emitObserveResult(CCallHelpers& jit, JSValueRegs regs, TagRegistersMode mode)
      {
  • trunk/Source/JavaScriptCore/bytecode/ArithProfile.h

    r221832 → r222009
      /*
    -  * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
    +  * Copyright (C) 2016 Apple Inc. All rights reserved.
       *
       * Redistribution and use in source and binary forms, with or without

          // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble) if it sees a
          // double. Sets NonNumber if it sees a non-number.
    -     // FIXME: This is being supplanted by observeResult(). Remove this one
    -     // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
          void emitObserveResult(CCallHelpers&, JSValueRegs, TagRegistersMode = HaveTagRegisters);
  • trunk/Source/JavaScriptCore/bytecode/ArrayProfile.h

    r221832 → r222009
      /*
    -  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
    +  * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
       *
       * Redistribution and use in source and binary forms, with or without

          void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);

    -     void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
          ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
          bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r221849 → r222009
      }

    - auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
    - {
    -     DFG::OSRExitBase& exit = exitState.exit;
    -     if (!exitKindMayJettison(exit.m_kind)) {
    -         // FIXME: We may want to notice that we're frequently exiting
    -         // at an op_catch that we didn't compile an entrypoint for, and
    -         // then trigger a reoptimization of this CodeBlock:
    -         // https://bugs.webkit.org/show_bug.cgi?id=175842
    -         return OptimizeAction::None;
    -     }
    -
    -     exit.m_count++;
    -     m_osrExitCounter++;
    -
    -     CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
    -     ASSERT(baselineCodeBlock == baselineAlternative());
    -     if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
    -         return OptimizeAction::ReoptimizeNow;
    -
    -     // We want to figure out if there's a possibility that we're in a loop. For the outermost
    -     // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
    -     // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
    -     // problem is the inlined functions, which might also have loops, but whose baseline versions
    -     // don't know where to look for the exit count. Figure out if those loops are severe enough
    -     // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
    -     // Otherwise, we should use the normal reoptimization trigger.
    -
    -     bool didTryToEnterInLoop = false;
    -     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
    -         if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
    -             didTryToEnterInLoop = true;
    -             break;
    -         }
    -     }
    -
    -     uint32_t exitCountThreshold = didTryToEnterInLoop
    -         ? exitCountThresholdForReoptimizationFromLoop()
    -         : exitCountThresholdForReoptimization();
    -
    -     if (m_osrExitCounter > exitCountThreshold)
    -         return OptimizeAction::ReoptimizeNow;
    -
    -     // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
    -     baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
    -     return OptimizeAction::None;
    - }
    -
      void CodeBlock::optimizeNextInvocation()
      {
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.h

    r221832 → r222009
      namespace JSC {

    - namespace DFG {
    - struct OSRExitState;
    - } // namespace DFG
    -
      class BytecodeLivenessAnalysis;
      class CodeBlockSet;

          void countOSRExit() { m_osrExitCounter++; }

    -     enum class OptimizeAction { None, ReoptimizeNow };
    -     OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
    -
    -     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    +     uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
    +
          static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
  • trunk/Source/JavaScriptCore/bytecode/ExecutionCounter.h

    r221832 r222009  
    1 1 /*
    2  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
    3 3  *
    4 4  * Redistribution and use in source and binary forms, with or without
     
    42 42 int32_t applyMemoryUsageHeuristicsAndConvertToInt(int32_t value, CodeBlock*);
    43 43 
    44 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    45 44 inline int32_t formattedTotalExecutionCount(float value)
    46 45 {
     
    59 58     void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
    60 59     bool checkIfThresholdCrossedAndSet(CodeBlock*);
    61     bool hasCrossedThreshold() const { return m_counter >= 0; }
    62 60     void setNewThreshold(int32_t threshold, CodeBlock*);
    63 61     void deferIndefinitely();
     
    65 63     void dump(PrintStream&) const;
    66 64    
    67     void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
    68     {
    69         m_activeThreshold = activeThreshold;
    70         m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
    71         m_totalCount = memoryUsageAdjustedThreshold;
    72     }
    73 
    74 65     static int32_t maximumExecutionCountsBetweenCheckpoints()
    75 66     {
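The removed setNewThresholdForOSRExit() and hasCrossedThreshold() above expose ExecutionCounter's convention: the counter is seeded with the negative threshold and counts upward, so the "has crossed" test is simply m_counter >= 0. A minimal sketch of that convention (simplified; the real class also tracks m_activeThreshold and m_totalCount):

```cpp
#include <cstdint>

// Counter starts at -threshold and counts up toward zero; crossing zero means
// the threshold has been reached. This keeps the hot-path check a single
// sign test.
struct ExecutionCounterSketch {
    int32_t m_counter { 0 };

    void setNewThreshold(int32_t threshold) { m_counter = -threshold; }
    void countExecution(int32_t n) { m_counter += n; }
    bool hasCrossedThreshold() const { return m_counter >= 0; }
};
```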
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp

    r221832 r222009  
    1 1 /*
    2  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
    3 3  *
    4 4  * Redistribution and use in source and binary forms, with or without
     
    47 47 }
    48 48 
    49 // FIXME: This is being supplanted by reportValue(). Remove this one when
    50 // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
    51 49 void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueRegs regs) const
    52 50 {
     
    77 75 }
    78 76 
    79 void MethodOfGettingAValueProfile::reportValue(JSValue value)
    80 {
    81     switch (m_kind) {
    82     case None:
    83         return;
    84 
    85     case Ready:
    86         *u.profile->specFailBucket(0) = JSValue::encode(value);
    87         return;
    88 
    89     case LazyOperand: {
    90         LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
    91 
    92         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
    93         LazyOperandValueProfile* profile =
    94             u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
    95         *profile->specFailBucket(0) = JSValue::encode(value);
    96         return;
    97     }
    98 
    99     case ArithProfileReady: {
    100         u.arithProfile->observeResult(value);
    101         return;
    102     } }
    103 
    104     RELEASE_ASSERT_NOT_REACHED();
    105 }
    106 
    107 77 } // namespace JSC
    108 78 
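The removed reportValue() above writes the encoded value straight into the profile's "spec fail bucket" from C++, where emitReportValue() emits JIT code to do the same store. A hedged sketch of the bucket idea (member names are stand-ins for JSC's specFailBucket(0)):

```cpp
#include <cstdint>
#include <optional>

// The exit path records the offending (encoded) value into a fixed profiling
// slot so the next compilation can see what type actually flowed through.
struct ValueProfileSketch {
    std::optional<uint64_t> m_specFailBucket0;

    void reportValue(uint64_t encodedValue) { m_specFailBucket0 = encodedValue; }
};
```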
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h

    r221832 r222009  
    1 1 /*
    2  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
    3 3  *
    4 4  * Redistribution and use in source and binary forms, with or without
     
    71 71    
    72 72     explicit operator bool() const { return m_kind != None; }
    73 
    74     // FIXME: emitReportValue is being supplanted by reportValue(). Remove this one when
    75     // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     73   
    76 74     void emitReportValue(CCallHelpers&, JSValueRegs) const;
    77     void reportValue(JSValue);
    78 
     75   
    79 76 private:
    80 77     enum Kind {
  • trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp

    r221832 r222009  
    1 1 /*
    2  * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
    3 3  *
    4 4  * Redistribution and use in source and binary forms, with or without
     
    90 90     // Make sure that any stubs that the DFG is going to use are initialized. We want to
    91 91     // make sure that all JIT code generation does finalization on the main thread.
    92     vm.getCTIStub(osrExitThunkGenerator);
     92    vm.getCTIStub(osrExitGenerationThunkGenerator);
    93 93     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
    94 94     vm.getCTIStub(linkCallThunkGenerator);
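The DFGDriver.cpp hunk swaps which thunk is pre-warmed (osrExitGenerationThunkGenerator instead of osrExitThunkGenerator), but the pattern is the same: getCTIStub() is a lazy, cached lookup, and calling it eagerly here keeps all stub generation on the main thread. A sketch of that caching pattern with hypothetical stand-in types:

```cpp
#include <map>

using Generator = int (*)();  // stand-in for a JIT thunk generator

static int sampleThunkGenerator() { return 42; }  // hypothetical generator

// Generate the stub once on first request, then reuse the cached result.
inline int getCTIStubSketch(std::map<Generator, int>& cache, Generator g)
{
    auto it = cache.find(g);
    if (it == cache.end())
        it = cache.emplace(g, g()).first;
    return it->second;
}
```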
  • trunk/Source/JavaScriptCore/dfg/DFGJITCode.cpp

    r221832 r222009  
    226 226 }
    227 227 
     228std::optional<CodeOrigin> JITCode::findPC(CodeBlock*, void* pc)
     229{
     230    for (OSRExit& exit : osrExit) {
     231        if (ExecutableMemoryHandle* handle = exit.m_code.executableMemory()) {
     232            if (handle->start() <= pc && pc < handle->end())
     233                return std::optional<CodeOrigin>(exit.m_codeOriginForExitProfile);
     234        }
     235    }
     236
     237    return std::nullopt;
     238}
     239
    228 240 void JITCode::finalizeOSREntrypoints()
    229 241 {
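The restored JITCode::findPC() above walks the OSR exit records and returns the code origin of the first exit whose generated code range contains the faulting pc. A simplified sketch of that range search (struct and field names are stand-ins for ExecutableMemoryHandle and CodeOrigin):

```cpp
#include <optional>
#include <vector>

// One record per OSR exit: the [start, end) range of its generated code and
// the code origin to report when a pc falls inside it.
struct ExitRangeSketch {
    const char* start;
    const char* end;
    int codeOrigin;
};

inline std::optional<int> findPCSketch(const std::vector<ExitRangeSketch>& exits, const char* pc)
{
    for (const ExitRangeSketch& exit : exits) {
        if (exit.start <= pc && pc < exit.end)
            return exit.codeOrigin;
    }
    return std::nullopt;
}
```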
  • trunk/Source/JavaScriptCore/dfg/DFGJITCode.h

    r221832 r222009  
    127 127     static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }
    128 128 
     129    std::optional<CodeOrigin> findPC(CodeBlock*, void* pc) override;
     130   
    129 131 private:
    130 132     friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

    r221832 r222009  
    86 86     }
    87 87    
    88     MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
    89     CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
    90 88     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
     89        OSRExit& exit = m_jitCode->osrExit[i];
    91 90         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
    92 91         JumpList& failureJumps = info.m_failureJumps;
     
    98 97         jitAssertHasValidCallFrame();
    99 98         store32(TrustedImm32(i), &vm()->osrExitIndex);
    100         Jump target = jump();
    101         addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
    102             linkBuffer.link(target, osrExitThunkLabel);
    103         });
     99        exit.setPatchableCodeOffset(patchableJump());
    104 100     }
    105 101 }
     
    308 304     }
    309 305    
     306    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitGenerationThunkGenerator);
     307    CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
    310 308     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
     309        OSRExit& exit = m_jitCode->osrExit[i];
    311 310         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
     311        linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
     312        exit.correctJump(linkBuffer);
    312 313         if (info.m_replacementSource.isSet()) {
    313 314             m_jitCode->common.jumpReplacements.append(JumpReplacement(
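The restored linking code above records each exit's patchable jump as a raw assembler offset (setPatchableCodeOffset) and later rewrites it into the final linked-code offset (correctJump). A sketch of that bookkeeping, modeling the LinkBuffer as a hypothetical assembler-offset to linked-offset table:

```cpp
#include <cstdint>
#include <map>

// The real correctJump() asks LinkBuffer::offsetOf(); here a map plays that
// role so the translation step is visible.
struct OSRExitSketch {
    uint32_t m_patchableCodeOffset { 0 };

    void setPatchableCodeOffset(uint32_t assemblerOffset)
    {
        m_patchableCodeOffset = assemblerOffset;
    }

    void correctJump(const std::map<uint32_t, uint32_t>& linkedOffsets)
    {
        m_patchableCodeOffset = linkedOffsets.at(m_patchableCodeOffset);
    }
};
```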
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp

    r221849 r222009  
    1 1 /*
    2  * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
    3 3  *
    4 4  * Redistribution and use in source and binary forms, with or without
     
    30 30 
    31 31 #include "AssemblyHelpers.h"
    32 #include "ClonedArguments.h"
    33 32 #include "DFGGraph.h"
    34 33 #include "DFGMayExit.h"
     34#include "DFGOSRExitCompilerCommon.h"
    35 35 #include "DFGOSRExitPreparation.h"
    36 36 #include "DFGOperations.h"
    37 37 #include "DFGSpeculativeJIT.h"
    38 #include "DirectArguments.h"
    39 #include "InlineCallFrame.h"
     38#include "FrameTracers.h"
    40 39 #include "JSCInlines.h"
    41 #include "JSCJSValue.h"
    42 40 #include "OperandsInlines.h"
    43 #include "ProbeContext.h"
    44 #include "ProbeFrame.h"
    45 41 
    46 42 namespace JSC { namespace DFG {
    47 
    48 using CPUState = Probe::CPUState;
    49 using Context = Probe::Context;
    50 using Frame = Probe::Frame;
    51 
    52 static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
    53 static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
    54 static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
    55 
    56 static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
    57 {
    58     if (source.isAddress()) {
    59         JSValue result;
    60         std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
    61         return result;
    62     }
    63 #if USE(JSVALUE64)
    64     return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
    65 #else
    66     if (source.hasKnownTag())
    67         return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
    68     return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
    69 #endif
    70 }
    71 
    72 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
    73 
    74 static_assert(is64Bit(), "we only support callee save registers on 64-bit");
    75 
    76 // Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
    77 static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
    78 {
    79     ASSERT(codeBlock);
    80 
    81     RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
    82     RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
    83     unsigned registerCount = calleeSaves->size();
    84 
    85     uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
    86     for (unsigned i = 0; i < registerCount; i++) {
    87         RegisterAtOffset entry = calleeSaves->at(i);
    88         if (dontRestoreRegisters.get(entry.reg()))
    89             continue;
    90         // The callee saved values come from the original stack, not the recovered stack.
    91         // Hence, we read the values directly from the physical stack memory instead of
    92         // going through context.stack().
    93         ASSERT(!(entry.offset() % sizeof(uintptr_t)));
    94         context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
    95     }
    96 }
    97 
    98 // Based on AssemblyHelpers::emitSaveCalleeSavesFor().
    99 static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
    100 {
    101     auto& stack = context.stack();
    102     ASSERT(codeBlock);
    103 
    104     RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
    105     RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
    106     unsigned registerCount = calleeSaves->size();
    107 
    108     for (unsigned i = 0; i < registerCount; i++) {
    109         RegisterAtOffset entry = calleeSaves->at(i);
    110         if (dontSaveRegisters.get(entry.reg()))
    111             continue;
    112         stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
    113     }
    114 }
    115 
    116 // Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
    117 static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
    118 {
    119     VM& vm = *context.arg<VM*>();
    120 
    121     RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
    122     RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
    123     unsigned registerCount = allCalleeSaves->size();
    124 
    125     VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
    126     uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
    127 
    128     // Restore all callee saves.
    129     for (unsigned i = 0; i < registerCount; i++) {
    130         RegisterAtOffset entry = allCalleeSaves->at(i);
    131         if (dontRestoreRegisters.get(entry.reg()))
    132             continue;
    133         size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
    134         if (entry.reg().isGPR())
    135             context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
    136         else
    137             context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
    138     }
    139 }
    140 
    141 // Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
    142 static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
    143 {
    144     VM& vm = *context.arg<VM*>();
    145     auto& stack = context.stack();
    146 
    147     VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
    148     void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
    149 
    150     RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
    151     RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
    152     unsigned registerCount = allCalleeSaves->size();
    153 
    154     for (unsigned i = 0; i < registerCount; i++) {
    155         RegisterAtOffset entry = allCalleeSaves->at(i);
    156         if (dontCopyRegisters.get(entry.reg()))
    157             continue;
    158         if (entry.reg().isGPR())
    159             stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
    160         else
    161             stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
    162     }
    163 }
    164 
    165 // Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
    166 static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
    167 {
    168     Frame frame(context.fp(), context.stack());
    169     ASSERT(codeBlock);
    170 
    171     RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
    172     RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
    173     unsigned registerCount = calleeSaves->size();
    174 
    175     RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
    176 
    177     for (unsigned i = 0; i < registerCount; i++) {
    178         RegisterAtOffset entry = calleeSaves->at(i);
    179         if (dontSaveRegisters.get(entry.reg()))
    180             continue;
    181 
    182         uintptr_t savedRegisterValue;
    183 
    184         if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
    185             savedRegisterValue = frame.get<uintptr_t>(entry.offset());
    186         else
    187             savedRegisterValue = context.gpr(entry.reg().gpr());
    188 
    189         frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
    190     }
    191 }
    192 #else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
    193 
    194 static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
    195 static void saveCalleeSavesFor(Context&, CodeBlock*) { }
    196 static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
    197 static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
    198 static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
    199 
    200 #endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
    201 
    202 static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
    203 {
    204     VM& vm = *context.arg<VM*>();
    205 
    206     ASSERT(vm.heap.isDeferred());
    207 
    208     if (inlineCallFrame)
    209         codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
    210 
    211     unsigned length = argumentCount - 1;
    212     unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
    213     DirectArguments* result = DirectArguments::create(
    214         vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
    215 
    216     result->callee().set(vm, result, callee);
    217 
    218     void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
    219     Frame frame(frameBase, context.stack());
    220     for (unsigned i = length; i--;)
    221         result->setIndexQuickly(vm, i, frame.argument(i));
    222 
    223     return result;
    224 }
    225 
    226 static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
    227 {
    228     VM& vm = *context.arg<VM*>();
    229     ExecState* exec = context.fp<ExecState*>();
    230 
    231     ASSERT(vm.heap.isDeferred());
    232 
    233     if (inlineCallFrame)
    234         codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
    235 
    236     unsigned length = argumentCount - 1;
    237     ClonedArguments* result = ClonedArguments::createEmpty(
    238         vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
    239 
    240     void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
    241     Frame frame(frameBase, context.stack());
    242     for (unsigned i = length; i--;)
    243         result->putDirectIndex(exec, i, frame.argument(i));
    244     return result;
    245 }
    246 43 
    247 44 OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
     
    260 57 }
    261 58 
    262 static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
    263 {
    264     Frame frame(context.fp(), context.stack());
    265 
     59void OSRExit::setPatchableCodeOffset(MacroAssembler::PatchableJump check)
     60{
     61    m_patchableCodeOffset = check.m_jump.m_label.m_offset;
     62}
     63
     64MacroAssembler::Jump OSRExit::getPatchableCodeOffsetAsJump() const
     65{
     66    return MacroAssembler::Jump(AssemblerLabel(m_patchableCodeOffset));
     67}
     68
     69CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* dfgCodeBlock) const
     70{
     71    return CodeLocationJump(dfgCodeBlock->jitCode()->dataAddressAtOffset(m_patchableCodeOffset));
     72}
     73
     74void OSRExit::correctJump(LinkBuffer& linkBuffer)
     75{
     76    MacroAssembler::Label label;
     77    label.m_label.m_offset = m_patchableCodeOffset;
     78    m_patchableCodeOffset = linkBuffer.offsetOf(label);
     79}
     80
     81void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecovery>& operands)
     82{
    266 83     HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
    267 84     for (size_t index = 0; index < operands.size(); ++index) {
     
    276 93         auto iter = alreadyAllocatedArguments.find(id);
    277 94         if (iter != alreadyAllocatedArguments.end()) {
    278             frame.setOperand(operand, frame.operand(iter->value));
     95            JSValueRegs regs = JSValueRegs::withTwoAvailableRegs(GPRInfo::regT0, GPRInfo::regT1);
     96            jit.loadValue(CCallHelpers::addressFor(iter->value), regs);
     97            jit.storeValue(regs, CCallHelpers::addressFor(operand));
    279 98             continue;
    280 99         }
    281 100 
    282 101         InlineCallFrame* inlineCallFrame =
    283             dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();
     102            jit.codeBlock()->jitCode()->dfg()->minifiedDFG.at(id)->inlineCallFrame();
    284 103 
    285 104         int stackOffset;
     
    289 108             stackOffset = 0;
    290 109 
    291         JSFunction* callee;
    292         if (!inlineCallFrame || inlineCallFrame->isClosureCall)
    293             callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
    294         else
    295             callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
    296 
    297         int32_t argumentCount;
    298         if (!inlineCallFrame || inlineCallFrame->isVarargs())
    299             argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset);
    300         else
    301             argumentCount = inlineCallFrame->argumentCountIncludingThis;
    302 
    303         JSCell* argumentsObject;
     110        if (!inlineCallFrame || inlineCallFrame->isClosureCall) {
     111            jit.loadPtr(
     112                AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee),
     113                GPRInfo::regT0);
     114        } else {
     115            jit.move(
     116                AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeRecovery.constant().asCell()),
     117                GPRInfo::regT0);
     118        }
     119
     120        if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
     121            jit.load32(
     122                AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCount),
     123                GPRInfo::regT1);
     124        } else {
     125            jit.move(
     126                AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis),
     127                GPRInfo::regT1);
     128        }
     129
     130        jit.setupArgumentsWithExecState(
     131            AssemblyHelpers::TrustedImmPtr(inlineCallFrame), GPRInfo::regT0, GPRInfo::regT1);
    304 132         switch (recovery.technique()) {
    305 133         case DirectArgumentsThatWereNotCreated:
    306             argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
     134            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateDirectArgumentsDuringExit)), GPRInfo::nonArgGPR0);
    307 135             break;
    308 136         case ClonedArgumentsThatWereNotCreated:
    309             argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
     137            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateClonedArgumentsDuringExit)), GPRInfo::nonArgGPR0);
    310 138             break;
    311 139         default:
     
    313 141             break;
    314 142         }
    315         frame.setOperand(operand, JSValue(argumentsObject));
     143        jit.call(GPRInfo::nonArgGPR0);
     144        jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand));
    316145
    317146        alreadyAllocatedArguments.add(id, operand);
     
    319148}
    320149
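Both versions of emitRestoreArguments() shown above share one trick: several operands can refer to the same phantom arguments node, so the first materialization is recorded in a map (alreadyAllocatedArguments) and later operands just copy it. A self-contained sketch of that deduplication, with plain ints standing in for MinifiedID and VirtualRegister:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// nodeIdPerOperand[i] names the phantom arguments node for operand i; the
// returned vector says which operand each slot should copy from (itself if it
// is the first materialization of that node).
inline std::vector<int> restoreArgumentsSketch(const std::vector<int>& nodeIdPerOperand)
{
    std::map<int, int> alreadyAllocatedArguments;
    std::vector<int> sourceOperand(nodeIdPerOperand.size());
    for (std::size_t index = 0; index < nodeIdPerOperand.size(); ++index) {
        int id = nodeIdPerOperand[index];
        auto iter = alreadyAllocatedArguments.find(id);
        if (iter != alreadyAllocatedArguments.end()) {
            sourceOperand[index] = iter->second;  // reuse the earlier materialization
            continue;
        }
        sourceOperand[index] = static_cast<int>(index);  // materialize fresh here
        alreadyAllocatedArguments.emplace(id, static_cast<int>(index));
    }
    return sourceOperand;
}
```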
    321 void OSRExit::executeOSRExit(Context& context)
    322 {
    323     VM& vm = *context.arg<VM*>();
    324     auto scope = DECLARE_THROW_SCOPE(vm);
    325 
    326     ExecState* exec = context.fp<ExecState*>();
    327     ASSERT(&exec->vm() == &vm);
    328 
    329     if (vm.callFrameForCatch) {
    330         exec = vm.callFrameForCatch;
    331         context.fp() = exec;
    332     }
     150void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec)
     151{
     152    VM* vm = &exec->vm();
     153    auto scope = DECLARE_THROW_SCOPE(*vm);
     154
     155    if (vm->callFrameForCatch)
     156        RELEASE_ASSERT(vm->callFrameForCatch == exec);
    333 157 
    334 158     CodeBlock* codeBlock = exec->codeBlock();
     
    338 162     // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't
    339 163     // really be profitable.
    340     DeferGCForAWhile deferGC(vm.heap);
    341 
    342     uint32_t exitIndex = vm.osrExitIndex;
    343     DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
    344     OSRExit& exit = dfgJITCode->osrExit[exitIndex];
    345 
    346     ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
     164    DeferGCForAWhile deferGC(vm->heap);
     165
     166    uint32_t exitIndex = vm->osrExitIndex;
     167    OSRExit& exit = codeBlock->jitCode()->dfg()->osrExit[exitIndex];
     168
     169    ASSERT(!vm->callFrameForCatch || exit.m_kind == GenericUnwind);
    347 170     EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());
    348 
    349     if (UNLIKELY(!exit.exitState)) {
    350         // We only need to execute this block once for each OSRExit record. The computed
    351         // results will be cached in the OSRExitState record for use of the rest of the
    352         // exit ramp code.
    353 
    354         // Ensure we have baseline codeBlocks to OSR exit to.
    355         prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
    356 
    357         CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
    358         ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
    359 
    360         // Compute the value recoveries.
    361         Operands<ValueRecovery> operands;
    362         dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands);
    363 
    364         SpeculationRecovery* recovery = nullptr;
    365         if (exit.m_recoveryIndex != UINT_MAX)
    366             recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
    367 
    368         int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
    369         double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
    370         ASSERT(adjustedThreshold > 0);
    371         adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
    372 
    373         CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
    374         Vector<BytecodeAndMachineOffset> decodedCodeMap;
    375         codeBlockForExit->jitCodeMap()->decode(decodedCodeMap);
    376 
    377         BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
    378 
    379         ASSERT(mapping);
    380         ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex);
    381 
    382         ptrdiff_t finalStackPointerOffset = codeBlockForExit->stackPointerOffset() * sizeof(Register);
    383 
    384         void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
    385 
    386         exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, recovery, finalStackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget));
    387 
    388         if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
    389             Profiler::Database& database = *vm.m_perBytecodeProfiler;
     171   
     172    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
     173
     174    // Compute the value recoveries.
     175    Operands<ValueRecovery> operands;
     176    codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands);
     177
     178    SpeculationRecovery* recovery = 0;
     179    if (exit.m_recoveryIndex != UINT_MAX)
     180        recovery = &codeBlock->jitCode()->dfg()->speculationRecovery[exit.m_recoveryIndex];
     181
     182    {
     183        CCallHelpers jit(codeBlock);
     184
     185        if (exit.m_kind == GenericUnwind) {
     186            // We are acting as a de facto op_catch because we arrive here from genericUnwind().
     187            // So, we must restore our call frame and stack pointer.
     188            jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(*vm);
     189            jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
     190        }
     191        jit.addPtr(
     192            CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
     193            GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
     194
     195        jit.jitAssertHasValidCallFrame();
     196
     197        if (UNLIKELY(vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
     198            Profiler::Database& database = *vm->m_perBytecodeProfiler;
    390 199             Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
    391 200 
     
    393 202                 exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
    394 203                 exit.m_kind, exit.m_kind == UncountableInvalidation);
    395             exit.exitState->profilerExit = profilerExit;
    396         }
    397 
    398         if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
    399             dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
     204            jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(profilerExit->counterAddress()));
     205        }
     206
     207        compileExit(jit, *vm, exit, operands, recovery);
     208
     209        LinkBuffer patchBuffer(jit, codeBlock);
     210        exit.m_code = FINALIZE_CODE_IF(
     211            shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseDFGOSRExit(),
     212            patchBuffer,
     213            ("DFG OSR exit #%u (%s, %s) from %s, with operands = %s",
    400 214                 exitIndex, toCString(exit.m_codeOrigin).data(),
    401 215                 exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
    402                 toCString(ignoringContext<DumpContext>(operands)).data());
    403         }
    404     }
    405 
    406     OSRExitState& exitState = *exit.exitState.get();
    407     CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
    408     ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
    409 
    410     Operands<ValueRecovery>& operands = exitState.operands;
    411     SpeculationRecovery* recovery = exitState.recovery;
    412 
    413     if (exit.m_kind == GenericUnwind) {
    414         // We are acting as a de facto op_catch because we arrive here from genericUnwind().
    415         // So, we must restore our call frame and stack pointer.
    416         restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
    417         ASSERT(context.fp() == vm.callFrameForCatch);
    418     }
    419     context.sp() = context.fp<uint8_t*>() + (codeBlock->stackPointerOffset() * sizeof(Register));
    420 
    421     ASSERT(!(context.fp<uintptr_t>() & 0x7));
    422 
    423     if (exitState.profilerExit)
    424         exitState.profilerExit->incCount();
    425 
    426     auto& cpu = context.cpu;
    427     Frame frame(cpu.fp(), context.stack());
    428 
    429 #if USE(JSVALUE64)
    430     ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber);
    431     ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask);
    432 #endif
    433 
    434     if (UNLIKELY(Options::printEachOSRExit()))
    435         printOSRExit(context, vm.osrExitIndex, exit);
     216                toCString(ignoringContext<DumpContext>(operands)).data()));
     217    }
     218
     219    MacroAssembler::repatchJump(exit.codeLocationForRepatch(codeBlock), CodeLocationLabel(exit.m_code.code()));
     220
     221    vm->osrExitJumpDestination = exit.m_code.code().executableAddress();
     222}
     223
     224void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const Operands<ValueRecovery>& operands, SpeculationRecovery* recovery)
     225{
     226    jit.jitAssertTagsInPlace();
     227
     228    // Pro-forma stuff.
     229    if (Options::printEachOSRExit()) {
     230        SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
     231        debugInfo->codeBlock = jit.codeBlock();
     232        debugInfo->kind = exit.m_kind;
     233        debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
     234
     235        jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
     236    }
    436 237 
    437 238     // Perform speculation recovery. This only comes into play when an operation
     
    441 242         switch (recovery->type()) {
    442 243         case SpeculativeAdd:
    443             cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
    444 #if USE(JSVALUE64)
    445             ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
    446             cpu.gpr(recovery->dest()) |= TagTypeNumber;
     244            jit.sub32(recovery->src(), recovery->dest());
     245#if USE(JSVALUE64)
     246            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
    447247#endif
    448248            break;
    449249
    450250        case SpeculativeAddImmediate:
    451             cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
    452 #if USE(JSVALUE64)
    453             ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
    454             cpu.gpr(recovery->dest()) |= TagTypeNumber;
     251            jit.sub32(AssemblyHelpers::Imm32(recovery->immediate()), recovery->dest());
     252#if USE(JSVALUE64)
     253            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
    455254#endif
    456255            break;
     
    458257        case BooleanSpeculationCheck:
    459258#if USE(JSVALUE64)
    460             cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse;
     259            jit.xor64(AssemblyHelpers::TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());
    461260#endif
    462261            break;
     
    481280
    482281            CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
    483             CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
    484             if (ArrayProfile* arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex)) {
    485                 Structure* structure = jsValueFor(cpu, exit.m_jsValueSource).asCell()->structure(vm);
    486                 arrayProfile->observeStructure(structure);
    487                 // FIXME: We should be able to use arrayModeFromStructure() to determine the observed ArrayMode here.
     488                 // However, currently, doing so would result in a pdfjs performance regression.
    489                 // https://bugs.webkit.org/show_bug.cgi?id=176473
    490                 arrayProfile->observeArrayMode(asArrayModes(structure->indexingType()));
     282            if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
     283#if USE(JSVALUE64)
     284                GPRReg usedRegister;
     285                if (exit.m_jsValueSource.isAddress())
     286                    usedRegister = exit.m_jsValueSource.base();
     287                else
     288                    usedRegister = exit.m_jsValueSource.gpr();
     289#else
     290                GPRReg usedRegister1;
     291                GPRReg usedRegister2;
     292                if (exit.m_jsValueSource.isAddress()) {
     293                    usedRegister1 = exit.m_jsValueSource.base();
     294                    usedRegister2 = InvalidGPRReg;
     295                } else {
     296                    usedRegister1 = exit.m_jsValueSource.payloadGPR();
     297                    if (exit.m_jsValueSource.hasKnownTag())
     298                        usedRegister2 = InvalidGPRReg;
     299                    else
     300                        usedRegister2 = exit.m_jsValueSource.tagGPR();
     301                }
     302#endif
     303
     304                GPRReg scratch1;
     305                GPRReg scratch2;
     306#if USE(JSVALUE64)
     307                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister);
     308                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister, scratch1);
     309#else
     310                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2);
     311                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2, scratch1);
     312#endif
     313
     314                if (isARM64()) {
     315                    jit.pushToSave(scratch1);
     316                    jit.pushToSave(scratch2);
     317                } else {
     318                    jit.push(scratch1);
     319                    jit.push(scratch2);
     320                }
     321
     322                GPRReg value;
     323                if (exit.m_jsValueSource.isAddress()) {
     324                    value = scratch1;
     325                    jit.loadPtr(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), value);
     326                } else
     327                    value = exit.m_jsValueSource.payloadGPR();
     328
     329                jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
     330                jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID());
     331#if USE(JSVALUE64)
     332                jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeAndMiscOffset()), scratch1);
     333#else
     334                jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeIncludingHistoryOffset()), scratch1);
     335#endif
     336                jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
     337                jit.lshift32(scratch1, scratch2);
     338                jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
     339
     340                if (isARM64()) {
     341                    jit.popToRestore(scratch2);
     342                    jit.popToRestore(scratch1);
     343                } else {
     344                    jit.pop(scratch2);
     345                    jit.pop(scratch1);
     346                }
    491347            }
    492348        }
    493349
    494         if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
    495             profile.reportValue(jsValueFor(cpu, exit.m_jsValueSource));
    496     }
     350        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) {
     351#if USE(JSVALUE64)
     352            if (exit.m_jsValueSource.isAddress()) {
     353                // We can't be sure that we have a spare register. So use the tagTypeNumberRegister,
     354                // since we know how to restore it.
     355                jit.load64(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), GPRInfo::tagTypeNumberRegister);
     356                profile.emitReportValue(jit, JSValueRegs(GPRInfo::tagTypeNumberRegister));
     357                jit.move(AssemblyHelpers::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
     358            } else
     359                profile.emitReportValue(jit, JSValueRegs(exit.m_jsValueSource.gpr()));
     360#else // not USE(JSVALUE64)
     361            if (exit.m_jsValueSource.isAddress()) {
     362                // Save a register so we can use it.
     363                GPRReg scratchPayload = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base());
     364                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base(), scratchPayload);
     365                jit.pushToSave(scratchPayload);
     366                jit.pushToSave(scratchTag);
     367
     368                JSValueRegs scratch(scratchTag, scratchPayload);
     369               
     370                jit.loadValue(exit.m_jsValueSource.asAddress(), scratch);
     371                profile.emitReportValue(jit, scratch);
     372               
     373                jit.popToRestore(scratchTag);
     374                jit.popToRestore(scratchPayload);
     375            } else if (exit.m_jsValueSource.hasKnownTag()) {
     376                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.payloadGPR());
     377                jit.pushToSave(scratchTag);
     378                jit.move(AssemblyHelpers::TrustedImm32(exit.m_jsValueSource.tag()), scratchTag);
     379                JSValueRegs value(scratchTag, exit.m_jsValueSource.payloadGPR());
     380                profile.emitReportValue(jit, value);
     381                jit.popToRestore(scratchTag);
     382            } else
     383                profile.emitReportValue(jit, exit.m_jsValueSource.regs());
     384#endif // USE(JSVALUE64)
     385        }
     386    }
     387
     388    // What follows is an intentionally simple OSR exit implementation that generates
     389    // fairly poor code but is very easy to hack. In particular, it dumps all state that
     390    // needs conversion into a scratch buffer so that in step 6, where we actually do the
     391    // conversions, we know that all temp registers are free to use and the variable is
     392    // definitely in a well-known spot in the scratch buffer regardless of whether it had
     393    // originally been in a register or spilled. This allows us to decouple "where was
      394    // the variable" from "how was it represented". Consider the
     395    // Int32DisplacedInJSStack recovery: it tells us that the value is in a
     396    // particular place and that that place holds an unboxed int32. We have two different
     397    // places that a value could be (displaced, register) and a bunch of different
     398    // ways of representing a value. The number of recoveries is two * a bunch. The code
     399    // below means that we have to have two + a bunch cases rather than two * a bunch.
     400    // Once we have loaded the value from wherever it was, the reboxing is the same
     401    // regardless of its location. Likewise, before we do the reboxing, the way we get to
     402    // the value (i.e. where we load it from) is the same regardless of its type. Because
     403    // the code below always dumps everything into a scratch buffer first, the two
     404    // questions become orthogonal, which simplifies adding new types and adding new
     405    // locations.
     406    //
     407    // This raises the question: does using such a suboptimal implementation of OSR exit,
     408    // where we always emit code to dump all state into a scratch buffer only to then
      409    dump it right back into the stack, hurt us in any way? The answer is that OSR exits
     410    // are rare. Our tiering strategy ensures this. This is because if an OSR exit is
     411    // taken more than ~100 times, we jettison the DFG code block along with all of its
     412    // exits. It is impossible for an OSR exit - i.e. the code we compile below - to
     413    // execute frequently enough for the codegen to matter that much. It probably matters
     414    // enough that we don't want to turn this into some super-slow function call, but so
     415    // long as we're generating straight-line code, that code can be pretty bad. Also
     416    // because we tend to exit only along one OSR exit from any DFG code block - that's an
     417    // empirical result that we're extremely confident about - the code size of this
     418    // doesn't matter much. Hence any attempt to optimize the codegen here is just purely
     419    // harmful to the system: it probably won't reduce either net memory usage or net
     420    // execution time. It will only prevent us from cleanly decoupling "where was the
     421    // variable" from "how was it represented", which will make it more difficult to add
     422    // features in the future and it will make it harder to reason about bugs.
     423
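The comment above describes decoupling "where was the variable" from "how was it represented". As a rough illustration of that two-phase shape (hypothetical types and an invented tagging scheme, not JSC's actual classes), a recovery pass might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical, simplified stand-ins for JSC's recovery machinery.
enum class Location { Register, Displaced };  // where the value was
enum class Format { Int32, Boolean };         // how it was represented

struct Recovery {
    Location location;
    Format format;
    uint64_t raw; // raw bits as found in the register or stack slot
};

// Phase 1: dump every value into a uniform scratch buffer, regardless of
// where it came from. After this, "where" no longer matters: one case per
// location, not per (location, format) pair.
std::vector<uint64_t> dumpToScratch(const std::vector<Recovery>& recoveries) {
    std::vector<uint64_t> scratch;
    for (const Recovery& r : recoveries)
        scratch.push_back(r.raw);
    return scratch;
}

// Phase 2: rebox from the scratch buffer, one case per format.
// (The tag constants here are invented for illustration.)
uint64_t rebox(Format format, uint64_t raw) {
    constexpr uint64_t tagTypeNumber = 0xFFFF000000000000ull;
    switch (format) {
    case Format::Int32:
        return tagTypeNumber | static_cast<uint32_t>(raw);
    case Format::Boolean:
        return 0x6 | (raw & 1); // ValueFalse-style encoding, simplified
    }
    return 0;
}
```

This is why the code below has "two + a bunch" cases instead of "two * a bunch": locations are handled once in phase 1, formats once in phase 2.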
     424    // Save all state from GPRs into the scratch buffer.
     425
     426    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(sizeof(EncodedJSValue) * operands.size());
     427    EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
     428
     429    for (size_t index = 0; index < operands.size(); ++index) {
     430        const ValueRecovery& recovery = operands[index];
     431
     432        switch (recovery.technique()) {
     433        case UnboxedInt32InGPR:
     434        case UnboxedCellInGPR:
     435#if USE(JSVALUE64)
     436        case InGPR:
     437        case UnboxedInt52InGPR:
     438        case UnboxedStrictInt52InGPR:
     439            jit.store64(recovery.gpr(), scratch + index);
     440            break;
     441#else
     442        case UnboxedBooleanInGPR:
     443            jit.store32(
     444                recovery.gpr(),
     445                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
     446            break;
     447           
     448        case InPair:
     449            jit.store32(
     450                recovery.tagGPR(),
     451                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
     452            jit.store32(
     453                recovery.payloadGPR(),
     454                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
     455            break;
     456#endif
     457
     458        default:
     459            break;
     460        }
     461    }
     462
     463    // And voila, all GPRs are free to reuse.
     464
     465    // Save all state from FPRs into the scratch buffer.
     466
     467    for (size_t index = 0; index < operands.size(); ++index) {
     468        const ValueRecovery& recovery = operands[index];
     469
     470        switch (recovery.technique()) {
     471        case UnboxedDoubleInFPR:
     472        case InFPR:
     473            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
     474            jit.storeDouble(recovery.fpr(), MacroAssembler::Address(GPRInfo::regT0));
     475            break;
     476
     477        default:
     478            break;
     479        }
     480    }
     481
     482    // Now, all FPRs are also free.
     483
     484    // Save all state from the stack into the scratch buffer. For simplicity we
     485    // do this even for state that's already in the right place on the stack.
     486    // It makes things simpler later.
     487
     488    for (size_t index = 0; index < operands.size(); ++index) {
     489        const ValueRecovery& recovery = operands[index];
     490
     491        switch (recovery.technique()) {
     492        case DisplacedInJSStack:
     493        case CellDisplacedInJSStack:
     494        case BooleanDisplacedInJSStack:
     495        case Int32DisplacedInJSStack:
     496        case DoubleDisplacedInJSStack:
     497#if USE(JSVALUE64)
     498        case Int52DisplacedInJSStack:
     499        case StrictInt52DisplacedInJSStack:
     500            jit.load64(AssemblyHelpers::addressFor(recovery.virtualRegister()), GPRInfo::regT0);
     501            jit.store64(GPRInfo::regT0, scratch + index);
     502            break;
     503#else
     504            jit.load32(
     505                AssemblyHelpers::tagFor(recovery.virtualRegister()),
     506                GPRInfo::regT0);
     507            jit.load32(
     508                AssemblyHelpers::payloadFor(recovery.virtualRegister()),
     509                GPRInfo::regT1);
     510            jit.store32(
     511                GPRInfo::regT0,
     512                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
     513            jit.store32(
     514                GPRInfo::regT1,
     515                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
     516            break;
     517#endif
     518
     519        default:
     520            break;
     521        }
     522    }
     523
     524    // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
     525    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
     526    // used by baseline.
     527    jit.addPtr(
     528        CCallHelpers::TrustedImm32(
     529            -jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
     530        CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
     531
     532    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
     533    jit.emitRestoreCalleeSaves();
     534    jit.emitSaveCalleeSavesFor(jit.baselineCodeBlock());
     535
     536    // The tag registers are needed to materialize recoveries below.
     537    jit.emitMaterializeTagCheckRegisters();
     538
     539    if (exit.isExceptionHandler())
     540        jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm);
    497541
    498542    // Do all data format conversions and store the results into the stack.
    499     // Note: we need to recover values before restoring callee save registers below
    500     // because the recovery may rely on values in some of callee save registers.
    501 
    502     int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
    503     size_t numberOfOperands = operands.size();
    504     for (size_t index = 0; index < numberOfOperands; ++index) {
     543
     544    for (size_t index = 0; index < operands.size(); ++index) {
    505545        const ValueRecovery& recovery = operands[index];
    506546        VirtualRegister reg = operands.virtualRegisterForIndex(index);
    507547
    508         if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
     548        if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
    509549            continue;
    510550
     
    513553        switch (recovery.technique()) {
    514554        case DisplacedInJSStack:
    515             frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
    516             break;
    517 
    518555        case InFPR:
    519             frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
    520             break;
    521 
    522556#if USE(JSVALUE64)
    523557        case InGPR:
    524             frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
    525             break;
     558        case UnboxedCellInGPR:
     559        case CellDisplacedInJSStack:
     560        case BooleanDisplacedInJSStack:
     561            jit.load64(scratch + index, GPRInfo::regT0);
     562            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
     563            break;
     564#else // not USE(JSVALUE64)
     565        case InPair:
     566            jit.load32(
     567                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag,
     568                GPRInfo::regT0);
     569            jit.load32(
     570                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
     571                GPRInfo::regT1);
     572            jit.store32(
     573                GPRInfo::regT0,
     574                AssemblyHelpers::tagFor(operand));
     575            jit.store32(
     576                GPRInfo::regT1,
     577                AssemblyHelpers::payloadFor(operand));
     578            break;
     579
     580        case UnboxedCellInGPR:
     581        case CellDisplacedInJSStack:
     582            jit.load32(
     583                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
     584                GPRInfo::regT0);
     585            jit.store32(
     586                AssemblyHelpers::TrustedImm32(JSValue::CellTag),
     587                AssemblyHelpers::tagFor(operand));
     588            jit.store32(
     589                GPRInfo::regT0,
     590                AssemblyHelpers::payloadFor(operand));
     591            break;
     592
     593        case UnboxedBooleanInGPR:
     594        case BooleanDisplacedInJSStack:
     595            jit.load32(
     596                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
     597                GPRInfo::regT0);
     598            jit.store32(
     599                AssemblyHelpers::TrustedImm32(JSValue::BooleanTag),
     600                AssemblyHelpers::tagFor(operand));
     601            jit.store32(
     602                GPRInfo::regT0,
     603                AssemblyHelpers::payloadFor(operand));
     604            break;
     605#endif // USE(JSVALUE64)
     606
     607        case UnboxedInt32InGPR:
     608        case Int32DisplacedInJSStack:
     609#if USE(JSVALUE64)
     610            jit.load64(scratch + index, GPRInfo::regT0);
     611            jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
     612            jit.or64(GPRInfo::tagTypeNumberRegister, GPRInfo::regT0);
     613            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    526614#else
    527         case InPair:
    528             frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
    529             break;
    530 #endif
    531 
    532         case UnboxedCellInGPR:
    533             frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
    534             break;
    535 
    536         case CellDisplacedInJSStack:
    537             frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell()));
    538             break;
    539 
    540 #if USE(JSVALUE32_64)
    541         case UnboxedBooleanInGPR:
    542             frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
    543             break;
    544 #endif
    545 
    546         case BooleanDisplacedInJSStack:
    547 #if USE(JSVALUE64)
    548             frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
     615            jit.load32(
     616                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
     617                GPRInfo::regT0);
     618            jit.store32(
     619                AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
     620                AssemblyHelpers::tagFor(operand));
     621            jit.store32(
     622                GPRInfo::regT0,
     623                AssemblyHelpers::payloadFor(operand));
     624#endif
     625            break;
     626
     627#if USE(JSVALUE64)
     628        case UnboxedInt52InGPR:
     629        case Int52DisplacedInJSStack:
     630            jit.load64(scratch + index, GPRInfo::regT0);
     631            jit.rshift64(
     632                AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), GPRInfo::regT0);
     633            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
     634            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
     635            break;
     636
     637        case UnboxedStrictInt52InGPR:
     638        case StrictInt52DisplacedInJSStack:
     639            jit.load64(scratch + index, GPRInfo::regT0);
     640            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
     641            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
     642            break;
     643#endif
     644
     645        case UnboxedDoubleInFPR:
     646        case DoubleDisplacedInJSStack:
     647            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
     648            jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::fpRegT0);
     649            jit.purifyNaN(FPRInfo::fpRegT0);
     650#if USE(JSVALUE64)
     651            jit.boxDouble(FPRInfo::fpRegT0, GPRInfo::regT0);
     652            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    549653#else
    550             frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload()));
    551 #endif
    552             break;
    553 
    554         case UnboxedInt32InGPR:
    555             frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
    556             break;
    557 
    558         case Int32DisplacedInJSStack:
    559             frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32()));
    560             break;
    561 
    562 #if USE(JSVALUE64)
    563         case UnboxedInt52InGPR:
    564             frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
    565             break;
    566 
    567         case Int52DisplacedInJSStack:
    568             frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52()));
    569             break;
    570 
    571         case UnboxedStrictInt52InGPR:
    572             frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
    573             break;
    574 
    575         case StrictInt52DisplacedInJSStack:
    576             frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52()));
    577             break;
    578 #endif
    579 
    580         case UnboxedDoubleInFPR:
    581             frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
    582             break;
    583 
    584         case DoubleDisplacedInJSStack:
    585             frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble())));
     654            jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
     655#endif
    586656            break;
    587657
    588658        case Constant:
    589             frame.setOperand(operand, recovery.constant());
     659#if USE(JSVALUE64)
     660            jit.store64(
     661                AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())),
     662                AssemblyHelpers::addressFor(operand));
     663#else
     664            jit.store32(
     665                AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
     666                AssemblyHelpers::tagFor(operand));
     667            jit.store32(
     668                AssemblyHelpers::TrustedImm32(recovery.constant().payload()),
     669                AssemblyHelpers::payloadFor(operand));
     670#endif
    590671            break;
    591672
     
    600681        }
    601682    }
    602 
    603     // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
    604     // could toast some stack that the DFG used. We need to do it before storing to stack offsets
    605     // used by baseline.
    606     cpu.sp() = cpu.fp<uint8_t*>() - (codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register));
    607 
    608     // Restore the DFG callee saves and then save the ones the baseline JIT uses.
    609     restoreCalleeSavesFor(context, codeBlock);
    610     saveCalleeSavesFor(context, baselineCodeBlock);
    611 
    612     // The tag registers are needed to materialize recoveries below.
    613 #if USE(JSVALUE64)
    614     cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber;
    615     cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther;
    616 #endif
    617 
    618     if (exit.isExceptionHandler())
    619         copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
    620683
    621684    // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
     
    625688    // inline call frame scope - but for now the DFG wouldn't do that.
    626689
    627     emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
     690    emitRestoreArguments(jit, operands);
    628691
    629692    // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
     
    663726    // counterValueForOptimizeAfterWarmUp().
    664727
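The execute-counter adjustment described above rests on the tiering property noted earlier: a DFG code block that exits more than ~100 times is jettisoned. As a toy model of that threshold check (invented numbers and a much-simplified counter, not JSC's actual ExecutionCounter):

```cpp
#include <cassert>

// Toy model of tiering: a code block is jettisoned (and later reoptimized)
// once its OSR exit count crosses a threshold.
struct ToyExitCounter {
    unsigned exitCount = 0;
    unsigned threshold = 100; // the "~100 times" from the comment above

    // Returns true when the optimized code block should be thrown away.
    bool recordExitAndCheckReoptimize() {
        return ++exitCount > threshold;
    }
};
```

Because exits are capped this way, the straight-line exit code generated here can afford to be slow without affecting net execution time.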
    665     if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
    666         triggerReoptimizationNow(baselineCodeBlock, &exit);
    667 
    668     reifyInlinedCallFrames(context, baselineCodeBlock, exit);
    669     adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
    670 }
    671 
    672 static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
    673 {
    674     auto& cpu = context.cpu;
    675     Frame frame(cpu.fp(), context.stack());
    676 
    677     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
    678     // in presence of inlined tail calls.
    679     // https://bugs.webkit.org/show_bug.cgi?id=147511
    680     ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT);
    681     frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
    682 
    683     const CodeOrigin* codeOrigin;
    684     for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
    685         InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
    686         CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
    687         InlineCallFrame::Kind trueCallerCallKind;
    688         CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
    689         void* callerFrame = cpu.fp();
    690 
    691         if (!trueCaller) {
    692             ASSERT(inlineCallFrame->isTail());
    693             void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
    694             frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
    695             callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
    696         } else {
    697             CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
    698             unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
    699             void* jumpTarget = nullptr;
    700 
    701             switch (trueCallerCallKind) {
    702             case InlineCallFrame::Call:
    703             case InlineCallFrame::Construct:
    704             case InlineCallFrame::CallVarargs:
    705             case InlineCallFrame::ConstructVarargs:
    706             case InlineCallFrame::TailCall:
    707             case InlineCallFrame::TailCallVarargs: {
    708                 CallLinkInfo* callLinkInfo =
    709                     baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
    710                 RELEASE_ASSERT(callLinkInfo);
    711 
    712                 jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
    713                 break;
    714             }
    715 
    716             case InlineCallFrame::GetterCall:
    717             case InlineCallFrame::SetterCall: {
    718                 StructureStubInfo* stubInfo =
    719                     baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
    720                 RELEASE_ASSERT(stubInfo);
    721 
    722                 jumpTarget = stubInfo->doneLocation().executableAddress();
    723                 break;
    724             }
    725 
    726             default:
    727                 RELEASE_ASSERT_NOT_REACHED();
    728             }
    729 
    730             if (trueCaller->inlineCallFrame)
    731                 callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
    732 
    733             frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
    734         }
    735 
    736         frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
    737 
    738         // Restore the inline call frame's callee save registers.
    739         // If this inlined frame is a tail call that will return back to the original caller, we need to
    740         // copy the prior contents of the tag registers already saved for the outer frame to this frame.
    741         saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
    742 
    743         if (!inlineCallFrame->isVarargs())
    744             frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
    745         ASSERT(callerFrame);
    746         frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
    747 #if USE(JSVALUE64)
    748         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
    749         frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
    750         if (!inlineCallFrame->isClosureCall)
    751             frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
    752 #else // USE(JSVALUE64) // so this is the 32-bit part
    753         Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
    754         uint32_t locationBits = CallSiteIndex(instruction).bits();
    755         frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
    756         frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
    757         if (!inlineCallFrame->isClosureCall)
    758             frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
    759 #endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
    760     }
    761 
    762     // Don't need to set the toplevel code origin if we only did inline tail calls
    763     if (codeOrigin) {
    764 #if USE(JSVALUE64)
    765         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
    766 #else
    767         Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
    768         uint32_t locationBits = CallSiteIndex(instruction).bits();
    769 #endif
    770         frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits);
    771     }
    772 }
    773 
    774 static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
    775 {
    776     OSRExitState* exitState = exit.exitState.get();
    777 
    778     WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
    779     vm.heap.writeBarrier(baselineCodeBlock);
    780 
    781     // We barrier all inlined frames -- and not just the current inline stack --
    782     // because we don't know which inlined function owns the value profile that
    783     // we'll update when we exit. In the case of "f() { a(); b(); }", if both
    784     // a and b are inlined, we might exit inside b due to a bad value loaded
    785     // from a.
    786     // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
    787     // the value profile.
    788     InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
    789     if (inlineCallFrames) {
    790         for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
    791             vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
    792     }
    793 
    794     if (exit.m_codeOrigin.inlineCallFrame)
    795         context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
    796 
    797     void* jumpTarget = exitState->jumpTarget;
    798     ASSERT(jumpTarget);
    799 
    800     context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset;
    801     if (exit.isExceptionHandler()) {
    802         // Since we're jumping to op_catch, we need to set callFrameForCatch.
    803         vm.callFrameForCatch = context.fp<ExecState*>();
    804     }
    805 
    806     vm.topCallFrame = context.fp<ExecState*>();
    807     context.pc() = jumpTarget;
    808 }
    809 
    810 static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
    811 {
    812     ExecState* exec = context.fp<ExecState*>();
    813     CodeBlock* codeBlock = exec->codeBlock();
     728    handleExitCounts(jit, exit);
     729
     730    // Reify inlined call frames.
     731
     732    reifyInlinedCallFrames(jit, exit);
     733
     734    // And finish.
     735    adjustAndJumpToTarget(vm, jit, exit);
     736}
     737
     738void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exec, void* debugInfoRaw, void* scratch)
     739{
     740    VM* vm = &exec->vm();
     741    NativeCallFrameTracer tracer(vm, exec);
     742
     743    SpeculationFailureDebugInfo* debugInfo = static_cast<SpeculationFailureDebugInfo*>(debugInfoRaw);
     744    CodeBlock* codeBlock = debugInfo->codeBlock;
    814745    CodeBlock* alternative = codeBlock->alternative();
    815     ExitKind kind = exit.m_kind;
    816     unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
    817 
    818746    dataLog("Speculation failure in ", *codeBlock);
    819     dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
     747    dataLog(" @ exit #", vm->osrExitIndex, " (bc#", debugInfo->bytecodeOffset, ", ", exitKindToString(debugInfo->kind), ") with ");
    820748    if (alternative) {
    821749        dataLog(
     
    827755    dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
    828756    dataLog("    GPRs at time of exit:");
     757    char* scratchPointer = static_cast<char*>(scratch);
    829758    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
    830759        GPRReg gpr = GPRInfo::toRegister(i);
    831         dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
     760        dataLog(" ", GPRInfo::debugName(gpr), ":", RawPointer(*reinterpret_cast_ptr<void**>(scratchPointer)));
     761        scratchPointer += sizeof(EncodedJSValue);
    832762    }
    833763    dataLog("\n");
     
    835765    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
    836766        FPRReg fpr = FPRInfo::toRegister(i);
    837         dataLog(" ", context.fprName(fpr), ":");
    838         uint64_t bits = context.fpr<uint64_t>(fpr);
    839         double value = context.fpr(fpr);
     767        dataLog(" ", FPRInfo::debugName(fpr), ":");
     768        uint64_t bits = *reinterpret_cast_ptr<uint64_t*>(scratchPointer);
     769        double value = *reinterpret_cast_ptr<double*>(scratchPointer);
    840770        dataLogF("%llx:%lf", static_cast<long long>(bits), value);
     771        scratchPointer += sizeof(EncodedJSValue);
    841772    }
    842773    dataLog("\n");
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h

    r221832 r222009  
    3434#include "Operands.h"
    3535#include "ValueRecovery.h"
    36 #include <wtf/RefPtr.h>
    3736
    3837namespace JSC {
    3938
    40 namespace Probe {
    41 class Context;
    42 } // namespace Probe
    43 
    44 namespace Profiler {
    45 class OSRExit;
    46 } // namespace Profiler
     39class CCallHelpers;
    4740
    4841namespace DFG {
     
    9992};
    10093
    101 struct OSRExitState : RefCounted<OSRExitState> {
    102     OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget)
    103         : exit(exit)
    104         , codeBlock(codeBlock)
    105         , baselineCodeBlock(baselineCodeBlock)
    106         , operands(operands)
    107         , recovery(recovery)
    108         , stackPointerOffset(stackPointerOffset)
    109         , activeThreshold(activeThreshold)
    110         , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
    111         , jumpTarget(jumpTarget)
    112     { }
    113 
    114     OSRExitBase& exit;
    115     CodeBlock* codeBlock;
    116     CodeBlock* baselineCodeBlock;
    117     Operands<ValueRecovery> operands;
    118     SpeculationRecovery* recovery;
    119     ptrdiff_t stackPointerOffset;
    120     uint32_t activeThreshold;
    121     double memoryUsageAdjustedThreshold;
    122     void* jumpTarget;
    123 
    124     Profiler::OSRExit* profilerExit { nullptr };
    125 };
    126 
    12794// === OSRExit ===
    12895//
     
    13299    OSRExit(ExitKind, JSValueSource, MethodOfGettingAValueProfile, SpeculativeJIT*, unsigned streamIndex, unsigned recoveryIndex = UINT_MAX);
    133100
    134     static void executeOSRExit(Probe::Context&);
     101    static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;
    135102
    136     RefPtr<OSRExitState> exitState;
     103    unsigned m_patchableCodeOffset { 0 };
     104   
     105    MacroAssemblerCodeRef m_code;
    137106   
    138107    JSValueSource m_jsValueSource;
     
    141110    unsigned m_recoveryIndex;
    142111
     112    void setPatchableCodeOffset(MacroAssembler::PatchableJump);
     113    MacroAssembler::Jump getPatchableCodeOffsetAsJump() const;
     114    CodeLocationJump codeLocationForRepatch(CodeBlock*) const;
     115    void correctJump(LinkBuffer&);
     116
    143117    unsigned m_streamIndex;
    144118    void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock)
     
    146120        OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromDFG);
    147121    }
     122
     123private:
     124    static void compileExit(CCallHelpers&, VM&, const OSRExit&, const Operands<ValueRecovery>&, SpeculationRecovery*);
     125    static void emitRestoreArguments(CCallHelpers&, const Operands<ValueRecovery>&);
     126    static void JIT_OPERATION debugOperationPrintSpeculationFailure(ExecState*, void*, void*) WTF_INTERNAL;
    148127};
    149128
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

    r221832 r222009  
    11/*
    2  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3838namespace JSC { namespace DFG {
    3939
    40 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    4140void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
    4241{
     
    145144}
    146145
    147 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    148146void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
    149147{
     
    255253}
    256254
    257 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    258255static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
    259256{
     
    276273}
    277274
    278 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    279275void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
    280276{
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h

    r221832 r222009  
    11/*
    2  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4141void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);
    4242
    43 // FIXME: This won't be needed once we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    4443template <typename JITCodeType>
    4544void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType)
  • trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp

    r221959 r222009  
    14881488}
    14891489
     1490JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
     1491{
     1492    VM& vm = exec->vm();
     1493    NativeCallFrameTracer target(&vm, exec);
     1494   
     1495    DeferGCForAWhile deferGC(vm.heap);
     1496   
     1497    CodeBlock* codeBlock;
     1498    if (inlineCallFrame)
     1499        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
     1500    else
     1501        codeBlock = exec->codeBlock();
     1502   
     1503    unsigned length = argumentCount - 1;
     1504    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
     1505    DirectArguments* result = DirectArguments::create(
     1506        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
     1507   
     1508    result->callee().set(vm, result, callee);
     1509   
     1510    Register* arguments =
     1511        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
     1512        CallFrame::argumentOffset(0);
     1513    for (unsigned i = length; i--;)
     1514        result->setIndexQuickly(vm, i, arguments[i].jsValue());
     1515   
     1516    return result;
     1517}
     1518
     1519JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
     1520{
     1521    VM& vm = exec->vm();
     1522    NativeCallFrameTracer target(&vm, exec);
     1523   
     1524    DeferGCForAWhile deferGC(vm.heap);
     1525   
     1526    CodeBlock* codeBlock;
     1527    if (inlineCallFrame)
     1528        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
     1529    else
     1530        codeBlock = exec->codeBlock();
     1531   
     1532    unsigned length = argumentCount - 1;
     1533    ClonedArguments* result = ClonedArguments::createEmpty(
     1534        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
     1535   
     1536    Register* arguments =
     1537        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
     1538        CallFrame::argumentOffset(0);
     1539    for (unsigned i = length; i--;)
     1540        result->putDirectIndex(exec, i, arguments[i].jsValue());
     1541
     1542   
     1543    return result;
     1544}
     1545
    14901546JSCell* JIT_OPERATION operationCreateRest(ExecState* exec, Register* argumentStart, unsigned numberOfParamsToSkip, unsigned arraySize)
    14911547{
  • trunk/Source/JavaScriptCore/dfg/DFGOperations.h

    r221959 r222009  
    151151JSCell* JIT_OPERATION operationCreateActivationDirect(ExecState*, Structure*, JSScope*, SymbolTable*, EncodedJSValue);
    152152JSCell* JIT_OPERATION operationCreateDirectArguments(ExecState*, Structure*, int32_t length, int32_t minCapacity);
     153JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
    153154JSCell* JIT_OPERATION operationCreateScopedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee, JSLexicalEnvironment*);
     155JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
    154156JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee);
    155157JSCell* JIT_OPERATION operationCreateRest(ExecState*, Register* argumentStart, unsigned numberOfArgumentsToSkip, unsigned arraySize);
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp

    r221832 r222009  
    4141namespace JSC { namespace DFG {
    4242
    43 MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
     43MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
    4444{
    4545    MacroAssembler jit;
    46     jit.probe(OSRExit::executeOSRExit, vm);
     46
     47    // This needs to happen before we use the scratch buffer because this function also uses the scratch buffer.
     48    adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT);
     49   
     50    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
     51    ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
     52    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
     53   
     54    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
     55#if USE(JSVALUE64)
     56        jit.store64(GPRInfo::toRegister(i), buffer + i);
     57#else
     58        jit.store32(GPRInfo::toRegister(i), buffer + i);
     59#endif
     60    }
     61    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
     62        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
     63        jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0));
     64    }
     65   
     66    // Tell GC mark phase how much of the scratch buffer is active during call.
     67    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
     68    jit.storePtr(MacroAssembler::TrustedImmPtr(scratchSize), MacroAssembler::Address(GPRInfo::regT0));
     69
     70    // Set up one argument.
     71#if CPU(X86)
     72    jit.poke(GPRInfo::callFrameRegister, 0);
     73#else
     74    jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
     75#endif
     76
     77    MacroAssembler::Call functionCall = jit.call();
     78
     79    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
     80    jit.storePtr(MacroAssembler::TrustedImmPtr(0), MacroAssembler::Address(GPRInfo::regT0));
     81
     82    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
     83        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
     84        jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i));
     85    }
     86    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
     87#if USE(JSVALUE64)
     88        jit.load64(buffer + i, GPRInfo::toRegister(i));
     89#else
     90        jit.load32(buffer + i, GPRInfo::toRegister(i));
     91#endif
     92    }
     93   
     94    jit.jump(MacroAssembler::AbsoluteAddress(&vm->osrExitJumpDestination));
     95   
    4796    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
    48     return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
     97   
     98    patchBuffer.link(functionCall, OSRExit::compileOSRExit);
     99   
     100    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit generation thunk"));
    49101}
    50102
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.h

    r221832 r222009  
    11/*
    2  * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3636namespace DFG {
    3737
    38 MacroAssemblerCodeRef osrExitThunkGenerator(VM*);
     38MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
    3939MacroAssemblerCodeRef osrEntryThunkGenerator(VM*);
    4040
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp

    r221832 r222009  
    5151}
    5252
    53 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    5453Vector<BytecodeAndMachineOffset>& AssemblyHelpers::decodedCodeMapFor(CodeBlock* codeBlock)
    5554{
     
    822821#endif // ENABLE(WEBASSEMBLY)
    823822
     823void AssemblyHelpers::debugCall(VM& vm, V_DebugOperation_EPP function, void* argument)
     824{
     825    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
     826    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);
     827    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
     828
     829    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
     830#if USE(JSVALUE64)
     831        store64(GPRInfo::toRegister(i), buffer + i);
     832#else
     833        store32(GPRInfo::toRegister(i), buffer + i);
     834#endif
     835    }
     836
     837    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
     838        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
     839        storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);
     840    }
     841
     842    // Tell GC mark phase how much of the scratch buffer is active during call.
     843    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
     844    storePtr(TrustedImmPtr(scratchSize), GPRInfo::regT0);
     845
     846#if CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)
     847    move(TrustedImmPtr(buffer), GPRInfo::argumentGPR2);
     848    move(TrustedImmPtr(argument), GPRInfo::argumentGPR1);
     849    move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
     850    GPRReg scratch = selectScratchGPR(GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2);
     851#elif CPU(X86)
     852    poke(GPRInfo::callFrameRegister, 0);
     853    poke(TrustedImmPtr(argument), 1);
     854    poke(TrustedImmPtr(buffer), 2);
     855    GPRReg scratch = GPRInfo::regT0;
     856#else
     857#error "JIT not supported on this platform."
     858#endif
     859    move(TrustedImmPtr(reinterpret_cast<void*>(function)), scratch);
     860    call(scratch);
     861
     862    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
     863    storePtr(TrustedImmPtr(0), GPRInfo::regT0);
     864
     865    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
     866        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
     867        loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));
     868    }
     869    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
     870#if USE(JSVALUE64)
     871        load64(buffer + i, GPRInfo::toRegister(i));
     872#else
     873        load32(buffer + i, GPRInfo::toRegister(i));
     874#endif
     875    }
     876}
     877
    824878void AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBufferImpl(GPRReg calleeSavesBuffer)
    825879{
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h

    r221832 r222009  
    992992        return GPRInfo::regT5;
    993993    }
     994
     995    // Add a debug call. This call has no effect on JIT code execution state.
     996    void debugCall(VM&, V_DebugOperation_EPP function, void* argument);
    994997
    995998    // These methods JIT generate dynamic, debug-only checks - akin to ASSERTs.
     
    14631466    void emitDumbVirtualCall(VM&, CallLinkInfo*);
    14641467   
    1465     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    14661468    Vector<BytecodeAndMachineOffset>& decodedCodeMapFor(CodeBlock*);
    14671469
     
    16551657    CodeBlock* m_baselineCodeBlock;
    16561658
    1657     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    16581659    HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps;
    16591660};
  • trunk/Source/JavaScriptCore/jit/JITOperations.cpp

    r221849 r222009  
    23302330}
    23312331
    2332 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    23332332void JIT_OPERATION operationOSRWriteBarrier(ExecState* exec, JSCell* cell)
    23342333{
  • trunk/Source/JavaScriptCore/jit/JITOperations.h

    r221959 r222009  
    449449
    450450void JIT_OPERATION operationWriteBarrierSlowPath(ExecState*, JSCell*);
    451 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
    452451void JIT_OPERATION operationOSRWriteBarrier(ExecState*, JSCell*);
    453452
  • trunk/Source/JavaScriptCore/profiler/ProfilerOSRExit.h

    r221832 r222009  
    11/*
    2  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4444    uint64_t* counterAddress() { return &m_counter; }
    4545    uint64_t count() const { return m_counter; }
    46     void incCount() { m_counter++; }
    47 
     46   
    4847    JSValue toJS(ExecState*) const;
    4948
  • trunk/Source/JavaScriptCore/runtime/JSCJSValue.h

    r221832 r222009  
    22 *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
    33 *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
    4  *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
     4 *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
    55 *
    66 *  This library is free software; you can redistribute it and/or
     
    345345    int32_t payload() const;
    346346
    347     // This should only be used by the LLInt C Loop interpreter and OSRExit code who needs
    348     // synthesize JSValue from its "register"s holding tag and payload values.
     347#if !ENABLE(JIT)
     348    // This should only be used by the LLInt C Loop interpreter who needs
     349    // synthesize JSValue from its "register"s holding tag and payload
     350    // values.
    349351    explicit JSValue(int32_t tag, int32_t payload);
     352#endif
    350353
    351354#elif USE(JSVALUE64)
  • trunk/Source/JavaScriptCore/runtime/JSCJSValueInlines.h

    r221849 r222009  
    342342}
    343343
    344 #if USE(JSVALUE32_64)
     344#if !ENABLE(JIT)
    345345inline JSValue::JSValue(int32_t tag, int32_t payload)
    346346{
  • trunk/Source/JavaScriptCore/runtime/VM.h

    r221849 r222009  
    572572    Instruction* targetInterpreterPCForThrow;
    573573    uint32_t osrExitIndex;
     574    void* osrExitJumpDestination;
    574575    bool isExecutingInRegExpJIT { false };
    575576