Changeset 221774 in webkit


Timestamp:
Sep 7, 2017 6:14:58 PM
Author:
mark.lam@apple.com
Message:

Use JIT probes for DFG OSR exit.
https://bugs.webkit.org/show_bug.cgi?id=175144
<rdar://problem/33437050>

Reviewed by Saam Barati.

This patch does the following:

  1. Replaces osrExitGenerationThunkGenerator() with osrExitThunkGenerator(). While osrExitGenerationThunkGenerator() generates a thunk that compiles a unique OSR off-ramp for each DFG OSR exit site, osrExitThunkGenerator() generates a thunk that just executes the OSR exit.

The osrExitThunkGenerator() generated thunk works by using a single JIT probe
to call OSRExit::executeOSRExit(). The JIT probe takes care of preserving
CPU registers, and providing the Probe::Stack mechanism for modifying the
stack frame.
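In outline, the generated thunk is now tiny. The following is a hedged sketch of its shape, not the verbatim patch; the LinkBuffer/FINALIZE_CODE boilerplate is assumed from other JSC thunk generators, and only MacroAssembler::probe(Probe::Function, void*) is taken from this patch:

    // Sketch: the entire OSR exit off-ramp is one probe call into C++.
    MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
    {
        MacroAssembler jit;
        // The probe saves all CPU state into a Probe::Context, calls
        // OSRExit::executeOSRExit(context), then restores the (possibly
        // modified) state, including pc, sp, and fp.
        jit.probe(OSRExit::executeOSRExit, vm);
        LinkBuffer linkBuffer(jit, GLOBAL_THUNK_ID); // exact LinkBuffer ctor is an assumption
        return FINALIZE_CODE(linkBuffer, ("DFG OSR exit thunk"));
    }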

OSRExit::executeOSRExit() replaces OSRExit::compileOSRExit() and
OSRExit::compileExit(). It is basically a re-write of those functions to
execute the OSR exit work instead of compiling code to execute the work.

As a result, we get the following savings:

  a. no more OSR exit ramp compilation time.
  b. no use of JIT executable memory for storing each unique OSR exit ramp.

On the negative side, we incur these costs:

  c. the OSRExit::executeOSRExit() ramp may be a little slower than the compiled version of the ramp. However, OSR exits are rare, so this small difference should not matter much; it is also offset by the savings from (a).
  d. the Probe::Stack allocates 1K pages of memory for buffering stack modifications. The number of these pages depends on the span of stack memory that the OSR exit ramp reads from and writes to. Since the OSR exit ramp tends to only modify values in the current DFG frame and the current VMEntryRecord, the number of pages tends to be only 1 or 2.

Using the jsc tests as a workload, the vast majority of tests that do OSR
exit use 3 or fewer 1K pages (with the overwhelming majority using just 1 page).
A few pathological tests use up to 14 pages, and one particularly
bad test (function-apply-many-args.js) uses 513 pages.
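For context, each Probe::Stack page snapshots one 1K-aligned span of the real stack into a side buffer, and subsequent reads and writes of that span are redirected to the buffer until it is flushed back. A condensed sketch of the address translation, mirroring the Page constructor and Page::physicalAddressFor() in the ProbeStack.h diff below:

    // Member of Probe::Page (mirrors the ProbeStack.h diff below).
    void* physicalAddressFor(void* logicalAddress)
    {
        // m_physicalAddressOffset was precomputed in the constructor as
        //     &m_buffer - baseAddress
        // so translating a logical stack address into its buffered copy
        // is a single addition (the old code masked and re-based instead).
        return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
    }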

Similar to the old code, the OSR exit ramp still has 2 parts: a 1st part that
is executed only once per exit site to compute some values that are used by
all exit operations from that site, and a 2nd part that executes the exit. The
1st part is guarded by a check of whether exit.exitState has already been
initialized. The computed values are cached in exit.exitState.
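Sketched in pseudo-C++ (the lookup and helper functions below are illustrative, not the patch's exact code; only vm.osrExitIndex and exit.exitState are from this patch):

    void OSRExit::executeOSRExit(Probe::Context& context)
    {
        VM& vm = *context.arg<VM*>();
        // The exiting DFG code stored this exit's index before jumping to
        // the thunk (see store32(TrustedImm32(i), &vm()->osrExitIndex) in
        // the DFGJITCompiler.cpp diff below).
        uint32_t exitIndex = vm.osrExitIndex;
        OSRExit& exit = lookUpExit(vm, exitIndex); // hypothetical helper

        // Part 1: executed only once per exit site; computes and caches
        // the values every subsequent exit from this site will reuse.
        if (!exit.exitState)
            exit.exitState = computeExitState(context, exit); // hypothetical helper

        // Part 2: executed on every exit, using the cached state: recover
        // values, restore callee saves, reify inlined call frames, then
        // point pc/sp/fp at the baseline target.
        executeExitWithCachedState(context, exit); // hypothetical helper
    }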

Because the OSR exit thunk no longer compiles an OSR exit off-ramp, we no
longer need the facility to patch the site that jumps to the OSR exit ramp.
The DFG::JITCompiler has been modified to remove this patching code.

  2. Fixed the bottommost Probe::Context and Probe::Stack get/set methods to use std::memcpy to avoid strict aliasing issues.

Also optimized the implementation of Probe::Stack::physicalAddressFor().
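The fix uses the standard std::memcpy type-punning idiom. A minimal generic sketch (the function name is illustrative, not from the patch):

    #include <cstring>      // std::memcpy
    #include <type_traits>  // std::remove_const

    // Dereferencing memory through a pointer of an unrelated type breaks
    // C++ strict aliasing rules; copying the bytes with std::memcpy is
    // well-defined, and optimizing compilers lower it to a plain load.
    template<typename T>
    T loadUnaliased(const void* from)
    {
        typename std::remove_const<T>::type to { };
        std::memcpy(&to, from, sizeof(to));
        return to;
    }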

  3. Miscellaneous convenience methods added to make the Probe::Context easier to use.
  4. Added a Probe::Frame class that makes it easier to get/set operands and arguments in a given frame using the deferred write properties of the Probe::Stack. Probe::Frame makes it easier to do some of the recovery work in the OSR exit ramp (see the usage sketch after the function list below).
  5. Cloned or converted some functions needed by the OSR exit ramp. The original JIT versions of these functions are still left in place because they are still needed for FTL OSR exit. A FIXME comment has been added to remove them later. These functions include:

DFGOSRExitCompilerCommon.cpp's handleExitCounts() ==>
    CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize()
DFGOSRExitCompilerCommon.cpp's reifyInlinedCallFrames() ==>
    DFGOSRExit.cpp's reifyInlinedCallFrames()
DFGOSRExitCompilerCommon.cpp's adjustAndJumpToTarget() ==>
    DFGOSRExit.cpp's adjustAndJumpToTarget()

MethodOfGettingAValueProfile::emitReportValue() ==>
    MethodOfGettingAValueProfile::reportValue()

DFGOperations.cpp's operationCreateDirectArgumentsDuringExit() ==>
    DFGOSRExit.cpp's createDirectArgumentsDuringExit()
DFGOperations.cpp's operationCreateClonedArgumentsDuringExit() ==>
    DFGOSRExit.cpp's createClonedArgumentsDuringExit()
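As an illustration of item 4, the C++ exit ramp can now shuffle recovered values with ordinary function calls instead of emitting loads and stores. A hedged sketch using Probe::Frame accessors that appear in this patch (srcOperand/destOperand and the wrapper function are placeholders):

    // Illustrative only: move a recovered value into another frame slot.
    // Writes go through the Probe::Stack's deferred-write pages and are
    // flushed back to the real stack when the probe returns.
    static void moveRecoveredValue(Probe::Context& context, int srcOperand, int destOperand)
    {
        Probe::Frame frame(context.fp(), context.stack());
        JSValue value = frame.operand(srcOperand); // read a frame slot
        frame.setOperand(destOperand, value);      // deferred (buffered) write
    }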

  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/MacroAssembler.cpp:

(JSC::stdFunctionCallback):

  • assembler/MacroAssemblerPrinter.cpp:

(JSC::Printer::printCallback):

  • assembler/ProbeContext.h:

(JSC::Probe::CPUState::gpr const):
(JSC::Probe::CPUState::spr const):
(JSC::Probe::Context::Context):
(JSC::Probe::Context::arg):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):
(JSC::Probe::Context::fpr):
(JSC::Probe::Context::gprName):
(JSC::Probe::Context::sprName):
(JSC::Probe::Context::fprName):
(JSC::Probe::Context::gpr const):
(JSC::Probe::Context::spr const):
(JSC::Probe::Context::fpr const):
(JSC::Probe::Context::pc):
(JSC::Probe::Context::fp):
(JSC::Probe::Context::sp):
(JSC::Probe:: const): Deleted.

  • assembler/ProbeFrame.h: Added.

(JSC::Probe::Frame::Frame):
(JSC::Probe::Frame::getArgument):
(JSC::Probe::Frame::getOperand):
(JSC::Probe::Frame::get):
(JSC::Probe::Frame::setArgument):
(JSC::Probe::Frame::setOperand):
(JSC::Probe::Frame::set):

  • assembler/ProbeStack.cpp:

(JSC::Probe::Page::Page):

  • assembler/ProbeStack.h:

(JSC::Probe::Page::get):
(JSC::Probe::Page::set):
(JSC::Probe::Page::physicalAddressFor):
(JSC::Probe::Stack::lowWatermark):
(JSC::Probe::Stack::get):
(JSC::Probe::Stack::set):

  • bytecode/ArithProfile.cpp:
  • bytecode/ArithProfile.h:
  • bytecode/ArrayProfile.h:

(JSC::ArrayProfile::observeArrayMode):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):

  • bytecode/CodeBlock.h:

(JSC::CodeBlock::addressOfOSRExitCounter): Deleted.

  • bytecode/ExecutionCounter.h:

(JSC::ExecutionCounter::hasCrossedThreshold const):
(JSC::ExecutionCounter::setNewThresholdForOSRExit):

  • bytecode/MethodOfGettingAValueProfile.cpp:

(JSC::MethodOfGettingAValueProfile::reportValue):

  • bytecode/MethodOfGettingAValueProfile.h:
  • dfg/DFGDriver.cpp:

(JSC::DFG::compileImpl):

  • dfg/DFGJITCode.cpp:

(JSC::DFG::JITCode::findPC): Deleted.

  • dfg/DFGJITCode.h:
  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::link):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::jsValueFor):
(JSC::DFG::restoreCalleeSavesFor):
(JSC::DFG::saveCalleeSavesFor):
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::saveOrCopyCalleeSavesFor):
(JSC::DFG::createDirectArgumentsDuringExit):
(JSC::DFG::createClonedArgumentsDuringExit):
(JSC::DFG::OSRExit::OSRExit):
(JSC::DFG::emitRestoreArguments):
(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::printOSRExit):
(JSC::DFG::OSRExit::setPatchableCodeOffset): Deleted.
(JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const): Deleted.
(JSC::DFG::OSRExit::codeLocationForRepatch const): Deleted.
(JSC::DFG::OSRExit::correctJump): Deleted.
(JSC::DFG::OSRExit::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::compileOSRExit): Deleted.
(JSC::DFG::OSRExit::compileExit): Deleted.
(JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure): Deleted.

  • dfg/DFGOSRExit.h:

(JSC::DFG::OSRExitState::OSRExitState):
(JSC::DFG::OSRExit::considerAddingAsFrequentExitSite):

  • dfg/DFGOSRExitCompilerCommon.cpp:
  • dfg/DFGOSRExitCompilerCommon.h:
  • dfg/DFGOperations.cpp:
  • dfg/DFGOperations.h:
  • dfg/DFGThunks.cpp:

(JSC::DFG::osrExitThunkGenerator):
(JSC::DFG::osrExitGenerationThunkGenerator): Deleted.

  • dfg/DFGThunks.h:
  • jit/AssemblyHelpers.cpp:

(JSC::AssemblyHelpers::debugCall): Deleted.

  • jit/AssemblyHelpers.h:
  • jit/JITOperations.cpp:
  • jit/JITOperations.h:
  • profiler/ProfilerOSRExit.h:

(JSC::Profiler::OSRExit::incCount):

  • runtime/JSCJSValue.h:
  • runtime/JSCJSValueInlines.h:
  • runtime/VM.h:
Location:
trunk/Source/JavaScriptCore
Files:
1 added
35 edited

  • trunk/Source/JavaScriptCore/ChangeLog

  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

r221223 → r221774

@@
     FE10AAEE1F44D954009DEDC5 /* ProbeContext.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAED1F44D946009DEDC5 /* ProbeContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
     FE10AAF41F468396009DEDC5 /* ProbeContext.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */; };
+    FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */; };
     FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */; };
     FE1220281BE7F5910039E6F2 /* JITAddGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */; };
@@
     FE10AAED1F44D946009DEDC5 /* ProbeContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeContext.h; sourceTree = "<group>"; };
     FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProbeContext.cpp; sourceTree = "<group>"; };
+    FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
     FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITAddGenerator.cpp; sourceTree = "<group>"; };
     FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITAddGenerator.h; sourceTree = "<group>"; };
@@
     FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
     FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
+    FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */,
     FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
     FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,
@@
     AD4937C81DDD0AAE0077C807 /* WebAssemblyModuleRecord.h in Headers */,
     AD2FCC2D1DB838FD00B3E736 /* WebAssemblyPrototype.h in Headers */,
+    FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */,
     AD2FCBF91DB58DAD00B3E736 /* WebAssemblyRuntimeErrorConstructor.h in Headers */,
     AD2FCC1E1DB59CB200B3E736 /* WebAssemblyRuntimeErrorConstructor.lut.h in Headers */,
  • trunk/Source/JavaScriptCore/assembler/MacroAssembler.cpp

r220958 → r221774

@@
 static void stdFunctionCallback(Probe::Context& context)
 {
-    auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
+    auto func = context.arg<const std::function<void(Probe::Context&)>*>();
     (*func)(context);
 }
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp

r220958 → r221774

@@
 {
     auto& out = WTF::dataFile();
-    PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
+    PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
     for (size_t i = 0; i < list.size(); i++) {
         auto& record = list[i];
  • trunk/Source/JavaScriptCore/assembler/ProbeContext.h

r220960 → r221774

@@
     inline double& fpr(FPRegisterID);

-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
+    template<typename T> T gpr(RegisterID) const;
+    template<typename T> T spr(SPRegisterID) const;
     template<typename T> T fpr(FPRegisterID) const;
@@
 }

-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+template<typename T>
 T CPUState::gpr(RegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->gpr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
-T CPUState::gpr(RegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->gpr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+    auto& from = cpu->gpr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
+}
+
+template<typename T>
 T CPUState::spr(SPRegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->spr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
-T CPUState::spr(SPRegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->spr(id));
+    auto& from = cpu->spr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
 }

@@
     Context(State* state)
-        : m_state(state)
-        , arg(state->arg)
-        , cpu(state->cpu)
+        : cpu(state->cpu)
+        , m_state(state)
     { }

-    uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
-    double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
-    const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
-    const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
-    const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
-
-    void*& pc() { return m_state->cpu.pc(); }
-    void*& fp() { return m_state->cpu.fp(); }
-    void*& sp() { return m_state->cpu.sp(); }
-
-    template<typename T> T pc() { return m_state->cpu.pc<T>(); }
-    template<typename T> T fp() { return m_state->cpu.fp<T>(); }
-    template<typename T> T sp() { return m_state->cpu.sp<T>(); }
+    template<typename T>
+    T arg() { return reinterpret_cast<T>(m_state->arg); }
+
+    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
+    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+    double& fpr(FPRegisterID id) { return cpu.fpr(id); }
+    const char* gprName(RegisterID id) { return cpu.gprName(id); }
+    const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
+    const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
+
+    template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
+    template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
+    template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
+
+    void*& pc() { return cpu.pc(); }
+    void*& fp() { return cpu.fp(); }
+    void*& sp() { return cpu.sp(); }
+
+    template<typename T> T pc() { return cpu.pc<T>(); }
+    template<typename T> T fp() { return cpu.fp<T>(); }
+    template<typename T> T sp() { return cpu.sp<T>(); }

     Stack& stack()
@@
     Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }

+    CPUState& cpu;
+
 private:
     State* m_state;
-public:
-    void* arg;
-    CPUState& cpu;
-
-private:
     Stack m_stack;
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.cpp

r220960 → r221774

@@
 Page::Page(void* baseAddress)
     : m_baseLogicalAddress(baseAddress)
+    , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
 {
     memcpy(&m_buffer, baseAddress, s_pageSize);
  • trunk/Source/JavaScriptCore/assembler/ProbeStack.h

r220960 → r221774

@@
     T get(void* logicalAddress)
     {
-        return *physicalAddressFor<T*>(logicalAddress);
+        void* from = physicalAddressFor(logicalAddress);
+        typename std::remove_const<T>::type to { };
+        std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+        return to;
+    }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
     }
@@
     {
         m_dirtyBits |= dirtyBitFor(logicalAddress);
-        *physicalAddressFor<T*>(logicalAddress) = value;
+        void* to = physicalAddressFor(logicalAddress);
+        std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
+    }
+    template<typename T>
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    {
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
@@
     }

-    template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
-    T physicalAddressFor(void* logicalAddress)
-    {
-        uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
-        void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
-        return reinterpret_cast<T>(physicalAddress);
+    void* physicalAddressFor(void* logicalAddress)
+    {
+        return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
     }
@@
     void* m_baseLogicalAddress { nullptr };
     uintptr_t m_dirtyBits { 0 };
+    ptrdiff_t m_physicalAddressOffset;

     static constexpr size_t s_pageSize = 1024;
@@
     Stack(Stack&& other);

-    void* lowWatermark() { return m_lowWatermark; }
-
-    template<typename T>
-    typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
-    {
-        Page* page = pageFor(address);
-        return page->get<T>(address);
-    }
-
-    template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-    void set(void* address, T value)
-    {
-        Page* page = pageFor(address);
-        page->set<T>(address, value);
-
+    void* lowWatermark()
+    {
         // We use the chunkAddress for the low watermark because we'll be doing write backs
         // to the stack in increments of chunks. Hence, we'll treat the lowest address of
         // the chunk as the low watermark of any given set address.
-        void* chunkAddress = Page::chunkAddressFor(address);
-        if (chunkAddress < m_lowWatermark)
-            m_lowWatermark = chunkAddress;
-    }
-
-    template<typename T>
-    typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
+        return Page::chunkAddressFor(m_lowWatermark);
+    }
+
+    template<typename T>
+    T get(void* address)
     {
         Page* page = pageFor(address);
-        return bitwise_cast<double>(page->get<uint64_t>(address));
-    }
-
-    template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-    void set(void* address, double value)
-    {
-        set<uint64_t>(address, bitwise_cast<uint64_t>(value));
+        return page->get<T>(address);
+    }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
+    }
+
+    template<typename T>
+    void set(void* address, T value)
+    {
+        Page* page = pageFor(address);
+        page->set<T>(address, value);
+
+        if (address < m_lowWatermark)
+            m_lowWatermark = address;
+    }
+    template<typename T>
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    {
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
  • trunk/Source/JavaScriptCore/bytecode/ArithProfile.cpp

r206392 → r221774

@@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@

 #if ENABLE(JIT)
+// FIXME: This is being supplanted by observeResult(). Remove this once
+// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void ArithProfile::emitObserveResult(CCallHelpers& jit, JSValueRegs regs, TagRegistersMode mode)
 {
  • trunk/Source/JavaScriptCore/bytecode/ArithProfile.h

r206392 → r221774

@@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@
     // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble) if it sees a
     // double. Sets NonNumber if it sees a non-number.
+    // FIXME: This is being supplanted by observeResult(). Remove this once
+    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     void emitObserveResult(CCallHelpers&, JSValueRegs, TagRegistersMode = HaveTagRegisters);
  • trunk/Source/JavaScriptCore/bytecode/ArrayProfile.h

r218794 → r221774

@@
 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);

+    void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
     ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
     bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

r221196 → r221774

@@
 }

+auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
+{
+    DFG::OSRExitBase& exit = exitState.exit;
+    if (!exitKindMayJettison(exit.m_kind)) {
+        // FIXME: We may want to notice that we're frequently exiting
+        // at an op_catch that we didn't compile an entrypoint for, and
+        // then trigger a reoptimization of this CodeBlock:
+        // https://bugs.webkit.org/show_bug.cgi?id=175842
+        return OptimizeAction::None;
+    }
+
+    exit.m_count++;
+    m_osrExitCounter++;
+
+    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
+    ASSERT(baselineCodeBlock == baselineAlternative());
+    if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
+        return OptimizeAction::ReoptimizeNow;
+
+    // We want to figure out if there's a possibility that we're in a loop. For the outermost
+    // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
+    // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
+    // problem is the inlined functions, which might also have loops, but whose baseline versions
+    // don't know where to look for the exit count. Figure out if those loops are severe enough
+    // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
+    // Otherwise, we should use the normal reoptimization trigger.
+
+    bool didTryToEnterInLoop = false;
+    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+        if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
+            didTryToEnterInLoop = true;
+            break;
+        }
+    }
+
+    uint32_t exitCountThreshold = didTryToEnterInLoop
+        ? exitCountThresholdForReoptimizationFromLoop()
+        : exitCountThresholdForReoptimization();
+
+    if (m_osrExitCounter > exitCountThreshold)
+        return OptimizeAction::ReoptimizeNow;
+
+    // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
+    baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
+    return OptimizeAction::None;
+}
+
 void CodeBlock::optimizeNextInvocation()
 {
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.h

r221196 → r221774

@@
 namespace JSC {

+namespace DFG {
+struct OSRExitState;
+} // namespace DFG
+
 class BytecodeLivenessAnalysis;
 class CodeBlockSet;
@@
     void countOSRExit() { m_osrExitCounter++; }

-    uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
-
+    enum class OptimizeAction { None, ReoptimizeNow };
+    OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
+
+    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
  • trunk/Source/JavaScriptCore/bytecode/ExecutionCounter.h

r206525 → r221774

@@
 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@
 int32_t applyMemoryUsageHeuristicsAndConvertToInt(int32_t value, CodeBlock*);

+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 inline int32_t formattedTotalExecutionCount(float value)
 {
@@
     void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
     bool checkIfThresholdCrossedAndSet(CodeBlock*);
+    bool hasCrossedThreshold() const { return m_counter >= 0; }
     void setNewThreshold(int32_t threshold, CodeBlock*);
     void deferIndefinitely();
@@
     void dump(PrintStream&) const;

+    void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
+    {
+        m_activeThreshold = activeThreshold;
+        m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
+        m_totalCount = memoryUsageAdjustedThreshold;
+    }
+
     static int32_t maximumExecutionCountsBetweenCheckpoints()
     {
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp

r208761 → r221774

@@
 /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@
 }

+// FIXME: This is being supplanted by reportValue(). Remove this once
+// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueRegs regs) const
 {
@@
 }

+void MethodOfGettingAValueProfile::reportValue(JSValue value)
+{
+    switch (m_kind) {
+    case None:
+        return;
+
+    case Ready:
+        *u.profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+
+    case LazyOperand: {
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+
+        ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
+        LazyOperandValueProfile* profile =
+            u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
+        *profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+    }
+
+    case ArithProfileReady: {
+        u.arithProfile->observeResult(value);
+        return;
+    } }
+
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
  • trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h

r218794 → r221774

@@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@
     explicit operator bool() const { return m_kind != None; }
-
+
+    // FIXME: emitReportValue is being supplanted by reportValue(). Remove this once
+    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     void emitReportValue(CCallHelpers&, JSValueRegs) const;
-
+    void reportValue(JSValue);
+
 private:
     enum Kind {
  • trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp

    r218794 r221774  
    11/*
    2  * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    9090    // Make sure that any stubs that the DFG is going to use are initialized. We want to
    9191    // make sure that all JIT code generation does finalization on the main thread.
    92     vm.getCTIStub(osrExitGenerationThunkGenerator);
     92    vm.getCTIStub(osrExitThunkGenerator);
    9393    vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
    9494    vm.getCTIStub(linkCallThunkGenerator);
  • trunk/Source/JavaScriptCore/dfg/DFGJITCode.cpp

r221602 → r221774

@@
 }

-std::optional<CodeOrigin> JITCode::findPC(CodeBlock*, void* pc)
-{
-    for (OSRExit& exit : osrExit) {
-        if (ExecutableMemoryHandle* handle = exit.m_code.executableMemory()) {
-            if (handle->start() <= pc && pc < handle->end())
-                return std::optional<CodeOrigin>(exit.m_codeOriginForExitProfile);
-        }
-    }
-
-    return std::nullopt;
-}
-
 void JITCode::finalizeOSREntrypoints()
 {
  • trunk/Source/JavaScriptCore/dfg/DFGJITCode.h

r221602 → r221774

@@
     static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }

-    std::optional<CodeOrigin> findPC(CodeBlock*, void* pc) override;
-
 private:
     friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

r221602 → r221774

@@
     }

+    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
+    CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
@@
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm()->osrExitIndex);
-        exit.setPatchableCodeOffset(patchableJump());
+        Jump target = jump();
+        addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
+            linkBuffer.link(target, osrExitThunkLabel);
+        });
     }
 }
@@
     }

-    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitGenerationThunkGenerator);
-    CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
-        linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
-        exit.correctJump(linkBuffer);
         if (info.m_replacementSource.isSet()) {
             m_jitCode->common.jumpReplacements.append(JumpReplacement(
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp

r221528 → r221774

@@
 /*
- * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@

 #include "AssemblyHelpers.h"
+#include "ClonedArguments.h"
 #include "DFGGraph.h"
 #include "DFGMayExit.h"
-#include "DFGOSRExitCompilerCommon.h"
 #include "DFGOSRExitPreparation.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
-#include "FrameTracers.h"
+#include "DirectArguments.h"
+#include "InlineCallFrame.h"
 #include "JSCInlines.h"
+#include "JSCJSValue.h"
 #include "OperandsInlines.h"
+#include "ProbeContext.h"
+#include "ProbeFrame.h"

 namespace JSC { namespace DFG {
+
+using CPUState = Probe::CPUState;
+using Context = Probe::Context;
+using Frame = Probe::Frame;
+
+static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
+static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
+static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
+
+static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
+{
+    if (source.isAddress()) {
+        JSValue result;
+        std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
+        return result;
+    }
+#if USE(JSVALUE64)
+    return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
+#else
+    if (source.hasKnownTag())
+        return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
+    return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
+#endif
+}
+
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static_assert(is64Bit(), "we only support callee save registers on 64-bit");
+
+// Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
+static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
+{
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontRestoreRegisters.get(entry.reg()))
+            continue;
+        // The callee saved values come from the original stack, not the recovered stack.
+        // Hence, we read the values directly from the physical stack memory instead of
+        // going through context.stack().
+        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
+        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
+    }
+}
+
+// Based on AssemblyHelpers::emitSaveCalleeSavesFor().
+static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
+{
+    auto& stack = context.stack();
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontSaveRegisters.get(entry.reg()))
+            continue;
+        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+    }
+}
+
+// Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
+static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
+{
+    VM& vm = *context.arg<VM*>();
+
+    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
+    RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
+    unsigned registerCount = allCalleeSaves->size();
+
+    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
+    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
+
+    // Restore all callee saves.
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = allCalleeSaves->at(i);
+        if (dontRestoreRegisters.get(entry.reg()))
+            continue;
+        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
+        if (entry.reg().isGPR())
+            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
+        else
+            context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
+    }
+}
+
+// Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
+static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
+{
+    VM& vm = *context.arg<VM*>();
+    auto& stack = context.stack();
+
+    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
+    void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
+
+    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
+    RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
+    unsigned registerCount = allCalleeSaves->size();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = allCalleeSaves->at(i);
+        if (dontCopyRegisters.get(entry.reg()))
+            continue;
+        if (entry.reg().isGPR())
+            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+        else
+            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
+    }
+}
+
+// Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
+static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
+{
+    Frame frame(context.fp(), context.stack());
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontSaveRegisters.get(entry.reg()))
+            continue;
+
+        uintptr_t savedRegisterValue;
+
+        if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
+            savedRegisterValue = frame.get<uintptr_t>(entry.offset());
+        else
+            savedRegisterValue = context.gpr(entry.reg().gpr());
+
+        frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
+    }
+}
+#else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
+static void saveCalleeSavesFor(Context&, CodeBlock*) { }
+static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
+static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
+static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
+
+#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = *context.arg<VM*>();
+
+    ASSERT(vm.heap.isDeferred());
+
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+
+    unsigned length = argumentCount - 1;
+    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
+    DirectArguments* result = DirectArguments::create(
+        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
+
+    result->callee().set(vm, result, callee);
+
+    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+    Frame frame(frameBase, context.stack());
+    for (unsigned i = length; i--;)
+        result->setIndexQuickly(vm, i, frame.argument(i));
+
+    return result;
+}
+
+static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = *context.arg<VM*>();
+    ExecState* exec = context.fp<ExecState*>();
+
+    ASSERT(vm.heap.isDeferred());
+
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+
+    unsigned length = argumentCount - 1;
+    ClonedArguments* result = ClonedArguments::createEmpty(
+        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
+
+    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+    Frame frame(frameBase, context.stack());
+    for (unsigned i = length; i--;)
+        result->putDirectIndex(exec, i, frame.argument(i));
+    return result;
+}

 OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
@@
 }

-void OSRExit::setPatchableCodeOffset(MacroAssembler::PatchableJump check)
-{
-    m_patchableCodeOffset = check.m_jump.m_label.m_offset;
-}
-
-MacroAssembler::Jump OSRExit::getPatchableCodeOffsetAsJump() const
-{
-    return MacroAssembler::Jump(AssemblerLabel(m_patchableCodeOffset));
-}
-
-CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* dfgCodeBlock) const
-{
-    return CodeLocationJump(dfgCodeBlock->jitCode()->dataAddressAtOffset(m_patchableCodeOffset));
-}
-
-void OSRExit::correctJump(LinkBuffer& linkBuffer)
-{
-    MacroAssembler::Label label;
-    label.m_label.m_offset = m_patchableCodeOffset;
-    m_patchableCodeOffset = linkBuffer.offsetOf(label);
-}
-
-void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecovery>& operands)
-{
+static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
+{
+    Frame frame(context.fp(), context.stack());
+
     HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
     for (size_t index = 0; index < operands.size(); ++index) {
@@
         auto iter = alreadyAllocatedArguments.find(id);
         if (iter != alreadyAllocatedArguments.end()) {
-            JSValueRegs regs = JSValueRegs::withTwoAvailableRegs(GPRInfo::regT0, GPRInfo::regT1);
-            jit.loadValue(CCallHelpers::addressFor(iter->value), regs);
-            jit.storeValue(regs, CCallHelpers::addressFor(operand));
+            frame.setOperand(operand, frame.operand(iter->value));
            continue;
        }

         InlineCallFrame* inlineCallFrame =
-            jit.codeBlock()->jitCode()->dfg()->minifiedDFG.at(id)->inlineCallFrame();
+            dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();

         int stackOffset;
@@
             stackOffset = 0;

-        if (!inlineCallFrame || inlineCallFrame->isClosureCall) {
-            jit.loadPtr(
-                AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee),
-                GPRInfo::regT0);
-        } else {
-            jit.move(
-                AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeRecovery.constant().asCell()),
-                GPRInfo::regT0);
-        }
-
-        if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
-            jit.load32(
-                AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCount),
-                GPRInfo::regT1);
-        } else {
-            jit.move(
-                AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis),
-                GPRInfo::regT1);
-        }
-
-        jit.setupArgumentsWithExecState(
-            AssemblyHelpers::TrustedImmPtr(inlineCallFrame), GPRInfo::regT0, GPRInfo::regT1);
+        JSFunction* callee;
+        if (!inlineCallFrame || inlineCallFrame->isClosureCall)
+            callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
+        else
+            callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
     296
     297        int32_t argumentCount;
     298        if (!inlineCallFrame || inlineCallFrame->isVarargs())
     299            argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset);
     300        else
     301            argumentCount = inlineCallFrame->argumentCountIncludingThis;
     302
     303        JSCell* argumentsObject;
    132304        switch (recovery.technique()) {
    133305        case DirectArgumentsThatWereNotCreated:
    134             jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateDirectArgumentsDuringExit)), GPRInfo::nonArgGPR0);
     306            argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
    135307            break;
    136308        case ClonedArgumentsThatWereNotCreated:
    137             jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateClonedArgumentsDuringExit)), GPRInfo::nonArgGPR0);
     309            argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
    138310            break;
    139311        default:
     
    141313            break;
    142314        }
    143         jit.call(GPRInfo::nonArgGPR0);
    144         jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand));
     315        frame.setOperand(operand, JSValue(argumentsObject));
    145316
    146317        alreadyAllocatedArguments.add(id, operand);
     
    148319}
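To make the dedup in the loop above concrete, a hypothetical pair of operands recovered from the same phantom arguments node behaves like this:

    // First visit for a given MinifiedID: materialize and remember it.
    //     frame.setOperand(operandA, JSValue(argumentsObject));
    //     alreadyAllocatedArguments.add(id, operandA);
    // Later visits for the same id: alias the already-materialized cell
    // rather than allocating a second arguments object.
    //     frame.setOperand(operandB, frame.operand(operandA));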
    149320
    150 void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec)
    151 {
    152     VM* vm = &exec->vm();
    153     auto scope = DECLARE_THROW_SCOPE(*vm);
    154 
    155     if (vm->callFrameForCatch)
    156         RELEASE_ASSERT(vm->callFrameForCatch == exec);
     321void OSRExit::executeOSRExit(Context& context)
     322{
     323    VM& vm = *context.arg<VM*>();
     324    auto scope = DECLARE_THROW_SCOPE(vm);
     325
     326    ExecState* exec = context.fp<ExecState*>();
     327    ASSERT(&exec->vm() == &vm);
     328
     329    if (vm.callFrameForCatch) {
     330        exec = vm.callFrameForCatch;
     331        context.fp() = exec;
     332    }
    157333
    158334    CodeBlock* codeBlock = exec->codeBlock();
     
     162338    // It's preferable that we don't GC while in here. Anyway, doing so wouldn't
    163339    // really be profitable.
    164     DeferGCForAWhile deferGC(vm->heap);
    165 
    166     uint32_t exitIndex = vm->osrExitIndex;
    167     OSRExit& exit = codeBlock->jitCode()->dfg()->osrExit[exitIndex];
    168 
    169     if (vm->callFrameForCatch)
    170         ASSERT(exit.m_kind == GenericUnwind);
    171     if (exit.isExceptionHandler())
    172         ASSERT_UNUSED(scope, !!scope.exception());
    173    
    174     prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
    175 
    176     // Compute the value recoveries.
    177     Operands<ValueRecovery> operands;
    178     codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands);
    179 
    180     SpeculationRecovery* recovery = 0;
    181     if (exit.m_recoveryIndex != UINT_MAX)
    182         recovery = &codeBlock->jitCode()->dfg()->speculationRecovery[exit.m_recoveryIndex];
    183 
    184     {
    185         CCallHelpers jit(codeBlock);
    186 
    187         if (exit.m_kind == GenericUnwind) {
     188             // We are acting as a de facto op_catch because we arrive here from genericUnwind().
    189             // So, we must restore our call frame and stack pointer.
    190             jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(*vm);
    191             jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
    192         }
    193         jit.addPtr(
    194             CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
    195             GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
    196 
    197         jit.jitAssertHasValidCallFrame();
    198 
    199         if (UNLIKELY(vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
    200             Profiler::Database& database = *vm->m_perBytecodeProfiler;
     340    DeferGCForAWhile deferGC(vm.heap);
     341
     342    uint32_t exitIndex = vm.osrExitIndex;
     343    DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
     344    OSRExit& exit = dfgJITCode->osrExit[exitIndex];
     345
     346    ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
     347    ASSERT_UNUSED(scope, !exit.isExceptionHandler() || !!scope.exception());
     348
     349    if (UNLIKELY(!exit.exitState)) {
     350        // We only need to execute this block once for each OSRExit record. The computed
      351        // results will be cached in the OSRExitState record for use by the rest of the
     352        // exit ramp code.
     353
     354        // Ensure we have baseline codeBlocks to OSR exit to.
     355        prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
     356
     357        CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
     358        ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
     359
     360        // Compute the value recoveries.
     361        Operands<ValueRecovery> operands;
     362        dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands);
     363
     364        SpeculationRecovery* recovery = nullptr;
     365        if (exit.m_recoveryIndex != UINT_MAX)
     366            recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
     367
     368        int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
     369        double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
     370        ASSERT(adjustedThreshold > 0);
     371        adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
     372
     373        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
     374        Vector<BytecodeAndMachineOffset> decodedCodeMap;
     375        codeBlockForExit->jitCodeMap()->decode(decodedCodeMap);
     376
     377        BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
     378
     379        ASSERT(mapping);
     380        ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex);
     381
     382        ptrdiff_t finalStackPointerOffset = codeBlockForExit->stackPointerOffset() * sizeof(Register);
     383
     384        void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
     385
     386        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, recovery, finalStackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget));
     387
     388        if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
     389            Profiler::Database& database = *vm.m_perBytecodeProfiler;
    201390            Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
    202391
     
    204393                exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
    205394                exit.m_kind, exit.m_kind == UncountableInvalidation);
    206             jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(profilerExit->counterAddress()));
     395            exit.exitState->profilerExit = profilerExit;
    207396        }
    208397
    209         compileExit(jit, *vm, exit, operands, recovery);
    210 
    211         LinkBuffer patchBuffer(jit, codeBlock);
    212         exit.m_code = FINALIZE_CODE_IF(
    213             shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseDFGOSRExit(),
    214             patchBuffer,
    215             ("DFG OSR exit #%u (%s, %s) from %s, with operands = %s",
     398        if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
     399            dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
    216400                exitIndex, toCString(exit.m_codeOrigin).data(),
    217401                exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
    218                 toCString(ignoringContext<DumpContext>(operands)).data()));
    219     }
    220 
    221     MacroAssembler::repatchJump(exit.codeLocationForRepatch(codeBlock), CodeLocationLabel(exit.m_code.code()));
    222 
    223     vm->osrExitJumpDestination = exit.m_code.code().executableAddress();
    224 }
    225 
    226 void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const Operands<ValueRecovery>& operands, SpeculationRecovery* recovery)
    227 {
    228     jit.jitAssertTagsInPlace();
    229 
    230     // Pro-forma stuff.
    231     if (Options::printEachOSRExit()) {
    232         SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
    233         debugInfo->codeBlock = jit.codeBlock();
    234         debugInfo->kind = exit.m_kind;
    235         debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
    236 
    237         jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
    238     }
     402                toCString(ignoringContext<DumpContext>(operands)).data());
     403        }
     404    }
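The block above runs at most once per exit site; subsequent exits reuse the cached OSRExitState. The underlying pattern, as a minimal sketch with placeholder contents, using the same WTF RefCounted/RefPtr machinery the real code uses:

    struct State : RefCounted<State> {
        // site-invariant data computed on the first exit
    };
    RefPtr<State> cachedState; // one per exit site, initially null

    void onExit()
    {
        if (UNLIKELY(!cachedState))
            cachedState = adoptRef(new State); // slow path, first exit only
        State& state = *cachedState.get();
        // ... every exit through this site reuses |state| ...
        UNUSED_PARAM(state);
    }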
     405
     406    OSRExitState& exitState = *exit.exitState.get();
     407    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
     408    ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
     409
     410    Operands<ValueRecovery>& operands = exitState.operands;
     411    SpeculationRecovery* recovery = exitState.recovery;
     412
     413    if (exit.m_kind == GenericUnwind) {
      414        // We are acting as a de facto op_catch because we arrive here from genericUnwind().
     415        // So, we must restore our call frame and stack pointer.
     416        restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
     417        ASSERT(context.fp() == vm.callFrameForCatch);
     418    }
     419    context.sp() = context.fp<uint8_t*>() + (codeBlock->stackPointerOffset() * sizeof(Register));
     420
     421    ASSERT(!(context.fp<uintptr_t>() & 0x7));
     422
     423    if (exitState.profilerExit)
     424        exitState.profilerExit->incCount();
     425
     426    auto& cpu = context.cpu;
     427    Frame frame(cpu.fp(), context.stack());
     428
     429#if USE(JSVALUE64)
     430    ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber);
     431    ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask);
     432#endif
     433
     434    if (UNLIKELY(Options::printEachOSRExit()))
     435        printOSRExit(context, vm.osrExitIndex, exit);
    239436
    240437    // Perform speculation recovery. This only comes into play when an operation
     
    244441        switch (recovery->type()) {
    245442        case SpeculativeAdd:
    246             jit.sub32(recovery->src(), recovery->dest());
    247 #if USE(JSVALUE64)
    248             jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
     443            cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
     444#if USE(JSVALUE64)
     445            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
     446            cpu.gpr(recovery->dest()) |= TagTypeNumber;
    249447#endif
    250448            break;
    251449
    252450        case SpeculativeAddImmediate:
    253             jit.sub32(AssemblyHelpers::Imm32(recovery->immediate()), recovery->dest());
    254 #if USE(JSVALUE64)
    255             jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
     451            cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
     452#if USE(JSVALUE64)
     453            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
     454            cpu.gpr(recovery->dest()) |= TagTypeNumber;
    256455#endif
    257456            break;
     
    259458        case BooleanSpeculationCheck:
    260459#if USE(JSVALUE64)
    261             jit.xor64(AssemblyHelpers::TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());
     460            cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse;
    262461#endif
    263462            break;
     
    282481
    283482            CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
    284             if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
    285 #if USE(JSVALUE64)
    286                 GPRReg usedRegister;
    287                 if (exit.m_jsValueSource.isAddress())
    288                     usedRegister = exit.m_jsValueSource.base();
    289                 else
    290                     usedRegister = exit.m_jsValueSource.gpr();
    291 #else
    292                 GPRReg usedRegister1;
    293                 GPRReg usedRegister2;
    294                 if (exit.m_jsValueSource.isAddress()) {
    295                     usedRegister1 = exit.m_jsValueSource.base();
    296                     usedRegister2 = InvalidGPRReg;
    297                 } else {
    298                     usedRegister1 = exit.m_jsValueSource.payloadGPR();
    299                     if (exit.m_jsValueSource.hasKnownTag())
    300                         usedRegister2 = InvalidGPRReg;
    301                     else
    302                         usedRegister2 = exit.m_jsValueSource.tagGPR();
    303                 }
    304 #endif
    305 
    306                 GPRReg scratch1;
    307                 GPRReg scratch2;
    308 #if USE(JSVALUE64)
    309                 scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister);
    310                 scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister, scratch1);
    311 #else
    312                 scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2);
    313                 scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2, scratch1);
    314 #endif
    315 
    316                 if (isARM64()) {
    317                     jit.pushToSave(scratch1);
    318                     jit.pushToSave(scratch2);
    319                 } else {
    320                     jit.push(scratch1);
    321                     jit.push(scratch2);
    322                 }
    323 
    324                 GPRReg value;
    325                 if (exit.m_jsValueSource.isAddress()) {
    326                     value = scratch1;
    327                     jit.loadPtr(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), value);
    328                 } else
    329                     value = exit.m_jsValueSource.payloadGPR();
    330 
    331                 jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
    332                 jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID());
    333 #if USE(JSVALUE64)
    334                 jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeAndMiscOffset()), scratch1);
    335 #else
    336                 jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeIncludingHistoryOffset()), scratch1);
    337 #endif
    338                 jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
    339                 jit.lshift32(scratch1, scratch2);
    340                 jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
    341 
    342                 if (isARM64()) {
    343                     jit.popToRestore(scratch2);
    344                     jit.popToRestore(scratch1);
    345                 } else {
    346                     jit.pop(scratch2);
    347                     jit.pop(scratch1);
    348                 }
     483            CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
     484            if (ArrayProfile* arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex)) {
     485                Structure* structure = jsValueFor(cpu, exit.m_jsValueSource).asCell()->structure(vm);
     486                arrayProfile->observeStructure(structure);
     487                // FIXME: We should be able to use arrayModeFromStructure() to determine the observed ArrayMode here.
      488                // However, currently, doing so would result in a pdfjs performance regression.
     489                // https://bugs.webkit.org/show_bug.cgi?id=176473
     490                arrayProfile->observeArrayMode(asArrayModes(structure->indexingType()));
    349491            }
    350492        }
    351493
    352         if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) {
    353 #if USE(JSVALUE64)
    354             if (exit.m_jsValueSource.isAddress()) {
    355                 // We can't be sure that we have a spare register. So use the tagTypeNumberRegister,
    356                 // since we know how to restore it.
    357                 jit.load64(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), GPRInfo::tagTypeNumberRegister);
    358                 profile.emitReportValue(jit, JSValueRegs(GPRInfo::tagTypeNumberRegister));
    359                 jit.move(AssemblyHelpers::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
    360             } else
    361                 profile.emitReportValue(jit, JSValueRegs(exit.m_jsValueSource.gpr()));
    362 #else // not USE(JSVALUE64)
    363             if (exit.m_jsValueSource.isAddress()) {
    364                 // Save a register so we can use it.
    365                 GPRReg scratchPayload = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base());
    366                 GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base(), scratchPayload);
    367                 jit.pushToSave(scratchPayload);
    368                 jit.pushToSave(scratchTag);
    369 
    370                 JSValueRegs scratch(scratchTag, scratchPayload);
    371                
    372                 jit.loadValue(exit.m_jsValueSource.asAddress(), scratch);
    373                 profile.emitReportValue(jit, scratch);
    374                
    375                 jit.popToRestore(scratchTag);
    376                 jit.popToRestore(scratchPayload);
    377             } else if (exit.m_jsValueSource.hasKnownTag()) {
    378                 GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.payloadGPR());
    379                 jit.pushToSave(scratchTag);
    380                 jit.move(AssemblyHelpers::TrustedImm32(exit.m_jsValueSource.tag()), scratchTag);
    381                 JSValueRegs value(scratchTag, exit.m_jsValueSource.payloadGPR());
    382                 profile.emitReportValue(jit, value);
    383                 jit.popToRestore(scratchTag);
    384             } else
    385                 profile.emitReportValue(jit, exit.m_jsValueSource.regs());
    386 #endif // USE(JSVALUE64)
    387         }
    388     }
    389 
    390     // What follows is an intentionally simple OSR exit implementation that generates
    391     // fairly poor code but is very easy to hack. In particular, it dumps all state that
    392     // needs conversion into a scratch buffer so that in step 6, where we actually do the
    393     // conversions, we know that all temp registers are free to use and the variable is
    394     // definitely in a well-known spot in the scratch buffer regardless of whether it had
    395     // originally been in a register or spilled. This allows us to decouple "where was
    396     // the variable" from "how was it represented". Consider that the
     397     // the variable" from "how was it represented". Consider the
    398     // particular place and that that place holds an unboxed int32. We have two different
    399     // places that a value could be (displaced, register) and a bunch of different
    400     // ways of representing a value. The number of recoveries is two * a bunch. The code
    401     // below means that we have to have two + a bunch cases rather than two * a bunch.
    402     // Once we have loaded the value from wherever it was, the reboxing is the same
    403     // regardless of its location. Likewise, before we do the reboxing, the way we get to
    404     // the value (i.e. where we load it from) is the same regardless of its type. Because
    405     // the code below always dumps everything into a scratch buffer first, the two
    406     // questions become orthogonal, which simplifies adding new types and adding new
    407     // locations.
    408     //
    409     // This raises the question: does using such a suboptimal implementation of OSR exit,
    410     // where we always emit code to dump all state into a scratch buffer only to then
     411     // dump it right back into the stack, hurt us in any way? The answer is that OSR exits
    412     // are rare. Our tiering strategy ensures this. This is because if an OSR exit is
    413     // taken more than ~100 times, we jettison the DFG code block along with all of its
    414     // exits. It is impossible for an OSR exit - i.e. the code we compile below - to
    415     // execute frequently enough for the codegen to matter that much. It probably matters
    416     // enough that we don't want to turn this into some super-slow function call, but so
    417     // long as we're generating straight-line code, that code can be pretty bad. Also
    418     // because we tend to exit only along one OSR exit from any DFG code block - that's an
    419     // empirical result that we're extremely confident about - the code size of this
    420     // doesn't matter much. Hence any attempt to optimize the codegen here is just purely
    421     // harmful to the system: it probably won't reduce either net memory usage or net
    422     // execution time. It will only prevent us from cleanly decoupling "where was the
    423     // variable" from "how was it represented", which will make it more difficult to add
    424     // features in the future and it will make it harder to reason about bugs.
    425 
    426     // Save all state from GPRs into the scratch buffer.
    427 
    428     ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(sizeof(EncodedJSValue) * operands.size());
    429     EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
    430 
    431     for (size_t index = 0; index < operands.size(); ++index) {
     494        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
     495            profile.reportValue(jsValueFor(cpu, exit.m_jsValueSource));
     496    }
     497
     498    // Do all data format conversions and store the results into the stack.
     499    // Note: we need to recover values before restoring callee save registers below
     500    // because the recovery may rely on values in some of callee save registers.
     501
     502    int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
     503    size_t numberOfOperands = operands.size();
     504    for (size_t index = 0; index < numberOfOperands; ++index) {
    432505        const ValueRecovery& recovery = operands[index];
    433 
    434         switch (recovery.technique()) {
    435         case UnboxedInt32InGPR:
    436         case UnboxedCellInGPR:
    437 #if USE(JSVALUE64)
    438         case InGPR:
    439         case UnboxedInt52InGPR:
    440         case UnboxedStrictInt52InGPR:
    441             jit.store64(recovery.gpr(), scratch + index);
    442             break;
    443 #else
    444         case UnboxedBooleanInGPR:
    445             jit.store32(
    446                 recovery.gpr(),
    447                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
    448             break;
    449            
    450         case InPair:
    451             jit.store32(
    452                 recovery.tagGPR(),
    453                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
    454             jit.store32(
    455                 recovery.payloadGPR(),
    456                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
    457             break;
    458 #endif
    459 
    460         default:
    461             break;
    462         }
    463     }
    464 
    465     // And voila, all GPRs are free to reuse.
    466 
    467     // Save all state from FPRs into the scratch buffer.
    468 
    469     for (size_t index = 0; index < operands.size(); ++index) {
    470         const ValueRecovery& recovery = operands[index];
    471 
    472         switch (recovery.technique()) {
    473         case UnboxedDoubleInFPR:
    474         case InFPR:
    475             jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
    476             jit.storeDouble(recovery.fpr(), MacroAssembler::Address(GPRInfo::regT0));
    477             break;
    478 
    479         default:
    480             break;
    481         }
    482     }
    483 
    484     // Now, all FPRs are also free.
    485 
    486     // Save all state from the stack into the scratch buffer. For simplicity we
    487     // do this even for state that's already in the right place on the stack.
    488     // It makes things simpler later.
    489 
    490     for (size_t index = 0; index < operands.size(); ++index) {
    491         const ValueRecovery& recovery = operands[index];
     506        VirtualRegister reg = operands.virtualRegisterForIndex(index);
     507
     508        if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
     509            continue;
     510
     511        int operand = reg.offset();
    492512
    493513        switch (recovery.technique()) {
    494514        case DisplacedInJSStack:
     515            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
     516            break;
     517
     518        case InFPR:
     519            frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
     520            break;
     521
     522#if USE(JSVALUE64)
     523        case InGPR:
     524            frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
     525            break;
     526#else
     527        case InPair:
     528            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
     529            break;
     530#endif
     531
     532        case UnboxedCellInGPR:
     533            frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
     534            break;
     535
    495536        case CellDisplacedInJSStack:
     537            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell()));
     538            break;
     539
     540#if USE(JSVALUE32_64)
     541        case UnboxedBooleanInGPR:
     542            frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
     543            break;
     544#endif
     545
    496546        case BooleanDisplacedInJSStack:
     547#if USE(JSVALUE64)
     548            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
     549#else
     550            frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload()));
     551#endif
     552            break;
     553
     554        case UnboxedInt32InGPR:
     555            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
     556            break;
     557
    497558        case Int32DisplacedInJSStack:
     559            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32()));
     560            break;
     561
     562#if USE(JSVALUE64)
     563        case UnboxedInt52InGPR:
     564            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
     565            break;
     566
     567        case Int52DisplacedInJSStack:
     568            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52()));
     569            break;
     570
     571        case UnboxedStrictInt52InGPR:
     572            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
     573            break;
     574
     575        case StrictInt52DisplacedInJSStack:
     576            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52()));
     577            break;
     578#endif
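        // Worked example for the Int52 cases above (JSValue::int52ShiftAmount
        // is assumed to be 16 on JSVALUE64): an UnboxedInt52InGPR holding the
        // integer 5 actually contains 5 << 16 == 0x50000, so the arithmetic
        // right shift recovers 5; the strict variants hold 5 directly.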
     579
     580        case UnboxedDoubleInFPR:
     581            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
     582            break;
     583
    498584        case DoubleDisplacedInJSStack:
    499 #if USE(JSVALUE64)
    500         case Int52DisplacedInJSStack:
    501         case StrictInt52DisplacedInJSStack:
    502             jit.load64(AssemblyHelpers::addressFor(recovery.virtualRegister()), GPRInfo::regT0);
    503             jit.store64(GPRInfo::regT0, scratch + index);
    504             break;
    505 #else
    506             jit.load32(
    507                 AssemblyHelpers::tagFor(recovery.virtualRegister()),
    508                 GPRInfo::regT0);
    509             jit.load32(
    510                 AssemblyHelpers::payloadFor(recovery.virtualRegister()),
    511                 GPRInfo::regT1);
    512             jit.store32(
    513                 GPRInfo::regT0,
    514                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
    515             jit.store32(
    516                 GPRInfo::regT1,
    517                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
    518             break;
    519 #endif
     585            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble())));
     586            break;
     587
     588        case Constant:
     589            frame.setOperand(operand, recovery.constant());
     590            break;
     591
     592        case DirectArgumentsThatWereNotCreated:
     593        case ClonedArgumentsThatWereNotCreated:
     594            // Don't do this, yet.
     595            break;
    520596
    521597        default:
     598            RELEASE_ASSERT_NOT_REACHED();
    522599            break;
    523600        }
     
    527604    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
    528605    // used by baseline.
    529     jit.addPtr(
    530         CCallHelpers::TrustedImm32(
    531             -jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
    532         CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
     606    cpu.sp() = cpu.fp<uint8_t*>() - (codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register));
    533607
    534608    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
    535     jit.emitRestoreCalleeSaves();
    536     jit.emitSaveCalleeSavesFor(jit.baselineCodeBlock());
     609    restoreCalleeSavesFor(context, codeBlock);
     610    saveCalleeSavesFor(context, baselineCodeBlock);
    537611
    538612    // The tag registers are needed to materialize recoveries below.
    539     jit.emitMaterializeTagCheckRegisters();
     613#if USE(JSVALUE64)
     614    cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber;
     615    cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther;
     616#endif
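    // For reference, the NaN-boxing constants materialized above are assumed
    // to have their usual JSVALUE64 values:
    //     TagTypeNumber   = 0xffff000000000000 (set on every boxed int32)
    //     TagBitTypeOther = 0x2
    //     TagMask         = TagTypeNumber | TagBitTypeOther
    // e.g. the int32 7 boxes as 0xffff000000000007.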
    540617
    541618    if (exit.isExceptionHandler())
    542         jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm);
    543 
    544     // Do all data format conversions and store the results into the stack.
    545 
    546     for (size_t index = 0; index < operands.size(); ++index) {
    547         const ValueRecovery& recovery = operands[index];
    548         VirtualRegister reg = operands.virtualRegisterForIndex(index);
    549 
    550         if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
    551             continue;
    552 
    553         int operand = reg.offset();
    554 
    555         switch (recovery.technique()) {
    556         case DisplacedInJSStack:
    557         case InFPR:
    558 #if USE(JSVALUE64)
    559         case InGPR:
    560         case UnboxedCellInGPR:
    561         case CellDisplacedInJSStack:
    562         case BooleanDisplacedInJSStack:
    563             jit.load64(scratch + index, GPRInfo::regT0);
    564             jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    565             break;
    566 #else // not USE(JSVALUE64)
    567         case InPair:
    568             jit.load32(
    569                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag,
    570                 GPRInfo::regT0);
    571             jit.load32(
    572                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
    573                 GPRInfo::regT1);
    574             jit.store32(
    575                 GPRInfo::regT0,
    576                 AssemblyHelpers::tagFor(operand));
    577             jit.store32(
    578                 GPRInfo::regT1,
    579                 AssemblyHelpers::payloadFor(operand));
    580             break;
    581 
    582         case UnboxedCellInGPR:
    583         case CellDisplacedInJSStack:
    584             jit.load32(
    585                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
    586                 GPRInfo::regT0);
    587             jit.store32(
    588                 AssemblyHelpers::TrustedImm32(JSValue::CellTag),
    589                 AssemblyHelpers::tagFor(operand));
    590             jit.store32(
    591                 GPRInfo::regT0,
    592                 AssemblyHelpers::payloadFor(operand));
    593             break;
    594 
    595         case UnboxedBooleanInGPR:
    596         case BooleanDisplacedInJSStack:
    597             jit.load32(
    598                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
    599                 GPRInfo::regT0);
    600             jit.store32(
    601                 AssemblyHelpers::TrustedImm32(JSValue::BooleanTag),
    602                 AssemblyHelpers::tagFor(operand));
    603             jit.store32(
    604                 GPRInfo::regT0,
    605                 AssemblyHelpers::payloadFor(operand));
    606             break;
    607 #endif // USE(JSVALUE64)
    608 
    609         case UnboxedInt32InGPR:
    610         case Int32DisplacedInJSStack:
    611 #if USE(JSVALUE64)
    612             jit.load64(scratch + index, GPRInfo::regT0);
    613             jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
    614             jit.or64(GPRInfo::tagTypeNumberRegister, GPRInfo::regT0);
    615             jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    616 #else
    617             jit.load32(
    618                 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
    619                 GPRInfo::regT0);
    620             jit.store32(
    621                 AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
    622                 AssemblyHelpers::tagFor(operand));
    623             jit.store32(
    624                 GPRInfo::regT0,
    625                 AssemblyHelpers::payloadFor(operand));
    626 #endif
    627             break;
    628 
    629 #if USE(JSVALUE64)
    630         case UnboxedInt52InGPR:
    631         case Int52DisplacedInJSStack:
    632             jit.load64(scratch + index, GPRInfo::regT0);
    633             jit.rshift64(
    634                 AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), GPRInfo::regT0);
    635             jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
    636             jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    637             break;
    638 
    639         case UnboxedStrictInt52InGPR:
    640         case StrictInt52DisplacedInJSStack:
    641             jit.load64(scratch + index, GPRInfo::regT0);
    642             jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
    643             jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    644             break;
    645 #endif
    646 
    647         case UnboxedDoubleInFPR:
    648         case DoubleDisplacedInJSStack:
    649             jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
    650             jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::fpRegT0);
    651             jit.purifyNaN(FPRInfo::fpRegT0);
    652 #if USE(JSVALUE64)
    653             jit.boxDouble(FPRInfo::fpRegT0, GPRInfo::regT0);
    654             jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
    655 #else
    656             jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
    657 #endif
    658             break;
    659 
    660         case Constant:
    661 #if USE(JSVALUE64)
    662             jit.store64(
    663                 AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())),
    664                 AssemblyHelpers::addressFor(operand));
    665 #else
    666             jit.store32(
    667                 AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
    668                 AssemblyHelpers::tagFor(operand));
    669             jit.store32(
    670                 AssemblyHelpers::TrustedImm32(recovery.constant().payload()),
    671                 AssemblyHelpers::payloadFor(operand));
    672 #endif
    673             break;
    674 
    675         case DirectArgumentsThatWereNotCreated:
    676         case ClonedArgumentsThatWereNotCreated:
    677             // Don't do this, yet.
    678             break;
    679 
    680         default:
    681             RELEASE_ASSERT_NOT_REACHED();
    682             break;
    683         }
    684     }
     619        copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
    685620
    686621    // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
     
    690625    // inline call frame scope - but for now the DFG wouldn't do that.
    691626
    692     emitRestoreArguments(jit, operands);
     627    emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
    693628
    694629    // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
     
    728663    // counterValueForOptimizeAfterWarmUp().
    729664
    730     handleExitCounts(jit, exit);
    731 
    732     // Reify inlined call frames.
    733 
    734     reifyInlinedCallFrames(jit, exit);
    735 
    736     // And finish.
    737     adjustAndJumpToTarget(vm, jit, exit);
    738 }
    739 
    740 void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exec, void* debugInfoRaw, void* scratch)
    741 {
    742     VM* vm = &exec->vm();
    743     NativeCallFrameTracer tracer(vm, exec);
    744 
    745     SpeculationFailureDebugInfo* debugInfo = static_cast<SpeculationFailureDebugInfo*>(debugInfoRaw);
    746     CodeBlock* codeBlock = debugInfo->codeBlock;
     665    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
     666        triggerReoptimizationNow(baselineCodeBlock, &exit);
     667
     668    reifyInlinedCallFrames(context, baselineCodeBlock, exit);
     669    adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
     670}
     671
     672static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
     673{
     674    auto& cpu = context.cpu;
     675    Frame frame(cpu.fp(), context.stack());
     676
     677    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
     678    // in presence of inlined tail calls.
     679    // https://bugs.webkit.org/show_bug.cgi?id=147511
     680    ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT);
     681    frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
     682
     683    const CodeOrigin* codeOrigin;
     684    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
     685        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     686        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
     687        InlineCallFrame::Kind trueCallerCallKind;
     688        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
     689        void* callerFrame = cpu.fp();
     690
     691        if (!trueCaller) {
     692            ASSERT(inlineCallFrame->isTail());
     693            void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
     694            frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
     695            callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
     696        } else {
     697            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
     698            unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
     699            void* jumpTarget = nullptr;
     700
     701            switch (trueCallerCallKind) {
     702            case InlineCallFrame::Call:
     703            case InlineCallFrame::Construct:
     704            case InlineCallFrame::CallVarargs:
     705            case InlineCallFrame::ConstructVarargs:
     706            case InlineCallFrame::TailCall:
     707            case InlineCallFrame::TailCallVarargs: {
     708                CallLinkInfo* callLinkInfo =
     709                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
     710                RELEASE_ASSERT(callLinkInfo);
     711
     712                jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
     713                break;
     714            }
     715
     716            case InlineCallFrame::GetterCall:
     717            case InlineCallFrame::SetterCall: {
     718                StructureStubInfo* stubInfo =
     719                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
     720                RELEASE_ASSERT(stubInfo);
     721
     722                jumpTarget = stubInfo->doneLocation().executableAddress();
     723                break;
     724            }
     725
     726            default:
     727                RELEASE_ASSERT_NOT_REACHED();
     728            }
     729
     730            if (trueCaller->inlineCallFrame)
     731                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     732
     733            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
     734        }
     735
     736        frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
     737
     738        // Restore the inline call frame's callee save registers.
      739        // If this inlined frame is a tail call that will return to the original caller, we need to
     740        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
     741        saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
     742
     743        if (!inlineCallFrame->isVarargs())
     744            frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
     745        ASSERT(callerFrame);
     746        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
     747#if USE(JSVALUE64)
     748        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     749        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
     750        if (!inlineCallFrame->isClosureCall)
     751            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
      752#else // not USE(JSVALUE64)
     753        Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
     754        uint32_t locationBits = CallSiteIndex(instruction).bits();
     755        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
     756        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
     757        if (!inlineCallFrame->isClosureCall)
     758            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
      759#endif // USE(JSVALUE64)
     760    }
     761
      762    // We don't need to set the top-level code origin if we only did inline tail calls.
     763    if (codeOrigin) {
     764#if USE(JSVALUE64)
     765        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     766#else
     767        Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
     768        uint32_t locationBits = CallSiteIndex(instruction).bits();
     769#endif
     770        frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits);
     771    }
     772}
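As a concrete, hypothetical illustration of the walk above: if g is inlined into f and the exit origin lies inside g, the first iteration materializes g's frame header at fp plus g's stackOffset (the codeBlock, returnPC, callerFrame, argumentCount, and callee slots), taking the returnPC from the CallLinkInfo of f's baseline code block. getCallerSkippingTailCalls() then moves the walk to f, which has no inlineCallFrame, so the loop ends and the trailing block stamps the machine frame's argumentCount tag slot with f's call site index.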
     773
     774static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
     775{
     776    OSRExitState* exitState = exit.exitState.get();
     777
     778    WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
     779    vm.heap.writeBarrier(baselineCodeBlock);
     780
     781    // We barrier all inlined frames -- and not just the current inline stack --
     782    // because we don't know which inlined function owns the value profile that
     783    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
     784    // a and b are inlined, we might exit inside b due to a bad value loaded
     785    // from a.
     786    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
     787    // the value profile.
     788    InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
     789    if (inlineCallFrames) {
     790        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
     791            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
     792    }
     793
     794    if (exit.m_codeOrigin.inlineCallFrame)
     795        context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     796
     797    void* jumpTarget = exitState->jumpTarget;
     798    ASSERT(jumpTarget);
     799
     800    context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset;
     801    if (exit.isExceptionHandler()) {
     802        // Since we're jumping to op_catch, we need to set callFrameForCatch.
     803        vm.callFrameForCatch = context.fp<ExecState*>();
     804    }
     805
     806    vm.topCallFrame = context.fp<ExecState*>();
     807    context.pc() = jumpTarget;
     808}
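The pc and sp assignments above rely on the probe exit protocol described in this patch: whatever the handler leaves in the Probe::Context registers is installed when the probe returns, so writing context.pc() redirects execution. A minimal sketch of that pattern, with a hypothetical stack adjustment:

    static void redirectOnProbeReturn(Probe::Context& context, void* target)
    {
        // The 64-byte offset is illustrative only.
        context.sp() = context.fp<uint8_t*>() - 64;
        context.pc() = target; // execution resumes here after the probe
    }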
     809
     810static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
     811{
     812    ExecState* exec = context.fp<ExecState*>();
     813    CodeBlock* codeBlock = exec->codeBlock();
    747814    CodeBlock* alternative = codeBlock->alternative();
     815    ExitKind kind = exit.m_kind;
     816    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
     817
    748818    dataLog("Speculation failure in ", *codeBlock);
    749     dataLog(" @ exit #", vm->osrExitIndex, " (bc#", debugInfo->bytecodeOffset, ", ", exitKindToString(debugInfo->kind), ") with ");
     819    dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
    750820    if (alternative) {
    751821        dataLog(
     
    757827    dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
    758828    dataLog("    GPRs at time of exit:");
    759     char* scratchPointer = static_cast<char*>(scratch);
    760829    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
    761830        GPRReg gpr = GPRInfo::toRegister(i);
    762         dataLog(" ", GPRInfo::debugName(gpr), ":", RawPointer(*reinterpret_cast_ptr<void**>(scratchPointer)));
    763         scratchPointer += sizeof(EncodedJSValue);
     831        dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
    764832    }
    765833    dataLog("\n");
     
    767835    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
    768836        FPRReg fpr = FPRInfo::toRegister(i);
    769         dataLog(" ", FPRInfo::debugName(fpr), ":");
    770         uint64_t bits = *reinterpret_cast_ptr<uint64_t*>(scratchPointer);
    771         double value = *reinterpret_cast_ptr<double*>(scratchPointer);
     837        dataLog(" ", context.fprName(fpr), ":");
     838        uint64_t bits = context.fpr<uint64_t>(fpr);
     839        double value = context.fpr(fpr);
    772840        dataLogF("%llx:%lf", static_cast<long long>(bits), value);
    773         scratchPointer += sizeof(EncodedJSValue);
    774841    }
    775842    dataLog("\n");
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h

r220306 → r221774

  #include "Operands.h"
  #include "ValueRecovery.h"
+ #include <wtf/RefPtr.h>

  namespace JSC {

- class CCallHelpers;
+ namespace Probe {
+ class Context;
+ } // namespace Probe
+
+ namespace Profiler {
+ class OSRExit;
+ } // namespace Profiler

  namespace DFG {
  …
  };

+ struct OSRExitState : RefCounted<OSRExitState> {
+     OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget)
+         : exit(exit)
+         , codeBlock(codeBlock)
+         , baselineCodeBlock(baselineCodeBlock)
+         , operands(operands)
+         , recovery(recovery)
+         , stackPointerOffset(stackPointerOffset)
+         , activeThreshold(activeThreshold)
+         , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
+         , jumpTarget(jumpTarget)
+     { }
+
+     OSRExitBase& exit;
+     CodeBlock* codeBlock;
+     CodeBlock* baselineCodeBlock;
+     Operands<ValueRecovery> operands;
+     SpeculationRecovery* recovery;
+     ptrdiff_t stackPointerOffset;
+     uint32_t activeThreshold;
+     double memoryUsageAdjustedThreshold;
+     void* jumpTarget;
+
+     Profiler::OSRExit* profilerExit { nullptr };
+ };
+
  // === OSRExit ===
  //
  …
      OSRExit(ExitKind, JSValueSource, MethodOfGettingAValueProfile, SpeculativeJIT*, unsigned streamIndex, unsigned recoveryIndex = UINT_MAX);

-     static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;
+     static void executeOSRExit(Probe::Context&);

-     unsigned m_patchableCodeOffset { 0 };
-
-     MacroAssemblerCodeRef m_code;
+     RefPtr<OSRExitState> exitState;

      JSValueSource m_jsValueSource;
  …
      unsigned m_recoveryIndex;

-     void setPatchableCodeOffset(MacroAssembler::PatchableJump);
-     MacroAssembler::Jump getPatchableCodeOffsetAsJump() const;
-     CodeLocationJump codeLocationForRepatch(CodeBlock*) const;
-     void correctJump(LinkBuffer&);
-
      unsigned m_streamIndex;
      void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock)
  …
          OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromDFG);
      }
-
- private:
-     static void compileExit(CCallHelpers&, VM&, const OSRExit&, const Operands<ValueRecovery>&, SpeculationRecovery*);
-     static void emitRestoreArguments(CCallHelpers&, const Operands<ValueRecovery>&);
-     static void JIT_OPERATION debugOperationPrintSpeculationFailure(ExecState*, void*, void*) WTF_INTERNAL;
  };
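exitState is a RefPtr so the once-per-exit-site computation can be cached directly on the OSRExit record. A hedged sketch of the two-part ramp structure this enables (the function and its parameter list are illustrative; only OSRExitState's fields and the caching idiom come from the patch):

    static void executeExitSketch(OSRExit& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock,
        Operands<ValueRecovery>& operands, SpeculationRecovery* recovery,
        ptrdiff_t stackPointerOffset, int32_t activeThreshold,
        double memoryUsageAdjustedThreshold, void* jumpTarget)
    {
        if (!exit.exitState) {
            // Part 1: executed only on the first exit from this site. Compute
            // and cache the values that all later exits from here will reuse.
            exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock,
                operands, recovery, stackPointerOffset, activeThreshold,
                memoryUsageAdjustedThreshold, jumpTarget));
        }
        // Part 2: executed on every exit, reading only the cached state.
        OSRExitState& state = *exit.exitState;
        dataLog("Jumping to ", RawPointer(state.jumpTarget), "\n");
    }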
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

r221528 → r221774

  /*
-  * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
   * Redistribution and use in source and binary forms, with or without
  …
  namespace JSC { namespace DFG {

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
  {
  …
  }

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
  {
  …
  }

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
  {
  …
  }

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
  {
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h

r214531 → r221774

  /*
-  * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
   * Redistribution and use in source and binary forms, with or without
  …
  void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);

+ // FIXME: This won't be needed once we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  template <typename JITCodeType>
  void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType)
  • trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp

r221472 → r221774

  }

- JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
- {
-     VM& vm = exec->vm();
-     NativeCallFrameTracer target(&vm, exec);
-
-     DeferGCForAWhile deferGC(vm.heap);
-
-     CodeBlock* codeBlock;
-     if (inlineCallFrame)
-         codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-     else
-         codeBlock = exec->codeBlock();
-
-     unsigned length = argumentCount - 1;
-     unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
-     DirectArguments* result = DirectArguments::create(
-         vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
-
-     result->callee().set(vm, result, callee);
-
-     Register* arguments =
-         exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
-         CallFrame::argumentOffset(0);
-     for (unsigned i = length; i--;)
-         result->setIndexQuickly(vm, i, arguments[i].jsValue());
-
-     return result;
- }
-
- JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
- {
-     VM& vm = exec->vm();
-     NativeCallFrameTracer target(&vm, exec);
-
-     DeferGCForAWhile deferGC(vm.heap);
-
-     CodeBlock* codeBlock;
-     if (inlineCallFrame)
-         codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-     else
-         codeBlock = exec->codeBlock();
-
-     unsigned length = argumentCount - 1;
-     ClonedArguments* result = ClonedArguments::createEmpty(
-         vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
-
-     Register* arguments =
-         exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
-         CallFrame::argumentOffset(0);
-     for (unsigned i = length; i--;)
-         result->putDirectIndex(exec, i, arguments[i].jsValue());
-
-     return result;
- }
-
  JSCell* JIT_OPERATION operationCreateRest(ExecState* exec, Register* argumentStart, unsigned numberOfParamsToSkip, unsigned arraySize)
  {
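These two ...DuringExit operations lose their JIT-callable entry points because the exit ramp now runs in C++ and can recreate the objects directly. A hedged sketch of what the in-ramp equivalent might look like as a plain helper (the helper name is hypothetical; the body simply mirrors the deleted operation above, minus the JIT_OPERATION glue):

    static DirectArguments* recreateDirectArguments(ExecState* exec,
        InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
    {
        VM& vm = exec->vm();
        DeferGCForAWhile deferGC(vm.heap);

        // Same logic as the deleted operationCreateDirectArgumentsDuringExit.
        CodeBlock* codeBlock = inlineCallFrame
            ? baselineCodeBlockForInlineCallFrame(inlineCallFrame)
            : exec->codeBlock();

        unsigned length = argumentCount - 1;
        unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
        DirectArguments* result = DirectArguments::create(
            vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
        result->callee().set(vm, result, callee);

        Register* arguments = exec->registers()
            + (inlineCallFrame ? inlineCallFrame->stackOffset : 0)
            + CallFrame::argumentOffset(0);
        for (unsigned i = length; i--;)
            result->setIndexQuickly(vm, i, arguments[i].jsValue());
        return result;
    }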
  • trunk/Source/JavaScriptCore/dfg/DFGOperations.h

r221472 → r221774

  JSCell* JIT_OPERATION operationCreateActivationDirect(ExecState*, Structure*, JSScope*, SymbolTable*, EncodedJSValue);
  JSCell* JIT_OPERATION operationCreateDirectArguments(ExecState*, Structure*, int32_t length, int32_t minCapacity);
- JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
  JSCell* JIT_OPERATION operationCreateScopedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee, JSLexicalEnvironment*);
- JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
  JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee);
  JSCell* JIT_OPERATION operationCreateRest(ExecState*, Register* argumentStart, unsigned numberOfArgumentsToSkip, unsigned arraySize);
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp

r220306 → r221774

  namespace JSC { namespace DFG {

- MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
+ MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
  {
      MacroAssembler jit;
-
-     // This needs to happen before we use the scratch buffer because this function also uses the scratch buffer.
-     adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT);
-
-     size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
-     ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
-     EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
-
-     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
- #if USE(JSVALUE64)
-         jit.store64(GPRInfo::toRegister(i), buffer + i);
- #else
-         jit.store32(GPRInfo::toRegister(i), buffer + i);
- #endif
-     }
-     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-         jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-         jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0));
-     }
-
-     // Tell GC mark phase how much of the scratch buffer is active during call.
-     jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-     jit.storePtr(MacroAssembler::TrustedImmPtr(scratchSize), MacroAssembler::Address(GPRInfo::regT0));
-
-     // Set up one argument.
- #if CPU(X86)
-     jit.poke(GPRInfo::callFrameRegister, 0);
- #else
-     jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
- #endif
-
-     MacroAssembler::Call functionCall = jit.call();
-
-     jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-     jit.storePtr(MacroAssembler::TrustedImmPtr(0), MacroAssembler::Address(GPRInfo::regT0));
-
-     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-         jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-         jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i));
-     }
-     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
- #if USE(JSVALUE64)
-         jit.load64(buffer + i, GPRInfo::toRegister(i));
- #else
-         jit.load32(buffer + i, GPRInfo::toRegister(i));
- #endif
-     }
-
-     jit.jump(MacroAssembler::AbsoluteAddress(&vm->osrExitJumpDestination));
-
+     jit.probe(OSRExit::executeOSRExit, vm);
      LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-
-     patchBuffer.link(functionCall, OSRExit::compileOSRExit);
-
-     return FINALIZE_CODE(patchBuffer, ("DFG OSR exit generation thunk"));
+     return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
  }

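All of the hand-rolled register save/restore and GC bookkeeping above collapses into a single probe. Assembled from the surviving lines of the diff, the whole generator now reads:

    MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
    {
        MacroAssembler jit;
        // The probe mechanism preserves and restores all CPU state around the
        // C++ call, so none of the explicit store/load pairs are needed here.
        jit.probe(OSRExit::executeOSRExit, vm);
        LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
        return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
    }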
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.h

r206525 → r221774

  /*
-  * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
+  * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
   *
   * Redistribution and use in source and binary forms, with or without
  …
  namespace DFG {

- MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
+ MacroAssemblerCodeRef osrExitThunkGenerator(VM*);
  MacroAssemblerCodeRef osrEntryThunkGenerator(VM*);

  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp

r220322 → r221774

  }

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  Vector<BytecodeAndMachineOffset>& AssemblyHelpers::decodedCodeMapFor(CodeBlock* codeBlock)
  {
  …
  #endif // ENABLE(WEBASSEMBLY)

- void AssemblyHelpers::debugCall(VM& vm, V_DebugOperation_EPP function, void* argument)
- {
-     size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
-     ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);
-     EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
-
-     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
- #if USE(JSVALUE64)
-         store64(GPRInfo::toRegister(i), buffer + i);
- #else
-         store32(GPRInfo::toRegister(i), buffer + i);
- #endif
-     }
-
-     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-         move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-         storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);
-     }
-
-     // Tell GC mark phase how much of the scratch buffer is active during call.
-     move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-     storePtr(TrustedImmPtr(scratchSize), GPRInfo::regT0);
-
- #if CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)
-     move(TrustedImmPtr(buffer), GPRInfo::argumentGPR2);
-     move(TrustedImmPtr(argument), GPRInfo::argumentGPR1);
-     move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
-     GPRReg scratch = selectScratchGPR(GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2);
- #elif CPU(X86)
-     poke(GPRInfo::callFrameRegister, 0);
-     poke(TrustedImmPtr(argument), 1);
-     poke(TrustedImmPtr(buffer), 2);
-     GPRReg scratch = GPRInfo::regT0;
- #else
- #error "JIT not supported on this platform."
- #endif
-     move(TrustedImmPtr(reinterpret_cast<void*>(function)), scratch);
-     call(scratch);
-
-     move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-     storePtr(TrustedImmPtr(0), GPRInfo::regT0);
-
-     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-         move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-         loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));
-     }
-     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
- #if USE(JSVALUE64)
-         load64(buffer + i, GPRInfo::toRegister(i));
- #else
-         load32(buffer + i, GPRInfo::toRegister(i));
- #endif
-     }
- }
-
  void AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBufferImpl(GPRReg calleeSavesBuffer)
  {
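debugCall was another instance of the same save-everything/call/restore-everything dance, and the probe mechanism performs that service generically. A hedged sketch of expressing a debug call through a probe instead (the handler is hypothetical; the jit.probe usage follows the DFGThunks.cpp change above):

    static void printCallFrame(Probe::Context& context)
    {
        // The probe handler runs in C++ with the full register state captured,
        // so it can inspect any register without clobbering JIT state.
        dataLog("cfr: ", RawPointer(context.gpr<void*>(GPRInfo::callFrameRegister)), "\n");
    }

    // At a JIT code generation site:
    //     jit.probe(printCallFrame, nullptr);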
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h

r221528 → r221774

          return GPRInfo::regT5;
      }
-
-     // Add a debug call. This call has no effect on JIT code execution state.
-     void debugCall(VM&, V_DebugOperation_EPP function, void* argument);

      // These methods JIT generate dynamic, debug-only checks - akin to ASSERTs.
  …
      void emitDumbVirtualCall(VM&, CallLinkInfo*);

+     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
      Vector<BytecodeAndMachineOffset>& decodedCodeMapFor(CodeBlock*);
  …
      CodeBlock* m_baselineCodeBlock;

+     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
      HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps;
  };
  • trunk/Source/JavaScriptCore/jit/JITOperations.cpp

r221602 → r221774

  }

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  void JIT_OPERATION operationOSRWriteBarrier(ExecState* exec, JSCell* cell)
  {
  • trunk/Source/JavaScriptCore/jit/JITOperations.h

r221472 → r221774

  void JIT_OPERATION operationWriteBarrierSlowPath(ExecState*, JSCell*);
+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  void JIT_OPERATION operationOSRWriteBarrier(ExecState*, JSCell*);

  • trunk/Source/JavaScriptCore/profiler/ProfilerOSRExit.h

r206525 → r221774

  /*
-  * Copyright (C) 2012 Apple Inc. All rights reserved.
+  * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
   *
   * Redistribution and use in source and binary forms, with or without
  …
      uint64_t* counterAddress() { return &m_counter; }
      uint64_t count() const { return m_counter; }
-
+     void incCount() { m_counter++; }
+
      JSValue toJS(ExecState*) const;

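incCount() exists so the C++ exit ramp can bump the profiler counter directly, where the compiled ramp had to emit an increment against counterAddress(). Roughly (the first line is the pattern a compiled ramp would emit; the second is a hedged sketch of the C++ replacement, with exitState as an illustrative local):

    // Compiled ramp: emit machine code that bumps the counter in memory.
    jit.add64(AssemblyHelpers::TrustedImm32(1),
        AssemblyHelpers::AbsoluteAddress(profilerExit->counterAddress()));

    // C++ ramp: just call the accessor.
    if (Profiler::OSRExit* profilerExit = exitState.profilerExit)
        profilerExit->incCount();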
  • trunk/Source/JavaScriptCore/runtime/JSCJSValue.h

r218794 → r221774

   *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
   *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
-  *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
+  *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
   *
   *  This library is free software; you can redistribute it and/or
  …
      int32_t payload() const;

- #if !ENABLE(JIT)
-     // This should only be used by the LLInt C Loop interpreter who needs
-     // synthesize JSValue from its "register"s holding tag and payload
-     // values.
+     // This should only be used by the LLInt C Loop interpreter and OSR exit code,
+     // which need to synthesize a JSValue from "register"s holding tag and payload values.
      explicit JSValue(int32_t tag, int32_t payload);
- #endif

  #elif USE(JSVALUE64)
  • trunk/Source/JavaScriptCore/runtime/JSCJSValueInlines.h

r218794 → r221774

  }

- #if !ENABLE(JIT)
+ #if USE(JSVALUE32_64)
  inline JSValue::JSValue(int32_t tag, int32_t payload)
  {
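Gating this constructor on USE(JSVALUE32_64) rather than !ENABLE(JIT) makes it available to the JIT'd 32-bit configurations the C++ exit ramp now runs in. A hedged usage sketch (the helper is hypothetical):

    #if USE(JSVALUE32_64)
    // e.g. the exit ramp reassembling a recovered value from its two halves:
    static JSValue reassembleValue(int32_t tag, int32_t payload)
    {
        return JSValue(tag, payload);
    }
    #endif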
  • trunk/Source/JavaScriptCore/runtime/VM.h

r221422 → r221774

      Instruction* targetInterpreterPCForThrow;
      uint32_t osrExitIndex;
-     void* osrExitJumpDestination;
      bool isExecutingInRegExpJIT { false };
