Changeset 221832 in WebKit
- Timestamp: Sep 9, 2017, 5:21:55 PM
- Location: trunk
- Files: 39 edited, 1 copied
trunk/JSTests/ChangeLog
r221807 → r221832

+ 2017-09-09  Mark Lam  <mark.lam@apple.com>
+
+         [Re-landing] Use JIT probes for DFG OSR exit.
+         https://bugs.webkit.org/show_bug.cgi?id=175144
+         <rdar://problem/33437050>
+
+         Not reviewed. Original patch reviewed by Saam Barati.
+
+         Disable these tests for debug builds because they run too slow with the new OSR exit.
+
+         * stress/op_mod-ConstVar.js:
+         * stress/op_mod-VarConst.js:
+         * stress/op_mod-VarVar.js:
+
  2017-09-08  Yusuke Suzuki  <utatane.tea@gmail.com>
trunk/JSTests/stress/op_mod-ConstVar.js
r208932 → r221832

- //@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+ //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end

  // If all goes well, this test module will terminate silently. If not, it will print
trunk/JSTests/stress/op_mod-VarConst.js
r208932 → r221832

- //@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+ //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end

  // If all goes well, this test module will terminate silently. If not, it will print
trunk/JSTests/stress/op_mod-VarVar.js
r208932 → r221832

- //@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+ //@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end

  // If all goes well, this test module will terminate silently. If not, it will print
trunk/Source/JavaScriptCore/ChangeLog
r221823 → r221832

+ 2017-09-09  Mark Lam  <mark.lam@apple.com>
+
+         [Re-landing] Use JIT probes for DFG OSR exit.
+         https://bugs.webkit.org/show_bug.cgi?id=175144
+         <rdar://problem/33437050>
+
+         Not reviewed. Original patch reviewed by Saam Barati.
+
+         Relanding r221774.
+
+         * JavaScriptCore.xcodeproj/project.pbxproj:
+         * assembler/MacroAssembler.cpp:
+         (JSC::stdFunctionCallback):
+         * assembler/MacroAssemblerPrinter.cpp:
+         (JSC::Printer::printCallback):
+         * assembler/ProbeContext.h:
+         (JSC::Probe::CPUState::gpr const):
+         (JSC::Probe::CPUState::spr const):
+         (JSC::Probe::Context::Context):
+         (JSC::Probe::Context::arg):
+         (JSC::Probe::Context::gpr):
+         (JSC::Probe::Context::spr):
+         (JSC::Probe::Context::fpr):
+         (JSC::Probe::Context::gprName):
+         (JSC::Probe::Context::sprName):
+         (JSC::Probe::Context::fprName):
+         (JSC::Probe::Context::gpr const):
+         (JSC::Probe::Context::spr const):
+         (JSC::Probe::Context::fpr const):
+         (JSC::Probe::Context::pc):
+         (JSC::Probe::Context::fp):
+         (JSC::Probe::Context::sp):
+         (JSC::Probe:: const): Deleted.
+         * assembler/ProbeFrame.h: Copied from Source/JavaScriptCore/assembler/ProbeFrame.h.
+         * assembler/ProbeStack.cpp:
+         (JSC::Probe::Page::Page):
+         * assembler/ProbeStack.h:
+         (JSC::Probe::Page::get):
+         (JSC::Probe::Page::set):
+         (JSC::Probe::Page::physicalAddressFor):
+         (JSC::Probe::Stack::lowWatermark):
+         (JSC::Probe::Stack::get):
+         (JSC::Probe::Stack::set):
+         * bytecode/ArithProfile.cpp:
+         * bytecode/ArithProfile.h:
+         * bytecode/ArrayProfile.h:
+         (JSC::ArrayProfile::observeArrayMode):
+         * bytecode/CodeBlock.cpp:
+         (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
+         * bytecode/CodeBlock.h:
+         (JSC::CodeBlock::addressOfOSRExitCounter): Deleted.
+         * bytecode/ExecutionCounter.h:
+         (JSC::ExecutionCounter::hasCrossedThreshold const):
+         (JSC::ExecutionCounter::setNewThresholdForOSRExit):
+         * bytecode/MethodOfGettingAValueProfile.cpp:
+         (JSC::MethodOfGettingAValueProfile::reportValue):
+         * bytecode/MethodOfGettingAValueProfile.h:
+         * dfg/DFGDriver.cpp:
+         (JSC::DFG::compileImpl):
+         * dfg/DFGJITCode.cpp:
+         (JSC::DFG::JITCode::findPC): Deleted.
+         * dfg/DFGJITCode.h:
+         * dfg/DFGJITCompiler.cpp:
+         (JSC::DFG::JITCompiler::linkOSRExits):
+         (JSC::DFG::JITCompiler::link):
+         * dfg/DFGOSRExit.cpp:
+         (JSC::DFG::jsValueFor):
+         (JSC::DFG::restoreCalleeSavesFor):
+         (JSC::DFG::saveCalleeSavesFor):
+         (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
+         (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
+         (JSC::DFG::saveOrCopyCalleeSavesFor):
+         (JSC::DFG::createDirectArgumentsDuringExit):
+         (JSC::DFG::createClonedArgumentsDuringExit):
+         (JSC::DFG::OSRExit::OSRExit):
+         (JSC::DFG::emitRestoreArguments):
+         (JSC::DFG::OSRExit::executeOSRExit):
+         (JSC::DFG::reifyInlinedCallFrames):
+         (JSC::DFG::adjustAndJumpToTarget):
+         (JSC::DFG::printOSRExit):
+         (JSC::DFG::OSRExit::setPatchableCodeOffset): Deleted.
+         (JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const): Deleted.
+         (JSC::DFG::OSRExit::codeLocationForRepatch const): Deleted.
+         (JSC::DFG::OSRExit::correctJump): Deleted.
+         (JSC::DFG::OSRExit::emitRestoreArguments): Deleted.
+         (JSC::DFG::OSRExit::compileOSRExit): Deleted.
+         (JSC::DFG::OSRExit::compileExit): Deleted.
+         (JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure): Deleted.
+         * dfg/DFGOSRExit.h:
+         (JSC::DFG::OSRExitState::OSRExitState):
+         (JSC::DFG::OSRExit::considerAddingAsFrequentExitSite):
+         * dfg/DFGOSRExitCompilerCommon.cpp:
+         * dfg/DFGOSRExitCompilerCommon.h:
+         * dfg/DFGOperations.cpp:
+         * dfg/DFGOperations.h:
+         * dfg/DFGThunks.cpp:
+         (JSC::DFG::osrExitThunkGenerator):
+         (JSC::DFG::osrExitGenerationThunkGenerator): Deleted.
+         * dfg/DFGThunks.h:
+         * jit/AssemblyHelpers.cpp:
+         (JSC::AssemblyHelpers::debugCall): Deleted.
+         * jit/AssemblyHelpers.h:
+         * jit/JITOperations.cpp:
+         * jit/JITOperations.h:
+         * profiler/ProfilerOSRExit.h:
+         (JSC::Profiler::OSRExit::incCount):
+         * runtime/JSCJSValue.h:
+         * runtime/JSCJSValueInlines.h:
+         * runtime/VM.h:
+
  2017-09-09  Ryan Haddad  <ryanhaddad@apple.com>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r221823 → r221832

      FE10AAEE1F44D954009DEDC5 /* ProbeContext.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAED1F44D946009DEDC5 /* ProbeContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
      FE10AAF41F468396009DEDC5 /* ProbeContext.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */; };
+     FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */; };
      FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */; };
      FE1220281BE7F5910039E6F2 /* JITAddGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */; };
…
      FE10AAED1F44D946009DEDC5 /* ProbeContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeContext.h; sourceTree = "<group>"; };
      FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProbeContext.cpp; sourceTree = "<group>"; };
+     FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
      FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITAddGenerator.cpp; sourceTree = "<group>"; };
      FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITAddGenerator.h; sourceTree = "<group>"; };
…
      FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
      FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
+     FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */,
      FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
      FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,
…
      AD4937C81DDD0AAE0077C807 /* WebAssemblyModuleRecord.h in Headers */,
      AD2FCC2D1DB838FD00B3E736 /* WebAssemblyPrototype.h in Headers */,
+     FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */,
      AD2FCBF91DB58DAD00B3E736 /* WebAssemblyRuntimeErrorConstructor.h in Headers */,
      AD2FCC1E1DB59CB200B3E736 /* WebAssemblyRuntimeErrorConstructor.lut.h in Headers */,
trunk/Source/JavaScriptCore/assembler/MacroAssembler.cpp
r221823 → r221832

  static void stdFunctionCallback(Probe::Context& context)
  {
-     auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
+     auto func = context.arg<const std::function<void(Probe::Context&)>*>();
      (*func)(context);
  }
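The change above swaps a hand-written cast of the raw void* probe argument for Probe::Context's new templated arg<T>() accessor. A minimal, self-contained sketch of the same trampoline idiom; FakeContext and typedArg() are invented stand-ins for illustration, not JavaScriptCore API:

    #include <functional>
    #include <iostream>

    // Stand-in for Probe::Context: a probe carries one untyped argument slot.
    struct FakeContext {
        void* arg;
        // Mirrors the shape of Probe::Context::arg<T>(): a typed view of the slot.
        template<typename T> T typedArg() { return reinterpret_cast<T>(arg); }
    };

    // The trampoline: recover the std::function from the untyped slot and invoke it.
    static void stdFunctionCallback(FakeContext& context)
    {
        auto func = context.typedArg<const std::function<void(FakeContext&)>*>();
        (*func)(context);
    }

    int main()
    {
        std::function<void(FakeContext&)> probe = [](FakeContext&) {
            std::cout << "probe fired\n";
        };
        FakeContext context { &probe };
        stdFunctionCallback(context); // prints "probe fired"
    }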
trunk/Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp
r221823 → r221832

  {
      auto& out = WTF::dataFile();
-     PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
+     PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
      for (size_t i = 0; i < list.size(); i++) {
          auto& record = list[i];
trunk/Source/JavaScriptCore/assembler/ProbeContext.h
r221823 → r221832

      inline double& fpr(FPRegisterID);

-     template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-     T gpr(RegisterID) const;
-     template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-     T gpr(RegisterID) const;
-     template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-     T spr(SPRegisterID) const;
-     template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-     T spr(SPRegisterID) const;
+     template<typename T> T gpr(RegisterID) const;
+     template<typename T> T spr(SPRegisterID) const;
      template<typename T> T fpr(FPRegisterID) const;
…
  }

- template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+ template<typename T>
  T CPUState::gpr(RegisterID id) const
  {
      CPUState* cpu = const_cast<CPUState*>(this);
-     return static_cast<T>(cpu->gpr(id));
- }
-
- template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
- T CPUState::gpr(RegisterID id) const
- {
-     CPUState* cpu = const_cast<CPUState*>(this);
-     return reinterpret_cast<T>(cpu->gpr(id));
- }
-
- template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+     auto& from = cpu->gpr(id);
+     typename std::remove_const<T>::type to { };
+     std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+     return to;
+ }
+
+ template<typename T>
  T CPUState::spr(SPRegisterID id) const
  {
      CPUState* cpu = const_cast<CPUState*>(this);
-     return static_cast<T>(cpu->spr(id));
- }
-
- template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
- T CPUState::spr(SPRegisterID id) const
- {
-     CPUState* cpu = const_cast<CPUState*>(this);
-     return reinterpret_cast<T>(cpu->spr(id));
+     auto& from = cpu->spr(id);
+     typename std::remove_const<T>::type to { };
+     std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+     return to;
  }
…
      Context(State* state)
-         : m_state(state)
-         , arg(state->arg)
-         , cpu(state->cpu)
+         : cpu(state->cpu)
+         , m_state(state)
      { }

-     uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
-     uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
-     double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
-     const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
-     const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
-     const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
-
-     void*& pc() { return m_state->cpu.pc(); }
-     void*& fp() { return m_state->cpu.fp(); }
-     void*& sp() { return m_state->cpu.sp(); }
-
-     template<typename T> T pc() { return m_state->cpu.pc<T>(); }
-     template<typename T> T fp() { return m_state->cpu.fp<T>(); }
-     template<typename T> T sp() { return m_state->cpu.sp<T>(); }
+     template<typename T>
+     T arg() { return reinterpret_cast<T>(m_state->arg); }
+
+     uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
+     uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+     double& fpr(FPRegisterID id) { return cpu.fpr(id); }
+     const char* gprName(RegisterID id) { return cpu.gprName(id); }
+     const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
+     const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
+
+     template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
+     template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
+     template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
+
+     void*& pc() { return cpu.pc(); }
+     void*& fp() { return cpu.fp(); }
+     void*& sp() { return cpu.sp(); }
+
+     template<typename T> T pc() { return cpu.pc<T>(); }
+     template<typename T> T fp() { return cpu.fp<T>(); }
+     template<typename T> T sp() { return cpu.sp<T>(); }

      Stack& stack()
…
      Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }

+     CPUState& cpu;
+
  private:
      State* m_state;
- public:
-     void* arg;
-     CPUState& cpu;
-
- private:
      Stack m_stack;
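The rewrite above collapses the separate is_integral/is_pointer overloads of gpr<T>() and spr<T>() into one template that extracts the register's bits with std::memcpy, the standard way to reinterpret an object representation without strict-aliasing undefined behavior. A small standalone sketch of that read pattern (readAs is a hypothetical helper, not a JSC function):

    #include <cstdint>
    #include <cstring>
    #include <type_traits>

    // Reinterpret the bits of a register-sized word as T. memcpy of the bytes is
    // well-defined where dereferencing a reinterpret_cast'ed pointer would not be.
    template<typename T>
    T readAs(const uintptr_t& from)
    {
        static_assert(sizeof(T) <= sizeof(uintptr_t), "T must fit in a register");
        typename std::remove_const<T>::type to { };
        std::memcpy(&to, &from, sizeof(to));
        return to;
    }

    int main()
    {
        uintptr_t gpr = 42;                // pretend this is a saved register
        int asInt = readAs<int>(gpr);      // 42 on little-endian targets
        char* asPtr = readAs<char*>(gpr);  // the same bits viewed as a pointer
        return (asInt == 42 && asPtr) ? 0 : 1;
    }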
trunk/Source/JavaScriptCore/assembler/ProbeStack.cpp
r221823 → r221832

  Page::Page(void* baseAddress)
      : m_baseLogicalAddress(baseAddress)
+     , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
  {
      memcpy(&m_buffer, baseAddress, s_pageSize);
trunk/Source/JavaScriptCore/assembler/ProbeStack.h
r221823 → r221832

      T get(void* logicalAddress)
      {
-         return *physicalAddressFor<T*>(logicalAddress);
+         void* from = physicalAddressFor(logicalAddress);
+         typename std::remove_const<T>::type to { };
+         std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+         return to;
+     }
+     template<typename T>
+     T get(void* logicalBaseAddress, ptrdiff_t offset)
+     {
+         return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
      }
…
      {
          m_dirtyBits |= dirtyBitFor(logicalAddress);
-         *physicalAddressFor<T*>(logicalAddress) = value;
+         void* to = physicalAddressFor(logicalAddress);
+         std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
+     }
+     template<typename T>
+     void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+     {
+         set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
      }
…
      }

-     template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
-     T physicalAddressFor(void* logicalAddress)
-     {
-         uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
-         void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
-         return reinterpret_cast<T>(physicalAddress);
+     void* physicalAddressFor(void* logicalAddress)
+     {
+         return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
      }
…
      void* m_baseLogicalAddress { nullptr };
      uintptr_t m_dirtyBits { 0 };
+     ptrdiff_t m_physicalAddressOffset;

      static constexpr size_t s_pageSize = 1024;
…
      Stack(Stack&& other);

-     void* lowWatermark() { return m_lowWatermark; }
-
-     template<typename T>
-     typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
-     {
-         Page* page = pageFor(address);
-         return page->get<T>(address);
-     }
-
-     template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-     void set(void* address, T value)
-     {
-         Page* page = pageFor(address);
-         page->set<T>(address, value);
-
+     void* lowWatermark()
+     {
          // We use the chunkAddress for the low watermark because we'll be doing write backs
          // to the stack in increments of chunks. Hence, we'll treat the lowest address of
          // the chunk as the low watermark of any given set address.
-         void* chunkAddress = Page::chunkAddressFor(address);
-         if (chunkAddress < m_lowWatermark)
-             m_lowWatermark = chunkAddress;
-     }
-
-     template<typename T>
-     typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
+         return Page::chunkAddressFor(m_lowWatermark);
+     }
+
+     template<typename T>
+     T get(void* address)
      {
          Page* page = pageFor(address);
-         return bitwise_cast<double>(page->get<uint64_t>(address));
-     }
-
-     template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-     void set(void* address, double value)
-     {
-         set<uint64_t>(address, bitwise_cast<uint64_t>(value));
+         return page->get<T>(address);
+     }
+     template<typename T>
+     T get(void* logicalBaseAddress, ptrdiff_t offset)
+     {
+         return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
+     }
+
+     template<typename T>
+     void set(void* address, T value)
+     {
+         Page* page = pageFor(address);
+         page->set<T>(address, value);
+
+         if (address < m_lowWatermark)
+             m_lowWatermark = address;
+     }
+     template<typename T>
+     void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+     {
+         set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
      }
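With this change a Page caches a single logical-to-physical delta in its constructor, so physicalAddressFor() becomes one pointer addition instead of mask-and-offset arithmetic, and get()/set() funnel through memcpy for the same aliasing reasons as in ProbeContext.h. A toy model of the scheme, assuming a Page shadows one block of stack memory (ShadowPage is illustrative only; dirty-bit and chunk bookkeeping omitted):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    class ShadowPage {
    public:
        static constexpr size_t pageSize = 1024;

        explicit ShadowPage(void* baseAddress)
            : m_baseLogicalAddress(baseAddress)
            // Cache the logical-to-physical delta once, as the new Page constructor does.
            , m_physicalAddressOffset(m_buffer - reinterpret_cast<uint8_t*>(baseAddress))
        {
            std::memcpy(m_buffer, baseAddress, pageSize); // snapshot the real memory
        }

        void* physicalAddressFor(void* logicalAddress)
        {
            return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
        }

        template<typename T>
        T get(void* logicalAddress)
        {
            T to { };
            std::memcpy(&to, physicalAddressFor(logicalAddress), sizeof(to));
            return to;
        }

        template<typename T>
        void set(void* logicalAddress, T value)
        {
            std::memcpy(physicalAddressFor(logicalAddress), &value, sizeof(T));
        }

    private:
        void* m_baseLogicalAddress;
        ptrdiff_t m_physicalAddressOffset;
        alignas(sizeof(uintptr_t)) uint8_t m_buffer[pageSize];
    };

    int main()
    {
        uint64_t stackSlice[128] = { 0 };
        ShadowPage page(stackSlice);
        page.set<uint64_t>(&stackSlice[3], 42); // writes the shadow copy only
        return (page.get<uint64_t>(&stackSlice[3]) == 42 && stackSlice[3] == 0) ? 0 : 1;
    }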
trunk/Source/JavaScriptCore/bytecode/ArithProfile.cpp
r221823 → r221832

  /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
  #if ENABLE(JIT)
+ // FIXME: This is being supplanted by observeResult(). Remove this once
+ // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
  void ArithProfile::emitObserveResult(CCallHelpers& jit, JSValueRegs regs, TagRegistersMode mode)
  {
trunk/Source/JavaScriptCore/bytecode/ArithProfile.h
r221823 → r221832

  /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
      // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble) if it sees a
      // double. Sets NonNumber if it sees a non-number.
+     // FIXME: This is being supplanted by observeResult(). Remove this once
+     // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
      void emitObserveResult(CCallHelpers&, JSValueRegs, TagRegistersMode = HaveTagRegisters);
trunk/Source/JavaScriptCore/bytecode/ArrayProfile.h
r221823 → r221832

  /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
      void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);

+     void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
      ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
      bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r221823 → r221832

  }

+ auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
+ {
+     DFG::OSRExitBase& exit = exitState.exit;
+     if (!exitKindMayJettison(exit.m_kind)) {
+         // FIXME: We may want to notice that we're frequently exiting
+         // at an op_catch that we didn't compile an entrypoint for, and
+         // then trigger a reoptimization of this CodeBlock:
+         // https://bugs.webkit.org/show_bug.cgi?id=175842
+         return OptimizeAction::None;
+     }
+
+     exit.m_count++;
+     m_osrExitCounter++;
+
+     CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
+     ASSERT(baselineCodeBlock == baselineAlternative());
+     if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
+         return OptimizeAction::ReoptimizeNow;
+
+     // We want to figure out if there's a possibility that we're in a loop. For the outermost
+     // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
+     // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
+     // problem is the inlined functions, which might also have loops, but whose baseline versions
+     // don't know where to look for the exit count. Figure out if those loops are severe enough
+     // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
+     // Otherwise, we should use the normal reoptimization trigger.
+
+     bool didTryToEnterInLoop = false;
+     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+         if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
+             didTryToEnterInLoop = true;
+             break;
+         }
+     }
+
+     uint32_t exitCountThreshold = didTryToEnterInLoop
+         ? exitCountThresholdForReoptimizationFromLoop()
+         : exitCountThresholdForReoptimization();
+
+     if (m_osrExitCounter > exitCountThreshold)
+         return OptimizeAction::ReoptimizeNow;
+
+     // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
+     baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
+     return OptimizeAction::None;
+ }
+
  void CodeBlock::optimizeNextInvocation()
  {
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
r221823 → r221832

  namespace JSC {

+ namespace DFG {
+ struct OSRExitState;
+ } // namespace DFG
+
  class BytecodeLivenessAnalysis;
  class CodeBlockSet;
…
      void countOSRExit() { m_osrExitCounter++; }

-     uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
-
+     enum class OptimizeAction { None, ReoptimizeNow };
+     OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
+
+     // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
      static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
trunk/Source/JavaScriptCore/bytecode/ExecutionCounter.h
r221823 → r221832

  /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
  int32_t applyMemoryUsageHeuristicsAndConvertToInt(int32_t value, CodeBlock*);

+ // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
  inline int32_t formattedTotalExecutionCount(float value)
  {
…
      void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
      bool checkIfThresholdCrossedAndSet(CodeBlock*);
+     bool hasCrossedThreshold() const { return m_counter >= 0; }
      void setNewThreshold(int32_t threshold, CodeBlock*);
      void deferIndefinitely();
…
      void dump(PrintStream&) const;

+     void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
+     {
+         m_activeThreshold = activeThreshold;
+         m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
+         m_totalCount = memoryUsageAdjustedThreshold;
+     }
+
      static int32_t maximumExecutionCountsBetweenCheckpoints()
      {
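The new hasCrossedThreshold() and setNewThresholdForOSRExit() lean on the counter convention visible above: m_counter is seeded with the negated threshold and counts up toward zero, so "crossed" is a plain sign test. A toy illustration of that convention (ToyExecutionCounter is invented for this example, not a JSC type):

    #include <cstdint>
    #include <iostream>

    struct ToyExecutionCounter {
        int32_t counter { 0 };

        // Seed with the negated threshold, as setNewThresholdForOSRExit() does.
        void setNewThreshold(double threshold) { counter = static_cast<int32_t>(-threshold); }
        void countExecution() { counter++; }
        bool hasCrossedThreshold() const { return counter >= 0; }
    };

    int main()
    {
        ToyExecutionCounter jitExecuteCounter;
        jitExecuteCounter.setNewThreshold(3); // counter == -3
        for (int i = 0; i < 3; ++i) {
            std::cout << "crossed? " << jitExecuteCounter.hasCrossedThreshold() << "\n"; // 0
            jitExecuteCounter.countExecution();
        }
        std::cout << "crossed? " << jitExecuteCounter.hasCrossedThreshold() << "\n"; // 1
    }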
trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
r221823 → r221832

  /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
  }

+ // FIXME: This is being supplanted by reportValue(). Remove this once
+ // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
  void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueRegs regs) const
  {
…
  }

+ void MethodOfGettingAValueProfile::reportValue(JSValue value)
+ {
+     switch (m_kind) {
+     case None:
+         return;
+
+     case Ready:
+         *u.profile->specFailBucket(0) = JSValue::encode(value);
+         return;
+
+     case LazyOperand: {
+         LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+
+         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
+         LazyOperandValueProfile* profile =
+             u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
+         *profile->specFailBucket(0) = JSValue::encode(value);
+         return;
+     }
+
+     case ArithProfileReady: {
+         u.arithProfile->observeResult(value);
+         return;
+     } }
+
+     RELEASE_ASSERT_NOT_REACHED();
+ }
+
  } // namespace JSC
trunk/Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
r221823 → r221832

  /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
      explicit operator bool() const { return m_kind != None; }

+     // FIXME: emitReportValue is being supplanted by reportValue(). Remove this once
+     // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
      void emitReportValue(CCallHelpers&, JSValueRegs) const;
+     void reportValue(JSValue);
+
  private:
      enum Kind {
trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp
r221823 → r221832

  /*
- * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
      // Make sure that any stubs that the DFG is going to use are initialized. We want to
      // make sure that all JIT code generation does finalization on the main thread.
-     vm.getCTIStub(osrExitGenerationThunkGenerator);
+     vm.getCTIStub(osrExitThunkGenerator);
      vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
      vm.getCTIStub(linkCallThunkGenerator);
trunk/Source/JavaScriptCore/dfg/DFGJITCode.cpp
r221823 → r221832

  }

- std::optional<CodeOrigin> JITCode::findPC(CodeBlock*, void* pc)
- {
-     for (OSRExit& exit : osrExit) {
-         if (ExecutableMemoryHandle* handle = exit.m_code.executableMemory()) {
-             if (handle->start() <= pc && pc < handle->end())
-                 return std::optional<CodeOrigin>(exit.m_codeOriginForExitProfile);
-         }
-     }
-
-     return std::nullopt;
- }
-
  void JITCode::finalizeOSREntrypoints()
  {
trunk/Source/JavaScriptCore/dfg/DFGJITCode.h
r221823 → r221832

      static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }

-     std::optional<CodeOrigin> findPC(CodeBlock*, void* pc) override;
-
  private:
      friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r221823 → r221832

  }

+     MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
+     CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
      for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-         OSRExit& exit = m_jitCode->osrExit[i];
          OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
          JumpList& failureJumps = info.m_failureJumps;
…
          jitAssertHasValidCallFrame();
          store32(TrustedImm32(i), &vm()->osrExitIndex);
-         exit.setPatchableCodeOffset(patchableJump());
+         Jump target = jump();
+         addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
+             linkBuffer.link(target, osrExitThunkLabel);
+         });
      }
  }
…
  }

-     MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitGenerationThunkGenerator);
-     CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
      for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-         OSRExit& exit = m_jitCode->osrExit[i];
          OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
-         linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
-         exit.correctJump(linkBuffer);
          if (info.m_replacementSource.isSet()) {
              m_jitCode->common.jumpReplacements.append(JumpReplacement(
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
r221823 r221832 1 1 /* 2 * Copyright (C) 2011 , 2013Apple Inc. All rights reserved.2 * Copyright (C) 2011-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 30 30 31 31 #include "AssemblyHelpers.h" 32 #include "ClonedArguments.h" 32 33 #include "DFGGraph.h" 33 34 #include "DFGMayExit.h" 34 #include "DFGOSRExitCompilerCommon.h"35 35 #include "DFGOSRExitPreparation.h" 36 36 #include "DFGOperations.h" 37 37 #include "DFGSpeculativeJIT.h" 38 #include "FrameTracers.h" 38 #include "DirectArguments.h" 39 #include "InlineCallFrame.h" 39 40 #include "JSCInlines.h" 41 #include "JSCJSValue.h" 40 42 #include "OperandsInlines.h" 43 #include "ProbeContext.h" 44 #include "ProbeFrame.h" 41 45 42 46 namespace JSC { namespace DFG { 47 48 using CPUState = Probe::CPUState; 49 using Context = Probe::Context; 50 using Frame = Probe::Frame; 51 52 static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&); 53 static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&); 54 static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&); 55 56 static JSValue jsValueFor(CPUState& cpu, JSValueSource source) 57 { 58 if (source.isAddress()) { 59 JSValue result; 60 std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue)); 61 return result; 62 } 63 #if USE(JSVALUE64) 64 return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr())); 65 #else 66 if (source.hasKnownTag()) 67 return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR())); 68 return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR())); 69 #endif 70 } 71 72 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 73 74 static_assert(is64Bit(), "we only support callee save registers on 64-bit"); 75 76 // Based on AssemblyHelpers::emitRestoreCalleeSavesFor(). 77 static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock) 78 { 79 ASSERT(codeBlock); 80 81 RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 82 RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 83 unsigned registerCount = calleeSaves->size(); 84 85 uintptr_t* physicalStackFrame = context.fp<uintptr_t*>(); 86 for (unsigned i = 0; i < registerCount; i++) { 87 RegisterAtOffset entry = calleeSaves->at(i); 88 if (dontRestoreRegisters.get(entry.reg())) 89 continue; 90 // The callee saved values come from the original stack, not the recovered stack. 91 // Hence, we read the values directly from the physical stack memory instead of 92 // going through context.stack(). 93 ASSERT(!(entry.offset() % sizeof(uintptr_t))); 94 context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)]; 95 } 96 } 97 98 // Based on AssemblyHelpers::emitSaveCalleeSavesFor(). 
99 static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock) 100 { 101 auto& stack = context.stack(); 102 ASSERT(codeBlock); 103 104 RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 105 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 106 unsigned registerCount = calleeSaves->size(); 107 108 for (unsigned i = 0; i < registerCount; i++) { 109 RegisterAtOffset entry = calleeSaves->at(i); 110 if (dontSaveRegisters.get(entry.reg())) 111 continue; 112 stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr())); 113 } 114 } 115 116 // Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(). 117 static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context) 118 { 119 VM& vm = *context.arg<VM*>(); 120 121 RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets(); 122 RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters(); 123 unsigned registerCount = allCalleeSaves->size(); 124 125 VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame); 126 uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer); 127 128 // Restore all callee saves. 129 for (unsigned i = 0; i < registerCount; i++) { 130 RegisterAtOffset entry = allCalleeSaves->at(i); 131 if (dontRestoreRegisters.get(entry.reg())) 132 continue; 133 size_t uintptrOffset = entry.offset() / sizeof(uintptr_t); 134 if (entry.reg().isGPR()) 135 context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset]; 136 else 137 context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]); 138 } 139 } 140 141 // Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(). 142 static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context) 143 { 144 VM& vm = *context.arg<VM*>(); 145 auto& stack = context.stack(); 146 147 VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame); 148 void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer; 149 150 RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets(); 151 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 152 unsigned registerCount = allCalleeSaves->size(); 153 154 for (unsigned i = 0; i < registerCount; i++) { 155 RegisterAtOffset entry = allCalleeSaves->at(i); 156 if (dontCopyRegisters.get(entry.reg())) 157 continue; 158 if (entry.reg().isGPR()) 159 stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr())); 160 else 161 stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr())); 162 } 163 } 164 165 // Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor(). 
166 static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall) 167 { 168 Frame frame(context.fp(), context.stack()); 169 ASSERT(codeBlock); 170 171 RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 172 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 173 unsigned registerCount = calleeSaves->size(); 174 175 RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters(); 176 177 for (unsigned i = 0; i < registerCount; i++) { 178 RegisterAtOffset entry = calleeSaves->at(i); 179 if (dontSaveRegisters.get(entry.reg())) 180 continue; 181 182 uintptr_t savedRegisterValue; 183 184 if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg())) 185 savedRegisterValue = frame.get<uintptr_t>(entry.offset()); 186 else 187 savedRegisterValue = context.gpr(entry.reg().gpr()); 188 189 frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue); 190 } 191 } 192 #else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 193 194 static void restoreCalleeSavesFor(Context&, CodeBlock*) { } 195 static void saveCalleeSavesFor(Context&, CodeBlock*) { } 196 static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { } 197 static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { } 198 static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { } 199 200 #endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 201 202 static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount) 203 { 204 VM& vm = *context.arg<VM*>(); 205 206 ASSERT(vm.heap.isDeferred()); 207 208 if (inlineCallFrame) 209 codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame); 210 211 unsigned length = argumentCount - 1; 212 unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1)); 213 DirectArguments* result = DirectArguments::create( 214 vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity); 215 216 result->callee().set(vm, result, callee); 217 218 void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0); 219 Frame frame(frameBase, context.stack()); 220 for (unsigned i = length; i--;) 221 result->setIndexQuickly(vm, i, frame.argument(i)); 222 223 return result; 224 } 225 226 static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount) 227 { 228 VM& vm = *context.arg<VM*>(); 229 ExecState* exec = context.fp<ExecState*>(); 230 231 ASSERT(vm.heap.isDeferred()); 232 233 if (inlineCallFrame) 234 codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame); 235 236 unsigned length = argumentCount - 1; 237 ClonedArguments* result = ClonedArguments::createEmpty( 238 vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length); 239 240 void* frameBase = context.fp<Register*>() + (inlineCallFrame ? 
inlineCallFrame->stackOffset : 0); 241 Frame frame(frameBase, context.stack()); 242 for (unsigned i = length; i--;) 243 result->putDirectIndex(exec, i, frame.argument(i)); 244 return result; 245 } 43 246 44 247 OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex) … … 57 260 } 58 261 59 void OSRExit::setPatchableCodeOffset(MacroAssembler::PatchableJump check) 60 { 61 m_patchableCodeOffset = check.m_jump.m_label.m_offset; 62 } 63 64 MacroAssembler::Jump OSRExit::getPatchableCodeOffsetAsJump() const 65 { 66 return MacroAssembler::Jump(AssemblerLabel(m_patchableCodeOffset)); 67 } 68 69 CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* dfgCodeBlock) const 70 { 71 return CodeLocationJump(dfgCodeBlock->jitCode()->dataAddressAtOffset(m_patchableCodeOffset)); 72 } 73 74 void OSRExit::correctJump(LinkBuffer& linkBuffer) 75 { 76 MacroAssembler::Label label; 77 label.m_label.m_offset = m_patchableCodeOffset; 78 m_patchableCodeOffset = linkBuffer.offsetOf(label); 79 } 80 81 void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecovery>& operands) 82 { 262 static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands) 263 { 264 Frame frame(context.fp(), context.stack()); 265 83 266 HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand. 84 267 for (size_t index = 0; index < operands.size(); ++index) { … … 93 276 auto iter = alreadyAllocatedArguments.find(id); 94 277 if (iter != alreadyAllocatedArguments.end()) { 95 JSValueRegs regs = JSValueRegs::withTwoAvailableRegs(GPRInfo::regT0, GPRInfo::regT1); 96 jit.loadValue(CCallHelpers::addressFor(iter->value), regs); 97 jit.storeValue(regs, CCallHelpers::addressFor(operand)); 278 frame.setOperand(operand, frame.operand(iter->value)); 98 279 continue; 99 280 } 100 281 101 282 InlineCallFrame* inlineCallFrame = 102 jit.codeBlock()->jitCode()->dfg()->minifiedDFG.at(id)->inlineCallFrame();283 dfgJITCode->minifiedDFG.at(id)->inlineCallFrame(); 103 284 104 285 int stackOffset; … … 108 289 stackOffset = 0; 109 290 110 if (!inlineCallFrame || inlineCallFrame->isClosureCall) { 111 jit.loadPtr( 112 AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee), 113 GPRInfo::regT0); 114 } else { 115 jit.move( 116 AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeRecovery.constant().asCell()), 117 GPRInfo::regT0); 118 } 119 120 if (!inlineCallFrame || inlineCallFrame->isVarargs()) { 121 jit.load32( 122 AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCount), 123 GPRInfo::regT1); 124 } else { 125 jit.move( 126 AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), 127 GPRInfo::regT1); 128 } 129 130 jit.setupArgumentsWithExecState( 131 AssemblyHelpers::TrustedImmPtr(inlineCallFrame), GPRInfo::regT0, GPRInfo::regT1); 291 JSFunction* callee; 292 if (!inlineCallFrame || inlineCallFrame->isClosureCall) 293 callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell()); 294 else 295 callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell()); 296 297 int32_t argumentCount; 298 if (!inlineCallFrame || inlineCallFrame->isVarargs()) 299 argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset); 300 else 301 argumentCount = inlineCallFrame->argumentCountIncludingThis; 302 303 JSCell* 
argumentsObject; 132 304 switch (recovery.technique()) { 133 305 case DirectArgumentsThatWereNotCreated: 134 jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateDirectArgumentsDuringExit)), GPRInfo::nonArgGPR0);306 argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount); 135 307 break; 136 308 case ClonedArgumentsThatWereNotCreated: 137 jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateClonedArgumentsDuringExit)), GPRInfo::nonArgGPR0);309 argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount); 138 310 break; 139 311 default: … … 141 313 break; 142 314 } 143 jit.call(GPRInfo::nonArgGPR0); 144 jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand)); 315 frame.setOperand(operand, JSValue(argumentsObject)); 145 316 146 317 alreadyAllocatedArguments.add(id, operand); … … 148 319 } 149 320 150 void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec) 151 { 152 VM* vm = &exec->vm(); 153 auto scope = DECLARE_THROW_SCOPE(*vm); 154 155 if (vm->callFrameForCatch) 156 RELEASE_ASSERT(vm->callFrameForCatch == exec); 321 void OSRExit::executeOSRExit(Context& context) 322 { 323 VM& vm = *context.arg<VM*>(); 324 auto scope = DECLARE_THROW_SCOPE(vm); 325 326 ExecState* exec = context.fp<ExecState*>(); 327 ASSERT(&exec->vm() == &vm); 328 329 if (vm.callFrameForCatch) { 330 exec = vm.callFrameForCatch; 331 context.fp() = exec; 332 } 157 333 158 334 CodeBlock* codeBlock = exec->codeBlock(); … … 162 338 // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't 163 339 // really be profitable. 164 DeferGCForAWhile deferGC(vm->heap); 165 166 uint32_t exitIndex = vm->osrExitIndex; 167 OSRExit& exit = codeBlock->jitCode()->dfg()->osrExit[exitIndex]; 168 169 if (vm->callFrameForCatch) 170 ASSERT(exit.m_kind == GenericUnwind); 171 if (exit.isExceptionHandler()) 172 ASSERT_UNUSED(scope, !!scope.exception()); 173 174 prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin); 175 176 // Compute the value recoveries. 177 Operands<ValueRecovery> operands; 178 codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands); 179 180 SpeculationRecovery* recovery = 0; 181 if (exit.m_recoveryIndex != UINT_MAX) 182 recovery = &codeBlock->jitCode()->dfg()->speculationRecovery[exit.m_recoveryIndex]; 183 184 { 185 CCallHelpers jit(codeBlock); 186 187 if (exit.m_kind == GenericUnwind) { 188 // We are acting as a defacto op_catch because we arrive here from genericUnwind(). 189 // So, we must restore our call frame and stack pointer. 
190 jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(*vm); 191 jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister); 192 } 193 jit.addPtr( 194 CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)), 195 GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister); 196 197 jit.jitAssertHasValidCallFrame(); 198 199 if (UNLIKELY(vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) { 200 Profiler::Database& database = *vm->m_perBytecodeProfiler; 340 DeferGCForAWhile deferGC(vm.heap); 341 342 uint32_t exitIndex = vm.osrExitIndex; 343 DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg(); 344 OSRExit& exit = dfgJITCode->osrExit[exitIndex]; 345 346 ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind); 347 ASSERT_UNUSED(scope, !exit.isExceptionHandler() || !!scope.exception()); 348 349 if (UNLIKELY(!exit.exitState)) { 350 // We only need to execute this block once for each OSRExit record. The computed 351 // results will be cached in the OSRExitState record for use of the rest of the 352 // exit ramp code. 353 354 // Ensure we have baseline codeBlocks to OSR exit to. 355 prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin); 356 357 CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative(); 358 ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT); 359 360 // Compute the value recoveries. 361 Operands<ValueRecovery> operands; 362 dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands); 363 364 SpeculationRecovery* recovery = nullptr; 365 if (exit.m_recoveryIndex != UINT_MAX) 366 recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex]; 367 368 int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp()); 369 double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock); 370 ASSERT(adjustedThreshold > 0); 371 adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold); 372 373 CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock); 374 Vector<BytecodeAndMachineOffset> decodedCodeMap; 375 codeBlockForExit->jitCodeMap()->decode(decodedCodeMap); 376 377 BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex); 378 379 ASSERT(mapping); 380 ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex); 381 382 ptrdiff_t finalStackPointerOffset = codeBlockForExit->stackPointerOffset() * sizeof(Register); 383 384 void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset); 385 386 exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, recovery, finalStackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget)); 387 388 if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) { 389 Profiler::Database& database = *vm.m_perBytecodeProfiler; 201 390 Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get(); 202 391 … … 204 393 exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin), 205 394 exit.m_kind, exit.m_kind == UncountableInvalidation); 206 jit.add64(CCallHelpers::TrustedImm32(1), 
CCallHelpers::AbsoluteAddress(profilerExit->counterAddress()));395 exit.exitState->profilerExit = profilerExit; 207 396 } 208 397 209 compileExit(jit, *vm, exit, operands, recovery); 210 211 LinkBuffer patchBuffer(jit, codeBlock); 212 exit.m_code = FINALIZE_CODE_IF( 213 shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseDFGOSRExit(), 214 patchBuffer, 215 ("DFG OSR exit #%u (%s, %s) from %s, with operands = %s", 398 if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) { 399 dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n", 216 400 exitIndex, toCString(exit.m_codeOrigin).data(), 217 401 exitKindToString(exit.m_kind), toCString(*codeBlock).data(), 218 toCString(ignoringContext<DumpContext>(operands)).data())); 219 } 220 221 MacroAssembler::repatchJump(exit.codeLocationForRepatch(codeBlock), CodeLocationLabel(exit.m_code.code())); 222 223 vm->osrExitJumpDestination = exit.m_code.code().executableAddress(); 224 } 225 226 void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const Operands<ValueRecovery>& operands, SpeculationRecovery* recovery) 227 { 228 jit.jitAssertTagsInPlace(); 229 230 // Pro-forma stuff. 231 if (Options::printEachOSRExit()) { 232 SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo; 233 debugInfo->codeBlock = jit.codeBlock(); 234 debugInfo->kind = exit.m_kind; 235 debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex; 236 237 jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo); 238 } 402 toCString(ignoringContext<DumpContext>(operands)).data()); 403 } 404 } 405 406 OSRExitState& exitState = *exit.exitState.get(); 407 CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock; 408 ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT); 409 410 Operands<ValueRecovery>& operands = exitState.operands; 411 SpeculationRecovery* recovery = exitState.recovery; 412 413 if (exit.m_kind == GenericUnwind) { 414 // We are acting as a defacto op_catch because we arrive here from genericUnwind(). 415 // So, we must restore our call frame and stack pointer. 416 restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context); 417 ASSERT(context.fp() == vm.callFrameForCatch); 418 } 419 context.sp() = context.fp<uint8_t*>() + (codeBlock->stackPointerOffset() * sizeof(Register)); 420 421 ASSERT(!(context.fp<uintptr_t>() & 0x7)); 422 423 if (exitState.profilerExit) 424 exitState.profilerExit->incCount(); 425 426 auto& cpu = context.cpu; 427 Frame frame(cpu.fp(), context.stack()); 428 429 #if USE(JSVALUE64) 430 ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber); 431 ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask); 432 #endif 433 434 if (UNLIKELY(Options::printEachOSRExit())) 435 printOSRExit(context, vm.osrExitIndex, exit); 239 436 240 437 // Perform speculation recovery. 
This only comes into play when an operation … … 244 441 switch (recovery->type()) { 245 442 case SpeculativeAdd: 246 jit.sub32(recovery->src(), recovery->dest()); 247 #if USE(JSVALUE64) 248 jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest()); 443 cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src()); 444 #if USE(JSVALUE64) 445 ASSERT(!(cpu.gpr(recovery->dest()) >> 32)); 446 cpu.gpr(recovery->dest()) |= TagTypeNumber; 249 447 #endif 250 448 break; 251 449 252 450 case SpeculativeAddImmediate: 253 jit.sub32(AssemblyHelpers::Imm32(recovery->immediate()), recovery->dest()); 254 #if USE(JSVALUE64) 255 jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest()); 451 cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate()); 452 #if USE(JSVALUE64) 453 ASSERT(!(cpu.gpr(recovery->dest()) >> 32)); 454 cpu.gpr(recovery->dest()) |= TagTypeNumber; 256 455 #endif 257 456 break; … … 259 458 case BooleanSpeculationCheck: 260 459 #if USE(JSVALUE64) 261 jit.xor64(AssemblyHelpers::TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());460 cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse; 262 461 #endif 263 462 break; … … 282 481 283 482 CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile; 284 if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) { 285 #if USE(JSVALUE64) 286 GPRReg usedRegister; 287 if (exit.m_jsValueSource.isAddress()) 288 usedRegister = exit.m_jsValueSource.base(); 289 else 290 usedRegister = exit.m_jsValueSource.gpr(); 291 #else 292 GPRReg usedRegister1; 293 GPRReg usedRegister2; 294 if (exit.m_jsValueSource.isAddress()) { 295 usedRegister1 = exit.m_jsValueSource.base(); 296 usedRegister2 = InvalidGPRReg; 297 } else { 298 usedRegister1 = exit.m_jsValueSource.payloadGPR(); 299 if (exit.m_jsValueSource.hasKnownTag()) 300 usedRegister2 = InvalidGPRReg; 301 else 302 usedRegister2 = exit.m_jsValueSource.tagGPR(); 303 } 304 #endif 305 306 GPRReg scratch1; 307 GPRReg scratch2; 308 #if USE(JSVALUE64) 309 scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister); 310 scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister, scratch1); 311 #else 312 scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2); 313 scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2, scratch1); 314 #endif 315 316 if (isARM64()) { 317 jit.pushToSave(scratch1); 318 jit.pushToSave(scratch2); 319 } else { 320 jit.push(scratch1); 321 jit.push(scratch2); 322 } 323 324 GPRReg value; 325 if (exit.m_jsValueSource.isAddress()) { 326 value = scratch1; 327 jit.loadPtr(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), value); 328 } else 329 value = exit.m_jsValueSource.payloadGPR(); 330 331 jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1); 332 jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID()); 333 #if USE(JSVALUE64) 334 jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeAndMiscOffset()), scratch1); 335 #else 336 jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeIncludingHistoryOffset()), scratch1); 337 #endif 338 jit.move(AssemblyHelpers::TrustedImm32(1), scratch2); 339 jit.lshift32(scratch1, scratch2); 340 jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes())); 341 342 if (isARM64()) { 343 jit.popToRestore(scratch2); 344 jit.popToRestore(scratch1); 345 } else { 346 jit.pop(scratch2); 347 
jit.pop(scratch1); 348 } 483 CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock); 484 if (ArrayProfile* arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex)) { 485 Structure* structure = jsValueFor(cpu, exit.m_jsValueSource).asCell()->structure(vm); 486 arrayProfile->observeStructure(structure); 487 // FIXME: We should be able to use arrayModeFromStructure() to determine the observed ArrayMode here. 488 // However, currently, doing so would result in a pdfjs performance regression. 489 // https://bugs.webkit.org/show_bug.cgi?id=176473 490 arrayProfile->observeArrayMode(asArrayModes(structure->indexingType())); 349 491 } 350 492 } 351 493 352 if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) { 353 #if USE(JSVALUE64) 354 if (exit.m_jsValueSource.isAddress()) { 355 // We can't be sure that we have a spare register. So use the tagTypeNumberRegister, 356 // since we know how to restore it. 357 jit.load64(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), GPRInfo::tagTypeNumberRegister); 358 profile.emitReportValue(jit, JSValueRegs(GPRInfo::tagTypeNumberRegister)); 359 jit.move(AssemblyHelpers::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister); 360 } else 361 profile.emitReportValue(jit, JSValueRegs(exit.m_jsValueSource.gpr())); 362 #else // not USE(JSVALUE64) 363 if (exit.m_jsValueSource.isAddress()) { 364 // Save a register so we can use it. 365 GPRReg scratchPayload = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base()); 366 GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base(), scratchPayload); 367 jit.pushToSave(scratchPayload); 368 jit.pushToSave(scratchTag); 369 370 JSValueRegs scratch(scratchTag, scratchPayload); 371 372 jit.loadValue(exit.m_jsValueSource.asAddress(), scratch); 373 profile.emitReportValue(jit, scratch); 374 375 jit.popToRestore(scratchTag); 376 jit.popToRestore(scratchPayload); 377 } else if (exit.m_jsValueSource.hasKnownTag()) { 378 GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.payloadGPR()); 379 jit.pushToSave(scratchTag); 380 jit.move(AssemblyHelpers::TrustedImm32(exit.m_jsValueSource.tag()), scratchTag); 381 JSValueRegs value(scratchTag, exit.m_jsValueSource.payloadGPR()); 382 profile.emitReportValue(jit, value); 383 jit.popToRestore(scratchTag); 384 } else 385 profile.emitReportValue(jit, exit.m_jsValueSource.regs()); 386 #endif // USE(JSVALUE64) 387 } 388 } 389 390 // What follows is an intentionally simple OSR exit implementation that generates 391 // fairly poor code but is very easy to hack. In particular, it dumps all state that 392 // needs conversion into a scratch buffer so that in step 6, where we actually do the 393 // conversions, we know that all temp registers are free to use and the variable is 394 // definitely in a well-known spot in the scratch buffer regardless of whether it had 395 // originally been in a register or spilled. This allows us to decouple "where was 396 // the variable" from "how was it represented". Consider the 397 // Int32DisplacedInJSStack recovery: it tells us that the value is in a 398 // particular place and that that place holds an unboxed int32. We have two different 399 // places that a value could be (displaced, register) and a bunch of different 400 // ways of representing a value. The number of recoveries is two * a bunch. The code 401 // below means that we have to have two + a bunch cases rather than two * a bunch.
402 // Once we have loaded the value from wherever it was, the reboxing is the same 403 // regardless of its location. Likewise, before we do the reboxing, the way we get to 404 // the value (i.e. where we load it from) is the same regardless of its type. Because 405 // the code below always dumps everything into a scratch buffer first, the two 406 // questions become orthogonal, which simplifies adding new types and adding new 407 // locations. 408 // 409 // This raises the question: does using such a suboptimal implementation of OSR exit, 410 // where we always emit code to dump all state into a scratch buffer only to then 411 // dump it right back into the stack, hurt us in any way? The answer is that OSR exits 412 // are rare. Our tiering strategy ensures this. This is because if an OSR exit is 413 // taken more than ~100 times, we jettison the DFG code block along with all of its 414 // exits. It is impossible for an OSR exit - i.e. the code we compile below - to 415 // execute frequently enough for the codegen to matter that much. It probably matters 416 // enough that we don't want to turn this into some super-slow function call, but so 417 // long as we're generating straight-line code, that code can be pretty bad. Also 418 // because we tend to exit only along one OSR exit from any DFG code block - that's an 419 // empirical result that we're extremely confident about - the code size of this 420 // doesn't matter much. Hence any attempt to optimize the codegen here is just purely 421 // harmful to the system: it probably won't reduce either net memory usage or net 422 // execution time. It will only prevent us from cleanly decoupling "where was the 423 // variable" from "how was it represented", which will make it more difficult to add 424 // features in the future and it will make it harder to reason about bugs. 425 426 // Save all state from GPRs into the scratch buffer. 427 428 ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(sizeof(EncodedJSValue) * operands.size()); 429 EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0; 430 431 for (size_t index = 0; index < operands.size(); ++index) { 494 if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) 495 profile.reportValue(jsValueFor(cpu, exit.m_jsValueSource)); 496 } 497 498 // Do all data format conversions and store the results into the stack. 499 // Note: we need to recover values before restoring callee save registers below 500 // because the recovery may rely on values in some of the callee save registers.
501 502 int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters()); 503 size_t numberOfOperands = operands.size(); 504 for (size_t index = 0; index < numberOfOperands; ++index) { 432 505 const ValueRecovery& recovery = operands[index]; 433 434 switch (recovery.technique()) { 435 case UnboxedInt32InGPR: 436 case UnboxedCellInGPR: 437 #if USE(JSVALUE64) 438 case InGPR: 439 case UnboxedInt52InGPR: 440 case UnboxedStrictInt52InGPR: 441 jit.store64(recovery.gpr(), scratch + index); 442 break; 443 #else 444 case UnboxedBooleanInGPR: 445 jit.store32( 446 recovery.gpr(), 447 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload); 448 break; 449 450 case InPair: 451 jit.store32( 452 recovery.tagGPR(), 453 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag); 454 jit.store32( 455 recovery.payloadGPR(), 456 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload); 457 break; 458 #endif 459 460 default: 461 break; 462 } 463 } 464 465 // And voila, all GPRs are free to reuse. 466 467 // Save all state from FPRs into the scratch buffer. 468 469 for (size_t index = 0; index < operands.size(); ++index) { 470 const ValueRecovery& recovery = operands[index]; 471 472 switch (recovery.technique()) { 473 case UnboxedDoubleInFPR: 474 case InFPR: 475 jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0); 476 jit.storeDouble(recovery.fpr(), MacroAssembler::Address(GPRInfo::regT0)); 477 break; 478 479 default: 480 break; 481 } 482 } 483 484 // Now, all FPRs are also free. 485 486 // Save all state from the stack into the scratch buffer. For simplicity we 487 // do this even for state that's already in the right place on the stack. 488 // It makes things simpler later. 
489 490 for (size_t index = 0; index < operands.size(); ++index) { 491 const ValueRecovery& recovery = operands[index]; 506 VirtualRegister reg = operands.virtualRegisterForIndex(index); 507 508 if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters) 509 continue; 510 511 int operand = reg.offset(); 492 512 493 513 switch (recovery.technique()) { 494 514 case DisplacedInJSStack: 515 frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue()); 516 break; 517 518 case InFPR: 519 frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr())); 520 break; 521 522 #if USE(JSVALUE64) 523 case InGPR: 524 frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr())); 525 break; 526 #else 527 case InPair: 528 frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR()))); 529 break; 530 #endif 531 532 case UnboxedCellInGPR: 533 frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr()))); 534 break; 535 495 536 case CellDisplacedInJSStack: 537 frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell())); 538 break; 539 540 #if USE(JSVALUE32_64) 541 case UnboxedBooleanInGPR: 542 frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr()))); 543 break; 544 #endif 545 496 546 case BooleanDisplacedInJSStack: 547 #if USE(JSVALUE64) 548 frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue()); 549 #else 550 frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload())); 551 #endif 552 break; 553 554 case UnboxedInt32InGPR: 555 frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr()))); 556 break; 557 497 558 case Int32DisplacedInJSStack: 559 frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32())); 560 break; 561 562 #if USE(JSVALUE64) 563 case UnboxedInt52InGPR: 564 frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount)); 565 break; 566 567 case Int52DisplacedInJSStack: 568 frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52())); 569 break; 570 571 case UnboxedStrictInt52InGPR: 572 frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()))); 573 break; 574 575 case StrictInt52DisplacedInJSStack: 576 frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52())); 577 break; 578 #endif 579 580 case UnboxedDoubleInFPR: 581 frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr())))); 582 break; 583 498 584 case DoubleDisplacedInJSStack: 499 #if USE(JSVALUE64) 500 case Int52DisplacedInJSStack: 501 case StrictInt52DisplacedInJSStack: 502 jit.load64(AssemblyHelpers::addressFor(recovery.virtualRegister()), GPRInfo::regT0); 503 jit.store64(GPRInfo::regT0, scratch + index); 504 break; 505 #else 506 jit.load32( 507 AssemblyHelpers::tagFor(recovery.virtualRegister()), 508 GPRInfo::regT0); 509 jit.load32( 510 AssemblyHelpers::payloadFor(recovery.virtualRegister()), 511 GPRInfo::regT1); 512 jit.store32( 513 GPRInfo::regT0, 514 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag); 515 jit.store32( 516 GPRInfo::regT1, 517 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload); 518 break; 519 #endif 585 frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble()))); 586 break; 587 588 case Constant: 589 frame.setOperand(operand, recovery.constant()); 590 break; 591 592 case 
DirectArgumentsThatWereNotCreated: 593 case ClonedArgumentsThatWereNotCreated: 594 // Don't do this, yet. 595 break; 520 596 521 597 default: 598 RELEASE_ASSERT_NOT_REACHED(); 522 599 break; 523 600 } … … 527 604 // could toast some stack that the DFG used. We need to do it before storing to stack offsets 528 605 // used by baseline. 529 jit.addPtr( 530 CCallHelpers::TrustedImm32( 531 -jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)), 532 CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister); 606 cpu.sp() = cpu.fp<uint8_t*>() - (codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)); 533 607 534 608 // Restore the DFG callee saves and then save the ones the baseline JIT uses. 535 jit.emitRestoreCalleeSaves();536 jit.emitSaveCalleeSavesFor(jit.baselineCodeBlock());609 restoreCalleeSavesFor(context, codeBlock); 610 saveCalleeSavesFor(context, baselineCodeBlock); 537 611 538 612 // The tag registers are needed to materialize recoveries below. 539 jit.emitMaterializeTagCheckRegisters(); 613 #if USE(JSVALUE64) 614 cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber; 615 cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther; 616 #endif 540 617 541 618 if (exit.isExceptionHandler()) 542 jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm); 543 544 // Do all data format conversions and store the results into the stack. 545 546 for (size_t index = 0; index < operands.size(); ++index) { 547 const ValueRecovery& recovery = operands[index]; 548 VirtualRegister reg = operands.virtualRegisterForIndex(index); 549 550 if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters())) 551 continue; 552 553 int operand = reg.offset(); 554 555 switch (recovery.technique()) { 556 case DisplacedInJSStack: 557 case InFPR: 558 #if USE(JSVALUE64) 559 case InGPR: 560 case UnboxedCellInGPR: 561 case CellDisplacedInJSStack: 562 case BooleanDisplacedInJSStack: 563 jit.load64(scratch + index, GPRInfo::regT0); 564 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand)); 565 break; 566 #else // not USE(JSVALUE64) 567 case InPair: 568 jit.load32( 569 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag, 570 GPRInfo::regT0); 571 jit.load32( 572 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload, 573 GPRInfo::regT1); 574 jit.store32( 575 GPRInfo::regT0, 576 AssemblyHelpers::tagFor(operand)); 577 jit.store32( 578 GPRInfo::regT1, 579 AssemblyHelpers::payloadFor(operand)); 580 break; 581 582 case UnboxedCellInGPR: 583 case CellDisplacedInJSStack: 584 jit.load32( 585 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload, 586 GPRInfo::regT0); 587 jit.store32( 588 AssemblyHelpers::TrustedImm32(JSValue::CellTag), 589 AssemblyHelpers::tagFor(operand)); 590 jit.store32( 591 GPRInfo::regT0, 592 AssemblyHelpers::payloadFor(operand)); 593 break; 594 595 case UnboxedBooleanInGPR: 596 case BooleanDisplacedInJSStack: 597 jit.load32( 598 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload, 599 GPRInfo::regT0); 600 jit.store32( 601 AssemblyHelpers::TrustedImm32(JSValue::BooleanTag), 602 AssemblyHelpers::tagFor(operand)); 603 jit.store32( 604 GPRInfo::regT0, 605 AssemblyHelpers::payloadFor(operand)); 606 break; 607 #endif // USE(JSVALUE64) 608 609 case UnboxedInt32InGPR: 610 case Int32DisplacedInJSStack: 611 #if USE(JSVALUE64) 612 jit.load64(scratch + index, GPRInfo::regT0); 613 
jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0); 614 jit.or64(GPRInfo::tagTypeNumberRegister, GPRInfo::regT0); 615 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand)); 616 #else 617 jit.load32( 618 &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload, 619 GPRInfo::regT0); 620 jit.store32( 621 AssemblyHelpers::TrustedImm32(JSValue::Int32Tag), 622 AssemblyHelpers::tagFor(operand)); 623 jit.store32( 624 GPRInfo::regT0, 625 AssemblyHelpers::payloadFor(operand)); 626 #endif 627 break; 628 629 #if USE(JSVALUE64) 630 case UnboxedInt52InGPR: 631 case Int52DisplacedInJSStack: 632 jit.load64(scratch + index, GPRInfo::regT0); 633 jit.rshift64( 634 AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), GPRInfo::regT0); 635 jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0); 636 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand)); 637 break; 638 639 case UnboxedStrictInt52InGPR: 640 case StrictInt52DisplacedInJSStack: 641 jit.load64(scratch + index, GPRInfo::regT0); 642 jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0); 643 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand)); 644 break; 645 #endif 646 647 case UnboxedDoubleInFPR: 648 case DoubleDisplacedInJSStack: 649 jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0); 650 jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::fpRegT0); 651 jit.purifyNaN(FPRInfo::fpRegT0); 652 #if USE(JSVALUE64) 653 jit.boxDouble(FPRInfo::fpRegT0, GPRInfo::regT0); 654 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand)); 655 #else 656 jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand)); 657 #endif 658 break; 659 660 case Constant: 661 #if USE(JSVALUE64) 662 jit.store64( 663 AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())), 664 AssemblyHelpers::addressFor(operand)); 665 #else 666 jit.store32( 667 AssemblyHelpers::TrustedImm32(recovery.constant().tag()), 668 AssemblyHelpers::tagFor(operand)); 669 jit.store32( 670 AssemblyHelpers::TrustedImm32(recovery.constant().payload()), 671 AssemblyHelpers::payloadFor(operand)); 672 #endif 673 break; 674 675 case DirectArgumentsThatWereNotCreated: 676 case ClonedArgumentsThatWereNotCreated: 677 // Don't do this, yet. 678 break; 679 680 default: 681 RELEASE_ASSERT_NOT_REACHED(); 682 break; 683 } 684 } 619 copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context); 685 620 686 621 // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments … … 690 625 // inline call frame scope - but for now the DFG wouldn't do that. 691 626 692 emitRestoreArguments( jit, operands);627 emitRestoreArguments(context, codeBlock, dfgJITCode, operands); 693 628 694 629 // Adjust the old JIT's execute counter. Since we are exiting OSR, we know … … 728 663 // counterValueForOptimizeAfterWarmUp(). 729 664 730 handleExitCounts(jit, exit); 731 732 // Reify inlined call frames. 733 734 reifyInlinedCallFrames(jit, exit); 735 736 // And finish. 
737 adjustAndJumpToTarget(vm, jit, exit); 738 } 739 740 void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exec, void* debugInfoRaw, void* scratch) 741 { 742 VM* vm = &exec->vm(); 743 NativeCallFrameTracer tracer(vm, exec); 744 745 SpeculationFailureDebugInfo* debugInfo = static_cast<SpeculationFailureDebugInfo*>(debugInfoRaw); 746 CodeBlock* codeBlock = debugInfo->codeBlock; 665 if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow)) 666 triggerReoptimizationNow(baselineCodeBlock, &exit); 667 668 reifyInlinedCallFrames(context, baselineCodeBlock, exit); 669 adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit); 670 } 671 672 static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit) 673 { 674 auto& cpu = context.cpu; 675 Frame frame(cpu.fp(), context.stack()); 676 677 // FIXME: We shouldn't leave holes on the stack when performing an OSR exit 678 // in presence of inlined tail calls. 679 // https://bugs.webkit.org/show_bug.cgi?id=147511 680 ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT); 681 frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock); 682 683 const CodeOrigin* codeOrigin; 684 for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) { 685 InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame; 686 CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock); 687 InlineCallFrame::Kind trueCallerCallKind; 688 CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind); 689 void* callerFrame = cpu.fp(); 690 691 if (!trueCaller) { 692 ASSERT(inlineCallFrame->isTail()); 693 void* returnPC = frame.get<void*>(CallFrame::returnPCOffset()); 694 frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC); 695 callerFrame = frame.get<void*>(CallFrame::callerFrameOffset()); 696 } else { 697 CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock); 698 unsigned callBytecodeIndex = trueCaller->bytecodeIndex; 699 void* jumpTarget = nullptr; 700 701 switch (trueCallerCallKind) { 702 case InlineCallFrame::Call: 703 case InlineCallFrame::Construct: 704 case InlineCallFrame::CallVarargs: 705 case InlineCallFrame::ConstructVarargs: 706 case InlineCallFrame::TailCall: 707 case InlineCallFrame::TailCallVarargs: { 708 CallLinkInfo* callLinkInfo = 709 baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex); 710 RELEASE_ASSERT(callLinkInfo); 711 712 jumpTarget = callLinkInfo->callReturnLocation().executableAddress(); 713 break; 714 } 715 716 case InlineCallFrame::GetterCall: 717 case InlineCallFrame::SetterCall: { 718 StructureStubInfo* stubInfo = 719 baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex)); 720 RELEASE_ASSERT(stubInfo); 721 722 jumpTarget = stubInfo->doneLocation().executableAddress(); 723 break; 724 } 725 726 default: 727 RELEASE_ASSERT_NOT_REACHED(); 728 } 729 730 if (trueCaller->inlineCallFrame) 731 callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue); 732 733 frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget); 734 } 735 736 frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, 
baselineCodeBlock); 737 738 // Restore the inline call frame's callee save registers. 739 // If this inlined frame is a tail call that will return back to the original caller, we need to 740 // copy the prior contents of the tag registers already saved for the outer frame to this frame. 741 saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller); 742 743 if (!inlineCallFrame->isVarargs()) 744 frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis); 745 ASSERT(callerFrame); 746 frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame); 747 #if USE(JSVALUE64) 748 uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits(); 749 frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits); 750 if (!inlineCallFrame->isClosureCall) 751 frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant())); 752 #else // USE(JSVALUE64) // so this is the 32-bit part 753 Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex; 754 uint32_t locationBits = CallSiteIndex(instruction).bits(); 755 frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits); 756 frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag)); 757 if (!inlineCallFrame->isClosureCall) 758 frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant()); 759 #endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part 760 } 761 762 // Don't need to set the toplevel code origin if we only did inline tail calls 763 if (codeOrigin) { 764 #if USE(JSVALUE64) 765 uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits(); 766 #else 767 Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex; 768 uint32_t locationBits = CallSiteIndex(instruction).bits(); 769 #endif 770 frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits); 771 } 772 } 773 774 static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit) 775 { 776 OSRExitState* exitState = exit.exitState.get(); 777 778 WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence. 779 vm.heap.writeBarrier(baselineCodeBlock); 780 781 // We barrier all inlined frames -- and not just the current inline stack -- 782 // because we don't know which inlined function owns the value profile that 783 // we'll update when we exit. In the case of "f() { a(); b(); }", if both 784 // a and b are inlined, we might exit inside b due to a bad value loaded 785 // from a. 786 // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns 787 // the value profile. 
788 InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get(); 789 if (inlineCallFrames) { 790 for (InlineCallFrame* inlineCallFrame : *inlineCallFrames) 791 vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get()); 792 } 793 794 if (exit.m_codeOrigin.inlineCallFrame) 795 context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue); 796 797 void* jumpTarget = exitState->jumpTarget; 798 ASSERT(jumpTarget); 799 800 context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset; 801 if (exit.isExceptionHandler()) { 802 // Since we're jumping to op_catch, we need to set callFrameForCatch. 803 vm.callFrameForCatch = context.fp<ExecState*>(); 804 } 805 806 vm.topCallFrame = context.fp<ExecState*>(); 807 context.pc() = jumpTarget; 808 } 809 810 static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit) 811 { 812 ExecState* exec = context.fp<ExecState*>(); 813 CodeBlock* codeBlock = exec->codeBlock(); 747 814 CodeBlock* alternative = codeBlock->alternative(); 815 ExitKind kind = exit.m_kind; 816 unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex; 817 748 818 dataLog("Speculation failure in ", *codeBlock); 749 dataLog(" @ exit #", vm->osrExitIndex, " (bc#", debugInfo->bytecodeOffset, ", ", exitKindToString(debugInfo->kind), ") with ");819 dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with "); 750 820 if (alternative) { 751 821 dataLog( … … 757 827 dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n"); 758 828 dataLog(" GPRs at time of exit:"); 759 char* scratchPointer = static_cast<char*>(scratch);760 829 for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) { 761 830 GPRReg gpr = GPRInfo::toRegister(i); 762 dataLog(" ", GPRInfo::debugName(gpr), ":", RawPointer(*reinterpret_cast_ptr<void**>(scratchPointer))); 763 scratchPointer += sizeof(EncodedJSValue); 831 dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr))); 764 832 } 765 833 dataLog("\n"); … … 767 835 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) { 768 836 FPRReg fpr = FPRInfo::toRegister(i); 769 dataLog(" ", FPRInfo::debugName(fpr), ":");770 uint64_t bits = *reinterpret_cast_ptr<uint64_t*>(scratchPointer);771 double value = *reinterpret_cast_ptr<double*>(scratchPointer);837 dataLog(" ", context.fprName(fpr), ":"); 838 uint64_t bits = context.fpr<uint64_t>(fpr); 839 double value = context.fpr(fpr); 772 840 dataLogF("%llx:%lf", static_cast<long long>(bits), value); 773 scratchPointer += sizeof(EncodedJSValue);774 841 } 775 842 dataLog("\n"); -
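The hunks above are the heart of the change: OSR exit now runs as ordinary C++ against the register state captured by the probe, instead of as freshly compiled exit code. A minimal sketch of the pattern, reassembled from the SpeculativeAdd case above (the free-function packaging is illustrative, not the shipped code):

    // Undo a speculative add by mutating the probe's saved GPR state.
    static void undoSpeculativeAdd(Probe::Context& context, SpeculationRecovery* recovery)
    {
        auto& cpu = context.cpu;
        // Subtract in 32 bits, writing straight back into the saved register file.
        cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
    #if USE(JSVALUE64)
        // Rebox the recovered int32 by or-ing the number tag back in.
        cpu.gpr(recovery->dest()) |= TagTypeNumber;
    #endif
    }

When the probe returns, the trampoline restores the (possibly modified) CPUState into the real registers, so this has the same effect as the old jit.sub32()/jit.or64() sequence without emitting any code.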
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h
r221823 r221832 34 34 #include "Operands.h" 35 35 #include "ValueRecovery.h" 36 #include <wtf/RefPtr.h> 36 37 37 38 namespace JSC { 38 39 39 class CCallHelpers; 40 namespace Probe { 41 class Context; 42 } // namespace Probe 43 44 namespace Profiler { 45 class OSRExit; 46 } // namespace Profiler 40 47 41 48 namespace DFG { … … 92 99 }; 93 100 101 struct OSRExitState : RefCounted<OSRExitState> { 102 OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget) 103 : exit(exit) 104 , codeBlock(codeBlock) 105 , baselineCodeBlock(baselineCodeBlock) 106 , operands(operands) 107 , recovery(recovery) 108 , stackPointerOffset(stackPointerOffset) 109 , activeThreshold(activeThreshold) 110 , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold) 111 , jumpTarget(jumpTarget) 112 { } 113 114 OSRExitBase& exit; 115 CodeBlock* codeBlock; 116 CodeBlock* baselineCodeBlock; 117 Operands<ValueRecovery> operands; 118 SpeculationRecovery* recovery; 119 ptrdiff_t stackPointerOffset; 120 uint32_t activeThreshold; 121 double memoryUsageAdjustedThreshold; 122 void* jumpTarget; 123 124 Profiler::OSRExit* profilerExit { nullptr }; 125 }; 126 94 127 // === OSRExit === 95 128 // … … 99 132 OSRExit(ExitKind, JSValueSource, MethodOfGettingAValueProfile, SpeculativeJIT*, unsigned streamIndex, unsigned recoveryIndex = UINT_MAX); 100 133 101 static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;134 static void executeOSRExit(Probe::Context&); 102 135 103 unsigned m_patchableCodeOffset { 0 }; 104 105 MacroAssemblerCodeRef m_code; 136 RefPtr<OSRExitState> exitState; 106 137 107 138 JSValueSource m_jsValueSource; … … 110 141 unsigned m_recoveryIndex; 111 142 112 void setPatchableCodeOffset(MacroAssembler::PatchableJump);113 MacroAssembler::Jump getPatchableCodeOffsetAsJump() const;114 CodeLocationJump codeLocationForRepatch(CodeBlock*) const;115 void correctJump(LinkBuffer&);116 117 143 unsigned m_streamIndex; 118 144 void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock) … … 120 146 OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromDFG); 121 147 } 122 123 private:124 static void compileExit(CCallHelpers&, VM&, const OSRExit&, const Operands<ValueRecovery>&, SpeculationRecovery*);125 static void emitRestoreArguments(CCallHelpers&, const Operands<ValueRecovery>&);126 static void JIT_OPERATION debugOperationPrintSpeculationFailure(ExecState*, void*, void*) WTF_INTERNAL;127 148 }; 128 149 -
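OSRExitState is ref-counted so that everything invariant across executions of a given exit (decoded operands, recovery, jump target, thresholds) can be computed once, the first time that exit fires, and reused afterwards. A plausible first-hit initialization, assuming WTF's adoptRef(); the surrounding computation is elided and hypothetical:

    // Lazily build the per-exit state on first execution (sketch).
    if (!exit.exitState) {
        // ... decode operands, look up the SpeculationRecovery, compute the
        // baseline jump target and reoptimization thresholds (elided) ...
        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock,
            operands, recovery, stackPointerOffset, activeThreshold,
            memoryUsageAdjustedThreshold, jumpTarget));
    }

Subsequent executions of the same exit then skip straight to the recovery loop.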
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r221823 r221832 1 1 /* 2 * Copyright (C) 2013-201 5Apple Inc. All rights reserved.2 * Copyright (C) 2013-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 38 38 namespace JSC { namespace DFG { 39 39 40 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 40 41 void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit) 41 42 { … … 144 145 } 145 146 147 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 146 148 void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit) 147 149 { … … 253 255 } 254 256 257 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 255 258 static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch) 256 259 { … … 273 276 } 274 277 278 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 275 279 void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit) 276 280 { -
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h
r221823 r221832 1 1 /* 2 * Copyright (C) 2013 , 2015Apple Inc. All rights reserved.2 * Copyright (C) 2013-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 41 41 void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&); 42 42 43 // FIXME: This won't be needed once we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 43 44 template <typename JITCodeType> 44 45 void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType) -
trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp
r221823 r221832 1475 1475 } 1476 1476 1477 JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)1478 {1479 VM& vm = exec->vm();1480 NativeCallFrameTracer target(&vm, exec);1481 1482 DeferGCForAWhile deferGC(vm.heap);1483 1484 CodeBlock* codeBlock;1485 if (inlineCallFrame)1486 codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);1487 else1488 codeBlock = exec->codeBlock();1489 1490 unsigned length = argumentCount - 1;1491 unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));1492 DirectArguments* result = DirectArguments::create(1493 vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);1494 1495 result->callee().set(vm, result, callee);1496 1497 Register* arguments =1498 exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +1499 CallFrame::argumentOffset(0);1500 for (unsigned i = length; i--;)1501 result->setIndexQuickly(vm, i, arguments[i].jsValue());1502 1503 return result;1504 }1505 1506 JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)1507 {1508 VM& vm = exec->vm();1509 NativeCallFrameTracer target(&vm, exec);1510 1511 DeferGCForAWhile deferGC(vm.heap);1512 1513 CodeBlock* codeBlock;1514 if (inlineCallFrame)1515 codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);1516 else1517 codeBlock = exec->codeBlock();1518 1519 unsigned length = argumentCount - 1;1520 ClonedArguments* result = ClonedArguments::createEmpty(1521 vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);1522 1523 Register* arguments =1524 exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +1525 CallFrame::argumentOffset(0);1526 for (unsigned i = length; i--;)1527 result->putDirectIndex(exec, i, arguments[i].jsValue());1528 1529 1530 return result;1531 }1532 1533 1477 JSCell* JIT_OPERATION operationCreateRest(ExecState* exec, Register* argumentStart, unsigned numberOfParamsToSkip, unsigned arraySize) 1534 1478 { -
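Because the exit path is plain C++ now, argument-object materialization no longer needs the JIT_OPERATION calling convention or a NativeCallFrameTracer. A sketch of the equivalent direct call, condensed from the removed operation (the free-function name is illustrative):

    // Create a DirectArguments object during exit, called straight from C++.
    static JSCell* createDirectArgumentsForExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
    {
        VM& vm = exec->vm();
        DeferGCForAWhile deferGC(vm.heap);

        CodeBlock* codeBlock = inlineCallFrame ? baselineCodeBlockForInlineCallFrame(inlineCallFrame) : exec->codeBlock();

        unsigned length = argumentCount - 1;
        unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
        DirectArguments* result = DirectArguments::create(vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
        result->callee().set(vm, result, callee);

        // Arguments live at a fixed offset from the (possibly inlined) frame.
        Register* arguments = exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) + CallFrame::argumentOffset(0);
        for (unsigned i = length; i--;)
            result->setIndexQuickly(vm, i, arguments[i].jsValue());
        return result;
    }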
trunk/Source/JavaScriptCore/dfg/DFGOperations.h
r221823 r221832 150 150 JSCell* JIT_OPERATION operationCreateActivationDirect(ExecState*, Structure*, JSScope*, SymbolTable*, EncodedJSValue); 151 151 JSCell* JIT_OPERATION operationCreateDirectArguments(ExecState*, Structure*, int32_t length, int32_t minCapacity); 152 JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);153 152 JSCell* JIT_OPERATION operationCreateScopedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee, JSLexicalEnvironment*); 154 JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);155 153 JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee); 156 154 JSCell* JIT_OPERATION operationCreateRest(ExecState*, Register* argumentStart, unsigned numberOfArgumentsToSkip, unsigned arraySize); -
trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp
r221823 r221832 41 41 namespace JSC { namespace DFG { 42 42 43 MacroAssemblerCodeRef osrExit GenerationThunkGenerator(VM* vm)43 MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm) 44 44 { 45 45 MacroAssembler jit; 46 47 // This needs to happen before we use the scratch buffer because this function also uses the scratch buffer. 48 adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT); 49 50 size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters); 51 ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize); 52 EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()); 53 54 for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) { 55 #if USE(JSVALUE64) 56 jit.store64(GPRInfo::toRegister(i), buffer + i); 57 #else 58 jit.store32(GPRInfo::toRegister(i), buffer + i); 59 #endif 60 } 61 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) { 62 jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0); 63 jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0)); 64 } 65 66 // Tell GC mark phase how much of the scratch buffer is active during call. 67 jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0); 68 jit.storePtr(MacroAssembler::TrustedImmPtr(scratchSize), MacroAssembler::Address(GPRInfo::regT0)); 69 70 // Set up one argument. 71 #if CPU(X86) 72 jit.poke(GPRInfo::callFrameRegister, 0); 73 #else 74 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0); 75 #endif 76 77 MacroAssembler::Call functionCall = jit.call(); 78 79 jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0); 80 jit.storePtr(MacroAssembler::TrustedImmPtr(0), MacroAssembler::Address(GPRInfo::regT0)); 81 82 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) { 83 jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0); 84 jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i)); 85 } 86 for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) { 87 #if USE(JSVALUE64) 88 jit.load64(buffer + i, GPRInfo::toRegister(i)); 89 #else 90 jit.load32(buffer + i, GPRInfo::toRegister(i)); 91 #endif 92 } 93 94 jit.jump(MacroAssembler::AbsoluteAddress(&vm->osrExitJumpDestination)); 95 46 jit.probe(OSRExit::executeOSRExit, vm); 96 47 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 97 98 patchBuffer.link(functionCall, OSRExit::compileOSRExit); 99 100 return FINALIZE_CODE(patchBuffer, ("DFG OSR exit generation thunk")); 48 return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk")); 101 49 } 102 50 -
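The thunk generator collapses from a hand-rolled spill/fill of every GPR and FPR into a single probe() call, because the probe machinery itself saves and restores the full CPU state around the C++ callback. Reassembled from the hunk above, the whole generator is now:

    MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
    {
        MacroAssembler jit;
        // probe() captures the machine state, hands it to executeOSRExit() as a
        // Probe::Context, then restores the (possibly modified) state, including
        // pc and sp, when the callback returns.
        jit.probe(OSRExit::executeOSRExit, vm);
        LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
        return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
    }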
trunk/Source/JavaScriptCore/dfg/DFGThunks.h
r221823 r221832 1 1 /* 2 * Copyright (C) 2011 , 2014Apple Inc. All rights reserved.2 * Copyright (C) 2011-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 36 36 namespace DFG { 37 37 38 MacroAssemblerCodeRef osrExit GenerationThunkGenerator(VM*);38 MacroAssemblerCodeRef osrExitThunkGenerator(VM*); 39 39 MacroAssemblerCodeRef osrEntryThunkGenerator(VM*); 40 40 -
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r221823 r221832 51 51 } 52 52 53 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 53 54 Vector<BytecodeAndMachineOffset>& AssemblyHelpers::decodedCodeMapFor(CodeBlock* codeBlock) 54 55 { … … 821 822 #endif // ENABLE(WEBASSEMBLY) 822 823 823 void AssemblyHelpers::debugCall(VM& vm, V_DebugOperation_EPP function, void* argument)824 {825 size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);826 ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);827 EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());828 829 for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {830 #if USE(JSVALUE64)831 store64(GPRInfo::toRegister(i), buffer + i);832 #else833 store32(GPRInfo::toRegister(i), buffer + i);834 #endif835 }836 837 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {838 move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);839 storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);840 }841 842 // Tell GC mark phase how much of the scratch buffer is active during call.843 move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);844 storePtr(TrustedImmPtr(scratchSize), GPRInfo::regT0);845 846 #if CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)847 move(TrustedImmPtr(buffer), GPRInfo::argumentGPR2);848 move(TrustedImmPtr(argument), GPRInfo::argumentGPR1);849 move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);850 GPRReg scratch = selectScratchGPR(GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2);851 #elif CPU(X86)852 poke(GPRInfo::callFrameRegister, 0);853 poke(TrustedImmPtr(argument), 1);854 poke(TrustedImmPtr(buffer), 2);855 GPRReg scratch = GPRInfo::regT0;856 #else857 #error "JIT not supported on this platform."858 #endif859 move(TrustedImmPtr(reinterpret_cast<void*>(function)), scratch);860 call(scratch);861 862 move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);863 storePtr(TrustedImmPtr(0), GPRInfo::regT0);864 865 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {866 move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);867 loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));868 }869 for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {870 #if USE(JSVALUE64)871 load64(buffer + i, GPRInfo::toRegister(i));872 #else873 load32(buffer + i, GPRInfo::toRegister(i));874 #endif875 }876 }877 878 824 void AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBufferImpl(GPRReg calleeSavesBuffer) 879 825 { -
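debugCall() existed only to reach C++ with all registers preserved, which is exactly what a probe does, so it can be deleted. A call site that previously used debugCall() could instead install a probe callback along these lines (an illustrative sketch, using the Probe::Context accessors shown earlier in this changeset):

    // Dump the live GPRs from C++; the probe preserves all register state.
    static void dumpGPRsProbe(Probe::Context& context)
    {
        for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
            GPRReg gpr = GPRInfo::toRegister(i);
            dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
        }
        dataLog("\n");
    }

    // At the emission site:
    jit.probe(dumpGPRsProbe, nullptr);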
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r221823 r221832 992 992 return GPRInfo::regT5; 993 993 } 994 995 // Add a debug call. This call has no effect on JIT code execution state.996 void debugCall(VM&, V_DebugOperation_EPP function, void* argument);997 994 998 995 // These methods JIT generate dynamic, debug-only checks - akin to ASSERTs. … … 1466 1463 void emitDumbVirtualCall(VM&, CallLinkInfo*); 1467 1464 1465 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 1468 1466 Vector<BytecodeAndMachineOffset>& decodedCodeMapFor(CodeBlock*); 1469 1467 … … 1657 1655 CodeBlock* m_baselineCodeBlock; 1658 1656 1657 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 1659 1658 HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps; 1660 1659 }; -
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r221823 r221832 2308 2308 } 2309 2309 2310 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 2310 2311 void JIT_OPERATION operationOSRWriteBarrier(ExecState* exec, JSCell* cell) 2311 2312 { -
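operationOSRWriteBarrier survives only for the FTL exit path, which still runs compiled exit code. The DFG path performs the same barriers directly; condensed from adjustAndJumpToTarget() in the DFGOSRExit.cpp hunks above:

    // Barrier the baseline block and every inlined baseline block, since we
    // don't know which one owns the value profile the exit just updated.
    vm.heap.writeBarrier(baselineCodeBlock);
    if (InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get()) {
        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
    }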
trunk/Source/JavaScriptCore/jit/JITOperations.h
r221823 r221832 448 448 449 449 void JIT_OPERATION operationWriteBarrierSlowPath(ExecState*, JSCell*); 450 // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145. 450 451 void JIT_OPERATION operationOSRWriteBarrier(ExecState*, JSCell*); 451 452 -
trunk/Source/JavaScriptCore/profiler/ProfilerOSRExit.h
r221823 r221832 1 1 /* 2 * Copyright (C) 2012 Apple Inc. All rights reserved.2 * Copyright (C) 2012-2017 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 44 44 uint64_t* counterAddress() { return &m_counter; } 45 45 uint64_t count() const { return m_counter; } 46 46 void incCount() { m_counter++; } 47 47 48 JSValue toJS(ExecState*) const; 48 49 -
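incCount() gives the C++ exit path a way to bump the profiler counter directly, where the old compiled exits incremented it through counterAddress() from JIT code. A hypothetical call site inside executeOSRExit(), using the profilerExit field that OSRExitState now carries:

    if (Profiler::OSRExit* profilerExit = exitState->profilerExit)
        profilerExit->incCount();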
trunk/Source/JavaScriptCore/runtime/JSCJSValue.h
r221823 r221832 2 2 * Copyright (C) 1999-2001 Harri Porten (porten@kde.org) 3 3 * Copyright (C) 2001 Peter Kelly (pmk@post.com) 4 * Copyright (C) 2003 , 2004, 2005, 2007, 2008, 2009, 2012, 2015Apple Inc. All rights reserved.4 * Copyright (C) 2003-2017 Apple Inc. All rights reserved. 5 5 * 6 6 * This library is free software; you can redistribute it and/or … … 345 345 int32_t payload() const; 346 346 347 #if !ENABLE(JIT) 348 // This should only be used by the LLInt C Loop interpreter who needs 349 // synthesize JSValue from its "register"s holding tag and payload 350 // values. 347 // This should only be used by the LLInt C Loop interpreter and OSRExit code who needs 348 // synthesize JSValue from its "register"s holding tag and payload values. 351 349 explicit JSValue(int32_t tag, int32_t payload); 352 #endif353 350 354 351 #elif USE(JSVALUE64) -
trunk/Source/JavaScriptCore/runtime/JSCJSValueInlines.h
r221823 r221832 341 341 } 342 342 343 #if !ENABLE(JIT)343 #if USE(JSVALUE32_64) 344 344 inline JSValue::JSValue(int32_t tag, int32_t payload) 345 345 { -
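Widening the guard from !ENABLE(JIT) to USE(JSVALUE32_64) lets the C++ exit path synthesize a JSValue from separately recovered tag and payload words on 32-bit targets, exactly as the InPair recovery above does:

    #if USE(JSVALUE32_64)
        // Rebuild a boxed value from the two 32-bit halves held in GPRs.
        frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
    #endif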
trunk/Source/JavaScriptCore/runtime/VM.h
r221823 r221832 572 572 Instruction* targetInterpreterPCForThrow; 573 573 uint32_t osrExitIndex; 574 void* osrExitJumpDestination;575 574 bool isExecutingInRegExpJIT { false }; 576 575
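osrExitJumpDestination can go because the exit no longer returns through a jump stub: the C++ routine redirects execution by writing the target into the probe context before returning, condensed from adjustAndJumpToTarget() above:

    // Resume at the baseline target by mutating the saved machine state;
    // the probe trampoline materializes these values into sp, fp, and pc.
    void* jumpTarget = exitState->jumpTarget;
    context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset;
    vm.topCallFrame = context.fp<ExecState*>();
    context.pc() = jumpTarget;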