Changeset 189575 in webkit
- Timestamp:
- Sep 10, 2015, 10:47:16 AM (10 years ago)
- Location:
- trunk/Source/JavaScriptCore
- Files:
-
- 4 added
- 4 deleted
- 58 edited
trunk/Source/JavaScriptCore/CMakeLists.txt
r189544 r189575 359 359 360 360 jit/AccessorCallJITStubRoutine.cpp 361 jit/ArityCheckFailReturnThunks.cpp362 361 jit/AssemblyHelpers.cpp 363 362 jit/BinarySwitch.cpp … … 387 386 jit/PolymorphicCallStubRoutine.cpp 388 387 jit/Reg.cpp 388 jit/RegisterAtOffset.cpp 389 jit/RegisterAtOffsetList.cpp 389 390 jit/RegisterPreservationWrapperGenerator.cpp 390 391 jit/RegisterSet.cpp … … 905 906 ftl/FTLOutput.cpp 906 907 ftl/FTLRecoveryOpcode.cpp 907 ftl/FTLRegisterAtOffset.cpp908 908 ftl/FTLSaveRestore.cpp 909 909 ftl/FTLSlowPathCall.cpp -
trunk/Source/JavaScriptCore/ChangeLog
r189571 r189575
2015-09-10  Michael Saboff  <msaboff@apple.com>

        Add support for Callee-Saves registers
        https://bugs.webkit.org/show_bug.cgi?id=148666

        Reviewed by Filip Pizlo.

        We save platform callee save registers right below the call frame header,
        in the location(s) starting with VirtualRegister 0. This local space is
        allocated in the bytecode compiler. This space is the maximum space
        needed for the callee registers that the LLInt and baseline JIT use,
        rounded up to a stack-aligned number of VirtualRegisters.
        The LLInt explicitly saves and restores the registers in the macros
        preserveCalleeSavesUsedByLLInt and restoreCalleeSavesUsedByLLInt.
        The JITs save and restore the callee save registers listed in
        m_calleeSaveRegisters in the code block.

        Added handling of callee save register restoration to exception handling.
        The basic flow is: when an exception is thrown, or one is recognized to
        have been generated in C++ code, we save the current state of all
        callee save registers to VM::calleeSaveRegistersBuffer. As we unwind
        looking for the corresponding catch, we copy the callee saves from call
        frames to the same VM::calleeSaveRegistersBuffer. This is done for all
        call frames on the stack up to, but not including, the call frame that has
        the corresponding catch block. When we process the catch, we restore
        the callee save registers with the contents of VM::calleeSaveRegistersBuffer.
        If there isn't a catch, then handleUncaughtException will restore callee
        saves before it returns back to the calling C++.

        Eliminated callee save registers as free registers for various thunk
        generators, as the callee saves may not have been saved by the function
        calling the thunk.

        Added code to transition callee saves from one VM's format to another
        as part of OSR entry and OSR exit.
        Cleaned up the static RegisterSets, including adding one for the LLInt and
        baseline JIT callee saves and one to be used to allocate local registers
        not including the callee saves or other special registers.

        Moved ftl/FTLRegisterAtOffset.{cpp,h} to jit/RegisterAtOffset.{cpp,h}.
        Factored out the vector of RegisterAtOffsets in ftl/FTLUnwindInfo.{cpp,h}
        into a new class in jit/RegisterAtOffsetList.{cpp,h}.
        Eliminated UnwindInfo and changed UnwindInfo::parse() into a standalone
        function named parseUnwindInfo. That standalone function now returns
        the callee saves RegisterAtOffsetList. This is stored in the CodeBlock
        and used instead of UnwindInfo.

        Turned off register preservation thunks for outgoing calls from FTL
        generated code. They'll be removed in a subsequent patch.

        Changed specialized thunks to save and restore the contents of
        tagTypeNumberRegister and tagMaskRegister, as they can be called by FTL
        compiled functions. We materialize those tag registers for the thunk's
        use and then restore the prior contents on function exit.

        Also removed the arity check fail return thunk, since it is now the
        caller's responsibility to restore the stack pointer.

        Removed saving of callee save registers and materialization of special
        tag registers for 64-bit platforms from vmEntryToJavaScript and
        vmEntryToNative.

        * CMakeLists.txt:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * ftl/FTLJITCode.h:
        * ftl/FTLRegisterAtOffset.cpp: Removed.
        * ftl/FTLRegisterAtOffset.h: Removed.
        * ftl/FTLUnwindInfo.cpp:
        (JSC::FTL::parseUnwindInfo):
        (JSC::FTL::UnwindInfo::UnwindInfo): Deleted.
        (JSC::FTL::UnwindInfo::~UnwindInfo): Deleted.
        (JSC::FTL::UnwindInfo::parse): Deleted.
        (JSC::FTL::UnwindInfo::dump): Deleted.
        (JSC::FTL::UnwindInfo::find): Deleted.
        (JSC::FTL::UnwindInfo::indexOf): Deleted.
        * ftl/FTLUnwindInfo.h:
        (JSC::RegisterAtOffset::dump):
        * jit/RegisterAtOffset.cpp: Added.
        * jit/RegisterAtOffset.h: Added.
        (JSC::RegisterAtOffset::RegisterAtOffset):
        (JSC::RegisterAtOffset::operator!):
        (JSC::RegisterAtOffset::reg):
        (JSC::RegisterAtOffset::offset):
        (JSC::RegisterAtOffset::offsetAsIndex):
        (JSC::RegisterAtOffset::operator==):
        (JSC::RegisterAtOffset::operator<):
        (JSC::RegisterAtOffset::getReg):
        * jit/RegisterAtOffsetList.cpp: Added.
        (JSC::RegisterAtOffsetList::RegisterAtOffsetList):
        (JSC::RegisterAtOffsetList::sort):
        (JSC::RegisterAtOffsetList::dump):
        (JSC::RegisterAtOffsetList::find):
        (JSC::RegisterAtOffsetList::indexOf):
        * jit/RegisterAtOffsetList.h: Added.
        (JSC::RegisterAtOffsetList::clear):
        (JSC::RegisterAtOffsetList::size):
        (JSC::RegisterAtOffsetList::at):
        (JSC::RegisterAtOffsetList::append):
        Moved and refactored use of FTLRegisterAtOffset to RegisterAtOffset.
        Added RegisterAtOffset and RegisterAtOffsetList to build configurations.
        Removed FTLRegisterAtOffset files.

        * bytecode/CallLinkInfo.h:
        (JSC::CallLinkInfo::setUpCallFromFTL):
        Turned off FTL register preservation thunks.

        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::CodeBlock):
        (JSC::CodeBlock::setCalleeSaveRegisters):
        (JSC::roundCalleeSaveSpaceAsVirtualRegisters):
        (JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
        (JSC::CodeBlock::calleeSaveSpaceAsVirtualRegisters):
        * bytecode/CodeBlock.h:
        (JSC::CodeBlock::numberOfLLIntBaselineCalleeSaveRegisters):
        (JSC::CodeBlock::calleeSaveRegisters):
        (JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
        (JSC::CodeBlock::optimizeAfterWarmUp):
        (JSC::CodeBlock::numberOfDFGCompiles):
        Methods to manage a set of callee save registers. Also to allocate the
        appropriate number of VirtualRegisters for callee saves.
        * bytecompiler/BytecodeGenerator.cpp:
        (JSC::BytecodeGenerator::BytecodeGenerator):
        (JSC::BytecodeGenerator::allocateCalleeSaveSpace):
        * bytecompiler/BytecodeGenerator.h:
        Allocate the appropriate number of VirtualRegisters for callee saves
        needed by the LLInt or baseline JIT.

        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::compileEntry):
        (JSC::DFG::JITCompiler::compileSetupRegistersForEntry):
        (JSC::DFG::JITCompiler::compileBody):
        (JSC::DFG::JITCompiler::compileExceptionHandlers):
        (JSC::DFG::JITCompiler::compile):
        (JSC::DFG::JITCompiler::compileFunction):
        * dfg/DFGJITCompiler.h:
        * interpreter/Interpreter.cpp:
        (JSC::UnwindFunctor::operator()):
        (JSC::UnwindFunctor::copyCalleeSavesToVMCalleeSavesBuffer):
        * dfg/DFGPlan.cpp:
        (JSC::DFG::Plan::compileInThreadImpl):
        * dfg/DFGSpeculativeJIT.cpp:
        (JSC::DFG::SpeculativeJIT::usedRegisters):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGStackLayoutPhase.cpp:
        (JSC::DFG::StackLayoutPhase::run):
        * ftl/FTLCompile.cpp:
        (JSC::FTL::fixFunctionBasedOnStackMaps):
        (JSC::FTL::compile):
        * ftl/FTLLink.cpp:
        (JSC::FTL::link):
        * ftl/FTLOSRExitCompiler.cpp:
        (JSC::FTL::compileStub):
        * ftl/FTLThunks.cpp:
        (JSC::FTL::osrExitGenerationThunkGenerator):
        * jit/ArityCheckFailReturnThunks.cpp: Removed.
        * jit/ArityCheckFailReturnThunks.h: Removed.
        * jit/JIT.cpp:
        (JSC::JIT::emitEnterOptimizationCheck):
        (JSC::JIT::privateCompile):
        (JSC::JIT::privateCompileExceptionHandlers):
        * jit/JITCall32_64.cpp:
        (JSC::JIT::emit_op_ret):
        * jit/JITExceptions.cpp:
        (JSC::genericUnwind):
        * jit/JITExceptions.h:
        * jit/JITOpcodes.cpp:
        (JSC::JIT::emit_op_end):
        (JSC::JIT::emit_op_ret):
        (JSC::JIT::emit_op_throw):
        (JSC::JIT::emit_op_catch):
        (JSC::JIT::emit_op_enter):
        (JSC::JIT::emitSlow_op_loop_hint):
        * jit/JITOpcodes32_64.cpp:
        (JSC::JIT::emit_op_end):
        (JSC::JIT::emit_op_throw):
        (JSC::JIT::emit_op_catch):
        * jit/JITOperations.cpp:
        * jit/Repatch.cpp:
        (JSC::generateByIdStub):
        * jit/ThunkGenerators.cpp:
        * llint/LLIntData.cpp:
        (JSC::LLInt::Data::performAssertions):
        * llint/LLIntSlowPaths.cpp:
        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        (JSC::throwExceptionFromCallSlowPathGenerator):
        (JSC::arityFixupGenerator):
        * runtime/CommonSlowPaths.cpp:
        (JSC::setupArityCheckData):
        * runtime/CommonSlowPaths.h:
        (JSC::CommonSlowPaths::arityCheckFor):
        Emit code to save and restore callee save registers and materialize
        tagTypeNumberRegister and tagMaskRegister.
        Handle callee saves when tiering up.
        Copy callee save register contents to VM::calleeSaveRegistersBuffer at
        the beginning of exception processing.
        Process callee save registers in frames when unwinding from an exception.
        Restore callee save register contents from VM::calleeSaveRegistersBuffer
        on catch.
        Use the appropriate register set to make sure we don't allocate a callee
        save register when compiling a thunk.
        Helper to populate tagTypeNumberRegister and tagMaskRegister with the
        appropriate constants.
        Removed arity fixup return thunks.
        * dfg/DFGOSREntry.cpp:
        (JSC::DFG::prepareOSREntry):
        * dfg/DFGOSRExitCompiler32_64.cpp:
        (JSC::DFG::OSRExitCompiler::compileExit):
        * dfg/DFGOSRExitCompiler64.cpp:
        (JSC::DFG::OSRExitCompiler::compileExit):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::adjustAndJumpToTarget):
        Restore callee saves from the DFG and save the appropriate ones for the
        baseline JIT. Materialize the tag registers on 64-bit platforms.

        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::emitSaveCalleeSavesFor):
        (JSC::AssemblyHelpers::emitRestoreCalleeSavesFor):
        (JSC::AssemblyHelpers::emitSaveCalleeSaves):
        (JSC::AssemblyHelpers::emitRestoreCalleeSaves):
        (JSC::AssemblyHelpers::copyCalleeSavesToVMCalleeSavesBuffer):
        (JSC::AssemblyHelpers::restoreCalleeSavesFromVMCalleeSavesBuffer):
        (JSC::AssemblyHelpers::copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer):
        (JSC::AssemblyHelpers::emitMaterializeTagCheckRegisters):
        New helpers to save and restore callee saves as well as materialize the
        tag registers' contents.

        * jit/FPRInfo.h:
        * jit/GPRInfo.h:
        (JSC::GPRInfo::toRegister):
        Updated to include FP callee save registers. Added the number of callee
        save registers and cleaned up register aliases that collide with callee
        save registers.

        * jit/JITPropertyAccess.cpp:
        (JSC::JIT::emitGetByValWithCachedId):
        (JSC::JIT::emitPutByValWithCachedId):
        (JSC::JIT::emit_op_get_by_id):
        (JSC::JIT::emit_op_put_by_id):
        * jit/JITPropertyAccess32_64.cpp:
        (JSC::JIT::emitGetByValWithCachedId):
        (JSC::JIT::emitPutByValWithCachedId):
        (JSC::JIT::emit_op_get_by_id):
        (JSC::JIT::emit_op_put_by_id):
        Use the new stubUnavailableRegisters register set to limit what registers
        are available for temporaries.
        * jit/RegisterSet.cpp:
        (JSC::RegisterSet::stubUnavailableRegisters):
        (JSC::RegisterSet::calleeSaveRegisters):
        (JSC::RegisterSet::llintBaselineCalleeSaveRegisters):
        (JSC::RegisterSet::dfgCalleeSaveRegisters):
        (JSC::RegisterSet::ftlCalleeSaveRegisters):
        * jit/RegisterSet.h:
        New register sets with the callee saves used by various tiers as well as
        one listing registers not available to stub code.

        * jit/SpecializedThunkJIT.h:
        (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
        (JSC::SpecializedThunkJIT::loadDoubleArgument):
        (JSC::SpecializedThunkJIT::returnJSValue):
        (JSC::SpecializedThunkJIT::returnDouble):
        (JSC::SpecializedThunkJIT::returnInt32):
        (JSC::SpecializedThunkJIT::returnJSCell):
        (JSC::SpecializedThunkJIT::callDoubleToDoublePreservingReturn):
        (JSC::SpecializedThunkJIT::emitSaveThenMaterializeTagRegisters):
        (JSC::SpecializedThunkJIT::emitRestoreSavedTagRegisters):
        (JSC::SpecializedThunkJIT::tagReturnAsInt32):
        * jit/ThunkGenerators.cpp:
        (JSC::nativeForGenerator):
        Changed to save and restore existing tag register contents, as they may
        contain other values. After saving the existing values, we materialize
        the tag constants.

        * jit/TempRegisterSet.h:
        (JSC::TempRegisterSet::getFPRByIndex):
        (JSC::TempRegisterSet::getFreeFPR):
        (JSC::TempRegisterSet::setByIndex):
        * offlineasm/arm64.rb:
        * offlineasm/registers.rb:
        Added methods for floating point registers to support callee save FP
        registers.

        * jit/JITArithmetic32_64.cpp:
        (JSC::JIT::emit_op_mod):
        Removed an unnecessary #if CPU(X86_64) check from this 32-bit-only file.

        * offlineasm/x86.rb:
        Fixed Windows callee saves naming.

        * runtime/VM.cpp:
        (JSC::VM::VM):
        * runtime/VM.h:
        (JSC::VM::calleeSaveRegistersBufferOffset):
        (JSC::VM::getAllCalleeSaveRegistersMap):
        Provide a RegisterSaveMap that has all registers that might be saved.
        Added a callee save buffer to be used for OSR exit and for exception
        processing in a future patch.

2015-09-10  Yusuke Suzuki  <utatane.tea@gmail.com>
-
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
r189544 r189575 546 546 <ClCompile Include="..\ftl\FTLOutput.cpp" /> 547 547 <ClCompile Include="..\ftl\FTLRecoveryOpcode.cpp" /> 548 <ClCompile Include="..\ftl\FTLRegisterAtOffset.cpp" />549 548 <ClCompile Include="..\ftl\FTLSaveRestore.cpp" /> 550 549 <ClCompile Include="..\ftl\FTLSlowPathCall.cpp" /> … … 620 619 <ClCompile Include="..\interpreter\StackVisitor.cpp" /> 621 620 <ClCompile Include="..\jit\AccessorCallJITStubRoutine.cpp" /> 622 <ClCompile Include="..\jit\ArityCheckFailReturnThunks.cpp" />623 621 <ClCompile Include="..\jit\AssemblyHelpers.cpp" /> 624 622 <ClCompile Include="..\jit\BinarySwitch.cpp" /> … … 650 648 <ClCompile Include="..\jit\PolymorphicCallStubRoutine.cpp" /> 651 649 <ClCompile Include="..\jit\Reg.cpp" /> 650 <ClCompile Include="..\jit\RegisterAtOffset.cpp" /> 651 <ClCompile Include="..\jit\RegisterAtOffsetList.cpp" /> 652 652 <ClCompile Include="..\jit\RegisterPreservationWrapperGenerator.cpp" /> 653 653 <ClCompile Include="..\jit\RegisterSet.cpp" /> … … 1298 1298 <ClInclude Include="..\ftl\FTLOutput.h" /> 1299 1299 <ClInclude Include="..\ftl\FTLRecoveryOpcode.h" /> 1300 <ClInclude Include="..\ftl\FTLRegisterAtOffset.h" />1301 1300 <ClInclude Include="..\ftl\FTLSaveRestore.h" /> 1302 1301 <ClInclude Include="..\ftl\FTLSlowPathCall.h" /> … … 1420 1419 <ClInclude Include="..\interpreter\StackVisitor.h" /> 1421 1420 <ClInclude Include="..\jit\AccessorCallJITStubRoutine.h" /> 1422 <ClInclude Include="..\jit\ArityCheckFailReturnThunks.h" />1423 1421 <ClInclude Include="..\jit\AssemblyHelpers.h" /> 1424 1422 <ClInclude Include="..\jit\BinarySwitch.h" /> … … 1451 1449 <ClInclude Include="..\jit\PolymorphicCallStubRoutine.h" /> 1452 1450 <ClInclude Include="..\jit\Reg.h" /> 1451 <ClInclude Include="..\jit\RegisterAtOffset.h" /> 1452 <ClInclude Include="..\jit\RegisterAtOffsetList.h" /> 1453 1453 <ClInclude Include="..\jit\RegisterMap.h" /> 1454 1454 <ClInclude Include="..\jit\RegisterPreservationWrapperGenerator.h" /> -
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
r189544 r189575 1492 1492 <Filter>Derived Sources</Filter> 1493 1493 </ClCompile> 1494 <ClCompile Include="..\jit\ArityCheckFailReturnThunks.cpp">1495 <Filter>jit</Filter>1496 </ClCompile>1497 1494 <ClCompile Include="..\jit\RegisterPreservationWrapperGenerator.cpp"> 1498 1495 <Filter>jit</Filter> … … 4038 4035 </ClInclude> 4039 4036 <ClInclude Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\JSDataViewPrototype.lut.h" /> 4040 <ClInclude Include="..\jit\ArityCheckFailReturnThunks.h">4041 <Filter>jit</Filter>4042 </ClInclude>4043 4037 <ClInclude Include="..\jit\RegisterPreservationWrapperGenerator.h"> 4044 4038 <Filter>jit</Filter> -
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r189544 r189575 383 383 0F6B1CBD1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */; }; 384 384 0F6B1CBE1861246A00845D97 /* RegisterPreservationWrapperGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CBC1861246A00845D97 /* RegisterPreservationWrapperGenerator.h */; settings = {ATTRIBUTES = (Private, ); }; }; 385 0F6B1CC31862C47800845D97 /* FTLRegisterAtOffset.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */; };386 0F6B1CC41862C47800845D97 /* FTLRegisterAtOffset.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */; settings = {ATTRIBUTES = (Private, ); }; };387 385 0F6B1CC51862C47800845D97 /* FTLUnwindInfo.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CC11862C47800845D97 /* FTLUnwindInfo.cpp */; }; 388 386 0F6B1CC61862C47800845D97 /* FTLUnwindInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC21862C47800845D97 /* FTLUnwindInfo.h */; settings = {ATTRIBUTES = (Private, ); }; }; 389 0F6B1CC918641DF800845D97 /* ArityCheckFailReturnThunks.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */; };390 0F6B1CCA18641DF800845D97 /* ArityCheckFailReturnThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */; settings = {ATTRIBUTES = (Private, ); }; };391 387 0F6C73501AC9F99F00BE1682 /* VariableWriteFireDetail.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */; }; 392 388 0F6C73511AC9F99F00BE1682 /* VariableWriteFireDetail.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */; settings = {ATTRIBUTES = (Private, ); }; }; … … 988 984 6514F21918B3E1670098FF8B /* Bytecodes.h in Headers */ = 
{isa = PBXBuildFile; fileRef = 6514F21718B3E1670098FF8B /* Bytecodes.h */; settings = {ATTRIBUTES = (Private, ); }; }; 989 985 65303D641447B9E100D3F904 /* ParserTokens.h in Headers */ = {isa = PBXBuildFile; fileRef = 65303D631447B9E100D3F904 /* ParserTokens.h */; settings = {ATTRIBUTES = (Private, ); }; }; 986 6540C7A01B82E1C3000F6B79 /* RegisterAtOffset.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */; }; 987 6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */; }; 990 988 6546F5211A32B313006F07D5 /* NullGetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 6546F51F1A32A59C006F07D5 /* NullGetterFunction.cpp */; }; 991 989 65525FC51A6DD801007B5495 /* NullSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65525FC31A6DD3B3007B5495 /* NullSetterFunction.cpp */; }; … … 2217 2215 0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterPreservationWrapperGenerator.cpp; sourceTree = "<group>"; }; 2218 2216 0F6B1CBC1861246A00845D97 /* RegisterPreservationWrapperGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterPreservationWrapperGenerator.h; sourceTree = "<group>"; }; 2219 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLRegisterAtOffset.cpp; path = ftl/FTLRegisterAtOffset.cpp; sourceTree = "<group>"; };2220 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLRegisterAtOffset.h; path = ftl/FTLRegisterAtOffset.h; sourceTree = "<group>"; };2221 2217 0F6B1CC11862C47800845D97 /* FTLUnwindInfo.cpp */ = {isa = PBXFileReference; 
fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLUnwindInfo.cpp; path = ftl/FTLUnwindInfo.cpp; sourceTree = "<group>"; }; 2222 2218 0F6B1CC21862C47800845D97 /* FTLUnwindInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLUnwindInfo.h; path = ftl/FTLUnwindInfo.h; sourceTree = "<group>"; }; 2223 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ArityCheckFailReturnThunks.cpp; sourceTree = "<group>"; };2224 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ArityCheckFailReturnThunks.h; sourceTree = "<group>"; };2225 2219 0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = VariableWriteFireDetail.cpp; sourceTree = "<group>"; }; 2226 2220 0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VariableWriteFireDetail.h; sourceTree = "<group>"; }; … … 2802 2796 65303D631447B9E100D3F904 /* ParserTokens.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ParserTokens.h; sourceTree = "<group>"; }; 2803 2797 65400C100A69BAF200509887 /* PropertyNameArray.h */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.c.h; path = PropertyNameArray.h; sourceTree = "<group>"; }; 2798 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterAtOffsetList.cpp; sourceTree = "<group>"; }; 2799 6540C79D1B82D99D000F6B79 /* RegisterAtOffsetList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterAtOffsetList.h; sourceTree = "<group>"; 
}; 2800 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RegisterAtOffset.cpp; sourceTree = "<group>"; }; 2801 6540C79F1B82D9CE000F6B79 /* RegisterAtOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterAtOffset.h; sourceTree = "<group>"; }; 2804 2802 6546F51F1A32A59C006F07D5 /* NullGetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; lineEnding = 0; path = NullGetterFunction.cpp; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.cpp; }; 2805 2803 6546F5201A32A59C006F07D5 /* NullGetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = NullGetterFunction.h; sourceTree = "<group>"; }; … … 3973 3971 0F485325187DFDEC0083B687 /* FTLRecoveryOpcode.cpp */, 3974 3972 0F485326187DFDEC0083B687 /* FTLRecoveryOpcode.h */, 3975 0F6B1CBF1862C47800845D97 /* FTLRegisterAtOffset.cpp */,3976 0F6B1CC01862C47800845D97 /* FTLRegisterAtOffset.h */,3977 3973 0FCEFAA91804C13E00472CE4 /* FTLSaveRestore.cpp */, 3978 3974 0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */, … … 4105 4101 0F7576D018E1FEE9002EF4CD /* AccessorCallJITStubRoutine.cpp */, 4106 4102 0F7576D118E1FEE9002EF4CD /* AccessorCallJITStubRoutine.h */, 4107 0F6B1CC718641DF800845D97 /* ArityCheckFailReturnThunks.cpp */,4108 0F6B1CC818641DF800845D97 /* ArityCheckFailReturnThunks.h */,4109 4103 0F24E53B17EA9F5900ABB217 /* AssemblyHelpers.cpp */, 4110 4104 0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */, … … 4159 4153 A76F54A213B28AAB00EF2BCE /* JITWriteBarrier.h */, 4160 4154 A76C51741182748D00715B05 /* JSInterfaceJIT.h */, 4161 0FEE98421A89227500754E93 /* SetupVarargsFrame.cpp */,4162 0FEE98401A8865B600754E93 /* SetupVarargsFrame.h */,4163 4155 0FE834151A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp */, 4164 4156 0FE834161A6EF97B00D04847 /* 
PolymorphicCallStubRoutine.h */, 4165 4157 0FA7A8E918B413C80052371D /* Reg.cpp */, 4166 4158 0FA7A8EA18B413C80052371D /* Reg.h */, 4159 6540C79E1B82D9CE000F6B79 /* RegisterAtOffset.cpp */, 4160 6540C79F1B82D9CE000F6B79 /* RegisterAtOffset.h */, 4161 6540C79C1B82D99D000F6B79 /* RegisterAtOffsetList.cpp */, 4162 6540C79D1B82D99D000F6B79 /* RegisterAtOffsetList.h */, 4167 4163 623A37EB1B87A7BD00754209 /* RegisterMap.h */, 4168 4164 0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */, … … 4174 4170 0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */, 4175 4171 0F24E54B17EE274900ABB217 /* ScratchRegisterAllocator.h */, 4172 0FEE98421A89227500754E93 /* SetupVarargsFrame.cpp */, 4173 0FEE98401A8865B600754E93 /* SetupVarargsFrame.h */, 4176 4174 A709F2EF17A0AC0400512E98 /* SlowPathCall.h */, 4177 4175 A7386551118697B400540279 /* SpecializedThunkJIT.h */, … … 5926 5924 BCF605140E203EF800B9A64D /* ArgList.h in Headers */, 5927 5925 2A88067919107D5500CB0BBB /* DFGFunctionWhitelist.h in Headers */, 5928 0F6B1CCA18641DF800845D97 /* ArityCheckFailReturnThunks.h in Headers */,5929 5926 0F6B1CB91861244C00845D97 /* ArityCheckMode.h in Headers */, 5930 5927 A1A009C11831A26E00CF8711 /* ARM64Assembler.h in Headers */, … … 6326 6323 0F48532A187DFDEC0083B687 /* FTLRecoveryOpcode.h in Headers */, 6327 6324 E3794E761B77EB97005543AE /* ModuleAnalyzer.h in Headers */, 6328 0F6B1CC41862C47800845D97 /* FTLRegisterAtOffset.h in Headers */,6329 6325 0FCEFAAC1804C13E00472CE4 /* FTLSaveRestore.h in Headers */, 6330 6326 0F25F1B2181635F300522F39 /* FTLSlowPathCall.h in Headers */, … … 7393 7389 0F55F0F414D1063900AC7649 /* AbstractPC.cpp in Sources */, 7394 7390 147F39BD107EC37600427A48 /* ArgList.cpp in Sources */, 7395 0F6B1CC918641DF800845D97 /* ArityCheckFailReturnThunks.cpp in Sources */,7396 7391 797E07A91B8FCFB9008400BA /* JSGlobalLexicalEnvironment.cpp in Sources */, 7397 7392 E3794E751B77EB97005543AE /* ModuleAnalyzer.cpp in Sources */, … … 7656 7651 
0FEA0A2A1709629600BB722C /* FTLOutput.cpp in Sources */, 7657 7652 0F485329187DFDEC0083B687 /* FTLRecoveryOpcode.cpp in Sources */, 7658 0F6B1CC31862C47800845D97 /* FTLRegisterAtOffset.cpp in Sources */,7659 7653 0FCEFAAB1804C13E00472CE4 /* FTLSaveRestore.cpp in Sources */, 7660 7654 0F25F1B1181635F300522F39 /* FTLSlowPathCall.cpp in Sources */, … … 7927 7921 0FF60AC316740F8800029779 /* ReduceWhitespace.cpp in Sources */, 7928 7922 0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */, 7923 6540C7A11B82E1C3000F6B79 /* RegisterAtOffsetList.cpp in Sources */, 7924 6540C7A01B82E1C3000F6B79 /* RegisterAtOffset.cpp in Sources */, 7929 7925 14280841107EC0930013E7B2 /* RegExp.cpp in Sources */, 7930 7926 A1712B3B11C7B212007A5315 /* RegExpCache.cpp in Sources */, -
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h
r189493 r189575 118 118 CodeLocationNearCall hotPathOther, unsigned calleeGPR) 119 119 { 120 m_registerPreservationMode = static_cast<unsigned>( MustPreserveRegisters);120 m_registerPreservationMode = static_cast<unsigned>(RegisterPreservationNotRequired); 121 121 m_callType = callType; 122 122 m_codeOrigin = codeOrigin; -
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r189556 r189575 69 69 #include <wtf/StringPrintStream.h> 70 70 #include <wtf/text/UniquedStringImpl.h> 71 72 #if ENABLE(JIT) 73 #include "RegisterAtOffsetList.h" 74 #endif 71 75 72 76 #if ENABLE(DFG_JIT) … … 1887 1891 if (size_t size = unlinkedCodeBlock->numberOfObjectAllocationProfiles()) 1888 1892 m_objectAllocationProfiles.resizeToFit(size); 1893 1894 #if ENABLE(JIT) 1895 setCalleeSaveRegisters(RegisterSet::llintBaselineCalleeSaveRegisters()); 1896 #endif 1889 1897 1890 1898 // Copy and translate the UnlinkedInstructions … … 3330 3338 3331 3339 #if ENABLE(JIT) 3340 void CodeBlock::setCalleeSaveRegisters(RegisterSet calleeSaveRegisters) 3341 { 3342 m_calleeSaveRegisters = std::make_unique<RegisterAtOffsetList>(calleeSaveRegisters); 3343 } 3344 3345 void CodeBlock::setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList> registerAtOffsetList) 3346 { 3347 m_calleeSaveRegisters = WTF::move(registerAtOffsetList); 3348 } 3349 3350 static size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaveRegisters) 3351 { 3352 static const unsigned cpuRegisterSize = sizeof(void*); 3353 return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * cpuRegisterSize) / sizeof(Register)); 3354 3355 } 3356 3357 size_t CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() 3358 { 3359 return roundCalleeSaveSpaceAsVirtualRegisters(numberOfLLIntBaselineCalleeSaveRegisters()); 3360 } 3361 3362 size_t CodeBlock::calleeSaveSpaceAsVirtualRegisters() 3363 { 3364 return roundCalleeSaveSpaceAsVirtualRegisters(m_calleeSaveRegisters->size()); 3365 } 3366 3332 3367 void CodeBlock::countReoptimization() 3333 3368 { -
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
r189571 r189575 81 81 class ExecState; 82 82 class LLIntOffsetsExtractor; 83 class RegisterAtOffsetList; 83 84 class TypeLocation; 84 85 class JSModuleEnvironment; … … 732 733 void countReoptimization(); 733 734 #if ENABLE(JIT) 735 static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return RegisterSet::llintBaselineCalleeSaveRegisters().numberOfSetRegisters(); } 736 static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters(); 737 size_t calleeSaveSpaceAsVirtualRegisters(); 738 734 739 unsigned numberOfDFGCompiles(); 735 740 … … 817 822 bool shouldReoptimizeNow(); 818 823 bool shouldReoptimizeFromLoopNow(); 824 825 void setCalleeSaveRegisters(RegisterSet); 826 void setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList>); 827 828 RegisterAtOffsetList* calleeSaveRegisters() const { return m_calleeSaveRegisters.get(); } 819 829 #else // No JIT 830 static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return 0; } 831 static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters() { return 0; }; 820 832 void optimizeAfterWarmUp() { } 821 833 unsigned numberOfDFGCompiles() { return 0; } … … 856 868 // FIXME: Make these remaining members private. 857 869 870 int m_numLocalRegistersForCalleeSaves; 858 871 int m_numCalleeRegisters; 859 872 int m_numVars; … … 1016 1029 RefPtr<JITCode> m_jitCode; 1017 1030 #if ENABLE(JIT) 1031 std::unique_ptr<RegisterAtOffsetList> m_calleeSaveRegisters; 1018 1032 Bag<StructureStubInfo> m_stubInfos; 1019 1033 Bag<ByValInfo> m_byValInfos; -
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
r189504 r189575 159 159 constantRegister = nullptr; 160 160 161 allocateCalleeSaveSpace(); 162 161 163 m_codeBlock->setNumParameters(1); // Allocate space for "this" 162 164 … … 199 201 if (m_isBuiltinFunction) 200 202 m_shouldEmitDebugHooks = false; 203 204 allocateCalleeSaveSpace(); 201 205 202 206 SymbolTable* functionSymbolTable = SymbolTable::create(*m_vm); … … 494 498 constantRegister = nullptr; 495 499 500 allocateCalleeSaveSpace(); 501 496 502 m_codeBlock->setNumParameters(1); 497 503 … … 537 543 if (m_isBuiltinFunction) 538 544 m_shouldEmitDebugHooks = false; 545 546 allocateCalleeSaveSpace(); 539 547 540 548 SymbolTable* moduleEnvironmentSymbolTable = SymbolTable::create(*m_vm); … … 3063 3071 } 3064 3072 return LabelScopePtr::null(); 3073 } 3074 3075 void BytecodeGenerator::allocateCalleeSaveSpace() 3076 { 3077 size_t virtualRegisterCountForCalleeSaves = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); 3078 3079 for (size_t i = 0; i < virtualRegisterCountForCalleeSaves; i++) { 3080 RegisterID* localRegister = addVar(); 3081 localRegister->ref(); 3082 m_localRegistersForCalleeSaveRegisters.append(localRegister); 3083 } 3065 3084 } 3066 3085 -
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
r189504 r189575 698 698 ALWAYS_INLINE void rewindUnaryOp(); 699 699 700 void allocateCalleeSaveSpace(); 700 701 void allocateAndEmitScope(); 701 702 RegisterID* emitLoadArrowFunctionThis(RegisterID*); … … 811 812 RegisterID* m_linkTimeConstantRegisters[LinkTimeConstantCount]; 812 813 814 SegmentedVector<RegisterID*, 16> m_localRegistersForCalleeSaveRegisters; 813 815 SegmentedVector<RegisterID, 32> m_constantPoolRegisters; 814 816 SegmentedVector<RegisterID, 32> m_calleeRegisters; -
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r189293 r189575 29 29 #if ENABLE(DFG_JIT) 30 30 31 #include "ArityCheckFailReturnThunks.h"32 31 #include "CodeBlock.h" 33 32 #include "DFGFailedFinalizer.h" … … 103 102 emitFunctionPrologue(); 104 103 emitPutImmediateToCallFrameHeader(m_codeBlock, JSStack::CodeBlock); 105 jitAssertTagsInPlace(); 104 } 105 106 void JITCompiler::compileSetupRegistersForEntry() 107 { 108 emitSaveCalleeSaves(); 109 emitMaterializeTagCheckRegisters(); 106 110 } 107 111 … … 119 123 if (!m_exceptionChecksWithCallFrameRollback.empty()) { 120 124 m_exceptionChecksWithCallFrameRollback.link(this); 125 126 copyCalleeSavesToVMCalleeSavesBuffer(); 121 127 122 128 // lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*). … … 137 143 if (!m_exceptionChecks.empty()) { 138 144 m_exceptionChecks.link(this); 145 146 copyCalleeSavesToVMCalleeSavesBuffer(); 139 147 140 148 // lookupExceptionHandler is passed two arguments, the VM and the exec (the CallFrame*). … … 295 303 addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister); 296 304 checkStackPointerAlignment(); 305 compileSetupRegistersForEntry(); 297 306 compileBody(); 298 307 setEndOfMainPath(); … … 358 367 addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister); 359 368 checkStackPointerAlignment(); 369 370 compileSetupRegistersForEntry(); 360 371 361 372 // === Function body code generation === … … 398 409 branchTest32(Zero, GPRInfo::returnValueGPR).linkTo(fromArityCheck, this); 399 410 emitStoreCodeOrigin(CodeOrigin(0)); 400 GPRReg thunkReg = GPRInfo::argumentGPR1;401 CodeLocationLabel* arityThunkLabels =402 m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters());403 move(TrustedImmPtr(arityThunkLabels), thunkReg);404 loadPtr(BaseIndex(thunkReg, GPRInfo::returnValueGPR, timesPtr()), thunkReg);405 411 move(GPRInfo::returnValueGPR, 
GPRInfo::argumentGPR0); 406 412 m_callArityFixup = call(); -
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.h
r189272 r189575 266 266 // Internal implementation to compile. 267 267 void compileEntry(); 268 void compileSetupRegistersForEntry(); 268 269 void compileBody(); 269 270 void link(LinkBuffer&); -
trunk/Source/JavaScriptCore/dfg/DFGOSREntry.cpp
r189123 r189575 309 309 pivot[i] = JSValue(); 310 310 } 311 312 // 6) Fix the call frame to have the right code block. 311 312 // 6) Copy our callee saves to buffer. 313 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 314 RegisterAtOffsetList* registerSaveLocations = codeBlock->calleeSaveRegisters(); 315 RegisterAtOffsetList* allCalleeSaves = vm->getAllCalleeSaveRegisterOffsets(); 316 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 317 318 unsigned registerCount = registerSaveLocations->size(); 319 for (unsigned i = 0; i < registerCount; i++) { 320 RegisterAtOffset currentEntry = registerSaveLocations->at(i); 321 if (dontSaveRegisters.get(currentEntry.reg())) 322 continue; 323 RegisterAtOffset* vmCalleeSavesEntry = allCalleeSaves->find(currentEntry.reg()); 324 325 *(bitwise_cast<intptr_t*>(pivot - 1) - currentEntry.offsetAsIndex()) = vm->calleeSaveRegistersBuffer[vmCalleeSavesEntry->offsetAsIndex()]; 326 } 327 #endif 328 329 // 7) Fix the call frame to have the right code block. 313 330 314 331 *bitwise_cast<CodeBlock**>(pivot - 1 - JSStack::CodeBlock) = codeBlock; -
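Step 6 above translates each callee save from the VM-wide buffer, which is laid out by the VM's canonical register offsets, into this code block's own save slots. A simplified model of that remapping using plain containers (the real code indexes raw stack memory through the `pivot` pointer rather than a vector):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A register's save slot, expressed as a frame-relative slot index.
struct RegisterAtOffset {
    int reg;            // register number
    int offsetAsIndex;  // slot index (offset / sizeof(void*))
};

// Copy each callee save from the VM-wide buffer (indexed by `vmOffsets`,
// the VM's canonical layout) into the slots this code block expects.
void copyCalleeSavesToFrame(std::vector<int64_t>& frame,
                            const std::vector<RegisterAtOffset>& frameOffsets,
                            const std::map<int, int>& vmOffsets,
                            const std::vector<int64_t>& vmBuffer)
{
    for (const RegisterAtOffset& entry : frameOffsets) {
        int vmIndex = vmOffsets.at(entry.reg);
        frame[entry.offsetAsIndex] = vmBuffer[vmIndex];
    }
}
```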
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompiler32_64.cpp
r189192 r189575 245 245 CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister); 246 246 247 // Restore the DFG callee saves and then save the ones the baseline JIT uses. 248 m_jit.emitRestoreCalleeSaves(); 249 m_jit.emitSaveCalleeSavesFor(m_jit.baselineCodeBlock()); 250 247 251 // Do all data format conversions and store the results into the stack. 248 252 249 253 for (size_t index = 0; index < operands.size(); ++index) { 250 254 const ValueRecovery& recovery = operands[index]; 251 int operand = operands.operandForIndex(index); 252 255 VirtualRegister reg = operands.virtualRegisterForIndex(index); 256 257 if (reg.isLocal() && reg.toLocal() < static_cast<int>(m_jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters())) 258 continue; 259 260 int operand = reg.offset(); 261 253 262 switch (recovery.technique()) { 254 263 case InPair: -
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompiler64.cpp
r189192 r189575 203 203 204 204 // And voila, all GPRs are free to reuse. 205 205 206 206 // Save all state from FPRs into the scratch buffer. 207 207 … … 255 255 CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister); 256 256 257 // Restore the DFG callee saves and then save the ones the baseline JIT uses. 258 m_jit.emitRestoreCalleeSaves(); 259 m_jit.emitSaveCalleeSavesFor(m_jit.baselineCodeBlock()); 260 261 // The tag registers are needed to materialize recoveries below. 262 m_jit.emitMaterializeTagCheckRegisters(); 263 257 264 // Do all data format conversions and store the results into the stack. 258 265 259 266 for (size_t index = 0; index < operands.size(); ++index) { 260 267 const ValueRecovery& recovery = operands[index]; 261 int operand = operands.operandForIndex(index); 262 268 VirtualRegister reg = operands.virtualRegisterForIndex(index); 269 270 if (reg.isLocal() && reg.toLocal() < static_cast<int>(m_jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters())) 271 continue; 272 273 int operand = reg.offset(); 274 263 275 switch (recovery.technique()) { 264 276 case InGPR: … … 321 333 } 322 334 } 323 335 324 336 // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments 325 337 // recoveries don't recursively refer to each other. But, we don't try to assume that they only … … 371 383 372 384 reifyInlinedCallFrames(m_jit, exit); 373 385 374 386 // And finish. 375 387 adjustAndJumpToTarget(m_jit, exit); -
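Both OSR exit compilers (32_64 and 64) now skip recoveries whose target local falls inside the baseline code block's callee-save reservation, since `emitSaveCalleeSavesFor()` has just written those slots. A sketch of that filter over operand locals (simplified from the real `VirtualRegister`-based loop):

```cpp
#include <cstddef>
#include <vector>

// Keep only the locals whose slots are safe to overwrite with recovered
// values: indices below `calleeSaveLocals` hold baseline callee saves.
std::vector<int> operandsToRecover(const std::vector<int>& localIndices,
                                   size_t calleeSaveLocals)
{
    std::vector<int> result;
    for (int local : localIndices) {
        if (local < static_cast<int>(calleeSaveLocals))
            continue; // slot already holds a saved callee-save register
        result.push_back(local);
    }
    return result;
}
```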
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r188932 r189575 198 198 199 199 jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock))); 200 jit.emitSaveCalleeSavesFor(baselineCodeBlock, static_cast<VirtualRegister>(inlineCallFrame->stackOffset)); 200 201 if (!inlineCallFrame->isVarargs()) 201 202 jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount))); … … 251 252 { 252 253 #if ENABLE(GGC) 253 jit.move(AssemblyHelpers::TrustedImmPtr(jit.codeBlock()->ownerExecutable()), GPRInfo:: nonArgGPR0);254 osrWriteBarrier(jit, GPRInfo:: nonArgGPR0, GPRInfo::nonArgGPR1);254 jit.move(AssemblyHelpers::TrustedImmPtr(jit.codeBlock()->ownerExecutable()), GPRInfo::argumentGPR1); 255 osrWriteBarrier(jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0); 255 256 InlineCallFrameSet* inlineCallFrames = jit.codeBlock()->jitCode()->dfgCommon()->inlineCallFrames.get(); 256 257 if (inlineCallFrames) { 257 258 for (InlineCallFrame* inlineCallFrame : *inlineCallFrames) { 258 259 ScriptExecutable* ownerExecutable = inlineCallFrame->executable.get(); 259 jit.move(AssemblyHelpers::TrustedImmPtr(ownerExecutable), GPRInfo:: nonArgGPR0);260 osrWriteBarrier(jit, GPRInfo:: nonArgGPR0, GPRInfo::nonArgGPR1);260 jit.move(AssemblyHelpers::TrustedImmPtr(ownerExecutable), GPRInfo::argumentGPR1); 261 osrWriteBarrier(jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0); 261 262 } 262 263 } … … 278 279 jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(baselineCodeBlock) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister); 279 280 280 jit.jitAssertTagsInPlace();281 282 281 jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget), GPRInfo::regT2); 283 282 jit.jump(GPRInfo::regT2); -
trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp
r189553 r189575 243 243 return FailPath; 244 244 } 245 246 codeBlock->setCalleeSaveRegisters(RegisterSet::dfgCalleeSaveRegisters()); 245 247 246 248 // By this point the DFG bytecode parser will have potentially mutated various tables -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
r189272 r189575 312 312 } 313 313 314 result.merge(RegisterSet::s pecialRegisters());314 result.merge(RegisterSet::stubUnavailableRegisters()); 315 315 316 316 return result; -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r189325 r189575 3132 3132 } 3133 3133 3134 m_jit.emitRestoreCalleeSaves(); 3134 3135 m_jit.emitFunctionEpilogue(); 3135 3136 m_jit.ret(); -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r189325 r189575 3191 3191 m_jit.move(op1.gpr(), GPRInfo::returnValueGPR); 3192 3192 3193 m_jit.emitRestoreCalleeSaves(); 3193 3194 m_jit.emitFunctionEpilogue(); 3194 3195 m_jit.ret(); … … 4791 4792 appendCallSetResult(triggerOSREntryNow, tempGPR); 4792 4793 MacroAssembler::Jump dontEnter = m_jit.branchTestPtr(MacroAssembler::Zero, tempGPR); 4794 m_jit.emitRestoreCalleeSaves(); 4793 4795 m_jit.jump(tempGPR); 4794 4796 dontEnter.link(&m_jit); -
trunk/Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
r182167 r189575 130 130 131 131 Vector<unsigned> allocation(usedLocals.size()); 132 m_graph.m_nextMachineLocal = 0;132 m_graph.m_nextMachineLocal = codeBlock()->calleeSaveSpaceAsVirtualRegisters(); 133 133 for (unsigned i = 0; i < usedLocals.size(); ++i) { 134 134 if (!usedLocals.get(i)) { -
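StackLayoutPhase now starts machine-local numbering after the callee-save reservation instead of at zero. A sketch of the allocation loop with that offset (simplified from the phase's actual bitvector bookkeeping; -1 marks an unused local):

```cpp
#include <cstddef>
#include <vector>

// Assign each used local a machine slot, starting after the slots that are
// reserved for callee-save registers (m_nextMachineLocal's new start value).
std::vector<int> allocateMachineLocals(const std::vector<bool>& usedLocals,
                                       size_t calleeSaveSlots)
{
    std::vector<int> allocation(usedLocals.size(), -1);
    int nextMachineLocal = static_cast<int>(calleeSaveSlots);
    for (size_t i = 0; i < usedLocals.size(); ++i) {
        if (usedLocals[i])
            allocation[i] = nextMachineLocal++;
    }
    return allocation;
}
```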
trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp
r189325 r189575 330 330 static void fixFunctionBasedOnStackMaps( 331 331 State& state, CodeBlock* codeBlock, JITCode* jitCode, GeneratedFunction generatedFunction, 332 StackMaps::RecordMap& recordMap , bool didSeeUnwindInfo)332 StackMaps::RecordMap& recordMap) 333 333 { 334 334 Graph& graph = state.graph; … … 366 366 // At this point it's perfectly fair to just blow away all state and restore the 367 367 // JS JIT view of the universe. 368 checkJIT.copyCalleeSavesToVMCalleeSavesBuffer(); 368 369 checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0); 369 370 checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1); … … 372 373 373 374 stackOverflowException = checkJIT.label(); 375 checkJIT.copyCalleeSavesToVMCalleeSavesBuffer(); 374 376 checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0); 375 377 checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1); … … 393 395 if (exitThunkGenerator.didThings()) { 394 396 RELEASE_ASSERT(state.finalizer->osrExit.size()); 395 RELEASE_ASSERT(didSeeUnwindInfo);396 397 397 398 auto linkBuffer = std::make_unique<LinkBuffer>( … … 815 816 } 816 817 817 bool didSeeUnwindInfo = state.jitCode->unwindInfo.parse(818 std::unique_ptr<RegisterAtOffsetList> registerOffsets = parseUnwindInfo( 818 819 state.unwindDataSection, state.unwindDataSectionSize, 819 820 state.generatedFunction); 820 821 if (shouldShowDisassembly()) { 821 822 dataLog("Unwind info for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ":\n"); 822 if (didSeeUnwindInfo) 823 dataLog(" ", state.jitCode->unwindInfo, "\n"); 824 else 825 dataLog(" <no unwind info>\n"); 826 } 823 dataLog(" ", *registerOffsets, "\n"); 824 } 825 state.graph.m_codeBlock->setCalleeSaveRegisters(WTF::move(registerOffsets)); 827 826 828 827 if (state.stackmapsSection && state.stackmapsSection->size()) { … … 847 846 fixFunctionBasedOnStackMaps( 848 847 state, state.graph.m_codeBlock, state.jitCode.get(), state.generatedFunction, 849 
recordMap , didSeeUnwindInfo);848 recordMap); 850 849 if (state.allocationFailed) 851 850 return; -
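`parseUnwindInfo()` now hands back a `RegisterAtOffsetList` that the code block owns and the OSR exit compiler queries with `find()` and `indexOf()`. A minimal model of that container under those assumed semantics (linear search; `UINT_MAX` for "not preserved", matching how the exit compiler tests `unwindIndex`):

```cpp
#include <climits>
#include <cstddef>
#include <vector>

// An entry pairing a register number with its frame-relative save offset.
struct RegisterAtOffset {
    int reg;
    int offset;
};

// Minimal model of RegisterAtOffsetList as used by the OSR exit compiler.
struct RegisterAtOffsetList {
    std::vector<RegisterAtOffset> entries;

    void append(RegisterAtOffset entry) { entries.push_back(entry); }
    size_t size() const { return entries.size(); }

    const RegisterAtOffset* find(int reg) const
    {
        for (const RegisterAtOffset& entry : entries) {
            if (entry.reg == reg)
                return &entry;
        }
        return nullptr;
    }

    unsigned indexOf(int reg) const
    {
        for (size_t i = 0; i < entries.size(); ++i) {
            if (entries[i].reg == reg)
                return static_cast<unsigned>(i);
        }
        return UINT_MAX; // this frame did not preserve the register
    }
};
```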
trunk/Source/JavaScriptCore/ftl/FTLJITCode.h
r186691 r189575 85 85 SegmentedVector<OSRExit, 8> osrExit; 86 86 StackMaps stackmaps; 87 UnwindInfo unwindInfo;88 87 89 88 private: -
trunk/Source/JavaScriptCore/ftl/FTLLink.cpp
r189293 r189575 29 29 #if ENABLE(FTL_JIT) 30 30 31 #include "ArityCheckFailReturnThunks.h"32 31 #include "CCallHelpers.h" 33 32 #include "CodeBlockWithJITType.h" … … 55 54 codeBlock->clearSwitchJumpTables(); 56 55 57 // FIXME: Need to know the real frame register count. 58 // https://bugs.webkit.org/show_bug.cgi?id=125727 59 state.jitCode->common.frameRegisterCount = 1000; 56 state.jitCode->common.frameRegisterCount = state.jitCode->stackmaps.stackSizeForLocals() / sizeof(void*); 60 57 61 58 state.jitCode->common.requiredRegisterCountForExit = graph.requiredRegisterCountForExit(); … … 170 167 mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0)); 171 168 jit.emitFunctionPrologue(); 172 CodeLocationLabel* arityThunkLabels =173 vm.arityCheckFailReturnThunks->returnPCsFor(vm, codeBlock->numParameters());174 jit.move(CCallHelpers::TrustedImmPtr(arityThunkLabels), GPRInfo::argumentGPR1);175 jit.loadPtr(CCallHelpers::BaseIndex(GPRInfo::argumentGPR1, GPRInfo::argumentGPR0, CCallHelpers::timesPtr()), GPRInfo::argumentGPR1);176 169 CCallHelpers::Call callArityFixup = jit.call(); 177 170 jit.emitFunctionEpilogue(); -
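The FIXME about the frame register count is resolved by deriving it from the stackmap's locals size instead of the hard-coded 1000. The conversion itself is just bytes to pointer-sized slots; a sketch (the real byte count comes from the stackmaps' `stackSizeForLocals()`, modeled here as a plain argument):

```cpp
#include <cstddef>

// Convert the stackmap-reported byte size of the locals area into a count
// of pointer-sized frame registers.
constexpr size_t frameRegisterCount(size_t stackSizeForLocalsInBytes)
{
    return stackSizeForLocalsInBytes / sizeof(void*);
}
```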
trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
r189362 r189575 212 212 exit.m_values.size() + numMaterializations + maxMaterializationNumArguments) + 213 213 requiredScratchMemorySizeInBytes() + 214 jitCode->unwindInfo.m_registers.size() * sizeof(uint64_t));214 codeBlock->calleeSaveRegisters()->size() * sizeof(uint64_t)); 215 215 EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0; 216 216 EncodedJSValue* materializationPointers = scratch + exit.m_values.size(); … … 385 385 // Before we start messing with the frame, we need to set aside any registers that the 386 386 // FTL code was preserving. 387 for (unsigned i = jitCode->unwindInfo.m_registers.size(); i--;) {388 RegisterAtOffset entry = jitCode->unwindInfo.m_registers[i];387 for (unsigned i = codeBlock->calleeSaveRegisters()->size(); i--;) { 388 RegisterAtOffset entry = codeBlock->calleeSaveRegisters()->at(i); 389 389 jit.load64( 390 390 MacroAssembler::Address(MacroAssembler::framePointerRegister, entry.offset()), … … 433 433 arityIntact.link(&jit); 434 434 435 CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(exit.m_codeOrigin); 436 435 437 // First set up SP so that our data doesn't get clobbered by signals. 436 438 unsigned conservativeStackDelta = 437 439 registerPreservationOffset() + 438 exit.m_values.numberOfLocals() * sizeof(Register) +440 (exit.m_values.numberOfLocals() + baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters()) * sizeof(Register) + 439 441 maxFrameExtentForSlowPathCall; 440 442 conservativeStackDelta = WTF::roundUpToMultipleOf( … … 458 460 jit.addPtr(MacroAssembler::TrustedImm32(sizeof(Register)), GPRInfo::regT1); 459 461 jit.branchTest32(MacroAssembler::NonZero, GPRInfo::regT2).linkTo(loop, &jit); 460 461 // At this point regT1 points to where we would save our registers. 
Save them here.462 ptrdiff_t currentOffset = 0; 462 463 RegisterAtOffsetList* baselineCalleeSaves = baselineCodeBlock->calleeSaveRegisters(); 464 463 465 for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) { 464 if (!toSave.get(reg) )466 if (!toSave.get(reg) || !reg.isGPR()) 465 467 continue; 466 currentOffset += sizeof(Register); 467 unsigned unwindIndex = jitCode->unwindInfo.indexOf(reg); 468 if (unwindIndex == UINT_MAX) { 469 // The FTL compilation didn't preserve this register. This means that it also 470 // didn't use the register. So its value at the beginning of OSR exit should be 471 // preserved by the thunk. Luckily, we saved all registers into the register 472 // scratch buffer, so we can restore them from there. 473 jit.load64(registerScratch + offsetOfReg(reg), GPRInfo::regT0); 468 unsigned unwindIndex = codeBlock->calleeSaveRegisters()->indexOf(reg); 469 RegisterAtOffset* baselineRegisterOffset = baselineCalleeSaves->find(reg); 470 471 if (reg.isGPR()) { 472 GPRReg regToLoad = baselineRegisterOffset ? GPRInfo::regT0 : reg.gpr(); 473 474 if (unwindIndex == UINT_MAX) { 475 // The FTL compilation didn't preserve this register. This means that it also 476 // didn't use the register. So its value at the beginning of OSR exit should be 477 // preserved by the thunk. Luckily, we saved all registers into the register 478 // scratch buffer, so we can restore them from there. 479 jit.load64(registerScratch + offsetOfReg(reg), regToLoad); 480 } else { 481 // The FTL compilation preserved the register. Its new value is therefore 482 // irrelevant, but we can get the value that was preserved by using the unwind 483 // data. We've already copied all unwind-able preserved registers into the unwind 484 // scratch buffer, so we can get it from there. 
485 jit.load64(unwindScratch + unwindIndex, regToLoad); 486 } 487 488 if (baselineRegisterOffset) 489 jit.store64(regToLoad, MacroAssembler::Address(MacroAssembler::framePointerRegister, baselineRegisterOffset->offset())); 474 490 } else { 475 // The FTL compilation preserved the register. Its new value is therefore 476 // irrelevant, but we can get the value that was preserved by using the unwind 477 // data. We've already copied all unwind-able preserved registers into the unwind 478 // scratch buffer, so we can get it from there. 479 jit.load64(unwindScratch + unwindIndex, GPRInfo::regT0); 491 FPRReg fpRegToLoad = baselineRegisterOffset ? FPRInfo::fpRegT0 : reg.fpr(); 492 493 if (unwindIndex == UINT_MAX) 494 jit.loadDouble(MacroAssembler::TrustedImmPtr(registerScratch + offsetOfReg(reg)), fpRegToLoad); 495 else 496 jit.loadDouble(MacroAssembler::TrustedImmPtr(unwindScratch + unwindIndex), fpRegToLoad); 497 498 if (baselineRegisterOffset) 499 jit.storeDouble(fpRegToLoad, MacroAssembler::Address(MacroAssembler::framePointerRegister, baselineRegisterOffset->offset())); 480 500 } 481 jit.store64(GPRInfo::regT0, AssemblyHelpers::Address(GPRInfo::regT1, currentOffset)); 482 } 483 484 // We need to make sure that we return into the register restoration thunk. This works 485 // differently depending on whether or not we had arity issues. 486 MacroAssembler::Jump arityIntactForReturnPC = jit.branch32( 487 MacroAssembler::GreaterThanOrEqual, 488 CCallHelpers::payloadFor(JSStack::ArgumentCount), 489 MacroAssembler::TrustedImm32(codeBlock->numParameters())); 490 491 // The return PC in the call frame header points at exactly the right arity restoration 492 // thunk. We don't want to change that. But the arity restoration thunk's frame has a 493 // return PC and we want to reroute that to our register restoration thunk. The arity 494 // restoration's return PC just just below regT1, and the register restoration's return PC 495 // is right at regT1. 
496 jit.loadPtr(MacroAssembler::Address(GPRInfo::regT1, -static_cast<ptrdiff_t>(sizeof(Register))), GPRInfo::regT0); 497 jit.storePtr(GPRInfo::regT0, GPRInfo::regT1); 498 jit.storePtr( 499 MacroAssembler::TrustedImmPtr(vm->getCTIStub(registerRestorationThunkGenerator).code().executableAddress()), 500 MacroAssembler::Address(GPRInfo::regT1, -static_cast<ptrdiff_t>(sizeof(Register)))); 501 502 MacroAssembler::Jump arityReturnPCReady = jit.jump(); 503 504 arityIntactForReturnPC.link(&jit); 505 506 jit.loadPtr(MacroAssembler::Address(MacroAssembler::framePointerRegister, CallFrame::returnPCOffset()), GPRInfo::regT0); 507 jit.storePtr(GPRInfo::regT0, GPRInfo::regT1); 508 jit.storePtr( 509 MacroAssembler::TrustedImmPtr(vm->getCTIStub(registerRestorationThunkGenerator).code().executableAddress()), 510 MacroAssembler::Address(MacroAssembler::framePointerRegister, CallFrame::returnPCOffset())); 511 512 arityReturnPCReady.link(&jit); 513 501 } 502 503 size_t baselineVirtualRegistersForCalleeSaves = baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters(); 504 514 505 // Now get state out of the scratch buffer and place it back into the stack. The values are 515 506 // already reboxed so we just move them. 516 507 for (unsigned index = exit.m_values.size(); index--;) { 517 int operand = exit.m_values.operandForIndex(index); 518 508 VirtualRegister reg = exit.m_values.virtualRegisterForIndex(index); 509 510 if (reg.isLocal() && reg.toLocal() < static_cast<int>(baselineVirtualRegistersForCalleeSaves)) 511 continue; 512 519 513 jit.load64(scratch + index, GPRInfo::regT0); 520 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor( static_cast<VirtualRegister>(operand)));514 jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(reg)); 521 515 } 522 516 -
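The rewritten restore loop distinguishes two cases per callee-save register: if the FTL frame preserved it (the register has an unwind index), the pre-exit value lives in the unwind scratch buffer; otherwise the value was still live in the register at exit and was dumped into the register scratch buffer. A scalar model of that choice (buffer layout simplified; `UINT_MAX` plays the role of "no unwind index", as in the real code):

```cpp
#include <climits>
#include <cstdint>
#include <vector>

// Pick the value to restore for one callee-save register during FTL OSR
// exit. unwindIndex == UINT_MAX means the FTL frame never spilled the
// register, so the entry value survived in the register itself and was
// captured in the register scratch buffer; otherwise the spilled copy in
// the unwind scratch buffer is authoritative.
int64_t recoverCalleeSave(unsigned unwindIndex,
                          const std::vector<int64_t>& unwindScratch,
                          int64_t registerScratchValue)
{
    if (unwindIndex == UINT_MAX)
        return registerScratchValue;
    return unwindScratch[unwindIndex];
}
```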
trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp
r170876 r189575 67 67 68 68 // Tell GC mark phase how much of the scratch buffer is active during call. 69 jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->activeLengthPtr()), GPRInfo::nonArgGPR1);70 jit.storePtr(MacroAssembler::TrustedImmPtr(requiredScratchMemorySizeInBytes()), GPRInfo::nonArgGPR1);69 jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->activeLengthPtr()), GPRInfo::nonArgGPR0); 70 jit.storePtr(MacroAssembler::TrustedImmPtr(requiredScratchMemorySizeInBytes()), GPRInfo::nonArgGPR0); 71 71 72 72 jit.loadPtr(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0); -
trunk/Source/JavaScriptCore/ftl/FTLUnwindInfo.cpp
r177315 r189575 95 95 #include "FTLUnwindInfo.h" 96 96 97 #include "CodeBlock.h" 98 #include "RegisterAtOffsetList.h" 99 97 100 #if ENABLE(FTL_JIT) 98 101 … … 103 106 104 107 namespace JSC { namespace FTL { 105 106 UnwindInfo::UnwindInfo() { }107 UnwindInfo::~UnwindInfo() { }108 109 108 110 109 namespace { … … 655 654 } // anonymous namespace 656 655 657 bool UnwindInfo::parse(void* section, size_t size, GeneratedFunction generatedFunction) 658 { 659 m_registers.clear(); 656 std::unique_ptr<RegisterAtOffsetList> parseUnwindInfo(void* section, size_t size, GeneratedFunction generatedFunction) 657 { 660 658 RELEASE_ASSERT(!!section); 661 if (!section) 662 return false;659 660 std::unique_ptr<RegisterAtOffsetList> registerOffsets = std::make_unique<RegisterAtOffsetList>(); 663 661 664 662 #if OS(DARWIN) … … 690 688 691 689 case UNWIND_X86_64_REG_RBX: 692 m_registers.append(RegisterAtOffset(X86Registers::ebx, offset));690 registerOffsets->append(RegisterAtOffset(X86Registers::ebx, offset)); 693 691 break; 694 692 695 693 case UNWIND_X86_64_REG_R12: 696 m_registers.append(RegisterAtOffset(X86Registers::r12, offset));694 registerOffsets->append(RegisterAtOffset(X86Registers::r12, offset)); 697 695 break; 698 696 699 697 case UNWIND_X86_64_REG_R13: 700 m_registers.append(RegisterAtOffset(X86Registers::r13, offset));698 registerOffsets->append(RegisterAtOffset(X86Registers::r13, offset)); 701 699 break; 702 700 703 701 case UNWIND_X86_64_REG_R14: 704 m_registers.append(RegisterAtOffset(X86Registers::r14, offset));702 registerOffsets->append(RegisterAtOffset(X86Registers::r14, offset)); 705 703 break; 706 704 707 705 case UNWIND_X86_64_REG_R15: 708 m_registers.append(RegisterAtOffset(X86Registers::r15, offset));706 registerOffsets->append(RegisterAtOffset(X86Registers::r15, offset)); 709 707 break; 710 708 711 709 case UNWIND_X86_64_REG_RBP: 712 m_registers.append(RegisterAtOffset(X86Registers::ebp, offset));710 registerOffsets->append(RegisterAtOffset(X86Registers::ebp, 
offset)); 713 711 break; 714 712 … … 722 720 RELEASE_ASSERT((encoding & UNWIND_ARM64_MODE_MASK) == UNWIND_ARM64_MODE_FRAME); 723 721 724 m_registers.append(RegisterAtOffset(ARM64Registers::fp, 0));722 registerOffsets->append(RegisterAtOffset(ARM64Registers::fp, 0)); 725 723 726 724 int32_t offset = 0; 727 725 if (encoding & UNWIND_ARM64_FRAME_X19_X20_PAIR) { 728 m_registers.append(RegisterAtOffset(ARM64Registers::x19, offset -= 8));729 m_registers.append(RegisterAtOffset(ARM64Registers::x20, offset -= 8));726 registerOffsets->append(RegisterAtOffset(ARM64Registers::x19, offset -= 8)); 727 registerOffsets->append(RegisterAtOffset(ARM64Registers::x20, offset -= 8)); 730 728 } 731 729 if (encoding & UNWIND_ARM64_FRAME_X21_X22_PAIR) { 732 m_registers.append(RegisterAtOffset(ARM64Registers::x21, offset -= 8));733 m_registers.append(RegisterAtOffset(ARM64Registers::x22, offset -= 8));730 registerOffsets->append(RegisterAtOffset(ARM64Registers::x21, offset -= 8)); 731 registerOffsets->append(RegisterAtOffset(ARM64Registers::x22, offset -= 8)); 734 732 } 735 733 if (encoding & UNWIND_ARM64_FRAME_X23_X24_PAIR) { 736 m_registers.append(RegisterAtOffset(ARM64Registers::x23, offset -= 8));737 m_registers.append(RegisterAtOffset(ARM64Registers::x24, offset -= 8));734 registerOffsets->append(RegisterAtOffset(ARM64Registers::x23, offset -= 8)); 735 registerOffsets->append(RegisterAtOffset(ARM64Registers::x24, offset -= 8)); 738 736 } 739 737 if (encoding & UNWIND_ARM64_FRAME_X25_X26_PAIR) { 740 m_registers.append(RegisterAtOffset(ARM64Registers::x25, offset -= 8));741 m_registers.append(RegisterAtOffset(ARM64Registers::x26, offset -= 8));738 registerOffsets->append(RegisterAtOffset(ARM64Registers::x25, offset -= 8)); 739 registerOffsets->append(RegisterAtOffset(ARM64Registers::x26, offset -= 8)); 742 740 } 743 741 if (encoding & UNWIND_ARM64_FRAME_X27_X28_PAIR) { 744 m_registers.append(RegisterAtOffset(ARM64Registers::x27, offset -= 8));745 
m_registers.append(RegisterAtOffset(ARM64Registers::x28, offset -= 8));742 registerOffsets->append(RegisterAtOffset(ARM64Registers::x27, offset -= 8)); 743 registerOffsets->append(RegisterAtOffset(ARM64Registers::x28, offset -= 8)); 746 744 } 747 745 if (encoding & UNWIND_ARM64_FRAME_D8_D9_PAIR) { 748 m_registers.append(RegisterAtOffset(ARM64Registers::q8, offset -= 8));749 m_registers.append(RegisterAtOffset(ARM64Registers::q9, offset -= 8));746 registerOffsets->append(RegisterAtOffset(ARM64Registers::q8, offset -= 8)); 747 registerOffsets->append(RegisterAtOffset(ARM64Registers::q9, offset -= 8)); 750 748 } 751 749 if (encoding & UNWIND_ARM64_FRAME_D10_D11_PAIR) { 752 m_registers.append(RegisterAtOffset(ARM64Registers::q10, offset -= 8));753 m_registers.append(RegisterAtOffset(ARM64Registers::q11, offset -= 8));750 registerOffsets->append(RegisterAtOffset(ARM64Registers::q10, offset -= 8)); 751 registerOffsets->append(RegisterAtOffset(ARM64Registers::q11, offset -= 8)); 754 752 } 755 753 if (encoding & UNWIND_ARM64_FRAME_D12_D13_PAIR) { 756 m_registers.append(RegisterAtOffset(ARM64Registers::q12, offset -= 8));757 m_registers.append(RegisterAtOffset(ARM64Registers::q13, offset -= 8));754 registerOffsets->append(RegisterAtOffset(ARM64Registers::q12, offset -= 8)); 755 registerOffsets->append(RegisterAtOffset(ARM64Registers::q13, offset -= 8)); 758 756 } 759 757 if (encoding & UNWIND_ARM64_FRAME_D14_D15_PAIR) { 760 m_registers.append(RegisterAtOffset(ARM64Registers::q14, offset -= 8));761 m_registers.append(RegisterAtOffset(ARM64Registers::q15, offset -= 8));758 registerOffsets->append(RegisterAtOffset(ARM64Registers::q14, offset -= 8)); 759 registerOffsets->append(RegisterAtOffset(ARM64Registers::q15, offset -= 8)); 762 760 } 763 761 #else … … 783 781 switch (i) { 784 782 case UNW_X86_64_rbx: 785 m_registers.append(RegisterAtOffset(X86Registers::ebx, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));783 
registerOffsets->append(RegisterAtOffset(X86Registers::ebx, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 786 784 break; 787 785 case UNW_X86_64_r12: 788 m_registers.append(RegisterAtOffset(X86Registers::r12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));786 registerOffsets->append(RegisterAtOffset(X86Registers::r12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 789 787 break; 790 788 case UNW_X86_64_r13: 791 m_registers.append(RegisterAtOffset(X86Registers::r13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));789 registerOffsets->append(RegisterAtOffset(X86Registers::r13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 792 790 break; 793 791 case UNW_X86_64_r14: 794 m_registers.append(RegisterAtOffset(X86Registers::r14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));792 registerOffsets->append(RegisterAtOffset(X86Registers::r14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 795 793 break; 796 794 case UNW_X86_64_r15: 797 m_registers.append(RegisterAtOffset(X86Registers::r15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));795 registerOffsets->append(RegisterAtOffset(X86Registers::r15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 798 796 break; 799 797 case UNW_X86_64_rbp: 800 m_registers.append(RegisterAtOffset(X86Registers::ebp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));798 registerOffsets->append(RegisterAtOffset(X86Registers::ebp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 801 799 break; 802 800 case DW_X86_64_RET_addr: … … 817 815 switch (i) { 818 816 case UNW_ARM64_x0: 819 m_registers.append(RegisterAtOffset(ARM64Registers::x0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));817 registerOffsets->append(RegisterAtOffset(ARM64Registers::x0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 820 818 break; 821 819 case UNW_ARM64_x1: 822 
m_registers.append(RegisterAtOffset(ARM64Registers::x1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));820 registerOffsets->append(RegisterAtOffset(ARM64Registers::x1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 823 821 break; 824 822 case UNW_ARM64_x2: 825 m_registers.append(RegisterAtOffset(ARM64Registers::x2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));823 registerOffsets->append(RegisterAtOffset(ARM64Registers::x2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 826 824 break; 827 825 case UNW_ARM64_x3: 828 m_registers.append(RegisterAtOffset(ARM64Registers::x3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));826 registerOffsets->append(RegisterAtOffset(ARM64Registers::x3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 829 827 break; 830 828 case UNW_ARM64_x4: 831 m_registers.append(RegisterAtOffset(ARM64Registers::x4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));829 registerOffsets->append(RegisterAtOffset(ARM64Registers::x4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 832 830 break; 833 831 case UNW_ARM64_x5: 834 m_registers.append(RegisterAtOffset(ARM64Registers::x5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));832 registerOffsets->append(RegisterAtOffset(ARM64Registers::x5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 835 833 break; 836 834 case UNW_ARM64_x6: 837 m_registers.append(RegisterAtOffset(ARM64Registers::x6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));835 registerOffsets->append(RegisterAtOffset(ARM64Registers::x6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 838 836 break; 839 837 case UNW_ARM64_x7: 840 m_registers.append(RegisterAtOffset(ARM64Registers::x7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));838 registerOffsets->append(RegisterAtOffset(ARM64Registers::x7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 841 839 break; 
842 840 case UNW_ARM64_x8: 843 m_registers.append(RegisterAtOffset(ARM64Registers::x8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));841 registerOffsets->append(RegisterAtOffset(ARM64Registers::x8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 844 842 break; 845 843 case UNW_ARM64_x9: 846 m_registers.append(RegisterAtOffset(ARM64Registers::x9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));844 registerOffsets->append(RegisterAtOffset(ARM64Registers::x9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 847 845 break; 848 846 case UNW_ARM64_x10: 849 m_registers.append(RegisterAtOffset(ARM64Registers::x10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));847 registerOffsets->append(RegisterAtOffset(ARM64Registers::x10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 850 848 break; 851 849 case UNW_ARM64_x11: 852 m_registers.append(RegisterAtOffset(ARM64Registers::x11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));850 registerOffsets->append(RegisterAtOffset(ARM64Registers::x11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 853 851 break; 854 852 case UNW_ARM64_x12: 855 m_registers.append(RegisterAtOffset(ARM64Registers::x12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));853 registerOffsets->append(RegisterAtOffset(ARM64Registers::x12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 856 854 break; 857 855 case UNW_ARM64_x13: 858 m_registers.append(RegisterAtOffset(ARM64Registers::x13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));856 registerOffsets->append(RegisterAtOffset(ARM64Registers::x13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 859 857 break; 860 858 case UNW_ARM64_x14: 861 m_registers.append(RegisterAtOffset(ARM64Registers::x14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));859 registerOffsets->append(RegisterAtOffset(ARM64Registers::x14, prolog.savedRegisters[i].offset 
+ prolog.cfaRegisterOffset)); 862 860 break; 863 861 case UNW_ARM64_x15: 864 m_registers.append(RegisterAtOffset(ARM64Registers::x15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));862 registerOffsets->append(RegisterAtOffset(ARM64Registers::x15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 865 863 break; 866 864 case UNW_ARM64_x16: 867 m_registers.append(RegisterAtOffset(ARM64Registers::x16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));865 registerOffsets->append(RegisterAtOffset(ARM64Registers::x16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 868 866 break; 869 867 case UNW_ARM64_x17: 870 m_registers.append(RegisterAtOffset(ARM64Registers::x17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));868 registerOffsets->append(RegisterAtOffset(ARM64Registers::x17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 871 869 break; 872 870 case UNW_ARM64_x18: 873 m_registers.append(RegisterAtOffset(ARM64Registers::x18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));871 registerOffsets->append(RegisterAtOffset(ARM64Registers::x18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 874 872 break; 875 873 case UNW_ARM64_x19: 876 m_registers.append(RegisterAtOffset(ARM64Registers::x19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));874 registerOffsets->append(RegisterAtOffset(ARM64Registers::x19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 877 875 break; 878 876 case UNW_ARM64_x20: 879 m_registers.append(RegisterAtOffset(ARM64Registers::x20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));877 registerOffsets->append(RegisterAtOffset(ARM64Registers::x20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 880 878 break; 881 879 case UNW_ARM64_x21: 882 m_registers.append(RegisterAtOffset(ARM64Registers::x21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));880 
registerOffsets->append(RegisterAtOffset(ARM64Registers::x21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 883 881 break; 884 882 case UNW_ARM64_x22: 885 m_registers.append(RegisterAtOffset(ARM64Registers::x22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));883 registerOffsets->append(RegisterAtOffset(ARM64Registers::x22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 886 884 break; 887 885 case UNW_ARM64_x23: 888 m_registers.append(RegisterAtOffset(ARM64Registers::x23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));886 registerOffsets->append(RegisterAtOffset(ARM64Registers::x23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 889 887 break; 890 888 case UNW_ARM64_x24: 891 m_registers.append(RegisterAtOffset(ARM64Registers::x24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));889 registerOffsets->append(RegisterAtOffset(ARM64Registers::x24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 892 890 break; 893 891 case UNW_ARM64_x25: 894 m_registers.append(RegisterAtOffset(ARM64Registers::x25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));892 registerOffsets->append(RegisterAtOffset(ARM64Registers::x25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 895 893 break; 896 894 case UNW_ARM64_x26: 897 m_registers.append(RegisterAtOffset(ARM64Registers::x26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));895 registerOffsets->append(RegisterAtOffset(ARM64Registers::x26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 898 896 break; 899 897 case UNW_ARM64_x27: 900 m_registers.append(RegisterAtOffset(ARM64Registers::x27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));898 registerOffsets->append(RegisterAtOffset(ARM64Registers::x27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 901 899 break; 902 900 case UNW_ARM64_x28: 903 m_registers.append(RegisterAtOffset(ARM64Registers::x28, 
prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));901 registerOffsets->append(RegisterAtOffset(ARM64Registers::x28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 904 902 break; 905 903 case UNW_ARM64_fp: 906 m_registers.append(RegisterAtOffset(ARM64Registers::fp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));904 registerOffsets->append(RegisterAtOffset(ARM64Registers::fp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 907 905 break; 908 906 case UNW_ARM64_x30: 909 m_registers.append(RegisterAtOffset(ARM64Registers::x30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));907 registerOffsets->append(RegisterAtOffset(ARM64Registers::x30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 910 908 break; 911 909 case UNW_ARM64_sp: 912 m_registers.append(RegisterAtOffset(ARM64Registers::sp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));910 registerOffsets->append(RegisterAtOffset(ARM64Registers::sp, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 913 911 break; 914 912 case UNW_ARM64_v0: 915 m_registers.append(RegisterAtOffset(ARM64Registers::q0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));913 registerOffsets->append(RegisterAtOffset(ARM64Registers::q0, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 916 914 break; 917 915 case UNW_ARM64_v1: 918 m_registers.append(RegisterAtOffset(ARM64Registers::q1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));916 registerOffsets->append(RegisterAtOffset(ARM64Registers::q1, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 919 917 break; 920 918 case UNW_ARM64_v2: 921 m_registers.append(RegisterAtOffset(ARM64Registers::q2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));919 registerOffsets->append(RegisterAtOffset(ARM64Registers::q2, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 922 920 break; 923 921 case UNW_ARM64_v3: 924 
m_registers.append(RegisterAtOffset(ARM64Registers::q3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));922 registerOffsets->append(RegisterAtOffset(ARM64Registers::q3, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 925 923 break; 926 924 case UNW_ARM64_v4: 927 m_registers.append(RegisterAtOffset(ARM64Registers::q4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));925 registerOffsets->append(RegisterAtOffset(ARM64Registers::q4, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 928 926 break; 929 927 case UNW_ARM64_v5: 930 m_registers.append(RegisterAtOffset(ARM64Registers::q5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));928 registerOffsets->append(RegisterAtOffset(ARM64Registers::q5, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 931 929 break; 932 930 case UNW_ARM64_v6: 933 m_registers.append(RegisterAtOffset(ARM64Registers::q6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));931 registerOffsets->append(RegisterAtOffset(ARM64Registers::q6, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 934 932 break; 935 933 case UNW_ARM64_v7: 936 m_registers.append(RegisterAtOffset(ARM64Registers::q7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));934 registerOffsets->append(RegisterAtOffset(ARM64Registers::q7, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 937 935 break; 938 936 case UNW_ARM64_v8: 939 m_registers.append(RegisterAtOffset(ARM64Registers::q8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));937 registerOffsets->append(RegisterAtOffset(ARM64Registers::q8, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 940 938 break; 941 939 case UNW_ARM64_v9: 942 m_registers.append(RegisterAtOffset(ARM64Registers::q9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));940 registerOffsets->append(RegisterAtOffset(ARM64Registers::q9, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 943 941 break; 
944 942 case UNW_ARM64_v10: 945 m_registers.append(RegisterAtOffset(ARM64Registers::q10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));943 registerOffsets->append(RegisterAtOffset(ARM64Registers::q10, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 946 944 break; 947 945 case UNW_ARM64_v11: 948 m_registers.append(RegisterAtOffset(ARM64Registers::q11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));946 registerOffsets->append(RegisterAtOffset(ARM64Registers::q11, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 949 947 break; 950 948 case UNW_ARM64_v12: 951 m_registers.append(RegisterAtOffset(ARM64Registers::q12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));949 registerOffsets->append(RegisterAtOffset(ARM64Registers::q12, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 952 950 break; 953 951 case UNW_ARM64_v13: 954 m_registers.append(RegisterAtOffset(ARM64Registers::q13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));952 registerOffsets->append(RegisterAtOffset(ARM64Registers::q13, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 955 953 break; 956 954 case UNW_ARM64_v14: 957 m_registers.append(RegisterAtOffset(ARM64Registers::q14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));955 registerOffsets->append(RegisterAtOffset(ARM64Registers::q14, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 958 956 break; 959 957 case UNW_ARM64_v15: 960 m_registers.append(RegisterAtOffset(ARM64Registers::q15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));958 registerOffsets->append(RegisterAtOffset(ARM64Registers::q15, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 961 959 break; 962 960 case UNW_ARM64_v16: 963 m_registers.append(RegisterAtOffset(ARM64Registers::q16, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));961 registerOffsets->append(RegisterAtOffset(ARM64Registers::q16, 
prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 964 962 break; 965 963 case UNW_ARM64_v17: 966 m_registers.append(RegisterAtOffset(ARM64Registers::q17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));964 registerOffsets->append(RegisterAtOffset(ARM64Registers::q17, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 967 965 break; 968 966 case UNW_ARM64_v18: 969 m_registers.append(RegisterAtOffset(ARM64Registers::q18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));967 registerOffsets->append(RegisterAtOffset(ARM64Registers::q18, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 970 968 break; 971 969 case UNW_ARM64_v19: 972 m_registers.append(RegisterAtOffset(ARM64Registers::q19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));970 registerOffsets->append(RegisterAtOffset(ARM64Registers::q19, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 973 971 break; 974 972 case UNW_ARM64_v20: 975 m_registers.append(RegisterAtOffset(ARM64Registers::q20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));973 registerOffsets->append(RegisterAtOffset(ARM64Registers::q20, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 976 974 break; 977 975 case UNW_ARM64_v21: 978 m_registers.append(RegisterAtOffset(ARM64Registers::q21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));976 registerOffsets->append(RegisterAtOffset(ARM64Registers::q21, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 979 977 break; 980 978 case UNW_ARM64_v22: 981 m_registers.append(RegisterAtOffset(ARM64Registers::q22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));979 registerOffsets->append(RegisterAtOffset(ARM64Registers::q22, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 982 980 break; 983 981 case UNW_ARM64_v23: 984 m_registers.append(RegisterAtOffset(ARM64Registers::q23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));982 
registerOffsets->append(RegisterAtOffset(ARM64Registers::q23, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 985 983 break; 986 984 case UNW_ARM64_v24: 987 m_registers.append(RegisterAtOffset(ARM64Registers::q24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));985 registerOffsets->append(RegisterAtOffset(ARM64Registers::q24, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 988 986 break; 989 987 case UNW_ARM64_v25: 990 m_registers.append(RegisterAtOffset(ARM64Registers::q25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));988 registerOffsets->append(RegisterAtOffset(ARM64Registers::q25, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 991 989 break; 992 990 case UNW_ARM64_v26: 993 m_registers.append(RegisterAtOffset(ARM64Registers::q26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));991 registerOffsets->append(RegisterAtOffset(ARM64Registers::q26, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 994 992 break; 995 993 case UNW_ARM64_v27: 996 m_registers.append(RegisterAtOffset(ARM64Registers::q27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));994 registerOffsets->append(RegisterAtOffset(ARM64Registers::q27, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 997 995 break; 998 996 case UNW_ARM64_v28: 999 m_registers.append(RegisterAtOffset(ARM64Registers::q28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));997 registerOffsets->append(RegisterAtOffset(ARM64Registers::q28, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 1000 998 break; 1001 999 case UNW_ARM64_v29: 1002 m_registers.append(RegisterAtOffset(ARM64Registers::q29, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));1000 registerOffsets->append(RegisterAtOffset(ARM64Registers::q29, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 1003 1001 break; 1004 1002 case UNW_ARM64_v30: 1005 m_registers.append(RegisterAtOffset(ARM64Registers::q30, 
prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));1003 registerOffsets->append(RegisterAtOffset(ARM64Registers::q30, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 1006 1004 break; 1007 1005 case UNW_ARM64_v31: 1008 m_registers.append(RegisterAtOffset(ARM64Registers::q31, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset));1006 registerOffsets->append(RegisterAtOffset(ARM64Registers::q31, prolog.savedRegisters[i].offset + prolog.cfaRegisterOffset)); 1009 1007 break; 1010 1008 default: … … 1018 1016 1019 1017 #endif 1020 std::sort(m_registers.begin(), m_registers.end()); 1021 return true; 1022 } 1023 1024 void UnwindInfo::dump(PrintStream& out) const 1025 { 1026 out.print(listDump(m_registers)); 1027 } 1028 1029 RegisterAtOffset* UnwindInfo::find(Reg reg) const 1030 { 1031 return tryBinarySearch<RegisterAtOffset, Reg>(m_registers, m_registers.size(), reg, RegisterAtOffset::getReg); 1032 } 1033 1034 unsigned UnwindInfo::indexOf(Reg reg) const 1035 { 1036 if (RegisterAtOffset* pointer = find(reg)) 1037 return pointer - m_registers.begin(); 1038 return UINT_MAX; 1018 registerOffsets->sort(); 1019 return WTF::move(registerOffsets); 1039 1020 } 1040 1021 -
trunk/Source/JavaScriptCore/ftl/FTLUnwindInfo.h
r173509 r189575 32 32 33 33 #include "FTLGeneratedFunction.h" 34 #include "FTLRegisterAtOffset.h" 34 class RegisterAtOffsetList; 35 35 36 36 namespace JSC { namespace FTL { 37 37 38 struct UnwindInfo { 39 UnwindInfo(); 40 ~UnwindInfo(); 41 42 bool parse(void*, size_t, GeneratedFunction); 43 44 void dump(PrintStream&) const; 45 46 RegisterAtOffset* find(Reg) const; 47 unsigned indexOf(Reg) const; // Returns UINT_MAX if not found. 48 49 Vector<RegisterAtOffset> m_registers; 50 }; 38 std::unique_ptr<RegisterAtOffsetList> parseUnwindInfo(void*, size_t, GeneratedFunction); 51 39 52 40 } } // namespace JSC::FTL -
trunk/Source/JavaScriptCore/interpreter/Interpreter.cpp
r189454 r189575 651 651 if (LegacyProfiler* profiler = vm.enabledProfiler()) 652 652 profiler->exceptionUnwind(m_callFrame); 653 654 copyCalleeSavesToVMCalleeSavesBuffer(visitor); 655 653 656 return StackVisitor::Done; 654 657 } … … 656 659 return StackVisitor::Done; 657 660 661 copyCalleeSavesToVMCalleeSavesBuffer(visitor); 662 658 663 return StackVisitor::Continue; 659 664 } 660 665 661 666 private: 667 void copyCalleeSavesToVMCalleeSavesBuffer(StackVisitor& visitor) 668 { 669 #if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 670 671 if (!visitor->isJSFrame()) 672 return; 673 674 #if ENABLE(DFG_JIT) 675 if (visitor->inlineCallFrame()) 676 return; 677 #endif 678 RegisterAtOffsetList* currentCalleeSaves = m_codeBlock ? m_codeBlock->calleeSaveRegisters() : nullptr; 679 680 if (!currentCalleeSaves) 681 return; 682 683 VM& vm = m_callFrame->vm(); 684 RegisterAtOffsetList* allCalleeSaves = vm.getAllCalleeSaveRegisterOffsets(); 685 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 686 intptr_t* frame = reinterpret_cast<intptr_t*>(m_callFrame->registers()); 687 688 unsigned registerCount = currentCalleeSaves->size(); 689 for (unsigned i = 0; i < registerCount; i++) { 690 RegisterAtOffset currentEntry = currentCalleeSaves->at(i); 691 if (dontCopyRegisters.get(currentEntry.reg())) 692 continue; 693 RegisterAtOffset* vmCalleeSavesEntry = allCalleeSaves->find(currentEntry.reg()); 694 695 vm.calleeSaveRegistersBuffer[vmCalleeSavesEntry->offsetAsIndex()] = *(frame + currentEntry.offsetAsIndex()); 696 } 697 #else 698 UNUSED_PARAM(visitor); 699 #endif 700 } 701 662 702 CallFrame*& m_callFrame; 663 703 bool m_isTermination; -
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r189272 r189575 35 35 #include "JITCode.h" 36 36 #include "MacroAssembler.h" 37 #include "RegisterAtOffsetList.h" 38 #include "RegisterSet.h" 37 39 #include "TypeofType.h" 38 40 #include "VM.h" … … 173 175 store32(TrustedImm32(value.tag()), address.withOffset(TagOffset)); 174 176 store32(TrustedImm32(value.payload()), address.withOffset(PayloadOffset)); 177 #endif 178 } 179 180 void emitSaveCalleeSavesFor(CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister = static_cast<VirtualRegister>(0)) 181 { 182 ASSERT(codeBlock); 183 184 RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 185 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 186 unsigned registerCount = calleeSaves->size(); 187 188 for (unsigned i = 0; i < registerCount; i++) { 189 RegisterAtOffset entry = calleeSaves->at(i); 190 if (dontSaveRegisters.get(entry.reg())) 191 continue; 192 storePtr(entry.reg().gpr(), Address(framePointerRegister, offsetVirtualRegister.offsetInBytes() + entry.offset())); 193 } 194 } 195 196 void emitRestoreCalleeSavesFor(CodeBlock* codeBlock) 197 { 198 ASSERT(codeBlock); 199 200 RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 201 RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs()); 202 unsigned registerCount = calleeSaves->size(); 203 204 for (unsigned i = 0; i < registerCount; i++) { 205 RegisterAtOffset entry = calleeSaves->at(i); 206 if (dontRestoreRegisters.get(entry.reg())) 207 continue; 208 loadPtr(Address(framePointerRegister, entry.offset()), entry.reg().gpr()); 209 } 210 } 211 212 void emitSaveCalleeSaves() 213 { 214 emitSaveCalleeSavesFor(codeBlock()); 215 } 216 217 void emitRestoreCalleeSaves() 218 { 219 emitRestoreCalleeSavesFor(codeBlock()); 220 } 221 222 void copyCalleeSavesToVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() }) 223 { 224 #if 
NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 225 GPRReg temp1 = usedRegisters.getFreeGPR(0); 226 227 move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1); 228 229 RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets(); 230 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 231 unsigned registerCount = allCalleeSaves->size(); 232 233 for (unsigned i = 0; i < registerCount; i++) { 234 RegisterAtOffset entry = allCalleeSaves->at(i); 235 if (dontCopyRegisters.get(entry.reg())) 236 continue; 237 if (entry.reg().isGPR()) 238 storePtr(entry.reg().gpr(), Address(temp1, entry.offset())); 239 else 240 storeDouble(entry.reg().fpr(), Address(temp1, entry.offset())); 241 } 242 #else 243 UNUSED_PARAM(usedRegisters); 244 #endif 245 } 246 247 void restoreCalleeSavesFromVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() }) 248 { 249 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 250 GPRReg temp1 = usedRegisters.getFreeGPR(0); 251 252 move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1); 253 254 RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets(); 255 RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters(); 256 unsigned registerCount = allCalleeSaves->size(); 257 258 for (unsigned i = 0; i < registerCount; i++) { 259 RegisterAtOffset entry = allCalleeSaves->at(i); 260 if (dontRestoreRegisters.get(entry.reg())) 261 continue; 262 if (entry.reg().isGPR()) 263 loadPtr(Address(temp1, entry.offset()), entry.reg().gpr()); 264 else 265 loadDouble(Address(temp1, entry.offset()), entry.reg().fpr()); 266 } 267 #else 268 UNUSED_PARAM(usedRegisters); 269 #endif 270 } 271 272 void copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer(const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() }) 273 { 274 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 275 GPRReg temp1 = usedRegisters.getFreeGPR(0); 276 GPRReg temp2 = usedRegisters.getFreeGPR(1); 277 FPRReg 
fpTemp = usedRegisters.getFreeFPR(); 278 ASSERT(temp2 != InvalidGPRReg); 279 280 ASSERT(codeBlock()); 281 282 // Copy saved calleeSaves on stack or unsaved calleeSaves in register to vm calleeSave buffer 283 move(TrustedImmPtr(m_vm->calleeSaveRegistersBuffer), temp1); 284 285 RegisterAtOffsetList* allCalleeSaves = m_vm->getAllCalleeSaveRegisterOffsets(); 286 RegisterAtOffsetList* currentCalleeSaves = codeBlock()->calleeSaveRegisters(); 287 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 288 unsigned registerCount = allCalleeSaves->size(); 289 290 for (unsigned i = 0; i < registerCount; i++) { 291 RegisterAtOffset vmEntry = allCalleeSaves->at(i); 292 if (dontCopyRegisters.get(vmEntry.reg())) 293 continue; 294 RegisterAtOffset* currentFrameEntry = currentCalleeSaves->find(vmEntry.reg()); 295 296 if (vmEntry.reg().isGPR()) { 297 GPRReg regToStore; 298 if (currentFrameEntry) { 299 // Load calleeSave from stack into temp register 300 regToStore = temp2; 301 loadPtr(Address(framePointerRegister, currentFrameEntry->offset()), regToStore); 302 } else 303 // Just store callee save directly 304 regToStore = vmEntry.reg().gpr(); 305 306 storePtr(regToStore, Address(temp1, vmEntry.offset())); 307 } else { 308 FPRReg fpRegToStore; 309 if (currentFrameEntry) { 310 // Load calleeSave from stack into temp register 311 fpRegToStore = fpTemp; 312 loadDouble(Address(framePointerRegister, currentFrameEntry->offset()), fpRegToStore); 313 } else 314 // Just store callee save directly 315 fpRegToStore = vmEntry.reg().fpr(); 316 317 storeDouble(fpRegToStore, Address(temp1, vmEntry.offset())); 318 } 319 } 320 #else 321 UNUSED_PARAM(usedRegisters); 322 #endif 323 } 324 325 void emitMaterializeTagCheckRegisters() 326 { 327 #if USE(JSVALUE64) 328 move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister); 329 orPtr(MacroAssembler::TrustedImm32(TagBitTypeOther), GPRInfo::tagTypeNumberRegister, GPRInfo::tagMaskRegister); 175 330 #endif 176 331 } -
trunk/Source/JavaScriptCore/jit/FPRInfo.h
r189293 r189575 209 209 static const FPRReg fpRegT21 = ARM64Registers::q29; 210 210 static const FPRReg fpRegT22 = ARM64Registers::q30; 211 static const FPRReg fpRegCS0 = ARM64Registers::q8; 212 static const FPRReg fpRegCS1 = ARM64Registers::q9; 213 static const FPRReg fpRegCS2 = ARM64Registers::q10; 214 static const FPRReg fpRegCS3 = ARM64Registers::q11; 215 static const FPRReg fpRegCS4 = ARM64Registers::q12; 216 static const FPRReg fpRegCS5 = ARM64Registers::q13; 217 static const FPRReg fpRegCS6 = ARM64Registers::q14; 218 static const FPRReg fpRegCS7 = ARM64Registers::q15; 211 219 212 220 static const FPRReg argumentFPR0 = ARM64Registers::q0; // fpRegT0 -
trunk/Source/JavaScriptCore/jit/GPRInfo.h
r189351 r189575 316 316 #if CPU(X86) 317 317 #define NUMBER_OF_ARGUMENT_REGISTERS 0u 318 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 318 319 319 320 class GPRInfo { … … 337 338 static const GPRReg argumentGPR3 = X86Registers::ebx; // regT3 338 339 static const GPRReg nonArgGPR0 = X86Registers::esi; // regT4 339 static const GPRReg nonArgGPR1 = X86Registers::edi; // regT5340 340 static const GPRReg returnValueGPR = X86Registers::eax; // regT0 341 341 static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 … … 383 383 #if !OS(WINDOWS) 384 384 #define NUMBER_OF_ARGUMENT_REGISTERS 6u 385 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 5u 385 386 #else 386 387 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 388 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 7u 387 389 #endif 388 390 … … 446 448 #endif 447 449 static const GPRReg nonArgGPR0 = X86Registers::r10; // regT5 (regT4 on Windows) 448 static const GPRReg nonArgGPR1 = X86Registers::ebx; // Callee save449 450 static const GPRReg returnValueGPR = X86Registers::eax; // regT0 450 451 static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 or regT2 … … 507 508 #if CPU(ARM) 508 509 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 510 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 509 511 510 512 class GPRInfo { … … 537 539 static const GPRReg nonArgGPR0 = ARMRegisters::r4; // regT8 538 540 static const GPRReg nonArgGPR1 = ARMRegisters::r8; // regT4 539 static const GPRReg nonArgGPR2 = ARMRegisters::r9; // regT5540 541 static const GPRReg returnValueGPR = ARMRegisters::r0; // regT0 541 542 static const GPRReg returnValueGPR2 = ARMRegisters::r1; // regT1 … … 590 591 #if CPU(ARM64) 591 592 #define NUMBER_OF_ARGUMENT_REGISTERS 8u 593 // Callee Saves includes x19..x28 and FP registers q8..q15 594 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 18u 592 595 593 596 class GPRInfo { … … 618 621 static const GPRReg regT14 = ARM64Registers::x14; 619 622 static const GPRReg regT15 = ARM64Registers::x15; 620 static const GPRReg regCS0 = 
ARM64Registers::x26; // Used by LLInt only 621 static const GPRReg regCS1 = ARM64Registers::x27; // tagTypeNumber 622 static const GPRReg regCS2 = ARM64Registers::x28; // tagMask 623 static const GPRReg regCS0 = ARM64Registers::x19; // Used by FTL only 624 static const GPRReg regCS1 = ARM64Registers::x20; // Used by FTL only 625 static const GPRReg regCS2 = ARM64Registers::x21; // Used by FTL only 626 static const GPRReg regCS3 = ARM64Registers::x22; // Used by FTL only 627 static const GPRReg regCS4 = ARM64Registers::x23; // Used by FTL only 628 static const GPRReg regCS5 = ARM64Registers::x24; // Used by FTL only 629 static const GPRReg regCS6 = ARM64Registers::x25; // Used by FTL only 630 static const GPRReg regCS7 = ARM64Registers::x26; 631 static const GPRReg regCS8 = ARM64Registers::x27; // tagTypeNumber 632 static const GPRReg regCS9 = ARM64Registers::x28; // tagMask 623 633 // These constants provide the names for the general purpose argument & return value registers. 624 634 static const GPRReg argumentGPR0 = ARM64Registers::x0; // regT0 … 638 648 static const GPRReg patchpointScratchRegister = ARM64Registers::ip0; 639 649 640 // GPRReg mapping is direct, the machine regsiter numbers can650 // GPRReg mapping is direct, the machine register numbers can 641 651 // be used directly as indices into the GPR RegisterBank.
642 652 COMPILE_ASSERT(ARM64Registers::q0 == 0, q0_is_0); … … 693 703 #if CPU(MIPS) 694 704 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 705 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 695 706 696 707 class GPRInfo { … … 720 731 static const GPRReg argumentGPR3 = MIPSRegisters::a3; 721 732 static const GPRReg nonArgGPR0 = regT0; 722 static const GPRReg nonArgGPR1 = regT1;723 733 static const GPRReg returnValueGPR = regT0; 724 734 static const GPRReg returnValueGPR2 = regT1; … … 765 775 #if CPU(SH4) 766 776 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 777 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 767 778 768 779 class GPRInfo { … … 794 805 static const GPRReg argumentGPR3 = SH4Registers::r7; // regT3 795 806 static const GPRReg nonArgGPR0 = regT4; 796 static const GPRReg nonArgGPR1 = regT5;797 807 static const GPRReg returnValueGPR = regT0; 798 808 static const GPRReg returnValueGPR2 = regT1; -
trunk/Source/JavaScriptCore/jit/JIT.cpp
r189504 r189575 30 30 #include "JIT.h" 31 31 32 #include "ArityCheckFailReturnThunks.h"33 32 #include "CodeBlock.h" 34 33 #include "CodeBlockWithJITType.h" … … 86 85 skipOptimize.append(branchAdd32(Signed, TrustedImm32(Options::executionCounterIncrementForEntry()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter()))); 87 86 ASSERT(!m_bytecodeOffset); 87 88 copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer(); 89 88 90 callOperation(operationOptimize, m_bytecodeOffset); 89 91 skipOptimize.append(branchTestPtr(Zero, returnValueGPR)); … … 493 495 } 494 496 497 m_codeBlock->setCalleeSaveRegisters(RegisterSet::llintBaselineCalleeSaveRegisters()); // Might be able to remove as this is probably already set to this value. 498 495 499 // This ensures that we have the most up to date type information when performing typecheck optimizations for op_profile_type. 496 500 if (m_vm->typeProfiler()) … … 550 554 checkStackPointerAlignment(); 551 555 556 emitSaveCalleeSaves(); 557 emitMaterializeTagCheckRegisters(); 558 552 559 privateCompileMainPass(); 553 560 privateCompileLinkPass(); … … 581 588 addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister); 582 589 branchTest32(Zero, returnValueGPR).linkTo(beginLabel, this); 583 GPRReg thunkReg = GPRInfo::argumentGPR1;584 CodeLocationLabel* failThunkLabels =585 m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters());586 move(TrustedImmPtr(failThunkLabels), thunkReg);587 loadPtr(BaseIndex(thunkReg, returnValueGPR, timesPtr()), thunkReg);588 590 move(returnValueGPR, GPRInfo::argumentGPR0); 589 591 emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).code()); … … 723 725 m_exceptionChecksWithCallFrameRollback.link(this); 724 726 727 copyCalleeSavesToVMCalleeSavesBuffer(); 728 725 729 // lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*). 
726 730 … … 740 744 m_exceptionChecks.link(this); 741 745 746 copyCalleeSavesToVMCalleeSavesBuffer(); 747 742 748 // lookupExceptionHandler is passed two arguments, the VM and the exec (the CallFrame*). 743 749 move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0); -
trunk/Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
r163844 r189575 1070 1070 void JIT::emit_op_mod(Instruction* currentInstruction) 1071 1071 { 1072 #if CPU(X86) || CPU(X86_64)1072 #if CPU(X86) 1073 1073 int dst = currentInstruction[1].u.operand; 1074 1074 int op1 = currentInstruction[2].u.operand; -
trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp
r189288 r189575 60 60 61 61 checkStackPointerAlignment(); 62 emitRestoreCalleeSaves(); 62 63 emitFunctionEpilogue(); 63 64 ret(); -
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
r189454 r189575 71 71 RELEASE_ASSERT(returnValueGPR != callFrameRegister); 72 72 emitGetVirtualRegister(currentInstruction[1].u.operand, returnValueGPR); 73 emitRestoreCalleeSaves(); 73 74 emitFunctionEpilogue(); 74 75 ret(); … … 256 257 257 258 checkStackPointerAlignment(); 259 emitRestoreCalleeSaves(); 258 260 emitFunctionEpilogue(); 259 261 ret(); … … 420 422 { 421 423 ASSERT(regT0 == returnValueGPR); 424 copyCalleeSavesToVMCalleeSavesBuffer(); 422 425 emitGetVirtualRegister(currentInstruction[1].u.operand, regT0); 423 426 callOperationNoExceptionCheck(operationThrow, regT0); … … 495 498 void JIT::emit_op_catch(Instruction* currentInstruction) 496 499 { 497 // Gotta restore the tag registers. We could be throwing from FTL, which may 498 // clobber them. 499 move(TrustedImm64(TagTypeNumber), tagTypeNumberRegister); 500 move(TrustedImm64(TagMask), tagMaskRegister); 501 500 restoreCalleeSavesFromVMCalleeSavesBuffer(); 501 502 502 move(TrustedImmPtr(m_vm), regT3); 503 503 load64(Address(regT3, VM::callFrameForThrowOffset()), callFrameRegister); … … 657 657 // object lifetime and increasing GC pressure. 658 658 size_t count = m_codeBlock->m_numVars; 659 for (size_t j = 0; j < count; ++j)659 for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j) 660 660 emitInitRegister(virtualRegisterForLocal(j).offset()); 661 661 … … 923 923 if (canBeOptimized()) { 924 924 linkSlowCase(iter); 925 925 926 copyCalleeSavesFromFrameOrRegisterToVMCalleeSavesBuffer(); 927 926 928 callOperation(operationOptimize, m_bytecodeOffset); 927 929 Jump noOptimizedEntry = branchTestPtr(Zero, returnValueGPR); -
trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
r189454 r189575 149 149 ASSERT(returnValueGPR != callFrameRegister); 150 150 emitLoad(currentInstruction[1].u.operand, regT1, returnValueGPR); 151 emitRestoreCalleeSaves(); 151 152 emitFunctionEpilogue(); 152 153 ret(); … … 742 743 { 743 744 ASSERT(regT0 == returnValueGPR); 745 copyCalleeSavesToVMCalleeSavesBuffer(); 744 746 emitLoad(currentInstruction[1].u.operand, regT1, regT0); 745 747 callOperationNoExceptionCheck(operationThrow, regT1, regT0); … … 801 803 void JIT::emit_op_catch(Instruction* currentInstruction) 802 804 { 805 restoreCalleeSavesFromVMCalleeSavesBuffer(); 806 803 807 move(TrustedImmPtr(m_vm), regT3); 804 808 // operationThrow returns the callFrame for the handler. -
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r189504 r189575 1335 1335 numVarsWithValues = 0; 1336 1336 Operands<JSValue> mustHandleValues(codeBlock->numParameters(), numVarsWithValues); 1337 int localsUsedForCalleeSaves = static_cast<int>(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters()); 1337 1338 for (size_t i = 0; i < mustHandleValues.size(); ++i) { 1338 1339 int operand = mustHandleValues.operandForIndex(i); 1340 if (operandIsLocal(operand) && VirtualRegister(operand).toLocal() < localsUsedForCalleeSaves) 1341 continue; 1339 1342 mustHandleValues[i] = exec->uncheckedR(operand).jsValue(); 1340 1343 } -
trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
r189504 r189575 214 214 215 215 JITGetByIdGenerator gen( 216 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),216 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), 217 217 JSValueRegs(regT0), JSValueRegs(regT0), DontSpill); 218 218 gen.generateFastPath(*this); … 447 447 448 448 JITPutByIdGenerator gen( 449 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),449 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), 450 450 JSValueRegs(regT0), JSValueRegs(regT1), regT2, DontSpill, m_codeBlock->ecmaMode(), putKind); 451 451 gen.generateFastPath(*this); … 575 575 576 576 JITGetByIdGenerator gen( 577 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),577 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), 578 578 JSValueRegs(regT0), JSValueRegs(regT0), DontSpill); 579 579 gen.generateFastPath(*this); … 622 622 623 623 JITPutByIdGenerator gen( 624 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::specialRegisters(),624 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(), 625 625 JSValueRegs(regT0), JSValueRegs(regT1), regT2, DontSpill, m_codeBlock->ecmaMode(), 626 626 direct ? Direct : NotDirect); -
trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
r189504 r189575 283 283 284 284 JITGetByIdGenerator gen( 285 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),285 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), 286 286 JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), DontSpill); 287 287 gen.generateFastPath(*this); … 495 495 496 496 JITPutByIdGenerator gen( 497 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),497 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), 498 498 JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), regT1, DontSpill, m_codeBlock->ecmaMode(), putKind); 499 499 gen.generateFastPath(*this); … 588 588 589 589 JITGetByIdGenerator gen( 590 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),590 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), 591 591 JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), DontSpill); 592 592 gen.generateFastPath(*this); … 633 633 634 634 JITPutByIdGenerator gen( 635 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::specialRegisters(),635 m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(), 636 636 JSValueRegs::payloadOnly(regT0), JSValueRegs(regT3, regT2), 637 637 regT1, DontSpill, m_codeBlock->ecmaMode(), direct ? Direct : NotDirect); -
trunk/Source/JavaScriptCore/jit/RegisterSet.cpp
r182634 r189575 67 67 } 68 68 69 RegisterSet RegisterSet::stubUnavailableRegisters() 70 { 71 return RegisterSet(specialRegisters(), vmCalleeSaveRegisters()); 72 } 73 69 74 RegisterSet RegisterSet::calleeSaveRegisters() 70 75 { … … 123 128 } 124 129 130 RegisterSet RegisterSet::vmCalleeSaveRegisters() 131 { 132 RegisterSet result; 133 #if CPU(X86_64) 134 result.set(GPRInfo::regCS0); 135 result.set(GPRInfo::regCS1); 136 result.set(GPRInfo::regCS2); 137 result.set(GPRInfo::regCS3); 138 result.set(GPRInfo::regCS4); 139 #if OS(WINDOWS) 140 result.set(GPRInfo::regCS5); 141 result.set(GPRInfo::regCS6); 142 #endif 143 #elif CPU(ARM64) 144 result.set(GPRInfo::regCS0); 145 result.set(GPRInfo::regCS1); 146 result.set(GPRInfo::regCS2); 147 result.set(GPRInfo::regCS3); 148 result.set(GPRInfo::regCS4); 149 result.set(GPRInfo::regCS5); 150 result.set(GPRInfo::regCS6); 151 result.set(GPRInfo::regCS7); 152 result.set(GPRInfo::regCS8); 153 result.set(GPRInfo::regCS9); 154 result.set(FPRInfo::fpRegCS0); 155 result.set(FPRInfo::fpRegCS1); 156 result.set(FPRInfo::fpRegCS2); 157 result.set(FPRInfo::fpRegCS3); 158 result.set(FPRInfo::fpRegCS4); 159 result.set(FPRInfo::fpRegCS5); 160 result.set(FPRInfo::fpRegCS6); 161 result.set(FPRInfo::fpRegCS7); 162 #endif 163 return result; 164 } 165 166 RegisterSet RegisterSet::llintBaselineCalleeSaveRegisters() 167 { 168 RegisterSet result; 169 #if CPU(X86) 170 #elif CPU(X86_64) 171 #if !OS(WINDOWS) 172 result.set(GPRInfo::regCS2); 173 ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister); 174 ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister); 175 result.set(GPRInfo::regCS3); 176 result.set(GPRInfo::regCS4); 177 #else 178 result.set(GPRInfo::regCS4); 179 ASSERT(GPRInfo::regCS5 == GPRInfo::tagTypeNumberRegister); 180 ASSERT(GPRInfo::regCS6 == GPRInfo::tagMaskRegister); 181 result.set(GPRInfo::regCS5); 182 result.set(GPRInfo::regCS6); 183 #endif 184 #elif CPU(ARM_THUMB2) 185 #elif CPU(ARM_TRADITIONAL) 186 #elif CPU(ARM64) 187 
result.set(GPRInfo::regCS7); 188 ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister); 189 ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister); 190 result.set(GPRInfo::regCS8); 191 result.set(GPRInfo::regCS9); 192 #elif CPU(MIPS) 193 #elif CPU(SH4) 194 #else 195 UNREACHABLE_FOR_PLATFORM(); 196 #endif 197 return result; 198 } 199 200 RegisterSet RegisterSet::dfgCalleeSaveRegisters() 201 { 202 RegisterSet result; 203 #if CPU(X86) 204 #elif CPU(X86_64) 205 result.set(GPRInfo::regCS0); 206 result.set(GPRInfo::regCS1); 207 result.set(GPRInfo::regCS2); 208 #if !OS(WINDOWS) 209 ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister); 210 ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister); 211 result.set(GPRInfo::regCS3); 212 result.set(GPRInfo::regCS4); 213 #else 214 result.set(GPRInfo::regCS3); 215 result.set(GPRInfo::regCS4); 216 ASSERT(GPRInfo::regCS5 == GPRInfo::tagTypeNumberRegister); 217 ASSERT(GPRInfo::regCS6 == GPRInfo::tagMaskRegister); 218 result.set(GPRInfo::regCS5); 219 result.set(GPRInfo::regCS6); 220 #endif 221 #elif CPU(ARM_THUMB2) 222 #elif CPU(ARM_TRADITIONAL) 223 #elif CPU(ARM64) 224 ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister); 225 ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister); 226 result.set(GPRInfo::regCS8); 227 result.set(GPRInfo::regCS9); 228 #elif CPU(MIPS) 229 #elif CPU(SH4) 230 #else 231 UNREACHABLE_FOR_PLATFORM(); 232 #endif 233 return result; 234 } 235 236 RegisterSet RegisterSet::ftlCalleeSaveRegisters() 237 { 238 RegisterSet result; 239 #if ENABLE(FTL_JIT) 240 #if CPU(X86_64) && !OS(WINDOWS) 241 result.set(GPRInfo::regCS0); 242 result.set(GPRInfo::regCS1); 243 result.set(GPRInfo::regCS2); 244 ASSERT(GPRInfo::regCS3 == GPRInfo::tagTypeNumberRegister); 245 ASSERT(GPRInfo::regCS4 == GPRInfo::tagMaskRegister); 246 result.set(GPRInfo::regCS3); 247 result.set(GPRInfo::regCS4); 248 #elif CPU(ARM64) 249 // LLVM might save and use all ARM64 callee saves specified in the ABI. 
250 result.set(GPRInfo::regCS0); 251 result.set(GPRInfo::regCS1); 252 result.set(GPRInfo::regCS2); 253 result.set(GPRInfo::regCS3); 254 result.set(GPRInfo::regCS4); 255 result.set(GPRInfo::regCS5); 256 result.set(GPRInfo::regCS6); 257 result.set(GPRInfo::regCS7); 258 ASSERT(GPRInfo::regCS8 == GPRInfo::tagTypeNumberRegister); 259 ASSERT(GPRInfo::regCS9 == GPRInfo::tagMaskRegister); 260 result.set(GPRInfo::regCS8); 261 result.set(GPRInfo::regCS9); 262 result.set(FPRInfo::fpRegCS0); 263 result.set(FPRInfo::fpRegCS1); 264 result.set(FPRInfo::fpRegCS2); 265 result.set(FPRInfo::fpRegCS3); 266 result.set(FPRInfo::fpRegCS4); 267 result.set(FPRInfo::fpRegCS5); 268 result.set(FPRInfo::fpRegCS6); 269 result.set(FPRInfo::fpRegCS7); 270 #else 271 UNREACHABLE_FOR_PLATFORM(); 272 #endif 273 #endif 274 return result; 275 } 276 125 277 RegisterSet RegisterSet::allGPRs() 126 278 { -
trunk/Source/JavaScriptCore/jit/RegisterSet.h
r166463 r189575 51 51 static RegisterSet specialRegisters(); // The union of stack, reserved hardware, and runtime registers. 52 52 static RegisterSet calleeSaveRegisters(); 53 static RegisterSet vmCalleeSaveRegisters(); // Callee save registers that might be saved and used by any tier. 54 static RegisterSet llintBaselineCalleeSaveRegisters(); // Registers saved and used by the LLInt. 55 static RegisterSet dfgCalleeSaveRegisters(); // Registers saved and used by the DFG JIT. 56 static RegisterSet ftlCalleeSaveRegisters(); // Registers that might be saved and used by the FTL JIT. 57 static RegisterSet stubUnavailableRegisters(); // The union of callee saves and special registers. 53 58 static RegisterSet allGPRs(); 54 59 static RegisterSet allFPRs(); -
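The new `stubUnavailableRegisters()` declared above is documented as "the union of callee saves and special registers": an inline cache stub may neither clobber special registers nor callee saves the caller has not preserved. A minimal sketch of that set composition, using a plain `std::bitset` instead of JSC's real `RegisterSet` class (the register indices below are made up for illustration):

```cpp
#include <bitset>

// Hypothetical flat register numbering; JSC's RegisterSet tracks GPR and FPR banks.
using RegSet = std::bitset<32>;

RegSet specialRegisters()      { RegSet s; s.set(0); s.set(1); return s; }            // e.g. sp, fp
RegSet vmCalleeSaveRegisters() { RegSet s; s.set(12); s.set(13); s.set(14); return s; } // e.g. csr0..csr2

// Mirrors the idea behind RegisterSet::stubUnavailableRegisters():
// the union of the two sets above.
RegSet stubUnavailableRegisters() { return specialRegisters() | vmCalleeSaveRegisters(); }
```

With these toy indices the union holds five registers, none of which a stub may allocate as a temporary.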
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r189496 r189575 556 556 stubJit.setupResults(valueRegs); 557 557 MacroAssembler::Jump noException = stubJit.emitExceptionCheck(CCallHelpers::InvertedExceptionCheck); 558 558 559 stubJit.copyCalleeSavesToVMCalleeSavesBuffer(); 560 559 561 stubJit.setupArguments(CCallHelpers::TrustedImmPtr(vm), GPRInfo::callFrameRegister); 560 562 handlerCall = stubJit.call(); -
trunk/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
r189272 r189575 45 45 { 46 46 emitFunctionPrologue(); 47 emitSaveThenMaterializeTagRegisters(); 47 48 // Check that we have the expected number of arguments 48 49 m_failures.append(branch32(NotEqual, payloadFor(JSStack::ArgumentCount), TrustedImm32(expectedArgCount + 1))); … … 53 54 { 54 55 emitFunctionPrologue(); 56 emitSaveThenMaterializeTagRegisters(); 55 57 } 56 58 … … 106 108 if (src != regT0) 107 109 move(src, regT0); 110 111 emitRestoreSavedTagRegisters(); 108 112 emitFunctionEpilogue(); 109 113 ret(); … … 114 118 ASSERT_UNUSED(payload, payload == regT0); 115 119 ASSERT_UNUSED(tag, tag == regT1); 120 emitRestoreSavedTagRegisters(); 116 121 emitFunctionEpilogue(); 117 122 ret(); … … 138 143 highNonZero.link(this); 139 144 #endif 145 emitRestoreSavedTagRegisters(); 140 146 emitFunctionEpilogue(); 141 147 ret(); … … 147 153 move(src, regT0); 148 154 tagReturnAsInt32(); 155 emitRestoreSavedTagRegisters(); 149 156 emitFunctionEpilogue(); 150 157 ret(); … … 156 163 move(src, regT0); 157 164 tagReturnAsJSCell(); 165 emitRestoreSavedTagRegisters(); 158 166 emitFunctionEpilogue(); 159 167 ret(); … … 186 194 187 195 private: 188 196 void emitSaveThenMaterializeTagRegisters() 197 { 198 #if USE(JSVALUE64) 199 #if CPU(ARM64) 200 pushPair(tagTypeNumberRegister, tagMaskRegister); 201 #else 202 push(tagTypeNumberRegister); 203 push(tagMaskRegister); 204 #endif 205 emitMaterializeTagCheckRegisters(); 206 #endif 207 } 208 209 void emitRestoreSavedTagRegisters() 210 { 211 #if USE(JSVALUE64) 212 #if CPU(ARM64) 213 popPair(tagTypeNumberRegister, tagMaskRegister); 214 #else 215 pop(tagMaskRegister); 216 pop(tagTypeNumberRegister); 217 #endif 218 #endif 219 } 220 189 221 void tagReturnAsInt32() 190 222 { -
trunk/Source/JavaScriptCore/jit/TempRegisterSet.h
r165021 r189575 116 116 } 117 117 118 // Return the index'th free FPR. 119 FPRReg getFreeFPR(unsigned index = 0) const 120 { 121 for (unsigned i = FPRInfo::numberOfRegisters; i--;) { 122 if (!getFPRByIndex(i) && !index--) 123 return FPRInfo::toRegister(i); 124 } 125 return InvalidFPRReg; 126 } 127 118 128 template<typename BankInfo> 119 129 void setByIndex(unsigned index) -
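The `getFreeFPR(index)` helper added above walks the register bank from the highest index downward and returns the index'th register not currently marked in use. The same scan over a plain boolean occupancy array (names and the `-1` sentinel standing in for `InvalidFPRReg` are illustrative, not JSC's):

```cpp
// Return the index'th free register, scanning from high to low,
// or -1 when fewer than index+1 registers are free.
int getFreeReg(const bool used[], unsigned count, unsigned index = 0)
{
    for (unsigned i = count; i--;) {
        // Only free slots consume the index; used slots are skipped entirely.
        if (!used[i] && !index--)
            return static_cast<int>(i);
    }
    return -1; // analogous to returning InvalidFPRReg
}

// Example occupancy: registers 1 and 4 are in use; 0, 2, 3, 5 are free.
const bool kUsed[6] = { false, true, false, false, true, false };
```

Note the `!index--` trick from the original: the decrement happens on every free slot, so the test succeeds exactly on the index'th free register encountered.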
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r189398 r189575 67 67 jit.preserveReturnAddressAfterCall(GPRInfo::nonPreservedNonReturnGPR); 68 68 69 jit.copyCalleeSavesToVMCalleeSavesBuffer(); 70 69 71 jit.setupArguments(CCallHelpers::TrustedImmPtr(vm), GPRInfo::callFrameRegister); 70 72 jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(lookupExceptionHandler)), GPRInfo::nonArgGPR0); … … 210 212 if (entryType == EnterViaCall) 211 213 jit.emitFunctionPrologue(); 214 #if USE(JSVALUE64) 215 else if (entryType == EnterViaJump) { 216 // We're coming from a specialized thunk that has saved the prior tag registers' contents. 217 // Restore them now. 218 #if CPU(ARM64) 219 jit.popPair(JSInterfaceJIT::tagTypeNumberRegister, JSInterfaceJIT::tagMaskRegister); 220 #else 221 jit.pop(JSInterfaceJIT::tagMaskRegister); 222 jit.pop(JSInterfaceJIT::tagTypeNumberRegister); 223 #endif 224 } 225 #endif 212 226 213 227 jit.emitPutImmediateToCallFrameHeader(0, JSStack::CodeBlock); … … 307 321 exceptionHandler.link(&jit); 308 322 323 jit.copyCalleeSavesToVMCalleeSavesBuffer(); 309 324 jit.storePtr(JSInterfaceJIT::callFrameRegister, &vm->topCallFrame); 310 325 … … 392 407 jit.addPtr(extraTemp, JSInterfaceJIT::stackPointerRegister); 393 408 394 // Save the original return PC.395 jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()), extraTemp);396 jit.storePtr(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));397 398 // Install the new return PC.399 jit.storePtr(GPRInfo::argumentGPR1, JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()));400 401 409 # if CPU(X86_64) 402 410 jit.push(JSInterfaceJIT::regT4); … … 440 448 jit.addPtr(JSInterfaceJIT::regT5, JSInterfaceJIT::stackPointerRegister); 441 449 442 // Save the original return PC.443 jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT5);444 jit.storePtr(GPRInfo::regT5, 
MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));445 446 // Install the new return PC.447 jit.storePtr(GPRInfo::argumentGPR1, JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrame::returnPCOffset()));448 449 450 # if CPU(X86) 450 451 jit.push(JSInterfaceJIT::regT4); -
trunk/Source/JavaScriptCore/llint/LLIntData.cpp
r189339 r189575 27 27 #include "LLIntData.h" 28 28 #include "BytecodeConventions.h" 29 #include "CodeBlock.h" 29 30 #include "CodeType.h" 30 31 #include "Instruction.h" … … 132 133 ASSERT(maxFrameExtentForSlowPathCall == 64); 133 134 #endif 135 136 #if !ENABLE(JIT) || USE(JSVALUE32_64) 137 ASSERT(!CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters()); 138 #elif (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64) 139 ASSERT(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() == 3); 140 #elif (CPU(X86_64) && OS(WINDOWS)) 141 ASSERT(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() == 3); 142 #endif 143 134 144 ASSERT(StringType == 6); 135 145 ASSERT(ObjectType == 21); -
trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
r189504 r189575 486 486 vm.topCallFrame = exec; 487 487 ErrorHandlingScope errorScope(vm); 488 CommonSlowPaths::interpreterThrowInCaller(exec, createStackOverflowError(exec));489 pc = returnToThrowForThrownException(exec);488 vm.throwException(exec, createStackOverflowError(exec)); 489 pc = returnToThrow(exec); 490 490 LLINT_RETURN_TWO(pc, exec); 491 491 }
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r189504 r189575 108 108 # a1; but t0 and t2 can be either a0 or a2. 109 109 # 110 # - On 64 bits, csr0, csr1, csr2 and optionally csr3, csr4, csr5 and csr6 111 # are available as callee-save registers. 112 # csr0 is used to store the PC base, while the last two csr registers are used 113 # to store special tag values. Don't use them for anything else. 110 # - On 64 bits, there are callee-save registers named csr0, csr1, ... csrN. 111 # The last three csr registers are used to store the PC base and 112 # two special tag values. Don't use them for anything else. 114 113 # 115 114 # Additional platform-specific details (you shouldn't rely on this remaining … 219 218 end 220 219 220 if X86_64 or X86_64_WIN or ARM64 221 const CalleeSaveSpaceAsVirtualRegisters = 3 222 else 223 const CalleeSaveSpaceAsVirtualRegisters = 0 224 end 225 226 const CalleeSaveSpaceStackAligned = (CalleeSaveSpaceAsVirtualRegisters * SlotSize + StackAlignment - 1) & ~StackAlignmentMask 227 228 221 229 # Watchpoint states 222 230 const ClearWatchpoint = 0 … 232 240 # This requires an add before the call, and a sub after. 
233 241 const PC = t4 234 const PB = csr0235 242 if ARM64 236 const tagTypeNumber = csr1 237 const tagMask = csr2 243 const PB = csr7 244 const tagTypeNumber = csr8 245 const tagMask = csr9 238 246 elsif X86_64 247 const PB = csr2 239 248 const tagTypeNumber = csr3 240 249 const tagMask = csr4 241 250 elsif X86_64_WIN 251 const PB = csr4 242 252 const tagTypeNumber = csr5 243 253 const tagMask = csr6 244 254 elsif C_LOOP 255 const PB = csr0 245 256 const tagTypeNumber = csr1 246 257 const tagMask = csr2 … … 399 410 end 400 411 401 if C_LOOP 412 if C_LOOP or ARM64 or X86_64 or X86_64_WIN 402 413 const CalleeSaveRegisterCount = 0 403 414 elsif ARM or ARMv7_TRADITIONAL or ARMv7 404 415 const CalleeSaveRegisterCount = 7 405 elsif ARM64 406 const CalleeSaveRegisterCount = 10 407 elsif SH4 or X86_64 or MIPS 416 elsif SH4 or MIPS 408 417 const CalleeSaveRegisterCount = 5 409 418 elsif X86 or X86_WIN 410 419 const CalleeSaveRegisterCount = 3 411 elsif X86_64_WIN412 const CalleeSaveRegisterCount = 7413 420 end 414 421 … … 420 427 421 428 macro pushCalleeSaves() 422 if C_LOOP 429 if C_LOOP or ARM64 or X86_64 or X86_64_WIN 423 430 elsif ARM or ARMv7_TRADITIONAL 424 431 emit "push {r4-r10}" 425 432 elsif ARMv7 426 433 emit "push {r4-r6, r8-r11}" 427 elsif ARM64428 emit "stp x20, x19, [sp, #-16]!"429 emit "stp x22, x21, [sp, #-16]!"430 emit "stp x24, x23, [sp, #-16]!"431 emit "stp x26, x25, [sp, #-16]!"432 emit "stp x28, x27, [sp, #-16]!"433 434 elsif MIPS 434 435 emit "addiu $sp, $sp, -20" … … 452 453 emit "push edi" 453 454 emit "push ebx" 454 elsif X86_64455 emit "push %r12"456 emit "push %r13"457 emit "push %r14"458 emit "push %r15"459 emit "push %rbx"460 elsif X86_64_WIN461 emit "push r12"462 emit "push r13"463 emit "push r14"464 emit "push r15"465 emit "push rbx"466 emit "push rdi"467 emit "push rsi"468 455 end 469 456 end 470 457 471 458 macro popCalleeSaves() 472 if C_LOOP 459 if C_LOOP or ARM64 or X86_64 or X86_64_WIN 473 460 elsif ARM or ARMv7_TRADITIONAL 474 461 emit 
"pop {r4-r10}" 475 462 elsif ARMv7 476 463 emit "pop {r4-r6, r8-r11}" 477 elsif ARM64478 emit "ldp x28, x27, [sp], #16"479 emit "ldp x26, x25, [sp], #16"480 emit "ldp x24, x23, [sp], #16"481 emit "ldp x22, x21, [sp], #16"482 emit "ldp x20, x19, [sp], #16"483 464 elsif MIPS 484 465 emit "lw $16, 0($sp)" … … 502 483 emit "pop edi" 503 484 emit "pop esi" 504 elsif X86_64505 emit "pop %rbx"506 emit "pop %r15"507 emit "pop %r14"508 emit "pop %r13"509 emit "pop %r12"510 elsif X86_64_WIN511 emit "pop rsi"512 emit "pop rdi"513 emit "pop rbx"514 emit "pop r15"515 emit "pop r14"516 emit "pop r13"517 emit "pop r12"518 485 end 519 486 end … … 545 512 end 546 513 514 macro preserveCalleeSavesUsedByLLInt() 515 subp CalleeSaveSpaceStackAligned, sp 516 if C_LOOP 517 elsif ARM or ARMv7_TRADITIONAL 518 elsif ARMv7 519 elsif ARM64 520 emit "stp x27, x28, [fp, #-16]" 521 emit "stp xzr, x26, [fp, #-32]" 522 elsif MIPS 523 elsif SH4 524 elsif X86 525 elsif X86_WIN 526 elsif X86_64 527 storep csr4, -8[cfr] 528 storep csr3, -16[cfr] 529 storep csr2, -24[cfr] 530 elsif X86_64_WIN 531 storep csr6, -8[cfr] 532 storep csr5, -16[cfr] 533 storep csr4, -24[cfr] 534 end 535 end 536 537 macro restoreCalleeSavesUsedByLLInt() 538 if C_LOOP 539 elsif ARM or ARMv7_TRADITIONAL 540 elsif ARMv7 541 elsif ARM64 542 emit "ldp xzr, x26, [fp, #-32]" 543 emit "ldp x27, x28, [fp, #-16]" 544 elsif MIPS 545 elsif SH4 546 elsif X86 547 elsif X86_WIN 548 elsif X86_64 549 loadp -24[cfr], csr2 550 loadp -16[cfr], csr3 551 loadp -8[cfr], csr4 552 elsif X86_64_WIN 553 loadp -24[cfr], csr4 554 loadp -16[cfr], csr5 555 loadp -8[cfr], csr6 556 end 557 end 558 559 macro copyCalleeSavesToVMCalleeSavesBuffer(vm, temp) 560 if ARM64 or X86_64 or X86_64_WIN 561 leap VM::calleeSaveRegistersBuffer[vm], temp 562 if ARM64 563 storep csr0, [temp] 564 storep csr1, 8[temp] 565 storep csr2, 16[temp] 566 storep csr3, 24[temp] 567 storep csr4, 32[temp] 568 storep csr5, 40[temp] 569 storep csr6, 48[temp] 570 storep csr7, 56[temp] 571 
storep csr8, 64[temp] 572 storep csr9, 72[temp] 573 stored csfr0, 80[temp] 574 stored csfr1, 88[temp] 575 stored csfr2, 96[temp] 576 stored csfr3, 104[temp] 577 stored csfr4, 112[temp] 578 stored csfr5, 120[temp] 579 stored csfr6, 128[temp] 580 stored csfr7, 136[temp] 581 elsif X86_64 582 storep csr0, [temp] 583 storep csr1, 8[temp] 584 storep csr2, 16[temp] 585 storep csr3, 24[temp] 586 storep csr4, 32[temp] 587 elsif X86_64_WIN 588 storep csr0, [temp] 589 storep csr1, 8[temp] 590 storep csr2, 16[temp] 591 storep csr3, 24[temp] 592 storep csr4, 32[temp] 593 storep csr5, 40[temp] 594 storep csr6, 48[temp] 595 end 596 end 597 end 598 599 macro restoreCalleeSavesFromVMCalleeSavesBuffer(vm, temp) 600 if ARM64 or X86_64 or X86_64_WIN 601 leap VM::calleeSaveRegistersBuffer[vm], temp 602 if ARM64 603 loadp [temp], csr0 604 loadp 8[temp], csr1 605 loadp 16[temp], csr2 606 loadp 24[temp], csr3 607 loadp 32[temp], csr4 608 loadp 40[temp], csr5 609 loadp 48[temp], csr6 610 loadp 56[temp], csr7 611 loadp 64[temp], csr8 612 loadp 72[temp], csr9 613 loadd 80[temp], csfr0 614 loadd 88[temp], csfr1 615 loadd 96[temp], csfr2 616 loadd 104[temp], csfr3 617 loadd 112[temp], csfr4 618 loadd 120[temp], csfr5 619 loadd 128[temp], csfr6 620 loadd 136[temp], csfr7 621 elsif X86_64 622 loadp [temp], csr0 623 loadp 8[temp], csr1 624 loadp 16[temp], csr2 625 loadp 24[temp], csr3 626 loadp 32[temp], csr4 627 elsif X86_64_WIN 628 loadp [temp], csr0 629 loadp 8[temp], csr1 630 loadp 16[temp], csr2 631 loadp 24[temp], csr3 632 loadp 32[temp], csr4 633 loadp 40[temp], csr5 634 loadp 48[temp], csr6 635 end 636 end 637 end 638 547 639 macro preserveReturnAddressAfterCall(destinationRegister) 548 640 if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4 … … 551 643 elsif X86 or X86_WIN or X86_64 or X86_64_WIN 552 644 pop destinationRegister 553 else554 error555 end556 end557 558 macro restoreReturnAddressBeforeReturn(sourceRegister)559 if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL 
or ARM64 or MIPS or SH4560 # In C_LOOP case, we're only restoring the bytecode vPC.561 move sourceRegister, lr562 elsif X86 or X86_WIN or X86_64 or X86_64_WIN563 push sourceRegister564 645 else 565 646 error … … 764 845 codeBlockSetter(t1) 765 846 847 preserveCalleeSavesUsedByLLInt() 848 766 849 # Set up the PC. 767 850 if JSVALUE64 … … 779 862 780 863 # Stack height check failed - need to call a slow_path. 781 subp maxFrameExtentForSlowPathCall, sp # Set up temporary stack pointer for call 864 # Set up temporary stack pointer for call including callee saves 865 subp maxFrameExtentForSlowPathCall, sp 782 866 callSlowPath(_llint_stack_check) 783 867 bpeq r1, 0, .stackHeightOKGetCodeBlock … … 794 878 .stackHeightOK: 795 879 move t0, sp 880 881 if JSVALUE64 882 move TagTypeNumber, tagTypeNumber 883 addp TagBitTypeOther, tagTypeNumber, tagMask 884 end 796 885 end 797 886 … … 849 938 850 939 macro doReturn() 940 restoreCalleeSavesUsedByLLInt() 851 941 restoreCallerPCAndCFR() 852 942 ret -
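The `CalleeSaveSpaceStackAligned` constant introduced in this file rounds the callee-save area up to the platform's stack alignment with the standard `(n + align - 1) & ~mask` trick. A small sketch of the same computation, assuming `SlotSize = 8` and `StackAlignment = 16` (the usual 64-bit LLInt values; adjust for other configurations):

```cpp
#include <cstddef>

constexpr std::size_t slotSize = 8;           // assumed bytes per VirtualRegister on 64-bit
constexpr std::size_t stackAlignment = 16;    // assumed required stack alignment
constexpr std::size_t stackAlignmentMask = stackAlignment - 1;

// Mirrors: (CalleeSaveSpaceAsVirtualRegisters * SlotSize + StackAlignment - 1) & ~StackAlignmentMask
constexpr std::size_t calleeSaveSpaceStackAligned(std::size_t virtualRegisters)
{
    return (virtualRegisters * slotSize + stackAlignment - 1) & ~stackAlignmentMask;
}
```

So the three virtual registers reserved on x86-64 and ARM64 occupy 24 bytes, which rounds up to a 32-byte stack-aligned region, while zero registers reserve no space.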
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
r189454 r189575 303 303 andp MarkedBlockMask, t3 304 304 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3 305 restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0) 305 306 loadp VM::callFrameForThrow[t3], cfr 306 307 … … 592 593 593 594 loadp CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], a0 594 loadp CommonSlowPaths::ArityCheckData::returnPC[r1], a1595 595 call t3 596 596 if ASSERT_ENABLED … … 1879 1879 andp MarkedBlockMask, t3 1880 1880 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3 1881 restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0) 1881 1882 loadp VM::callFrameForThrow[t3], cfr 1882 1883 restoreStackPointerAfterCall() … … 1917 1918 andp MarkedBlockMask, t1 1918 1919 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1 1920 copyCalleeSavesToVMCalleeSavesBuffer(t1, t2) 1919 1921 jmp VM::targetMachinePCForThrow[t1] 1920 1922 -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r189454 r189575 216 216 storep cfr, VM::topVMEntryFrame[vm] 217 217 218 move TagTypeNumber, tagTypeNumber219 addp TagBitTypeOther, tagTypeNumber, tagMask220 221 218 checkStackPointerAlignment(extraTempReg, 0xbad0dc02) 222 219 … … 278 275 andp MarkedBlockMask, t3 279 276 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3 277 restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0) 280 278 loadp VM::callFrameForThrow[t3], cfr 281 279 … … 510 508 511 509 .noError: 512 # r1 points to ArityCheckData.513 loadp CommonSlowPaths::ArityCheckData::thunkToCall[r1], t3514 btpz t3, .proceedInline515 516 loadp CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], a0517 loadp CommonSlowPaths::ArityCheckData::returnPC[r1], a1518 call t3519 if ASSERT_ENABLED520 loadp ReturnPC[cfr], t0521 loadp [t0], t0522 end523 jmp .continue524 525 .proceedInline:526 510 loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1 527 511 btiz t1, .continue … … 531 515 negq t1 532 516 move cfr, t3 517 subp CalleeSaveSpaceAsVirtualRegisters * 8, t3 533 518 loadi PayloadOffset + ArgumentCount[cfr], t2 534 addi CallFrameHeaderSlots , t2519 addi CallFrameHeaderSlots + CalleeSaveSpaceAsVirtualRegisters, t2 535 520 .copyLoop: 536 521 loadq [t3], t0 … … 575 560 loadp CodeBlock[cfr], t2 // t2<CodeBlock> = cfr.CodeBlock 576 561 loadi CodeBlock::m_numVars[t2], t2 // t2<size_t> = t2<CodeBlock>.m_numVars 562 subq CalleeSaveSpaceAsVirtualRegisters, t2 563 move cfr, t1 564 subq CalleeSaveSpaceAsVirtualRegisters * 8, t1 577 565 btiz t2, .opEnterDone 578 566 move ValueUndefined, t0 … … 580 568 sxi2q t2, t2 581 569 .opEnterLoop: 582 storeq t0, [ cfr, t2, 8]570 storeq t0, [t1, t2, 8] 583 571 addq 1, t2 584 572 btqnz t2, .opEnterLoop … … 1762 1750 1763 1751 _llint_op_catch: 1764 # Gotta restore the tag registers. 
We could be throwing from FTL, which may1765 # clobber them.1766 move TagTypeNumber, tagTypeNumber1767 move TagMask, tagMask1768 1769 1752 # This is where we end up from the JIT's throw trampoline (because the 1770 1753 # machine code return address will be set to _llint_op_catch), and from … … 1775 1758 andp MarkedBlockMask, t3 1776 1759 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3 1760 restoreCalleeSavesFromVMCalleeSavesBuffer(t3, t0) 1777 1761 loadp VM::callFrameForThrow[t3], cfr 1778 1762 restoreStackPointerAfterCall() … … 1807 1791 1808 1792 _llint_throw_from_slow_path_trampoline: 1793 loadp Callee[cfr], t1 1794 andp MarkedBlockMask, t1 1795 loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t1], t1 1796 copyCalleeSavesToVMCalleeSavesBuffer(t1, t2) 1797 1809 1798 callSlowPath(_llint_slow_path_handle_exception) 1810 1799 -
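The throw and catch paths above bracket the unwind described in the ChangeLog: the thrower copies its live callee-save registers into the per-VM `calleeSaveRegistersBuffer`, and `op_catch` restores from that buffer. A toy model of the save/restore round trip (the struct, field sizes, and register count are illustrative, not JSC's actual layout):

```cpp
#include <cstdint>
#include <cstring>

constexpr int kNumCalleeSaves = 3; // e.g. the LLInt/baseline count on x86-64

struct ToyVM {
    std::intptr_t calleeSaveRegistersBuffer[kNumCalleeSaves];
};

// Throw path: snapshot the live "register" values into the VM-wide buffer.
void copyCalleeSavesToBuffer(ToyVM& vm, const std::intptr_t regs[kNumCalleeSaves])
{
    std::memcpy(vm.calleeSaveRegistersBuffer, regs, sizeof(vm.calleeSaveRegistersBuffer));
}

// Catch path: reload the registers from the buffer before resuming.
void restoreCalleeSavesFromBuffer(const ToyVM& vm, std::intptr_t regs[kNumCalleeSaves])
{
    std::memcpy(regs, vm.calleeSaveRegistersBuffer, sizeof(vm.calleeSaveRegistersBuffer));
}

// Round-trip check: values survive a save followed by a restore.
bool roundTripPreservesValues()
{
    ToyVM vm;
    std::intptr_t before[kNumCalleeSaves] = { 0x1111, 0x2222, 0x3333 };
    std::intptr_t after[kNumCalleeSaves] = { 0, 0, 0 };
    copyCalleeSavesToBuffer(vm, before);
    restoreCalleeSavesFromBuffer(vm, after);
    return std::memcmp(before, after, sizeof(before)) == 0;
}
```

In the real patch the same buffer is also filled from stack-frame slots as unwinding walks intermediate frames, so the catch handler always sees the most recent saved values.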
trunk/Source/JavaScriptCore/offlineasm/arm64.rb
r189293 r189575 62 62 # q4 => ft4 (unused in baseline) 63 63 # q5 => ft5 (unused in baseline) 64 # q8 => csfr0 (Only the lower 64 bits) 65 # q9 => csfr1 (Only the lower 64 bits) 66 # q10 => csfr2 (Only the lower 64 bits) 67 # q11 => csfr3 (Only the lower 64 bits) 68 # q12 => csfr4 (Only the lower 64 bits) 69 # q13 => csfr5 (Only the lower 64 bits) 70 # q14 => csfr6 (Only the lower 64 bits) 71 # q15 => csfr7 (Only the lower 64 bits) 64 72 # q31 => scratch 65 73 … … 117 125 arm64GPRName('x29', kind) 118 126 when 'csr0' 127 arm64GPRName('x19', kind) 128 when 'csr1' 129 arm64GPRName('x20', kind) 130 when 'csr2' 131 arm64GPRName('x21', kind) 132 when 'csr3' 133 arm64GPRName('x22', kind) 134 when 'csr4' 135 arm64GPRName('x23', kind) 136 when 'csr5' 137 arm64GPRName('x24', kind) 138 when 'csr6' 139 arm64GPRName('x25', kind) 140 when 'csr7' 119 141 arm64GPRName('x26', kind) 120 when 'csr 1'142 when 'csr8' 121 143 arm64GPRName('x27', kind) 122 when 'csr 2'144 when 'csr9' 123 145 arm64GPRName('x28', kind) 124 146 when 'sp' … … 147 169 when 'ft5' 148 170 arm64FPRName('q5', kind) 171 when 'csfr0' 172 arm64FPRName('q8', kind) 173 when 'csfr1' 174 arm64FPRName('q9', kind) 175 when 'csfr2' 176 arm64FPRName('q10', kind) 177 when 'csfr3' 178 arm64FPRName('q11', kind) 179 when 'csfr4' 180 arm64FPRName('q12', kind) 181 when 'csfr5' 182 arm64FPRName('q13', kind) 183 when 'csfr6' 184 arm64FPRName('q14', kind) 185 when 'csfr7' 186 arm64FPRName('q15', kind) 149 187 else "Bad register name #{@name} at #{codeOriginString}" 150 188 end -
trunk/Source/JavaScriptCore/offlineasm/registers.rb
r189293 r189575 49 49 "csr4", 50 50 "csr5", 51 "csr6" 51 "csr6", 52 "csr7", 53 "csr8", 54 "csr9" 52 55 ] 53 56 … … 64 67 "fa2", 65 68 "fa3", 69 "csfr0", 70 "csfr1", 71 "csfr2", 72 "csfr3", 73 "csfr4", 74 "csfr5", 75 "csfr6", 76 "csfr7", 66 77 "fr" 67 78 ] -
trunk/Source/JavaScriptCore/offlineasm/x86.rb
r189293 r189575 291 291 "ebx" 292 292 when "csr1" 293 "r12"293 isWin ? "esi" : "r12" 294 294 when "csr2" 295 "r13"295 isWin ? "edi" : "r13" 296 296 when "csr3" 297 isWin ? " esi" : "r14"297 isWin ? "r12" : "r14" 298 298 when "csr4" 299 isWin ? "edi" : "r15" 300 "r15" 299 isWin ? "r13" : "r15" 301 300 when "csr5" 302 301 raise "cannot use register #{name} on X86-64" unless isWin -
trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
r189339 r189575 26 26 #include "config.h" 27 27 #include "CommonSlowPaths.h" 28 #include "ArityCheckFailReturnThunks.h"29 28 #include "ArrayConstructor.h" 30 29 #include "CallFrame.h" … … 167 166 result->paddedStackSpace = slotsToAdd; 168 167 #if ENABLE(JIT) 169 if (vm.canUseJIT()) {168 if (vm.canUseJIT()) 170 169 result->thunkToCall = vm.getCTIStub(arityFixupGenerator).code().executableAddress(); 171 result->returnPC = vm.arityCheckFailReturnThunks->returnPCFor(vm, slotsToAdd * stackAlignmentRegisters()).executableAddress(); 172 } else 170 else 173 171 #endif 174 {175 172 result->thunkToCall = 0; 176 result->returnPC = 0;177 }178 173 return result; 179 174 } -
trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.h
r189501 r189575 50 50 unsigned paddedStackSpace; 51 51 void* thunkToCall; 52 void* returnPC;53 52 }; 54 53 -
trunk/Source/JavaScriptCore/runtime/VM.cpp
r189201 r189575 31 31 32 32 #include "ArgList.h" 33 #include "ArityCheckFailReturnThunks.h"34 33 #include "ArrayBufferNeuteringWatchpoint.h" 35 34 #include "BuiltinExecutables.h" … … 78 77 #include "RegExpCache.h" 79 78 #include "RegExpObject.h" 79 #include "RegisterAtOffsetList.h" 80 80 #include "RuntimeType.h" 81 81 #include "SimpleTypedArrayController.h" … … 254 254 #if ENABLE(JIT) 255 255 jitStubs = std::make_unique<JITThunks>(); 256 a rityCheckFailReturnThunks = std::make_unique<ArityCheckFailReturnThunks>();256 allCalleeSaveRegisterOffsets = std::make_unique<RegisterAtOffsetList>(RegisterSet::vmCalleeSaveRegisters(), RegisterAtOffsetList::ZeroBased); 257 257 #endif 258 258 arityCheckData = std::make_unique<CommonSlowPaths::ArityCheckData>(); -
trunk/Source/JavaScriptCore/runtime/VM.h
r189454 r189575 34 34 #include "ExecutableAllocator.h" 35 35 #include "FunctionHasExecutedCache.h" 36 #if ENABLE(JIT) 37 #include "GPRInfo.h" 38 #endif 36 39 #include "Heap.h" 37 40 #include "Intrinsic.h" … … 73 76 namespace JSC { 74 77 75 class ArityCheckFailReturnThunks;76 78 class BuiltinExecutables; 77 79 class CodeBlock; … … 91 93 class NativeExecutable; 92 94 class RegExpCache; 95 class RegisterAtOffsetList; 93 96 class ScriptExecutable; 94 97 class SourceProvider; … … 358 361 Interpreter* interpreter; 359 362 #if ENABLE(JIT) 363 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 364 intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS]; 365 366 static ptrdiff_t calleeSaveRegistersBufferOffset() 367 { 368 return OBJECT_OFFSETOF(VM, calleeSaveRegistersBuffer); 369 } 370 #endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 371 360 372 std::unique_ptr<JITThunks> jitStubs; 361 373 MacroAssemblerCodeRef getCTIStub(ThunkGenerator generator) … … 364 376 } 365 377 NativeExecutable* getHostFunction(NativeFunction, Intrinsic); 366 367 std::unique_ptr<ArityCheckFailReturnThunks> arityCheckFailReturnThunks; 378 379 std::unique_ptr<RegisterAtOffsetList> allCalleeSaveRegisterOffsets; 380 381 RegisterAtOffsetList* getAllCalleeSaveRegisterOffsets() { return allCalleeSaveRegisterOffsets.get(); } 382 368 383 #endif // ENABLE(JIT) 369 384 std::unique_ptr<CommonSlowPaths::ArityCheckData> arityCheckData;