Changeset 189884 in webkit

Timestamp:
    Sep 16, 2015, 4:40:35 PM (9 years ago)
Location:
    trunk/Source/JavaScriptCore
Files:
    43 edited (diff against r189848)
trunk/Source/JavaScriptCore/CMakeLists.txt
        bytecode/CallLinkInfo.cpp
        bytecode/CallLinkStatus.cpp
+       bytecode/CallMode.cpp
        bytecode/CallVariant.cpp
        bytecode/CodeBlock.cpp
trunk/Source/JavaScriptCore/ChangeLog
2015-09-16  Michael Saboff  <msaboff@apple.com>

        [ES6] Implement tail calls in the LLInt and Baseline JIT
        https://bugs.webkit.org/show_bug.cgi?id=148661

        Fix for the breakage of Speedometer/Full.html (https://bugs.webkit.org/show_bug.cgi?id=149162).

        Reviewed by Filip Pizlo.

        Changed SetupVarargsFrame.cpp::emitSetVarargsFrame to align the callframe size to be a
        multiple of stackAlignmentRegisters() in addition to the location of the new frame.

        * CMakeLists.txt:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/AbortReason.h:
        * assembler/AbstractMacroAssembler.h:
        (JSC::AbstractMacroAssembler::Call::Call):
        (JSC::AbstractMacroAssembler::repatchNearCall):
        (JSC::AbstractMacroAssembler::repatchCompact):
        * assembler/CodeLocation.h:
        (JSC::CodeLocationNearCall::CodeLocationNearCall):
        (JSC::CodeLocationNearCall::callMode):
        (JSC::CodeLocationCommon::callAtOffset):
        (JSC::CodeLocationCommon::nearCallAtOffset):
        (JSC::CodeLocationCommon::dataLabelPtrAtOffset):
        * assembler/LinkBuffer.h:
        (JSC::LinkBuffer::locationOfNearCall):
        (JSC::LinkBuffer::locationOf):
        * assembler/MacroAssemblerARM.h:
        (JSC::MacroAssemblerARM::nearCall):
        (JSC::MacroAssemblerARM::nearTailCall):
        (JSC::MacroAssemblerARM::call):
        (JSC::MacroAssemblerARM::linkCall):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::nearCall):
        (JSC::MacroAssemblerARM64::nearTailCall):
        (JSC::MacroAssemblerARM64::ret):
        (JSC::MacroAssemblerARM64::linkCall):
        * assembler/MacroAssemblerARMv7.h:
        (JSC::MacroAssemblerARMv7::nearCall):
        (JSC::MacroAssemblerARMv7::nearTailCall):
        (JSC::MacroAssemblerARMv7::call):
        (JSC::MacroAssemblerARMv7::linkCall):
        * assembler/MacroAssemblerMIPS.h:
        (JSC::MacroAssemblerMIPS::nearCall):
        (JSC::MacroAssemblerMIPS::nearTailCall):
        (JSC::MacroAssemblerMIPS::call):
        (JSC::MacroAssemblerMIPS::linkCall):
        (JSC::MacroAssemblerMIPS::repatchCall):
        * assembler/MacroAssemblerSH4.h:
        (JSC::MacroAssemblerSH4::call):
        (JSC::MacroAssemblerSH4::nearTailCall):
        (JSC::MacroAssemblerSH4::nearCall):
        (JSC::MacroAssemblerSH4::linkCall):
        (JSC::MacroAssemblerSH4::repatchCall):
        * assembler/MacroAssemblerX86.h:
        (JSC::MacroAssemblerX86::linkCall):
        * assembler/MacroAssemblerX86Common.h:
        (JSC::MacroAssemblerX86Common::breakpoint):
        (JSC::MacroAssemblerX86Common::nearTailCall):
        (JSC::MacroAssemblerX86Common::nearCall):
        * assembler/MacroAssemblerX86_64.h:
        (JSC::MacroAssemblerX86_64::linkCall):
        * bytecode/BytecodeList.json:
        * bytecode/BytecodeUseDef.h:
        (JSC::computeUsesForBytecodeOffset):
        (JSC::computeDefsForBytecodeOffset):
        * bytecode/CallLinkInfo.h:
        (JSC::CallLinkInfo::callTypeFor):
        (JSC::CallLinkInfo::isVarargsCallType):
        (JSC::CallLinkInfo::CallLinkInfo):
        (JSC::CallLinkInfo::specializationKind):
        (JSC::CallLinkInfo::callModeFor):
        (JSC::CallLinkInfo::callMode):
        (JSC::CallLinkInfo::isTailCall):
        (JSC::CallLinkInfo::isVarargs):
        (JSC::CallLinkInfo::registerPreservationMode):
        * bytecode/CallLinkStatus.cpp:
        (JSC::CallLinkStatus::computeFromLLInt):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::dumpBytecode):
        (JSC::CodeBlock::CodeBlock):
        * bytecompiler/BytecodeGenerator.cpp:
        (JSC::BytecodeGenerator::BytecodeGenerator):
        (JSC::BytecodeGenerator::emitCallInTailPosition):
        (JSC::BytecodeGenerator::emitCallEval):
        (JSC::BytecodeGenerator::emitCall):
        (JSC::BytecodeGenerator::emitCallVarargsInTailPosition):
        (JSC::BytecodeGenerator::emitConstructVarargs):
        * bytecompiler/NodesCodegen.cpp:
        (JSC::CallArguments::CallArguments):
        (JSC::LabelNode::emitBytecode):
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
        * ftl/FTLLowerDFGToLLVM.cpp:
        (JSC::FTL::DFG::LowerDFGToLLVM::compileCallOrConstruct):
        * interpreter/Interpreter.h:
        (JSC::Interpreter::isCallBytecode):
        (JSC::calleeFrameForVarargs):
        * jit/CCallHelpers.h:
        (JSC::CCallHelpers::jumpToExceptionHandler):
        (JSC::CCallHelpers::prepareForTailCallSlow):
        * jit/JIT.cpp:
        (JSC::JIT::privateCompileMainPass):
        (JSC::JIT::privateCompileSlowCases):
        * jit/JIT.h:
        * jit/JITCall.cpp:
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        (JSC::JIT::emit_op_call):
        (JSC::JIT::emit_op_tail_call):
        (JSC::JIT::emit_op_call_eval):
        (JSC::JIT::emit_op_call_varargs):
        (JSC::JIT::emit_op_tail_call_varargs):
        (JSC::JIT::emit_op_construct_varargs):
        (JSC::JIT::emitSlow_op_call):
        (JSC::JIT::emitSlow_op_tail_call):
        (JSC::JIT::emitSlow_op_call_eval):
        (JSC::JIT::emitSlow_op_call_varargs):
        (JSC::JIT::emitSlow_op_tail_call_varargs):
        (JSC::JIT::emitSlow_op_construct_varargs):
        * jit/JITCall32_64.cpp:
        (JSC::JIT::emitSlow_op_call):
        (JSC::JIT::emitSlow_op_tail_call):
        (JSC::JIT::emitSlow_op_call_eval):
        (JSC::JIT::emitSlow_op_call_varargs):
        (JSC::JIT::emitSlow_op_tail_call_varargs):
        (JSC::JIT::emitSlow_op_construct_varargs):
        (JSC::JIT::emit_op_call):
        (JSC::JIT::emit_op_tail_call):
        (JSC::JIT::emit_op_call_eval):
        (JSC::JIT::emit_op_call_varargs):
        (JSC::JIT::emit_op_tail_call_varargs):
        (JSC::JIT::emit_op_construct_varargs):
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        * jit/JITInlines.h:
        (JSC::JIT::emitNakedCall):
        (JSC::JIT::emitNakedTailCall):
        (JSC::JIT::updateTopCallFrame):
        * jit/JITOperations.cpp:
        * jit/JITOperations.h:
        * jit/Repatch.cpp:
        (JSC::linkVirtualFor):
        (JSC::linkPolymorphicCall):
        * jit/SetupVarargsFrame.cpp:
        (JSC::emitSetVarargsFrame):
        * jit/ThunkGenerators.cpp:
        (JSC::throwExceptionFromCallSlowPathGenerator):
        (JSC::slowPathFor):
        (JSC::linkCallThunkGenerator):
        (JSC::virtualThunkFor):
        (JSC::arityFixupGenerator):
        (JSC::unreachableGenerator):
        (JSC::baselineGetterReturnThunkGenerator):
        * jit/ThunkGenerators.h:
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        * runtime/CommonSlowPaths.h:
        (JSC::CommonSlowPaths::arityCheckFor):
        (JSC::CommonSlowPaths::opIn):

2015-09-15  Michael Saboff  <msaboff@apple.com>
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
        <ClCompile Include="..\bytecode\CallLinkInfo.cpp" />
        <ClCompile Include="..\bytecode\CallLinkStatus.cpp" />
+       <ClCompile Include="..\bytecode\CallMode.cpp" />
        <ClCompile Include="..\bytecode\CallVariant.cpp" />
        <ClCompile Include="..\bytecode\CodeBlock.cpp" />
…
        <ClInclude Include="..\bytecode\CallLinkInfo.h" />
        <ClInclude Include="..\bytecode\CallLinkStatus.h" />
+       <ClInclude Include="..\bytecode\CallMode.h" />
        <ClInclude Include="..\bytecode\CallReturnOffsetToBytecodeOffset.h" />
        <ClInclude Include="..\bytecode\CallVariant.h" />
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
            <Filter>bytecode</Filter>
        </ClCompile>
+       <ClCompile Include="..\bytecode\CallMode.cpp">
+           <Filter>bytecode</Filter>
+       </ClCompile>
        <ClCompile Include="..\bytecode\CodeBlock.cpp">
            <Filter>bytecode</Filter>
        </ClCompile>
…
        </ClInclude>
        <ClInclude Include="..\bytecode\CallLinkStatus.h">
+           <Filter>bytecode</Filter>
+       </ClInclude>
+       <ClInclude Include="..\bytecode\CallMode.h">
            <Filter>bytecode</Filter>
        </ClInclude>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
        5DE6E5B30E1728EC00180407 /* create_hash_table in Headers */ = {isa = PBXBuildFile; fileRef = F692A8540255597D01FF60F7 /* create_hash_table */; settings = {ATTRIBUTES = (); }; };
        623A37EC1B87A7C000754209 /* RegisterMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 623A37EB1B87A7BD00754209 /* RegisterMap.h */; };
+       627673231B680C1E00FD9F2E /* CallMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 627673211B680C1E00FD9F2E /* CallMode.cpp */; };
+       627673241B680C1E00FD9F2E /* CallMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 627673221B680C1E00FD9F2E /* CallMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
        62D2D38F1ADF103F000206C1 /* FunctionRareData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 62D2D38D1ADF103F000206C1 /* FunctionRareData.cpp */; };
        62D2D3901ADF103F000206C1 /* FunctionRareData.h in Headers */ = {isa = PBXBuildFile; fileRef = 62D2D38E1ADF103F000206C1 /* FunctionRareData.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
        5DE3D0F40DD8DDFB00468714 /* WebKitAvailability.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebKitAvailability.h; sourceTree = "<group>"; };
        623A37EB1B87A7BD00754209 /* RegisterMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterMap.h; sourceTree = "<group>"; };
+       627673211B680C1E00FD9F2E /* CallMode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallMode.cpp; sourceTree = "<group>"; };
+       627673221B680C1E00FD9F2E /* CallMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallMode.h; sourceTree = "<group>"; };
        62A9A29E1B0BED4800BD54CA /* DFGLazyNode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGLazyNode.cpp; path = dfg/DFGLazyNode.cpp; sourceTree = "<group>"; };
        62A9A29F1B0BED4800BD54CA /* DFGLazyNode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGLazyNode.h; path = dfg/DFGLazyNode.h; sourceTree = "<group>"; };
…
        0F93329314CA7DC10085F3C6 /* CallLinkStatus.cpp */,
        0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */,
+       627673211B680C1E00FD9F2E /* CallMode.cpp */,
+       627673221B680C1E00FD9F2E /* CallMode.h */,
        0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */,
        0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */,
…
        A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
        0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
+       627673241B680C1E00FD9F2E /* CallMode.h in Headers */,
        0FD8A32617D51F5700CA2C40 /* DFGOSREntrypointCreationPhase.h in Headers */,
        62F2AA381B0BEDE300610C7A /* DFGLazyNode.h in Headers */,
…
        9335F24D12E6765B002B5553 /* StringRecursionChecker.cpp in Sources */,
        BCDE3B430E6C832D001453A7 /* Structure.cpp in Sources */,
+       627673231B680C1E00FD9F2E /* CallMode.cpp in Sources */,
        7E4EE70F0EBB7A5B005934AA /* StructureChain.cpp in Sources */,
        C2F0F2D116BAEEE900187C19 /* StructureRareData.cpp in Sources */,
trunk/Source/JavaScriptCore/assembler/AbortReason.h
        DFGUnreasonableOSREntryJumpDestination = 230,
        DFGVarargsThrowingPathDidNotThrow = 235,
+       JITDidReturnFromTailCall = 237,
        JITDivOperandsAreNotNumbers = 240,
        JITGetByValResultIsNotEmpty = 250,
trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
            Linkable = 0x1,
            Near = 0x2,
+           Tail = 0x4,
            LinkableNear = 0x3,
+           LinkableNearTail = 0x7,
        };
…
    static void repatchNearCall(CodeLocationNearCall nearCall, CodeLocationLabel destination)
    {
-       AssemblerType::relinkCall(nearCall.dataLocation(), destination.executableAddress());
+       switch (nearCall.callMode()) {
+       case NearCallMode::Tail:
+           AssemblerType::relinkJump(nearCall.dataLocation(), destination.executableAddress());
+           return;
+       case NearCallMode::Regular:
+           AssemblerType::relinkCall(nearCall.dataLocation(), destination.executableAddress());
+           return;
+       }
+       RELEASE_ASSERT_NOT_REACHED();
    }
trunk/Source/JavaScriptCore/assembler/CodeLocation.h
 namespace JSC {

+enum NearCallMode { Regular, Tail };
+
 class CodeLocationInstruction;
 class CodeLocationLabel;
…
        CodeLocationJump jumpAtOffset(int offset);
        CodeLocationCall callAtOffset(int offset);
-       CodeLocationNearCall nearCallAtOffset(int offset);
+       CodeLocationNearCall nearCallAtOffset(int offset, NearCallMode);
        CodeLocationDataLabelPtr dataLabelPtrAtOffset(int offset);
        CodeLocationDataLabel32 dataLabel32AtOffset(int offset);
…
    public:
        CodeLocationNearCall() {}
-       explicit CodeLocationNearCall(MacroAssemblerCodePtr location)
-           : CodeLocationCommon(location) {}
-       explicit CodeLocationNearCall(void* location)
-           : CodeLocationCommon(MacroAssemblerCodePtr(location)) {}
+       explicit CodeLocationNearCall(MacroAssemblerCodePtr location, NearCallMode callMode)
+           : CodeLocationCommon(location), m_callMode(callMode) { }
+       explicit CodeLocationNearCall(void* location, NearCallMode callMode)
+           : CodeLocationCommon(MacroAssemblerCodePtr(location)), m_callMode(callMode) { }
+       NearCallMode callMode() { return m_callMode; }
+   private:
+       NearCallMode m_callMode = NearCallMode::Regular;
    };
…
-inline CodeLocationNearCall CodeLocationCommon::nearCallAtOffset(int offset)
+inline CodeLocationNearCall CodeLocationCommon::nearCallAtOffset(int offset, NearCallMode callMode)
 {
     ASSERT_VALID_CODE_OFFSET(offset);
-    return CodeLocationNearCall(reinterpret_cast<char*>(dataLocation()) + offset);
+    return CodeLocationNearCall(reinterpret_cast<char*>(dataLocation()) + offset, callMode);
 }
trunk/Source/JavaScriptCore/assembler/LinkBuffer.h
        ASSERT(call.isFlagSet(Call::Linkable));
        ASSERT(call.isFlagSet(Call::Near));
-       return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label)));
+       return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label)),
+           call.isFlagSet(Call::Tail) ? NearCallMode::Tail : NearCallMode::Regular);
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM.h
    }

+   Call nearTailCall()
+   {
+       return Call(m_assembler.jmp(), Call::LinkableNearTail);
+   }
+
    Call call(RegisterID target)
    {
…
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       ARMAssembler::linkCall(code, call.m_label, function.value());
+       if (call.isFlagSet(Call::Tail))
+           ARMAssembler::linkJump(code, call.m_label, function.value());
+       else
+           ARMAssembler::linkCall(code, call.m_label, function.value());
    }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
    }

+   ALWAYS_INLINE Call nearTailCall()
+   {
+       AssemblerLabel label = m_assembler.label();
+       m_assembler.b();
+       return Call(label, Call::LinkableNearTail);
+   }
+
    ALWAYS_INLINE void ret()
    {
…
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       if (call.isFlagSet(Call::Near))
-           ARM64Assembler::linkCall(code, call.m_label, function.value());
-       else
-           ARM64Assembler::linkPointer(code, call.m_label.labelAtOffset(REPATCH_OFFSET_CALL_TO_POINTER), function.value());
+       if (!call.isFlagSet(Call::Near))
+           ARM64Assembler::linkPointer(code, call.m_label.labelAtOffset(REPATCH_OFFSET_CALL_TO_POINTER), function.value());
+       else if (call.isFlagSet(Call::Tail))
+           ARM64Assembler::linkJump(code, call.m_label, function.value());
+       else
+           ARM64Assembler::linkCall(code, call.m_label, function.value());
    }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
    }

+   ALWAYS_INLINE Call nearTailCall()
+   {
+       moveFixedWidthEncoding(TrustedImm32(0), dataTempRegister);
+       return Call(m_assembler.bx(dataTempRegister), Call::LinkableNearTail);
+   }
+
    ALWAYS_INLINE Call call()
    {
…
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       ARMv7Assembler::linkCall(code, call.m_label, function.value());
+       if (call.isFlagSet(Call::Tail))
+           ARMv7Assembler::linkJump(code, call.m_label, function.value());
+       else
+           ARMv7Assembler::linkCall(code, call.m_label, function.value());
    }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h
        m_assembler.nop();
        return Call(m_assembler.label(), Call::LinkableNear);
    }

+   Call nearTailCall()
+   {
+       m_assembler.nop();
+       m_assembler.nop();
+       m_assembler.beq(MIPSRegisters::zero, MIPSRegisters::zero, 0);
+       m_assembler.nop();
+       insertRelaxationWords();
+       return Call(m_assembler.label(), Call::LinkableNearTail);
+   }
…
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       MIPSAssembler::linkCall(code, call.m_label, function.value());
+       if (call.isFlagSet(Call::Tail))
+           MIPSAssembler::linkJump(code, call.m_label, function.value());
+       else
+           MIPSAssembler::linkCall(code, call.m_label, function.value());
    }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerSH4.h
    }

+   Call nearTailCall()
+   {
+       return Call(m_assembler.jump(), Call::LinkableNearTail);
+   }
+
    Call nearCall()
    {
…
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       SH4Assembler::linkCall(code, call.m_label, function.value());
+       if (call.isFlagSet(Call::Tail))
+           SH4Assembler::linkJump(code, call.m_label, function.value());
+       else
+           SH4Assembler::linkCall(code, call.m_label, function.value());
    }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86.h
    static void linkCall(void* code, Call call, FunctionPtr function)
    {
-       X86Assembler::linkCall(code, call.m_label, function.value());
+       if (call.isFlagSet(Call::Tail))
+           X86Assembler::linkJump(code, call.m_label, function.value());
+       else
+           X86Assembler::linkCall(code, call.m_label, function.value());
    }
};
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
    {
        m_assembler.int3();
    }

+   Call nearTailCall()
+   {
+       return Call(m_assembler.jmp(), Call::LinkableNearTail);
+   }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
        if (!call.isFlagSet(Call::Near))
            X86Assembler::linkPointer(code, call.m_label.labelAtOffset(-REPATCH_OFFSET_CALL_R11), function.value());
+       else if (call.isFlagSet(Call::Tail))
+           X86Assembler::linkJump(code, call.m_label, function.value());
        else
            X86Assembler::linkCall(code, call.m_label, function.value());
trunk/Source/JavaScriptCore/bytecode/BytecodeList.json
            { "name" : "op_new_arrow_func_exp", "length" : 5 },
            { "name" : "op_call", "length" : 9 },
+           { "name" : "op_tail_call", "length" : 9 },
            { "name" : "op_call_eval", "length" : 9 },
            { "name" : "op_call_varargs", "length" : 9 },
+           { "name" : "op_tail_call_varargs", "length" : 9 },
            { "name" : "op_ret", "length" : 2 },
            { "name" : "op_construct", "length" : 9 },
…
            { "name" : "llint_cloop_did_return_from_js_6" },
            { "name" : "llint_cloop_did_return_from_js_7" },
-           { "name" : "llint_cloop_did_return_from_js_8" }
+           { "name" : "llint_cloop_did_return_from_js_8" },
+           { "name" : "llint_cloop_did_return_from_js_9" },
+           { "name" : "llint_cloop_did_return_from_js_10" },
+           { "name" : "llint_cloop_did_return_from_js_11" }
        ]
    },
trunk/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
    case op_has_structure_property:
    case op_construct_varargs:
-   case op_call_varargs: {
+   case op_call_varargs:
+   case op_tail_call_varargs: {
        functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
        functor(codeBlock, instruction, opcodeID, instruction[3].u.operand);
…
    case op_construct:
    case op_call_eval:
-   case op_call: {
+   case op_call:
+   case op_tail_call: {
        functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
        int argCount = instruction[3].u.operand;
…
    case op_new_arrow_func_exp:
    case op_call_varargs:
+   case op_tail_call_varargs:
    case op_construct_varargs:
    case op_get_from_scope:
    case op_call:
+   case op_tail_call:
    case op_call_eval:
    case op_construct:
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h
 #define CallLinkInfo_h

+#include "CallMode.h"
 #include "CodeLocation.h"
 #include "CodeSpecializationKind.h"
…
 class CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
 public:
-    enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs };
+    enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs, TailCall, TailCallVarargs };
     static CallType callTypeFor(OpcodeID opcodeID)
     {
         if (opcodeID == op_call || opcodeID == op_call_eval)
             return Call;
+        if (opcodeID == op_call_varargs)
+            return CallVarargs;
         if (opcodeID == op_construct)
             return Construct;
         if (opcodeID == op_construct_varargs)
             return ConstructVarargs;
-        ASSERT(opcodeID == op_call_varargs);
-        return CallVarargs;
+        if (opcodeID == op_tail_call)
+            return TailCall;
+        ASSERT(opcodeID == op_tail_call_varargs);
+        return TailCallVarargs;
     }

+    static bool isVarargsCallType(CallType callType)
+    {
+        switch (callType) {
+        case CallVarargs:
+        case ConstructVarargs:
+        case TailCallVarargs:
+            return true;
+
+        default:
+            return false;
+        }
+    }
+
     CallLinkInfo()
         : m_registerPreservationMode(static_cast<unsigned>(RegisterPreservationNotRequired))
…
     {
         return specializationKindFor(static_cast<CallType>(m_callType));
     }

+    static CallMode callModeFor(CallType callType)
+    {
+        switch (callType) {
+        case Call:
+        case CallVarargs:
+            return CallMode::Regular;
+        case TailCall:
+        case TailCallVarargs:
+            return CallMode::Tail;
+        case Construct:
+        case ConstructVarargs:
+            return CallMode::Construct;
+        case None:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
+    CallMode callMode() const
+    {
+        return callModeFor(static_cast<CallType>(m_callType));
+    }
+
+    bool isTailCall() const
+    {
+        return callMode() == CallMode::Tail;
+    }
+
+    bool isVarargs() const
+    {
+        return isVarargsCallType(static_cast<CallType>(m_callType));
+    }
trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
    Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
    OpcodeID op = vm.interpreter->getOpcodeID(instruction[0].u.opcode);
-   if (op != op_call && op != op_construct)
+   if (op != op_call && op != op_construct && op != op_tail_call)
        return CallLinkStatus();
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
            break;
        }
+       case op_tail_call: {
+           printCallOp(out, exec, location, it, "tail_call", DumpCaches, hasPrintedProfiling, callLinkInfos);
+           break;
+       }
        case op_call_eval: {
            printCallOp(out, exec, location, it, "call_eval", DontDumpCaches, hasPrintedProfiling, callLinkInfos);
…
        case op_construct_varargs:
-       case op_call_varargs: {
+       case op_call_varargs:
+       case op_tail_call_varargs: {
            int result = (++it)->u.operand;
            int callee = (++it)->u.operand;
…
            int varArgOffset = (++it)->u.operand;
            ++it;
-           printLocationAndOp(out, exec, location, it, opcode == op_call_varargs ? "call_varargs" : "construct_varargs");
+           printLocationAndOp(out, exec, location, it, opcode == op_call_varargs ? "call_varargs" : opcode == op_construct_varargs ? "construct_varargs" : "tail_call_varargs");
            out.printf("%s, %s, %s, %s, %d, %d", registerName(result).data(), registerName(callee).data(), registerName(thisValue).data(), registerName(arguments).data(), firstFreeRegister, varArgOffset);
            dumpValueProfiling(out, it, hasPrintedProfiling);
…
        }
        case op_call_varargs:
+       case op_tail_call_varargs:
        case op_construct_varargs:
        case op_get_by_val: {
…
        case op_call:
+       case op_tail_call:
        case op_call_eval: {
            ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
    , m_isBuiltinFunction(codeBlock->isBuiltinFunction())
    , m_usesNonStrictEval(codeBlock->usesEval() && !codeBlock->isStrictMode())
-   , m_inTailPosition(Options::enableTailCalls() && constructorKind() == ConstructorKind::None && isStrictMode())
+   // FIXME: We should be able to have tail call elimination with the profiler
+   // enabled. This is currently not possible because the profiler expects
+   // op_will_call / op_did_call pairs before and after a call, which are not
+   // compatible with tail calls (we have no way of emitting op_did_call).
+   // https://bugs.webkit.org/show_bug.cgi?id=148819
+   , m_inTailPosition(Options::enableTailCalls() && constructorKind() == ConstructorKind::None && isStrictMode() && !m_shouldEmitProfileHooks)
{
    for (auto& constantRegister : m_linkTimeConstantRegisters)
…
RegisterID* BytecodeGenerator::emitCallInTailPosition(RegisterID* dst, RegisterID* func, ExpectedFunction expectedFunction, CallArguments& callArguments, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
{
-   // FIXME: We should be emitting a new op_tail_call here when
-   // m_inTailPosition is false
-   // https://bugs.webkit.org/show_bug.cgi?id=148661
-   return emitCall(op_call, dst, func, expectedFunction, callArguments, divot, divotStart, divotEnd);
+   return emitCall(m_inTailPosition ? op_tail_call : op_call, dst, func, expectedFunction, callArguments, divot, divotStart, divotEnd);
}
…
RegisterID* BytecodeGenerator::emitCall(OpcodeID opcodeID, RegisterID* dst, RegisterID* func, ExpectedFunction expectedFunction, CallArguments& callArguments, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
{
-   ASSERT(opcodeID == op_call || opcodeID == op_call_eval);
+   ASSERT(opcodeID == op_call || opcodeID == op_call_eval || opcodeID == op_tail_call);
    ASSERT(func->refCount());
…
        argumentRegister = expression->emitBytecode(*this, callArguments.argumentRegister(0));
        RefPtr<RegisterID> thisRegister = emitMove(newTemporary(), callArguments.thisRegister());
-       return emitCallVarargs(dst, func, callArguments.thisRegister(), argumentRegister.get(), newTemporary(), 0, callArguments.profileHookRegister(), divot, divotStart, divotEnd);
+       return emitCallVarargs(opcodeID == op_tail_call ? op_tail_call_varargs : op_call_varargs, dst, func, callArguments.thisRegister(), argumentRegister.get(), newTemporary(), 0, callArguments.profileHookRegister(), divot, divotStart, divotEnd);
    }
    for (; n; n = n->m_next)
…
RegisterID* BytecodeGenerator::emitCallVarargsInTailPosition(RegisterID* dst, RegisterID* func, RegisterID* thisRegister, RegisterID* arguments, RegisterID* firstFreeRegister, int32_t firstVarArgOffset, RegisterID* profileHookRegister, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
{
-   // FIXME: We should be emitting a new op_tail_call here when
-   // m_inTailPosition is false
-   // https://bugs.webkit.org/show_bug.cgi?id=148661
-   return emitCallVarargs(op_call_varargs, dst, func, thisRegister, arguments, firstFreeRegister, firstVarArgOffset, profileHookRegister, divot, divotStart, divotEnd);
+   return emitCallVarargs(m_inTailPosition ? op_tail_call_varargs : op_call_varargs, dst, func, thisRegister, arguments, firstFreeRegister, firstVarArgOffset, profileHookRegister, divot, divotStart, divotEnd);
}
trunk/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
        m_argv[i] = generator.newTemporary();
        ASSERT(static_cast<size_t>(i) == m_argv.size() - 1 || m_argv[i]->index() == m_argv[i + 1]->index() - 1);
    }

+   // We need to ensure that the frame size is stack-aligned
+   while ((JSStack::CallFrameHeaderSize + m_argv.size()) % stackAlignmentRegisters()) {
+       m_argv.insert(0, generator.newTemporary());
+       m_padding++;
+   }
…
    LabelScopePtr scope = generator.newLabelScope(LabelScope::NamedLabel, &m_name);
-   generator.emitNode(dst, m_statement);
+   generator.emitNodeInTailPosition(dst, m_statement);

    generator.emitLabel(scope->breakTarget());
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
    {
        addVarArgChild(callee);
-       size_t parameterSlots = JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + argCount;
+       size_t frameSize = JSStack::CallFrameHeaderSize + argCount;
+       size_t alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
+       size_t parameterSlots = alignedFrameSize - JSStack::CallerFrameAndPCSize;
+
        if (parameterSlots > m_parameterSlots)
            m_parameterSlots = parameterSlots;
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
    void compileCallOrConstruct()
    {
-       int numPassedArgs = m_node->numChildren() - 1;
-       int numArgs = numPassedArgs;
+       int numArgs = m_node->numChildren() - 1;

        LValue jsCallee = lowJSValue(m_graph.varArgChild(m_node, 0));

        unsigned stackmapID = m_stackmapIDs++;

+       unsigned frameSize = JSStack::CallFrameHeaderSize + numArgs;
+       unsigned alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
+       unsigned padding = alignedFrameSize - frameSize;
+
        Vector<LValue> arguments;
        arguments.append(m_out.constInt64(stackmapID));
        arguments.append(m_out.constInt32(sizeOfCall()));
        arguments.append(constNull(m_out.ref8));
-       arguments.append(m_out.constInt32(1 + JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + numArgs));
+       arguments.append(m_out.constInt32(1 + alignedFrameSize - JSStack::CallerFrameAndPCSize));
        arguments.append(jsCallee); // callee -> %rax
        arguments.append(getUndef(m_out.int64)); // code block
        arguments.append(jsCallee); // callee -> stack
        arguments.append(m_out.constInt64(numArgs)); // argument count and zeros for the tag
-       for (int i = 0; i < numPassedArgs; ++i)
+       for (int i = 0; i < numArgs; ++i)
            arguments.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
+       for (unsigned i = 0; i < padding; ++i)
+           arguments.append(getUndef(m_out.int64));

        callPreflight();
trunk/Source/JavaScriptCore/interpreter/Interpreter.h
r189848 r189884 254 254 void dumpRegisters(CallFrame*); 255 255 256 bool isCallBytecode(Opcode opcode) { return opcode == getOpcode(op_call) || opcode == getOpcode(op_construct) || opcode == getOpcode(op_call_eval) ; }256 bool isCallBytecode(Opcode opcode) { return opcode == getOpcode(op_call) || opcode == getOpcode(op_construct) || opcode == getOpcode(op_call_eval) || opcode == getOpcode(op_tail_call); } 257 257 258 258 void enableSampler(); … … 278 278 inline CallFrame* calleeFrameForVarargs(CallFrame* callFrame, unsigned numUsedStackSlots, unsigned argumentCountIncludingThis) 279 279 { 280 #if 1 281 // We want the new frame to be allocated on a stack aligned offset with a stack 282 // aligned size. Align the size here. 283 argumentCountIncludingThis = WTF::roundUpToMultipleOf( 284 stackAlignmentRegisters(), 285 argumentCountIncludingThis + JSStack::CallFrameHeaderSize) - JSStack::CallFrameHeaderSize; 286 287 // Align the frame offset here. 288 #endif 280 289 unsigned paddedCalleeFrameOffset = WTF::roundUpToMultipleOf( 281 290 stackAlignmentRegisters(), -
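The size-alignment step added to `calleeFrameForVarargs` pads the argument count so that arguments plus header form an aligned frame, while the header size itself stays fixed. A minimal model, with stand-in constants (the real ones are `JSStack::CallFrameHeaderSize` and `stackAlignmentRegisters()`):

```cpp
#include <cstddef>

constexpr size_t kCallFrameHeaderSize = 5;   // illustrative assumption
constexpr size_t kStackAlignmentRegisters = 2;

constexpr size_t roundUp(size_t divisor, size_t x)
{
    return (x + divisor - 1) / divisor * divisor;
}

// Mirrors the new padding of argumentCountIncludingThis: round the whole
// frame (args + header) up to an alignment boundary, then take the header
// back off so only the argument count grows.
constexpr size_t alignedArgumentCount(size_t argumentCountIncludingThis)
{
    return roundUp(kStackAlignmentRegisters,
                   argumentCountIncludingThis + kCallFrameHeaderSize)
        - kCallFrameHeaderSize;
}
```

Combined with the existing offset rounding just below it, this guarantees the varargs callee frame is both aligned in position and a multiple of the alignment in size.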
trunk/Source/JavaScriptCore/jit/CCallHelpers.h
r189848 r189884 31 31 #include "AssemblyHelpers.h" 32 32 #include "GPRInfo.h" 33 #include "StackAlignment.h" 33 34 34 35 namespace JSC { … … 2083 2084 jump(GPRInfo::regT1); 2084 2085 } 2086 2087 void prepareForTailCallSlow(GPRReg calleeGPR = InvalidGPRReg) 2088 { 2089 GPRReg temp1 = calleeGPR == GPRInfo::regT0 ? GPRInfo::regT3 : GPRInfo::regT0; 2090 GPRReg temp2 = calleeGPR == GPRInfo::regT1 ? GPRInfo::regT3 : GPRInfo::regT1; 2091 GPRReg temp3 = calleeGPR == GPRInfo::regT2 ? GPRInfo::regT3 : GPRInfo::regT2; 2092 2093 GPRReg newFramePointer = temp1; 2094 GPRReg newFrameSizeGPR = temp2; 2095 { 2096 // The old frame size is its number of arguments (or number of 2097 // parameters in case of arity fixup), plus the frame header size, 2098 // aligned 2099 GPRReg oldFrameSizeGPR = temp2; 2100 { 2101 GPRReg argCountGPR = oldFrameSizeGPR; 2102 load32(Address(framePointerRegister, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), argCountGPR); 2103 2104 { 2105 GPRReg numParametersGPR = temp1; 2106 { 2107 GPRReg codeBlockGPR = numParametersGPR; 2108 loadPtr(Address(framePointerRegister, JSStack::CodeBlock * static_cast<int>(sizeof(Register))), codeBlockGPR); 2109 load32(Address(codeBlockGPR, CodeBlock::offsetOfNumParameters()), numParametersGPR); 2110 } 2111 2112 ASSERT(numParametersGPR != argCountGPR); 2113 Jump argumentCountWasNotFixedUp = branch32(BelowOrEqual, numParametersGPR, argCountGPR); 2114 move(numParametersGPR, argCountGPR); 2115 argumentCountWasNotFixedUp.link(this); 2116 } 2117 2118 add32(TrustedImm32(stackAlignmentRegisters() + JSStack::CallFrameHeaderSize - 1), argCountGPR, oldFrameSizeGPR); 2119 and32(TrustedImm32(-stackAlignmentRegisters()), oldFrameSizeGPR); 2120 // We assume < 2^28 arguments 2121 mul32(TrustedImm32(sizeof(Register)), oldFrameSizeGPR, oldFrameSizeGPR); 2122 } 2123 2124 // The new frame pointer is at framePointer + oldFrameSize - newFrameSize 2125 ASSERT(newFramePointer != oldFrameSizeGPR); 2126 
move(framePointerRegister, newFramePointer); 2127 addPtr(oldFrameSizeGPR, newFramePointer); 2128 2129 // The new frame size is just the number of arguments plus the 2130 // frame header size, aligned 2131 ASSERT(newFrameSizeGPR != newFramePointer); 2132 load32(Address(stackPointerRegister, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)), 2133 newFrameSizeGPR); 2134 add32(TrustedImm32(stackAlignmentRegisters() + JSStack::CallFrameHeaderSize - 1), newFrameSizeGPR); 2135 and32(TrustedImm32(-stackAlignmentRegisters()), newFrameSizeGPR); 2136 // We assume < 2^28 arguments 2137 mul32(TrustedImm32(sizeof(Register)), newFrameSizeGPR, newFrameSizeGPR); 2138 } 2139 2140 GPRReg tempGPR = temp3; 2141 ASSERT(tempGPR != newFramePointer && tempGPR != newFrameSizeGPR); 2142 2143 // We don't need the current frame beyond this point. Masquerade as our 2144 // caller. 2145 #if CPU(ARM) || CPU(SH4) || CPU(ARM64) 2146 loadPtr(Address(framePointerRegister, sizeof(void*)), linkRegister); 2147 subPtr(TrustedImm32(2 * sizeof(void*)), newFrameSizeGPR); 2148 #elif CPU(MIPS) 2149 loadPtr(Address(framePointerRegister, sizeof(void*)), returnAddressRegister); 2150 subPtr(TrustedImm32(2 * sizeof(void*)), newFrameSizeGPR); 2151 #elif CPU(X86) || CPU(X86_64) 2152 loadPtr(Address(framePointerRegister, sizeof(void*)), tempGPR); 2153 push(tempGPR); 2154 subPtr(TrustedImm32(sizeof(void*)), newFrameSizeGPR); 2155 #else 2156 UNREACHABLE_FOR_PLATFORM(); 2157 #endif 2158 subPtr(newFrameSizeGPR, newFramePointer); 2159 loadPtr(Address(framePointerRegister), framePointerRegister); 2160 2161 2162 // We need to move the newFrameSizeGPR slots above the stack pointer by 2163 // newFramePointer registers. We use pointer-sized chunks. 
2164 MacroAssembler::Label copyLoop(label()); 2165 2166 subPtr(TrustedImm32(sizeof(void*)), newFrameSizeGPR); 2167 loadPtr(BaseIndex(stackPointerRegister, newFrameSizeGPR, TimesOne), tempGPR); 2168 storePtr(tempGPR, BaseIndex(newFramePointer, newFrameSizeGPR, TimesOne)); 2169 2170 branchTest32(MacroAssembler::NonZero, newFrameSizeGPR).linkTo(copyLoop, this); 2171 2172 // Ready for a jump! 2173 move(newFramePointer, stackPointerRegister); 2174 } 2085 2175 }; 2086 2176 -
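The copy loop that `prepareForTailCallSlow` emits walks downward in pointer-sized chunks, moving each slot of the outgoing frame from just above the stack pointer to its final position ending at the new frame pointer. A C++ model of that loop (arrays stand in for stack memory; this is a sketch of the data movement, not the emitted assembly):

```cpp
#include <cstddef>
#include <cstdint>

// Model of the emitted loop: decrement the remaining byte count by one
// pointer size per iteration, then move that slot from source to
// destination — the same order as the subPtr/load/store/branchTest32
// sequence above.
void copyNewFrame(const uint64_t* src, uint64_t* dst, size_t frameSizeBytes)
{
    size_t offset = frameSizeBytes;
    while (offset) {
        offset -= sizeof(uint64_t);          // subPtr(sizeof(void*), newFrameSizeGPR)
        dst[offset / sizeof(uint64_t)]       // storePtr(..., BaseIndex(newFramePointer, ...))
            = src[offset / sizeof(uint64_t)]; // loadPtr(BaseIndex(stackPointerRegister, ...))
    }
}
```

Copying from the highest slot down is safe here because, by construction, the destination region ends where the old frame did, so overlap only occurs in the direction the loop already handles.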
trunk/Source/JavaScriptCore/jit/JIT.cpp
r189848 r189884 198 198 DEFINE_OP(op_bitxor) 199 199 DEFINE_OP(op_call) 200 DEFINE_OP(op_tail_call) 200 201 DEFINE_OP(op_call_eval) 201 202 DEFINE_OP(op_call_varargs) 203 DEFINE_OP(op_tail_call_varargs) 202 204 DEFINE_OP(op_construct_varargs) 203 205 DEFINE_OP(op_catch) … … 372 374 DEFINE_SLOWCASE_OP(op_bitxor) 373 375 DEFINE_SLOWCASE_OP(op_call) 376 DEFINE_SLOWCASE_OP(op_tail_call) 374 377 DEFINE_SLOWCASE_OP(op_call_eval) 375 378 DEFINE_SLOWCASE_OP(op_call_varargs) 379 DEFINE_SLOWCASE_OP(op_tail_call_varargs) 376 380 DEFINE_SLOWCASE_OP(op_construct_varargs) 377 381 DEFINE_SLOWCASE_OP(op_construct) -
trunk/Source/JavaScriptCore/jit/JIT.h
r189848 r189884 488 488 void emit_op_bitxor(Instruction*); 489 489 void emit_op_call(Instruction*); 490 void emit_op_tail_call(Instruction*); 490 491 void emit_op_call_eval(Instruction*); 491 492 void emit_op_call_varargs(Instruction*); 493 void emit_op_tail_call_varargs(Instruction*); 492 494 void emit_op_construct_varargs(Instruction*); 493 495 void emit_op_catch(Instruction*); … … 601 603 void emitSlow_op_bitxor(Instruction*, Vector<SlowCaseEntry>::iterator&); 602 604 void emitSlow_op_call(Instruction*, Vector<SlowCaseEntry>::iterator&); 605 void emitSlow_op_tail_call(Instruction*, Vector<SlowCaseEntry>::iterator&); 603 606 void emitSlow_op_call_eval(Instruction*, Vector<SlowCaseEntry>::iterator&); 604 607 void emitSlow_op_call_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&); 608 void emitSlow_op_tail_call_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&); 605 609 void emitSlow_op_construct_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&); 606 610 void emitSlow_op_construct(Instruction*, Vector<SlowCaseEntry>::iterator&); … … 825 829 826 830 Call emitNakedCall(CodePtr function = CodePtr()); 831 Call emitNakedTailCall(CodePtr function = CodePtr()); 827 832 828 833 // Loads the character value of a single character string into dst. -
trunk/Source/JavaScriptCore/jit/JITCall.cpp
r189848 r189884 146 146 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length); 147 147 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length); 148 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call), call_and_tail_call_opcodes_must_be_same_length); 149 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_varargs), call_and_tail_call_varargs_opcodes_must_be_same_length); 148 150 CallLinkInfo* info; 149 151 if (opcodeID != op_call_eval) 150 152 info = m_codeBlock->addCallLinkInfo(); 151 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs )153 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs) 152 154 compileSetupVarargsFrame(instruction, info); 153 155 else { … … 173 175 174 176 store64(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); 175 177 176 178 if (opcodeID == op_call_eval) { 177 179 compileCallEval(instruction); 178 180 return; 179 181 } 182 183 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) 184 emitRestoreCalleeSaves(); 180 185 181 186 DataLabelPtr addressOfLinkedFunctionCheck; … … 189 194 m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; 190 195 196 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { 197 prepareForTailCallSlow(); 198 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); 199 return; 200 } 201 191 202 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); 192 203 … … 209 220 210 221 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 222 211 223 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code()); 224 225 if (opcodeID == op_tail_call || 
opcodeID == op_tail_call_varargs) { 226 abortWithReason(JITDidReturnFromTailCall); 227 return; 228 } 212 229 213 230 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); … … 224 241 } 225 242 243 void JIT::emit_op_tail_call(Instruction* currentInstruction) 244 { 245 compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); 246 } 247 226 248 void JIT::emit_op_call_eval(Instruction* currentInstruction) 227 249 { … … 233 255 compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); 234 256 } 235 257 258 void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) 259 { 260 compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); 261 } 262 236 263 void JIT::emit_op_construct_varargs(Instruction* currentInstruction) 237 264 { … … 247 274 { 248 275 compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); 276 } 277 278 void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 279 { 280 compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); 249 281 } 250 282 … … 258 290 compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 259 291 } 292 293 void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 294 { 295 compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 296 } 260 297 261 298 void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) -
trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp
r189848 r189884 70 70 } 71 71 72 void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 73 { 74 compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++); 75 } 76 72 77 void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 73 78 { … … 79 84 compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 80 85 } 86 87 void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 88 { 89 compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 90 } 81 91 82 92 void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) … … 95 105 } 96 106 107 void JIT::emit_op_tail_call(Instruction* currentInstruction) 108 { 109 compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++); 110 } 111 97 112 void JIT::emit_op_call_eval(Instruction* currentInstruction) 98 113 { … … 103 118 { 104 119 compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); 120 } 121 122 void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) 123 { 124 compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); 105 125 } 106 126 … … 211 231 if (opcodeID != op_call_eval) 212 232 info = m_codeBlock->addCallLinkInfo(); 213 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs )233 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs) 214 234 compileSetupVarargsFrame(instruction, info); 215 235 else { … … 242 262 } 243 263 264 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) 265 emitRestoreCalleeSaves(); 266 244 267 addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); 245 268 … … 256 279 257 280 checkStackPointerAlignment(); 281 if (opcodeID == op_tail_call || 
opcodeID == op_tail_call_varargs) { 282 prepareForTailCallSlow(); 283 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall(); 284 return; 285 } 286 258 287 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); 259 288 … … 276 305 277 306 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 307 308 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) 309 emitRestoreCalleeSaves(); 310 278 311 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code()); 312 313 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { 314 abortWithReason(JITDidReturnFromTailCall); 315 return; 316 } 279 317 280 318 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); -
trunk/Source/JavaScriptCore/jit/JITInlines.h
r189848 r189884 126 126 } 127 127 128 ALWAYS_INLINE JIT::Call JIT::emitNakedTailCall(CodePtr function) 129 { 130 ASSERT(m_bytecodeOffset != std::numeric_limits<unsigned>::max()); // This method should only be called during hot/cold path generation, so that m_bytecodeOffset is set. 131 Call nakedCall = nearTailCall(); 132 m_calls.append(CallRecord(nakedCall, m_bytecodeOffset, function.executableAddress())); 133 return nakedCall; 134 } 135 128 136 ALWAYS_INLINE void JIT::updateTopCallFrame() 129 137 { -
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r189849 r189884 683 683 } 684 684 685 static void* handleHostCall(ExecState* execCallee, JSValue callee, CodeSpecializationKind kind)685 static SlowPathReturnType handleHostCall(ExecState* execCallee, JSValue callee, CallLinkInfo* callLinkInfo) 686 686 { 687 687 ExecState* exec = execCallee->callerFrame(); … … 690 690 execCallee->setCodeBlock(0); 691 691 692 if ( kind== CodeForCall) {692 if (callLinkInfo->specializationKind() == CodeForCall) { 693 693 CallData callData; 694 694 CallType callType = getCallData(callee, callData); … … 700 700 execCallee->setCallee(asObject(callee)); 701 701 vm->hostCallReturnValue = JSValue::decode(callData.native.function(execCallee)); 702 if (vm->exception()) 703 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 704 705 return reinterpret_cast<void*>(getHostCallReturnValue); 702 if (vm->exception()) { 703 return encodeResult( 704 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 705 reinterpret_cast<void*>(KeepTheFrame)); 706 } 707 708 return encodeResult( 709 bitwise_cast<void*>(getHostCallReturnValue), 710 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame)); 706 711 } 707 712 708 713 ASSERT(callType == CallTypeNone); 709 714 exec->vm().throwException(exec, createNotAFunctionError(exec, callee)); 710 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 711 } 712 713 ASSERT(kind == CodeForConstruct); 715 return encodeResult( 716 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 717 reinterpret_cast<void*>(KeepTheFrame)); 718 } 719 720 ASSERT(callLinkInfo->specializationKind() == CodeForConstruct); 714 721 715 722 ConstructData constructData; … … 722 729 execCallee->setCallee(asObject(callee)); 723 730 vm->hostCallReturnValue = JSValue::decode(constructData.native.function(execCallee)); 724 if (vm->exception()) 725 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 726 727 return reinterpret_cast<void*>(getHostCallReturnValue); 731 if (vm->exception()) { 732 return encodeResult( 733 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 734 reinterpret_cast<void*>(KeepTheFrame)); 735 } 736 737 return encodeResult(bitwise_cast<void*>(getHostCallReturnValue), reinterpret_cast<void*>(KeepTheFrame)); 728 738 } 729 739 730 740 ASSERT(constructType == ConstructTypeNone); 731 741 exec->vm().throwException(exec, createNotAConstructorError(exec, callee)); 732 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 733 } 734 735 char* JIT_OPERATION operationLinkCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 742 return encodeResult( 743 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 744 reinterpret_cast<void*>(KeepTheFrame)); 745 } 746 747 SlowPathReturnType JIT_OPERATION operationLinkCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 736 748 { 737 749 ExecState* exec = execCallee->callerFrame(); … … 746 758 // expensive. 
747 759 // https://bugs.webkit.org/show_bug.cgi?id=144458 748 return reinterpret_cast<char*>(handleHostCall(execCallee, calleeAsValue, kind));760 return handleHostCall(execCallee, calleeAsValue, callLinkInfo); 749 761 } 750 762 … … 775 787 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) { 776 788 exec->vm().throwException(exec, createNotAConstructorError(exec, callee)); 777 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 789 return encodeResult( 790 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 791 reinterpret_cast<void*>(KeepTheFrame)); 778 792 } 779 793 … … 781 795 if (error) { 782 796 exec->vm().throwException(exec, error); 783 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 797 return encodeResult( 798 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 799 reinterpret_cast<void*>(KeepTheFrame)); 784 800 } 785 801 codeBlock = functionExecutable->codeBlockFor(kind); 786 802 ArityCheckMode arity; 787 if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo-> callType() == CallLinkInfo::CallVarargs || callLinkInfo->callType() == CallLinkInfo::ConstructVarargs)803 if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs()) 788 804 arity = MustCheckArity; 789 805 else … … 796 812 linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr); 797 813 798 return reinterpret_cast<char*>(codePtr.executableAddress());799 } 800 801 inline char*virtualForWithFunction(814 return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame)); 815 } 816 817 inline SlowPathReturnType virtualForWithFunction( 802 818 ExecState* execCallee, CallLinkInfo* callLinkInfo, JSCell*& calleeAsFunctionCell) 803 819 { … … 810 826 calleeAsFunctionCell = getJSFunction(calleeAsValue); 811 827 if (UNLIKELY(!calleeAsFunctionCell)) 812 return reinterpret_cast<char*>(handleHostCall(execCallee, calleeAsValue, kind));828 return handleHostCall(execCallee, calleeAsValue, callLinkInfo); 813 829 814 830 JSFunction* function = jsCast<JSFunction*>(calleeAsFunctionCell); … … 825 841 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) { 826 842 exec->vm().throwException(exec, createNotAConstructorError(exec, function)); 827 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 843 return encodeResult( 844 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 845 reinterpret_cast<void*>(KeepTheFrame)); 828 846 } 829 847 … … 831 849 if (error) { 832 850 exec->vm().throwException(exec, error); 833 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 851 return encodeResult( 852 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 853 reinterpret_cast<void*>(KeepTheFrame)); 834 854 } 835 855 } else { … … 845 865 } 846 866 } 847 return reinterpret_cast<char*>(executable->entrypointFor( 848 *vm, kind, MustCheckArity, callLinkInfo->registerPreservationMode()).executableAddress()); 849 } 850 851 char* JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 867 return encodeResult(executable->entrypointFor( 868 *vm, kind, MustCheckArity, callLinkInfo->registerPreservationMode()).executableAddress(), 869 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame)); 870 } 871 872 SlowPathReturnType JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 852 873 { 853 874 ASSERT(callLinkInfo->specializationKind() == CodeForCall); 854 875 JSCell* calleeAsFunctionCell; 855 char*result = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell);876 SlowPathReturnType result = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell); 856 877 857 878 linkPolymorphicCall(execCallee, *callLinkInfo, CallVariant(calleeAsFunctionCell)); … … 860 881 } 861 882 862 char*JIT_OPERATION operationVirtualCall(ExecState* execCallee, CallLinkInfo* callLinkInfo)883 SlowPathReturnType JIT_OPERATION operationVirtualCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 863 884 { 864 885 JSCell* calleeAsFunctionCellIgnored; -
trunk/Source/JavaScriptCore/jit/JITOperations.h
r189848 r189884 238 238 typedef char* JIT_OPERATION (*P_JITOperation_EStZ)(ExecState*, Structure*, int32_t); 239 239 typedef char* JIT_OPERATION (*P_JITOperation_EZZ)(ExecState*, int32_t, int32_t); 240 typedef SlowPathReturnType JIT_OPERATION (*Sprt_JITOperation_ECli)(ExecState*, CallLinkInfo*); 240 241 typedef StringImpl* JIT_OPERATION (*T_JITOperation_EJss)(ExecState*, JSString*); 241 242 typedef JSString* JIT_OPERATION (*Jss_JITOperation_EZ)(ExecState*, int32_t); … … 279 280 void JIT_OPERATION operationDirectPutByValGeneric(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue, ByValInfo*) WTF_INTERNAL; 280 281 EncodedJSValue JIT_OPERATION operationCallEval(ExecState*, ExecState*) WTF_INTERNAL; 281 char*JIT_OPERATION operationLinkCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;282 char*JIT_OPERATION operationLinkPolymorphicCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;283 char*JIT_OPERATION operationVirtualCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;282 SlowPathReturnType JIT_OPERATION operationLinkCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 283 SlowPathReturnType JIT_OPERATION operationLinkPolymorphicCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 284 SlowPathReturnType JIT_OPERATION operationVirtualCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 284 285 285 286 size_t JIT_OPERATION operationCompareLess(ExecState*, EncodedJSValue, EncodedJSValue) WTF_INTERNAL; -
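The switch from `char*` to `SlowPathReturnType` lets the slow-path operations return two machine words: the code pointer to jump to, and a flag telling the thunk whether the caller's frame can be reused (tail call) or must be kept. A rough sketch of that shape — the struct layout and `encodeResult` signature here are assumptions for illustration; the real definitions live in JSC's JITOperations headers:

```cpp
#include <cstdint>

// Mirrors the KeepTheFrame/ReuseTheFrame distinction used by the patch.
enum FrameAction : uintptr_t { KeepTheFrame = 0, ReuseTheFrame = 1 };

// Two-word result: returned in a register pair on 64-bit targets
// (layout here is an illustrative assumption).
struct SlowPathReturnType {
    void* codePtr;      // where the thunk should jump
    void* frameAction;  // non-null means "trash the frame and tail-call"
};

inline SlowPathReturnType encodeResult(void* code, void* action)
{
    return { code, action };
}
```

This is why `slowPathFor` in ThunkGenerators.cpp can test `returnValueGPR2` against zero: `KeepTheFrame` is deliberately zero, so a plain branch-if-zero skips the tail-call frame teardown.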
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r189848 r189884 610 610 CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock(); 611 611 VM* vm = callerCodeBlock->vm(); 612 612 613 613 if (shouldShowDisassemblyFor(callerCodeBlock)) 614 614 dataLog("Linking virtual call at ", *callerCodeBlock, " ", exec->callerFrame()->codeOrigin(), "\n"); … … 681 681 // If we cannot handle a callee, assume that it's better for this whole thing to be a 682 682 // virtual call. 683 if (exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo. callType() == CallLinkInfo::CallVarargs || callLinkInfo.callType() == CallLinkInfo::ConstructVarargs) {683 if (exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) { 684 684 linkVirtualFor(exec, callLinkInfo); 685 685 return; … … 804 804 CCallHelpers::Address(fastCountsBaseGPR, caseIndex * sizeof(uint32_t))); 805 805 } 806 calls[caseIndex].call = stubJit.nearCall(); 806 if (callLinkInfo.isTailCall()) { 807 stubJit.prepareForTailCallSlow(); 808 calls[caseIndex].call = stubJit.nearTailCall(); 809 } else 810 calls[caseIndex].call = stubJit.nearCall(); 807 811 calls[caseIndex].codePtr = codePtr; 808 812 done.append(stubJit.jump()); -
trunk/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
r181993 r189884 38 38 { 39 39 jit.move(numUsedSlotsGPR, resultGPR); 40 // We really want to make sure the size of the new call frame is a multiple of 41 // stackAlignmentRegisters(), however it is easier to accomplish this by 42 // rounding numUsedSlotsGPR to the next multiple of stackAlignmentRegisters(). 43 // Together with the rounding below, we will assure that the new call frame is 44 // located on a stackAlignmentRegisters() boundary and a multiple of 45 // stackAlignmentRegisters() in size. 46 jit.addPtr(CCallHelpers::TrustedImm32(stackAlignmentRegisters() - 1), resultGPR); 47 jit.andPtr(CCallHelpers::TrustedImm32(~(stackAlignmentRegisters() - 1)), resultGPR); 48 40 49 jit.addPtr(lengthGPR, resultGPR); 41 50 jit.addPtr(CCallHelpers::TrustedImm32(JSStack::CallFrameHeaderSize + (lengthIncludesThis? 0 : 1)), resultGPR); -
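The fix in `emitSetVarargsFrame` rounds `numUsedSlotsGPR` up to an alignment boundary before adding the argument length and header, so the new call frame both starts aligned and (with the existing rounding later in the function) has an aligned size. A model of the new offset arithmetic, with stand-in constants:

```cpp
#include <cstddef>

constexpr size_t kStackAlignmentRegisters = 2;
constexpr size_t kCallFrameHeaderSize = 5; // illustrative assumption

constexpr size_t roundUp(size_t divisor, size_t x)
{
    return (x + divisor - 1) / divisor * divisor;
}

// Mirrors the patched emitSetVarargsFrame prologue: align the used-slot
// count first (the new addPtr/andPtr pair), then add length and header as
// the pre-existing code does.
constexpr size_t newFrameOffset(size_t numUsedSlots, size_t length, bool lengthIncludesThis)
{
    size_t result = roundUp(kStackAlignmentRegisters, numUsedSlots); // new alignment step
    result += length + kCallFrameHeaderSize + (lengthIncludesThis ? 0 : 1);
    return result;
}
```

Aligning the base rather than only the final size is what the ChangeLog describes as aligning "the location of the new frame" in addition to its size — this was the Speedometer-breaking gap.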
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r189848 r189884 80 80 81 81 static void slowPathFor( 82 CCallHelpers& jit, VM* vm, P_JITOperation_ECli slowPathFunction)82 CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction) 83 83 { 84 84 jit.emitFunctionPrologue(); 85 85 jit.storePtr(GPRInfo::callFrameRegister, &vm->topCallFrame); 86 #if OS(WINDOWS) && CPU(X86_64) 87 // Windows X86_64 needs some space pointed to by arg0 for return types larger than 64 bits. 88 // Other argument values are shift by 1. Use space on the stack for our two return values. 89 // Moving the stack down maxFrameExtentForSlowPathCall bytes gives us room for our 3 arguments 90 // and space for the 16 byte return area. 91 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 92 jit.move(GPRInfo::regT2, GPRInfo::argumentGPR2); 93 jit.addPtr(CCallHelpers::TrustedImm32(32), CCallHelpers::stackPointerRegister, GPRInfo::argumentGPR0); 94 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1); 95 jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0); 96 emitPointerValidation(jit, GPRInfo::nonArgGPR0); 97 jit.call(GPRInfo::nonArgGPR0); 98 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR, 8), GPRInfo::returnValueGPR2); 99 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR), GPRInfo::returnValueGPR); 100 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 101 #else 86 102 if (maxFrameExtentForSlowPathCall) 87 103 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); … … 92 108 if (maxFrameExtentForSlowPathCall) 93 109 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 94 110 #endif 111 95 112 // This slow call will return the address of one of the following: 96 113 // 1) Exception throwing thunk. 97 114 // 2) Host call return value returner thingy. 
98 115 // 3) The function to call. 116 // The second return value GPR will hold a non-zero value for tail calls. 117 99 118 emitPointerValidation(jit, GPRInfo::returnValueGPR); 100 119 jit.emitFunctionEpilogue(); 120 121 RELEASE_ASSERT(reinterpret_cast<void*>(KeepTheFrame) == reinterpret_cast<void*>(0)); 122 CCallHelpers::Jump doNotTrash = jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::returnValueGPR2); 123 124 jit.preserveReturnAddressAfterCall(GPRInfo::nonPreservedNonReturnGPR); 125 jit.prepareForTailCallSlow(GPRInfo::returnValueGPR); 126 127 doNotTrash.link(&jit); 101 128 jit.jump(GPRInfo::returnValueGPR); 102 129 } … … 109 136 // to be in regT0/regT1 (payload/tag), the CallFrame to have already 110 137 // been adjusted, and all other registers to be available for use. 111 112 138 CCallHelpers jit(vm); 113 139 … … 186 212 // Make a tail call. This will return back to JIT code. 187 213 emitPointerValidation(jit, GPRInfo::regT4); 214 if (callLinkInfo.isTailCall()) { 215 jit.preserveReturnAddressAfterCall(GPRInfo::regT0); 216 jit.prepareForTailCallSlow(GPRInfo::regT4); 217 } 188 218 jit.jump(GPRInfo::regT4); 189 219 … … 197 227 return FINALIZE_CODE( 198 228 patchBuffer, 199 ("Virtual %s%s slow path thunk", 200 callLinkInfo.specializationKind() == CodeForCall ? "call" : "construct", 201 callLinkInfo.registerPreservationMode() == MustPreserveRegisters ? " that preserves registers" : "")); 229 ("Virtual %s slow path thunk", 230 callLinkInfo.callMode() == CallMode::Regular ? "call" : callLinkInfo.callMode() == CallMode::Tail ? 
"tail call" : "construct")); 202 231 } 203 232 … … 368 397 JSInterfaceJIT jit(vm); 369 398 370 // We enter with fixup count , in aligned stack units, in argumentGPR0 and the return thunk in argumentGPR1399 // We enter with fixup count in argumentGPR0 371 400 // We have the guarantee that a0, a1, a2, t3, t4 and t5 (or t0 for Windows) are all distinct :-) 372 401 #if USE(JSVALUE64) … … 379 408 jit.pop(JSInterfaceJIT::regT4); 380 409 # endif 381 jit.lshift32(JSInterfaceJIT::TrustedImm32(logStackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0);382 jit.neg64(JSInterfaceJIT::argumentGPR0);383 410 jit.move(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::regT3); 384 411 jit.load32(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, JSStack::ArgumentCount * sizeof(Register)), JSInterfaceJIT::argumentGPR2); 385 412 jit.add32(JSInterfaceJIT::TrustedImm32(JSStack::CallFrameHeaderSize), JSInterfaceJIT::argumentGPR2); 413 414 // Check to see if we have extra slots we can use 415 jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR1); 416 jit.and32(JSInterfaceJIT::TrustedImm32(stackAlignmentRegisters() - 1), JSInterfaceJIT::argumentGPR1); 417 JSInterfaceJIT::Jump noExtraSlot = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR1); 418 jit.move(JSInterfaceJIT::TrustedImm64(ValueUndefined), extraTemp); 419 JSInterfaceJIT::Label fillExtraSlots(jit.label()); 420 jit.store64(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight)); 421 jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2); 422 jit.branchSub32(JSInterfaceJIT::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR1).linkTo(fillExtraSlots, &jit); 423 jit.and32(JSInterfaceJIT::TrustedImm32(-stackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0); 424 JSInterfaceJIT::Jump done = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR0); 425 
noExtraSlot.link(&jit); 426 427 jit.neg64(JSInterfaceJIT::argumentGPR0); 386 428 387 429 // Move current frame down argumentGPR0 number of slots … … 392 434 jit.branchSub32(MacroAssembler::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2).linkTo(copyLoop, &jit); 393 435 394 // Fill in argumentGPR0 - 1missing arg slots with undefined436 // Fill in argumentGPR0 missing arg slots with undefined 395 437 jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR2); 396 438 jit.move(JSInterfaceJIT::TrustedImm64(ValueUndefined), extraTemp); 397 jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);398 439 JSInterfaceJIT::Label fillUndefinedLoop(jit.label()); 399 440 jit.store64(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight)); … … 407 448 jit.addPtr(extraTemp, JSInterfaceJIT::stackPointerRegister); 408 449 450 done.link(&jit); 451 409 452 # if CPU(X86_64) 410 453 jit.push(JSInterfaceJIT::regT4); … … 415 458 jit.pop(JSInterfaceJIT::regT4); 416 459 # endif 417 jit.lshift32(JSInterfaceJIT::TrustedImm32(logStackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0);418 jit.neg32(JSInterfaceJIT::argumentGPR0);419 460 jit.move(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::regT3); 420 461 jit.load32(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, JSStack::ArgumentCount * sizeof(Register)), JSInterfaceJIT::argumentGPR2); 421 462 jit.add32(JSInterfaceJIT::TrustedImm32(JSStack::CallFrameHeaderSize), JSInterfaceJIT::argumentGPR2); 422 463 464 // Check to see if we have extra slots we can use 465 jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR1); 466 jit.and32(JSInterfaceJIT::TrustedImm32(stackAlignmentRegisters() - 1), JSInterfaceJIT::argumentGPR1); 467 JSInterfaceJIT::Jump noExtraSlot = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR1); 468 JSInterfaceJIT::Label fillExtraSlots(jit.label()); 469 
jit.move(JSInterfaceJIT::TrustedImm32(0), JSInterfaceJIT::regT5); 470 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight, PayloadOffset)); 471 jit.move(JSInterfaceJIT::TrustedImm32(JSValue::UndefinedTag), JSInterfaceJIT::regT5); 472 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight, TagOffset)); 473 jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2); 474 jit.branchSub32(JSInterfaceJIT::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR1).linkTo(fillExtraSlots, &jit); 475 jit.and32(JSInterfaceJIT::TrustedImm32(-stackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0); 476 JSInterfaceJIT::Jump done = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR0); 477 noExtraSlot.link(&jit); 478 479 jit.neg32(JSInterfaceJIT::argumentGPR0); 480 423 481 // Move current frame down argumentGPR0 number of slots 424 482 JSInterfaceJIT::Label copyLoop(jit.label()); 425 jit.load32( JSInterfaceJIT::regT3, JSInterfaceJIT::regT5);426 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight ));427 jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, 4), JSInterfaceJIT::regT5);428 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, 4));483 jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, PayloadOffset), JSInterfaceJIT::regT5); 484 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, PayloadOffset)); 485 jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, TagOffset), JSInterfaceJIT::regT5); 486 jit.store32(JSInterfaceJIT::regT5, 
MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, TagOffset)); 429 487 jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::regT3); 430 488 jit.branchSub32(MacroAssembler::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2).linkTo(copyLoop, &jit); 431 489 432 // Fill in argumentGPR0 - 1missing arg slots with undefined490 // Fill in argumentGPR0 missing arg slots with undefined 433 491 jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR2); 434 jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);435 492 JSInterfaceJIT::Label fillUndefinedLoop(jit.label()); 436 493 jit.move(JSInterfaceJIT::TrustedImm32(0), JSInterfaceJIT::regT5); 437 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight ));494 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, PayloadOffset)); 438 495 jit.move(JSInterfaceJIT::TrustedImm32(JSValue::UndefinedTag), JSInterfaceJIT::regT5); 439 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, 4));496 jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, TagOffset)); 440 497 441 498 jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::regT3); … … 448 505 jit.addPtr(JSInterfaceJIT::regT5, JSInterfaceJIT::stackPointerRegister); 449 506 507 done.link(&jit); 508 450 509 # if CPU(X86) 451 510 jit.push(JSInterfaceJIT::regT4); … … 456 515 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 457 516 return FINALIZE_CODE(patchBuffer, ("fixup arity")); 517 } 518 519 MacroAssemblerCodeRef unreachableGenerator(VM* vm) 520 { 521 JSInterfaceJIT jit(vm); 522 523 jit.breakpoint(); 524 525 
LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 526 return FINALIZE_CODE(patchBuffer, ("unreachable thunk")); 458 527 } 459 528 -
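The arity-fixup hunks above replace the old "round the missing-argument count up to a stack-alignment multiple" approach with a two-part scheme: any unaligned remainder of missing slots is filled with `undefined` in place past the current frame, and only the remaining aligned portion moves the frame down. A minimal C++ sketch of that split (assuming `stackAlignmentRegisters() == 2`, the 64-bit value; the function name is hypothetical):

```cpp
#include <cassert>
#include <utility>

// Assumed value of JSC's stackAlignmentRegisters() on 64-bit platforms.
constexpr int stackAlignmentRegisters = 2;

// Split the arity fixup for `missingSlots` missing arguments into:
//   first  - slots filled with undefined in place ("extra slots we can use")
//   second - an alignment-preserving downward shift of the whole frame
std::pair<int, int> splitArityFixup(int missingSlots)
{
    int extraSlots = missingSlots & (stackAlignmentRegisters - 1);
    int alignedShift = missingSlots & ~(stackAlignmentRegisters - 1);
    assert(extraSlots + alignedShift == missingSlots);
    return { extraSlots, alignedShift };
}
```

When `alignedShift` is zero the thunk skips the copy loop entirely (the `done` branch in the hunk above), which is the common case for frames that are off by a single slot.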
trunk/Source/JavaScriptCore/jit/ThunkGenerators.h
r189848 r189884 35 35 36 36 class CallLinkInfo; 37 class CCallHelpers; 37 38 38 39 MacroAssemblerCodeRef throwExceptionFromCallSlowPathGenerator(VM*); 39 40 41 MacroAssemblerCodeRef linkCallThunk(VM*, CallLinkInfo&, CodeSpecializationKind, RegisterPreservationMode); 40 42 MacroAssemblerCodeRef linkCallThunkGenerator(VM*); 41 43 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM*); … … 47 49 MacroAssemblerCodeRef nativeTailCallGenerator(VM*); 48 50 MacroAssemblerCodeRef arityFixupGenerator(VM*); 51 MacroAssemblerCodeRef unreachableGenerator(VM*); 49 52 50 53 MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm); -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r189848 r189884 168 168 169 169 const StackAlignment = 16 170 const StackAlignmentSlots = 2 170 171 const StackAlignmentMask = StackAlignment - 1 171 172 … … 698 699 end 699 700 700 macro callTargetFunction(callLinkInfo, calleeFramePtr) 701 move calleeFramePtr, sp 701 macro callTargetFunction(callee) 702 702 if C_LOOP 703 cloopCallJSFunction LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]703 cloopCallJSFunction callee 704 704 else 705 call LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]705 call callee 706 706 end 707 707 restoreStackPointerAfterCall() … … 709 709 end 710 710 711 macro slowPathForCall(slowPath) 711 macro prepareForRegularCall(callee, temp1, temp2, temp3) 712 addp CallerFrameAndPCSize, sp 713 end 714 715 # sp points to the new frame 716 macro prepareForTailCall(callee, temp1, temp2, temp3) 717 restoreCalleeSavesUsedByLLInt() 718 719 loadi PayloadOffset + ArgumentCount[cfr], temp2 720 loadp CodeBlock[cfr], temp1 721 loadp CodeBlock::m_numParameters[temp1], temp1 722 bilteq temp1, temp2, .noArityFixup 723 move temp1, temp2 724 725 .noArityFixup: 726 # We assume < 2^28 arguments 727 muli SlotSize, temp2 728 addi StackAlignment - 1 + CallFrameHeaderSize, temp2 729 andi ~StackAlignmentMask, temp2 730 731 move cfr, temp1 732 addp temp2, temp1 733 734 loadi PayloadOffset + ArgumentCount[sp], temp2 735 # We assume < 2^28 arguments 736 muli SlotSize, temp2 737 addi StackAlignment - 1 + CallFrameHeaderSize, temp2 738 andi ~StackAlignmentMask, temp2 739 740 if ARM or SH4 or ARM64 or C_LOOP or MIPS 741 addp 2 * PtrSize, sp 742 subi 2 * PtrSize, temp2 743 loadp PtrSize[cfr], lr 744 else 745 addp PtrSize, sp 746 subi PtrSize, temp2 747 loadp PtrSize[cfr], temp3 748 storep temp3, [sp] 749 end 750 751 subp temp2, temp1 752 loadp [cfr], cfr 753 754 .copyLoop: 755 subi PtrSize, temp2 756 loadp [sp, temp2, 1], temp3 757 storep temp3, [temp1, temp2, 1] 758 btinz temp2, .copyLoop 759 760 move temp1, sp 761 jmp callee 762 end 763 764 macro slowPathForCall(slowPath, 
prepareCall) 712 765 callCallSlowPath( 713 766 slowPath, 714 macro (callee, calleeFrame) 715 btpz calleeFrame, .dontUpdateSP 716 if ARMv7 717 addp CallerFrameAndPCSize, calleeFrame, calleeFrame 718 move calleeFrame, sp 719 else 720 addp CallerFrameAndPCSize, calleeFrame, sp 721 end 767 # Those are r0 and r1 768 macro (callee, calleeFramePtr) 769 btpz calleeFramePtr, .dontUpdateSP 770 move calleeFramePtr, sp 771 prepareCall(callee, t2, t3, t4) 722 772 .dontUpdateSP: 723 if C_LOOP 724 cloopCallJSFunction callee 725 else 726 call callee 727 end 728 restoreStackPointerAfterCall() 729 dispatchAfterCall() 773 callTargetFunction(callee) 730 774 end) 731 775 end … … 1416 1460 traceExecution() 1417 1461 arrayProfileForCall() 1418 doCall(_llint_slow_path_call) 1419 1462 doCall(_llint_slow_path_call, prepareForRegularCall) 1463 1464 _llint_op_tail_call: 1465 traceExecution() 1466 arrayProfileForCall() 1467 checkSwitchToJITForEpilogue() 1468 doCall(_llint_slow_path_call, prepareForTailCall) 1420 1469 1421 1470 _llint_op_construct: 1422 1471 traceExecution() 1423 doCall(_llint_slow_path_construct) 1424 1425 1426 _llint_op_call_varargs: 1427 traceExecution() 1472 doCall(_llint_slow_path_construct, prepareForRegularCall) 1473 1474 macro doCallVarargs(slowPath, prepareCall) 1428 1475 callSlowPath(_llint_slow_path_size_frame_for_varargs) 1429 1476 branchIfException(_llint_throw_from_slow_path_trampoline) … … 1440 1487 end 1441 1488 end 1442 slowPathForCall(_llint_slow_path_call_varargs) 1489 slowPathForCall(slowPath, prepareCall) 1490 end 1491 1492 _llint_op_call_varargs: 1493 traceExecution() 1494 doCallVarargs(_llint_slow_path_call_varargs, prepareForRegularCall) 1495 1496 _llint_op_tail_call_varargs: 1497 traceExecution() 1498 checkSwitchToJITForEpilogue() 1499 # We lie and perform the tail call instead of preparing it since we can't 1500 # prepare the frame for a call opcode 1501 doCallVarargs(_llint_slow_path_call_varargs, prepareForTailCall) 1443 1502 1444 1503 
_llint_op_construct_varargs: 1445 1504 traceExecution() 1446 callSlowPath(_llint_slow_path_size_frame_for_varargs) 1447 branchIfException(_llint_throw_from_slow_path_trampoline) 1448 # calleeFrame in r1 1449 if JSVALUE64 1450 move r1, sp 1451 else 1452 # The calleeFrame is not stack aligned, move down by CallerFrameAndPCSize to align 1453 if ARMv7 1454 subp r1, CallerFrameAndPCSize, t2 1455 move t2, sp 1456 else 1457 subp r1, CallerFrameAndPCSize, sp 1458 end 1459 end 1460 slowPathForCall(_llint_slow_path_construct_varargs) 1505 doCallVarargs(_llint_slow_path_construct_varargs, prepareForRegularCall) 1461 1506 1462 1507 … … 1497 1542 # returns the JS value that the eval returned. 1498 1543 1499 slowPathForCall(_llint_slow_path_call_eval )1544 slowPathForCall(_llint_slow_path_call_eval, prepareForRegularCall) 1500 1545 1501 1546 -
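The `prepareForTailCall` macro above computes the byte size of both the caller frame being discarded and the callee frame being installed: take the larger of the actual argument count and the callee's declared parameter count (the arity-fixup minimum), convert to bytes, add the header, and round up to `StackAlignment`. A sketch of that computation in C++, under the assumed 64-bit constants (`SlotSize = 8`, `CallFrameHeaderSize = 40` bytes, `StackAlignment = 16`):

```cpp
#include <algorithm>
#include <cassert>

// Assumed 64-bit values of the offlineasm constants used by prepareForTailCall.
constexpr int SlotSize = 8;              // bytes per stack slot
constexpr int CallFrameHeaderSize = 40;  // CallFrameHeaderSlots * SlotSize
constexpr int StackAlignment = 16;
constexpr int StackAlignmentMask = StackAlignment - 1;

// Aligned byte size of a call frame, mirroring:
//   muli SlotSize; addi StackAlignment - 1 + CallFrameHeaderSize;
//   andi ~StackAlignmentMask
int alignedFrameSize(int argumentCountIncludingThis, int numParameters)
{
    int slots = std::max(argumentCountIncludingThis, numParameters);
    return (slots * SlotSize + StackAlignment - 1 + CallFrameHeaderSize)
        & ~StackAlignmentMask;
}
```

The macro runs this once for the caller frame (after the `.noArityFixup` arity clamp) and once for the callee frame sitting at `sp`, then copies the callee frame up over the caller by the difference.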
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
r189848 r189884 603 603 loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1 604 604 btiz t1, .continue 605 606 // Move frame up "t1 * 2" slots 607 lshiftp 1, t1 605 loadi PayloadOffset + ArgumentCount[cfr], t2 606 addi CallFrameHeaderSlots, t2 607 608 // Check if there are some unaligned slots we can use 609 move t1, t3 610 andi StackAlignmentSlots - 1, t3 611 btiz t3, .noExtraSlot 612 .fillExtraSlots: 613 move 0, t0 614 storei t0, PayloadOffset[cfr, t2, 8] 615 move UndefinedTag, t0 616 storei t0, TagOffset[cfr, t2, 8] 617 addi 1, t2 618 bsubinz 1, t3, .fillExtraSlots 619 andi ~(StackAlignmentSlots - 1), t1 620 btiz t1, .continue 621 622 .noExtraSlot: 623 // Move frame up t1 slots 608 624 negi t1 609 625 move cfr, t3 610 loadi PayloadOffset + ArgumentCount[cfr], t2611 addi CallFrameHeaderSlots, t2612 626 .copyLoop: 613 627 loadi PayloadOffset[t3], t0 … … 1765 1779 end 1766 1780 1767 macro doCall(slowPath )1781 macro doCall(slowPath, prepareCall) 1768 1782 loadi 8[PC], t0 1769 1783 loadi 20[PC], t1 … … 1780 1794 storei t2, ArgumentCount + PayloadOffset[t3] 1781 1795 storei CellTag, Callee + TagOffset[t3] 1782 addp CallerFrameAndPCSize, t3 1783 callTargetFunction(t1, t3) 1796 move t3, sp 1797 prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4) 1798 callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1]) 1784 1799 1785 1800 .opCallSlow: 1786 slowPathForCall(slowPath) 1787 end 1788 1801 slowPathForCall(slowPath, prepareCall) 1802 end 1789 1803 1790 1804 _llint_op_ret: -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r189848 r189884 511 511 loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1 512 512 btiz t1, .continue 513 514 // Move frame up "t1 * 2" slots 515 lshiftp 1, t1 513 loadi PayloadOffset + ArgumentCount[cfr], t2 514 addi CallFrameHeaderSlots, t2 515 516 // Check if there are some unaligned slots we can use 517 move t1, t3 518 andi StackAlignmentSlots - 1, t3 519 btiz t3, .noExtraSlot 520 move ValueUndefined, t0 521 .fillExtraSlots: 522 storeq t0, [cfr, t2, 8] 523 addi 1, t2 524 bsubinz 1, t3, .fillExtraSlots 525 andi ~(StackAlignmentSlots - 1), t1 526 btiz t1, .continue 527 528 .noExtraSlot: 529 // Move frame up t1 slots 516 530 negq t1 517 531 move cfr, t3 518 532 subp CalleeSaveSpaceAsVirtualRegisters * 8, t3 519 loadi PayloadOffset + ArgumentCount[cfr], t2 520 addi CallFrameHeaderSlots + CalleeSaveSpaceAsVirtualRegisters, t2 533 addi CalleeSaveSpaceAsVirtualRegisters, t2 521 534 .copyLoop: 522 535 loadq [t3], t0 … … 1654 1667 end 1655 1668 1656 macro doCall(slowPath )1669 macro doCall(slowPath, prepareCall) 1657 1670 loadisFromInstruction(2, t0) 1658 1671 loadpFromInstruction(5, t1) … … 1668 1681 storei PC, ArgumentCount + TagOffset[cfr] 1669 1682 storei t2, ArgumentCount + PayloadOffset[t3] 1670 addp CallerFrameAndPCSize, t3 1671 callTargetFunction(t1, t3) 1683 move t3, sp 1684 prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4) 1685 callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1]) 1672 1686 1673 1687 .opCallSlow: 1674 slowPathForCall(slowPath) 1675 end 1676 1688 slowPathForCall(slowPath, prepareCall) 1689 end 1677 1690 1678 1691 _llint_op_ret: -
trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.h
r189848 r189884 60 60 61 61 ASSERT(argumentCountIncludingThis < newCodeBlock->numParameters()); 62 int missingArgumentCount = newCodeBlock->numParameters() - argumentCountIncludingThis; 63 int neededStackSpace = missingArgumentCount + 1; // Allow space to save the original return PC. 64 int paddedStackSpace = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), neededStackSpace); 65 66 if (!stack->ensureCapacityFor(exec->registers() - paddedStackSpace)) 62 int frameSize = argumentCountIncludingThis + JSStack::CallFrameHeaderSize; 63 int alignedFrameSizeForParameters = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), 64 newCodeBlock->numParameters() + JSStack::CallFrameHeaderSize); 65 int paddedStackSpace = alignedFrameSizeForParameters - frameSize; 66 67 if (!stack->ensureCapacityFor(exec->registers() - paddedStackSpace % stackAlignmentRegisters())) 67 68 return -1; 68 return paddedStackSpace / stackAlignmentRegisters();69 return paddedStackSpace; 69 70 } 70 71
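The `CommonSlowPaths.h` hunk above is the Speedometer fix called out in the ChangeLog: instead of rounding the missing-argument count itself up to an alignment multiple, the slow path now pads to the aligned size of the frame as it will look after fixup, and returns the raw (possibly unaligned) slot count for the thunks above to split. A standalone C++ sketch of the new computation (assuming `stackAlignmentRegisters() == 2` and a 5-register `JSStack::CallFrameHeaderSize`):

```cpp
#include <cassert>

// Assumed values: 64-bit stackAlignmentRegisters() and JSStack header size.
constexpr int stackAlignmentRegisters = 2;
constexpr int CallFrameHeaderSize = 5; // in registers

int roundUpToMultipleOf(int divisor, int x)
{
    return (x + divisor - 1) / divisor * divisor;
}

// Mirrors the rewritten arity check: pad the current frame out to the
// aligned size a frame with numParameters arguments would occupy.
int paddedStackSpace(int argumentCountIncludingThis, int numParameters)
{
    int frameSize = argumentCountIncludingThis + CallFrameHeaderSize;
    int alignedFrameSizeForParameters = roundUpToMultipleOf(
        stackAlignmentRegisters, numParameters + CallFrameHeaderSize);
    return alignedFrameSizeForParameters - frameSize;
}
```

Note the result is no longer guaranteed to be a multiple of `stackAlignmentRegisters` (e.g. an off-by-one arity yields 1); the unaligned remainder is exactly what the `.fillExtraSlots` loops in the LLInt and the JIT thunk consume in place.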