Changeset 189848 in WebKit

Timestamp:
    Sep 15, 2015, 11:14:54 PM
Location:
    trunk/Source/JavaScriptCore
Files:
    5 deleted
    42 edited
trunk/Source/JavaScriptCore/CMakeLists.txt
r189774 → r189848:

      bytecode/CallLinkInfo.cpp
      bytecode/CallLinkStatus.cpp
-     bytecode/CallMode.cpp
      bytecode/CallVariant.cpp
      bytecode/CodeBlock.cpp
trunk/Source/JavaScriptCore/ChangeLog
r189846 → r189848 (new entry added at the top of the ChangeLog):

2015-09-15  Michael Saboff  <msaboff@apple.com>

        Rollout r189774 and 189818.

        Broke Speedometer/Full.html

        Not reviewed.

        * CMakeLists.txt:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/AbortReason.h:
        * assembler/AbstractMacroAssembler.h:
        (JSC::AbstractMacroAssembler::Call::Call):
        (JSC::AbstractMacroAssembler::repatchNearCall):
        (JSC::AbstractMacroAssembler::repatchCompact):
        * assembler/CodeLocation.h:
        (JSC::CodeLocationNearCall::CodeLocationNearCall):
        (JSC::CodeLocationCommon::callAtOffset):
        (JSC::CodeLocationCommon::nearCallAtOffset):
        (JSC::CodeLocationCommon::dataLabelPtrAtOffset):
        (JSC::CodeLocationNearCall::callMode): Deleted.
        * assembler/LinkBuffer.h:
        (JSC::LinkBuffer::locationOfNearCall):
        (JSC::LinkBuffer::locationOf):
        * assembler/MacroAssemblerARM.h:
        (JSC::MacroAssemblerARM::nearCall):
        (JSC::MacroAssemblerARM::call):
        (JSC::MacroAssemblerARM::linkCall):
        (JSC::MacroAssemblerARM::nearTailCall): Deleted.
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::nearCall):
        (JSC::MacroAssemblerARM64::ret):
        (JSC::MacroAssemblerARM64::linkCall):
        (JSC::MacroAssemblerARM64::nearTailCall): Deleted.
        * assembler/MacroAssemblerARMv7.h:
        (JSC::MacroAssemblerARMv7::nearCall):
        (JSC::MacroAssemblerARMv7::call):
        (JSC::MacroAssemblerARMv7::linkCall):
        (JSC::MacroAssemblerARMv7::nearTailCall): Deleted.
        * assembler/MacroAssemblerMIPS.h:
        (JSC::MacroAssemblerMIPS::nearCall):
        (JSC::MacroAssemblerMIPS::call):
        (JSC::MacroAssemblerMIPS::linkCall):
        (JSC::MacroAssemblerMIPS::repatchCall):
        (JSC::MacroAssemblerMIPS::nearTailCall): Deleted.
        * assembler/MacroAssemblerSH4.h:
        (JSC::MacroAssemblerSH4::call):
        (JSC::MacroAssemblerSH4::nearCall):
        (JSC::MacroAssemblerSH4::linkCall):
        (JSC::MacroAssemblerSH4::repatchCall):
        (JSC::MacroAssemblerSH4::nearTailCall): Deleted.
        * assembler/MacroAssemblerX86.h:
        (JSC::MacroAssemblerX86::linkCall):
        * assembler/MacroAssemblerX86Common.h:
        (JSC::MacroAssemblerX86Common::breakpoint):
        (JSC::MacroAssemblerX86Common::nearCall):
        (JSC::MacroAssemblerX86Common::nearTailCall): Deleted.
        * assembler/MacroAssemblerX86_64.h:
        (JSC::MacroAssemblerX86_64::linkCall):
        * bytecode/BytecodeList.json:
        * bytecode/BytecodeUseDef.h:
        (JSC::computeUsesForBytecodeOffset):
        (JSC::computeDefsForBytecodeOffset):
        * bytecode/CallLinkInfo.h:
        (JSC::CallLinkInfo::callTypeFor):
        (JSC::CallLinkInfo::CallLinkInfo):
        (JSC::CallLinkInfo::specializationKind):
        (JSC::CallLinkInfo::registerPreservationMode):
        (JSC::CallLinkInfo::isVarargsCallType): Deleted.
        (JSC::CallLinkInfo::callModeFor): Deleted.
        (JSC::CallLinkInfo::callMode): Deleted.
        (JSC::CallLinkInfo::isTailCall): Deleted.
        (JSC::CallLinkInfo::isVarargs): Deleted.
        * bytecode/CallLinkStatus.cpp:
        (JSC::CallLinkStatus::computeFromLLInt):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::dumpBytecode):
        (JSC::CodeBlock::CodeBlock):
        * bytecompiler/BytecodeGenerator.cpp:
        (JSC::BytecodeGenerator::BytecodeGenerator):
        (JSC::BytecodeGenerator::emitCallInTailPosition):
        (JSC::BytecodeGenerator::emitCallEval):
        (JSC::BytecodeGenerator::emitCall):
        (JSC::BytecodeGenerator::emitCallVarargsInTailPosition):
        (JSC::BytecodeGenerator::emitConstructVarargs):
        * bytecompiler/NodesCodegen.cpp:
        (JSC::CallArguments::CallArguments):
        (JSC::LabelNode::emitBytecode):
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
        * ftl/FTLLowerDFGToLLVM.cpp:
        (JSC::FTL::DFG::LowerDFGToLLVM::compileCallOrConstruct):
        * interpreter/Interpreter.h:
        (JSC::Interpreter::isCallBytecode):
        * jit/CCallHelpers.h:
        (JSC::CCallHelpers::jumpToExceptionHandler):
        (JSC::CCallHelpers::prepareForTailCallSlow): Deleted.
        * jit/JIT.cpp:
        (JSC::JIT::privateCompileMainPass):
        (JSC::JIT::privateCompileSlowCases):
        * jit/JIT.h:
        * jit/JITCall.cpp:
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        (JSC::JIT::emit_op_call):
        (JSC::JIT::emit_op_call_eval):
        (JSC::JIT::emit_op_call_varargs):
        (JSC::JIT::emit_op_construct_varargs):
        (JSC::JIT::emitSlow_op_call):
        (JSC::JIT::emitSlow_op_call_eval):
        (JSC::JIT::emitSlow_op_call_varargs):
        (JSC::JIT::emitSlow_op_construct_varargs):
        (JSC::JIT::emit_op_tail_call): Deleted.
        (JSC::JIT::emit_op_tail_call_varargs): Deleted.
        (JSC::JIT::emitSlow_op_tail_call): Deleted.
        (JSC::JIT::emitSlow_op_tail_call_varargs): Deleted.
        * jit/JITCall32_64.cpp:
        (JSC::JIT::emitSlow_op_call):
        (JSC::JIT::emitSlow_op_call_eval):
        (JSC::JIT::emitSlow_op_call_varargs):
        (JSC::JIT::emitSlow_op_construct_varargs):
        (JSC::JIT::emit_op_call):
        (JSC::JIT::emit_op_call_eval):
        (JSC::JIT::emit_op_call_varargs):
        (JSC::JIT::emit_op_construct_varargs):
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        (JSC::JIT::emitSlow_op_tail_call): Deleted.
        (JSC::JIT::emitSlow_op_tail_call_varargs): Deleted.
        (JSC::JIT::emit_op_tail_call): Deleted.
        (JSC::JIT::emit_op_tail_call_varargs): Deleted.
        * jit/JITInlines.h:
        (JSC::JIT::emitNakedCall):
        (JSC::JIT::updateTopCallFrame):
        (JSC::JIT::emitNakedTailCall): Deleted.
        * jit/JITOperations.cpp:
        * jit/JITOperations.h:
        * jit/Repatch.cpp:
        (JSC::linkVirtualFor):
        (JSC::linkPolymorphicCall):
        * jit/ThunkGenerators.cpp:
        (JSC::throwExceptionFromCallSlowPathGenerator):
        (JSC::slowPathFor):
        (JSC::linkCallThunkGenerator):
        (JSC::virtualThunkFor):
        (JSC::arityFixupGenerator):
        (JSC::baselineGetterReturnThunkGenerator):
        (JSC::unreachableGenerator): Deleted.
        * jit/ThunkGenerators.h:
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        * runtime/CommonSlowPaths.h:
        (JSC::CommonSlowPaths::arityCheckFor):
        (JSC::CommonSlowPaths::opIn):
        * tests/stress/mutual-tail-call-no-stack-overflow.js: Removed.
        * tests/stress/tail-call-no-stack-overflow.js: Removed.
        * tests/stress/tail-call-recognize.js: Removed.
        * tests/stress/tail-call-varargs-no-stack-overflow.js: Removed.
        * tests/stress/tail-calls-dont-overwrite-live-stack.js: Removed.

2015-09-15  Sukolsak Sakshuwong  <sukolsak@gmail.com>
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
r189774 → r189848:

      <ClCompile Include="..\bytecode\CallLinkInfo.cpp" />
      <ClCompile Include="..\bytecode\CallLinkStatus.cpp" />
-     <ClCompile Include="..\bytecode\CallMode.cpp" />
      <ClCompile Include="..\bytecode\CallVariant.cpp" />
      <ClCompile Include="..\bytecode\CodeBlock.cpp" />
…
      <ClInclude Include="..\bytecode\CallLinkInfo.h" />
      <ClInclude Include="..\bytecode\CallLinkStatus.h" />
-     <ClInclude Include="..\bytecode\CallMode.h" />
      <ClInclude Include="..\bytecode\CallReturnOffsetToBytecodeOffset.h" />
      <ClInclude Include="..\bytecode\CallVariant.h" />
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
r189774 → r189848:

          <Filter>bytecode</Filter>
      </ClCompile>
-     <ClCompile Include="..\bytecode\CallMode.cpp">
-         <Filter>bytecode</Filter>
-     </ClCompile>
      <ClCompile Include="..\bytecode\CodeBlock.cpp">
          <Filter>bytecode</Filter>
…
      </ClInclude>
      <ClInclude Include="..\bytecode\CallLinkStatus.h">
-         <Filter>bytecode</Filter>
-     </ClInclude>
-     <ClInclude Include="..\bytecode\CallMode.h">
          <Filter>bytecode</Filter>
      </ClInclude>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r189774 → r189848:

      5DE6E5B30E1728EC00180407 /* create_hash_table in Headers */ = {isa = PBXBuildFile; fileRef = F692A8540255597D01FF60F7 /* create_hash_table */; settings = {ATTRIBUTES = (); }; };
      623A37EC1B87A7C000754209 /* RegisterMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 623A37EB1B87A7BD00754209 /* RegisterMap.h */; };
-     627673231B680C1E00FD9F2E /* CallMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 627673211B680C1E00FD9F2E /* CallMode.cpp */; };
-     627673241B680C1E00FD9F2E /* CallMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 627673221B680C1E00FD9F2E /* CallMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
      62D2D38F1ADF103F000206C1 /* FunctionRareData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 62D2D38D1ADF103F000206C1 /* FunctionRareData.cpp */; };
      62D2D3901ADF103F000206C1 /* FunctionRareData.h in Headers */ = {isa = PBXBuildFile; fileRef = 62D2D38E1ADF103F000206C1 /* FunctionRareData.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
      5DE3D0F40DD8DDFB00468714 /* WebKitAvailability.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebKitAvailability.h; sourceTree = "<group>"; };
      623A37EB1B87A7BD00754209 /* RegisterMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegisterMap.h; sourceTree = "<group>"; };
-     627673211B680C1E00FD9F2E /* CallMode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallMode.cpp; sourceTree = "<group>"; };
-     627673221B680C1E00FD9F2E /* CallMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallMode.h; sourceTree = "<group>"; };
      62A9A29E1B0BED4800BD54CA /* DFGLazyNode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGLazyNode.cpp; path = dfg/DFGLazyNode.cpp; sourceTree = "<group>"; };
      62A9A29F1B0BED4800BD54CA /* DFGLazyNode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGLazyNode.h; path = dfg/DFGLazyNode.h; sourceTree = "<group>"; };
…
      0F93329314CA7DC10085F3C6 /* CallLinkStatus.cpp */,
      0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */,
-     627673211B680C1E00FD9F2E /* CallMode.cpp */,
-     627673221B680C1E00FD9F2E /* CallMode.h */,
      0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */,
      0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */,
…
      A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
      0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
-     627673241B680C1E00FD9F2E /* CallMode.h in Headers */,
      0FD8A32617D51F5700CA2C40 /* DFGOSREntrypointCreationPhase.h in Headers */,
      62F2AA381B0BEDE300610C7A /* DFGLazyNode.h in Headers */,
…
      9335F24D12E6765B002B5553 /* StringRecursionChecker.cpp in Sources */,
      BCDE3B430E6C832D001453A7 /* Structure.cpp in Sources */,
-     627673231B680C1E00FD9F2E /* CallMode.cpp in Sources */,
      7E4EE70F0EBB7A5B005934AA /* StructureChain.cpp in Sources */,
      C2F0F2D116BAEEE900187C19 /* StructureRareData.cpp in Sources */,
trunk/Source/JavaScriptCore/assembler/AbortReason.h
r189774 → r189848:

      DFGUnreasonableOSREntryJumpDestination = 230,
      DFGVarargsThrowingPathDidNotThrow = 235,
-     JITDidReturnFromTailCall = 237,
      JITDivOperandsAreNotNumbers = 240,
      JITGetByValResultIsNotEmpty = 250,
trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
r189774 → r189848:

          Linkable = 0x1,
          Near = 0x2,
-         Tail = 0x4,
          LinkableNear = 0x3,
-         LinkableNearTail = 0x7,
      };
…
      static void repatchNearCall(CodeLocationNearCall nearCall, CodeLocationLabel destination)
      {
-         switch (nearCall.callMode()) {
-         case NearCallMode::Tail:
-             AssemblerType::relinkJump(nearCall.dataLocation(), destination.executableAddress());
-             return;
-         case NearCallMode::Regular:
-             AssemblerType::relinkCall(nearCall.dataLocation(), destination.executableAddress());
-             return;
-         }
-         RELEASE_ASSERT_NOT_REACHED();
+         AssemblerType::relinkCall(nearCall.dataLocation(), destination.executableAddress());
      }
trunk/Source/JavaScriptCore/assembler/CodeLocation.h
r189774 → r189848:

  namespace JSC {

- enum NearCallMode { Regular, Tail };
-
  class CodeLocationInstruction;
  class CodeLocationLabel;
…
      CodeLocationJump jumpAtOffset(int offset);
      CodeLocationCall callAtOffset(int offset);
-     CodeLocationNearCall nearCallAtOffset(int offset, NearCallMode);
+     CodeLocationNearCall nearCallAtOffset(int offset);
      CodeLocationDataLabelPtr dataLabelPtrAtOffset(int offset);
      CodeLocationDataLabel32 dataLabel32AtOffset(int offset);
…
  public:
      CodeLocationNearCall() {}
-     explicit CodeLocationNearCall(MacroAssemblerCodePtr location, NearCallMode callMode)
-         : CodeLocationCommon(location), m_callMode(callMode) { }
-     explicit CodeLocationNearCall(void* location, NearCallMode callMode)
-         : CodeLocationCommon(MacroAssemblerCodePtr(location)), m_callMode(callMode) { }
-     NearCallMode callMode() { return m_callMode; }
- private:
-     NearCallMode m_callMode = NearCallMode::Regular;
+     explicit CodeLocationNearCall(MacroAssemblerCodePtr location)
+         : CodeLocationCommon(location) {}
+     explicit CodeLocationNearCall(void* location)
+         : CodeLocationCommon(MacroAssemblerCodePtr(location)) {}
  };
…
- inline CodeLocationNearCall CodeLocationCommon::nearCallAtOffset(int offset, NearCallMode callMode)
+ inline CodeLocationNearCall CodeLocationCommon::nearCallAtOffset(int offset)
  {
      ASSERT_VALID_CODE_OFFSET(offset);
-     return CodeLocationNearCall(reinterpret_cast<char*>(dataLocation()) + offset, callMode);
+     return CodeLocationNearCall(reinterpret_cast<char*>(dataLocation()) + offset);
  }
trunk/Source/JavaScriptCore/assembler/LinkBuffer.h
r189774 → r189848:

      ASSERT(call.isFlagSet(Call::Linkable));
      ASSERT(call.isFlagSet(Call::Near));
-     return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label)),
-         call.isFlagSet(Call::Tail) ? NearCallMode::Tail : NearCallMode::Regular);
+     return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label)));
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM.h
r189774 → r189848:

      }

-     Call nearTailCall()
-     {
-         return Call(m_assembler.jmp(), Call::LinkableNearTail);
-     }
-
      Call call(RegisterID target)
      {
…
      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (call.isFlagSet(Call::Tail))
-             ARMAssembler::linkJump(code, call.m_label, function.value());
-         else
-             ARMAssembler::linkCall(code, call.m_label, function.value());
+         ARMAssembler::linkCall(code, call.m_label, function.value());
      }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
r189774 → r189848:

      }

-     ALWAYS_INLINE Call nearTailCall()
-     {
-         AssemblerLabel label = m_assembler.label();
-         m_assembler.b();
-         return Call(label, Call::LinkableNearTail);
-     }
-
      ALWAYS_INLINE void ret()
      {
…
      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (!call.isFlagSet(Call::Near))
+         if (call.isFlagSet(Call::Near))
+             ARM64Assembler::linkCall(code, call.m_label, function.value());
+         else
              ARM64Assembler::linkPointer(code, call.m_label.labelAtOffset(REPATCH_OFFSET_CALL_TO_POINTER), function.value());
-         else if (call.isFlagSet(Call::Tail))
-             ARM64Assembler::linkJump(code, call.m_label, function.value());
-         else
-             ARM64Assembler::linkCall(code, call.m_label, function.value());
      }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
r189774 → r189848:

      }

-     ALWAYS_INLINE Call nearTailCall()
-     {
-         moveFixedWidthEncoding(TrustedImm32(0), dataTempRegister);
-         return Call(m_assembler.bx(dataTempRegister), Call::LinkableNearTail);
-     }
-
      ALWAYS_INLINE Call call()
      {
…
      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (call.isFlagSet(Call::Tail))
-             ARMv7Assembler::linkJump(code, call.m_label, function.value());
-         else
-             ARMv7Assembler::linkCall(code, call.m_label, function.value());
+         ARMv7Assembler::linkCall(code, call.m_label, function.value());
      }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h
r189774 → r189848:

          m_assembler.nop();
          return Call(m_assembler.label(), Call::LinkableNear);
      }
-
-     Call nearTailCall()
-     {
-         m_assembler.nop();
-         m_assembler.nop();
-         m_assembler.beq(MIPSRegisters::zero, MIPSRegisters::zero, 0);
-         m_assembler.nop();
-         insertRelaxationWords();
-         return Call(m_assembler.label(), Call::LinkableNearTail);
-     }
…
      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (call.isFlagSet(Call::Tail))
-             MIPSAssembler::linkJump(code, call.m_label, function.value());
-         else
-             MIPSAssembler::linkCall(code, call.m_label, function.value());
+         MIPSAssembler::linkCall(code, call.m_label, function.value());
      }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerSH4.h
r189774 → r189848:

      }

-     Call nearTailCall()
-     {
-         return Call(m_assembler.jump(), Call::LinkableNearTail);
-     }
-
      Call nearCall()
      {
…
      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (call.isFlagSet(Call::Tail))
-             SH4Assembler::linkJump(code, call.m_label, function.value());
-         else
-             SH4Assembler::linkCall(code, call.m_label, function.value());
+         SH4Assembler::linkCall(code, call.m_label, function.value());
      }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86.h
r189774 → r189848:

      static void linkCall(void* code, Call call, FunctionPtr function)
      {
-         if (call.isFlagSet(Call::Tail))
-             X86Assembler::linkJump(code, call.m_label, function.value());
-         else
-             X86Assembler::linkCall(code, call.m_label, function.value());
+         X86Assembler::linkCall(code, call.m_label, function.value());
      }
  };
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
r189774 → r189848:

      {
          m_assembler.int3();
      }
-
-     Call nearTailCall()
-     {
-         return Call(m_assembler.jmp(), Call::LinkableNearTail);
-     }
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
r189774 → r189848:

      if (!call.isFlagSet(Call::Near))
          X86Assembler::linkPointer(code, call.m_label.labelAtOffset(-REPATCH_OFFSET_CALL_R11), function.value());
-     else if (call.isFlagSet(Call::Tail))
-         X86Assembler::linkJump(code, call.m_label, function.value());
      else
          X86Assembler::linkCall(code, call.m_label, function.value());
trunk/Source/JavaScriptCore/bytecode/BytecodeList.json
r189818 → r189848:

      { "name" : "op_new_arrow_func_exp", "length" : 5 },
      { "name" : "op_call", "length" : 9 },
-     { "name" : "op_tail_call", "length" : 9 },
      { "name" : "op_call_eval", "length" : 9 },
      { "name" : "op_call_varargs", "length" : 9 },
-     { "name" : "op_tail_call_varargs", "length" : 9 },
      { "name" : "op_ret", "length" : 2 },
      { "name" : "op_construct", "length" : 9 },
…
      { "name" : "llint_cloop_did_return_from_js_6" },
      { "name" : "llint_cloop_did_return_from_js_7" },
-     { "name" : "llint_cloop_did_return_from_js_8" },
-     { "name" : "llint_cloop_did_return_from_js_9" },
-     { "name" : "llint_cloop_did_return_from_js_10" },
-     { "name" : "llint_cloop_did_return_from_js_11" }
+     { "name" : "llint_cloop_did_return_from_js_8" }
  ]
},
trunk/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
r189774 → r189848:

      case op_has_structure_property:
      case op_construct_varargs:
-     case op_call_varargs:
-     case op_tail_call_varargs: {
+     case op_call_varargs: {
          functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
          functor(codeBlock, instruction, opcodeID, instruction[3].u.operand);
…
      case op_construct:
      case op_call_eval:
-     case op_call:
-     case op_tail_call: {
+     case op_call: {
          functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
          int argCount = instruction[3].u.operand;
…
      case op_new_arrow_func_exp:
      case op_call_varargs:
-     case op_tail_call_varargs:
      case op_construct_varargs:
      case op_get_from_scope:
      case op_call:
-     case op_tail_call:
      case op_call_eval:
      case op_construct:
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h
r189774 → r189848:

  #define CallLinkInfo_h

- #include "CallMode.h"
  #include "CodeLocation.h"
  #include "CodeSpecializationKind.h"
…
  class CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
  public:
-     enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs, TailCall, TailCallVarargs };
+     enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs };
      static CallType callTypeFor(OpcodeID opcodeID)
      {
          if (opcodeID == op_call || opcodeID == op_call_eval)
              return Call;
-         if (opcodeID == op_call_varargs)
-             return CallVarargs;
          if (opcodeID == op_construct)
              return Construct;
          if (opcodeID == op_construct_varargs)
              return ConstructVarargs;
-         if (opcodeID == op_tail_call)
-             return TailCall;
-         ASSERT(opcodeID == op_tail_call_varargs);
-         return TailCallVarargs;
-     }
-
-     static bool isVarargsCallType(CallType callType)
-     {
-         switch (callType) {
-         case CallVarargs:
-         case ConstructVarargs:
-         case TailCallVarargs:
-             return true;
-
-         default:
-             return false;
-         }
-     }
+         ASSERT(opcodeID == op_call_varargs);
+         return CallVarargs;
+     }

      CallLinkInfo()
          : m_registerPreservationMode(static_cast<unsigned>(RegisterPreservationNotRequired))
…
      {
          return specializationKindFor(static_cast<CallType>(m_callType));
      }
-
-     static CallMode callModeFor(CallType callType)
-     {
-         switch (callType) {
-         case Call:
-         case CallVarargs:
-             return CallMode::Regular;
-         case TailCall:
-         case TailCallVarargs:
-             return CallMode::Tail;
-         case Construct:
-         case ConstructVarargs:
-             return CallMode::Construct;
-         case None:
-             RELEASE_ASSERT_NOT_REACHED();
-         }
-
-         RELEASE_ASSERT_NOT_REACHED();
-     }
-
-     CallMode callMode() const
-     {
-         return callModeFor(static_cast<CallType>(m_callType));
-     }
-
-     bool isTailCall() const
-     {
-         return callMode() == CallMode::Tail;
-     }
-
-     bool isVarargs() const
-     {
-         return isVarargsCallType(static_cast<CallType>(m_callType));
-     }
trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
r189774 → r189848:

      Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
      OpcodeID op = vm.interpreter->getOpcodeID(instruction[0].u.opcode);
-     if (op != op_call && op != op_construct && op != op_tail_call)
+     if (op != op_call && op != op_construct)
          return CallLinkStatus();
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r189774 → r189848:

          break;
      }
-     case op_tail_call: {
-         printCallOp(out, exec, location, it, "tail_call", DumpCaches, hasPrintedProfiling, callLinkInfos);
-         break;
-     }
      case op_call_eval: {
          printCallOp(out, exec, location, it, "call_eval", DontDumpCaches, hasPrintedProfiling, callLinkInfos);
…
      case op_construct_varargs:
-     case op_call_varargs:
-     case op_tail_call_varargs: {
+     case op_call_varargs: {
          int result = (++it)->u.operand;
          int callee = (++it)->u.operand;
…
          int varArgOffset = (++it)->u.operand;
          ++it;
-         printLocationAndOp(out, exec, location, it, opcode == op_call_varargs ? "call_varargs" : opcode == op_construct_varargs ? "construct_varargs" : "tail_call_varargs");
+         printLocationAndOp(out, exec, location, it, opcode == op_call_varargs ? "call_varargs" : "construct_varargs");
          out.printf("%s, %s, %s, %s, %d, %d", registerName(result).data(), registerName(callee).data(), registerName(thisValue).data(), registerName(arguments).data(), firstFreeRegister, varArgOffset);
          dumpValueProfiling(out, it, hasPrintedProfiling);
…
      }
      case op_call_varargs:
-     case op_tail_call_varargs:
      case op_construct_varargs:
      case op_get_by_val: {
…
      case op_call:
-     case op_tail_call:
      case op_call_eval: {
          ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
r189774 → r189848:

      , m_isBuiltinFunction(codeBlock->isBuiltinFunction())
      , m_usesNonStrictEval(codeBlock->usesEval() && !codeBlock->isStrictMode())
-     // FIXME: We should be able to have tail call elimination with the profiler
-     // enabled. This is currently not possible because the profiler expects
-     // op_will_call / op_did_call pairs before and after a call, which are not
-     // compatible with tail calls (we have no way of emitting op_did_call).
-     // https://bugs.webkit.org/show_bug.cgi?id=148819
-     , m_inTailPosition(Options::enableTailCalls() && constructorKind() == ConstructorKind::None && isStrictMode() && !m_shouldEmitProfileHooks)
+     , m_inTailPosition(Options::enableTailCalls() && constructorKind() == ConstructorKind::None && isStrictMode())
  {
      for (auto& constantRegister : m_linkTimeConstantRegisters)
…
  RegisterID* BytecodeGenerator::emitCallInTailPosition(RegisterID* dst, RegisterID* func, ExpectedFunction expectedFunction, CallArguments& callArguments, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
  {
-     return emitCall(m_inTailPosition ? op_tail_call : op_call, dst, func, expectedFunction, callArguments, divot, divotStart, divotEnd);
+     // FIXME: We should be emitting a new op_tail_call here when
+     // m_inTailPosition is false
+     // https://bugs.webkit.org/show_bug.cgi?id=148661
+     return emitCall(op_call, dst, func, expectedFunction, callArguments, divot, divotStart, divotEnd);
  }
…
  RegisterID* BytecodeGenerator::emitCall(OpcodeID opcodeID, RegisterID* dst, RegisterID* func, ExpectedFunction expectedFunction, CallArguments& callArguments, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
  {
-     ASSERT(opcodeID == op_call || opcodeID == op_call_eval || opcodeID == op_tail_call);
+     ASSERT(opcodeID == op_call || opcodeID == op_call_eval);
      ASSERT(func->refCount());
…
          argumentRegister = expression->emitBytecode(*this, callArguments.argumentRegister(0));
      RefPtr<RegisterID> thisRegister = emitMove(newTemporary(), callArguments.thisRegister());
-     return emitCallVarargs(opcodeID == op_tail_call ? op_tail_call_varargs : op_call_varargs, dst, func, callArguments.thisRegister(), argumentRegister.get(), newTemporary(), 0, callArguments.profileHookRegister(), divot, divotStart, divotEnd);
+     return emitCallVarargs(dst, func, callArguments.thisRegister(), argumentRegister.get(), newTemporary(), 0, callArguments.profileHookRegister(), divot, divotStart, divotEnd);
  }
  for (; n; n = n->m_next)
…
  RegisterID* BytecodeGenerator::emitCallVarargsInTailPosition(RegisterID* dst, RegisterID* func, RegisterID* thisRegister, RegisterID* arguments, RegisterID* firstFreeRegister, int32_t firstVarArgOffset, RegisterID* profileHookRegister, const JSTextPosition& divot, const JSTextPosition& divotStart, const JSTextPosition& divotEnd)
  {
-     return emitCallVarargs(m_inTailPosition ? op_tail_call_varargs : op_call_varargs, dst, func, thisRegister, arguments, firstFreeRegister, firstVarArgOffset, profileHookRegister, divot, divotStart, divotEnd);
+     // FIXME: We should be emitting a new op_tail_call here when
+     // m_inTailPosition is false
+     // https://bugs.webkit.org/show_bug.cgi?id=148661
+     return emitCallVarargs(op_call_varargs, dst, func, thisRegister, arguments, firstFreeRegister, firstVarArgOffset, profileHookRegister, divot, divotStart, divotEnd);
  }
trunk/Source/JavaScriptCore/bytecompiler/NodesCodegen.cpp
r189774 → r189848:

          m_argv[i] = generator.newTemporary();
          ASSERT(static_cast<size_t>(i) == m_argv.size() - 1 || m_argv[i]->index() == m_argv[i + 1]->index() - 1);
      }
-
-     // We need to ensure that the frame size is stack-aligned
-     while ((JSStack::CallFrameHeaderSize + m_argv.size()) % stackAlignmentRegisters()) {
-         m_argv.insert(0, generator.newTemporary());
-         m_padding++;
-     }
…
      LabelScopePtr scope = generator.newLabelScope(LabelScope::NamedLabel, &m_name);
-     generator.emitNodeInTailPosition(dst, m_statement);
+     generator.emitNode(dst, m_statement);

      generator.emitLabel(scope->breakTarget());
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
r189774 → r189848:

  {
      addVarArgChild(callee);
-     size_t frameSize = JSStack::CallFrameHeaderSize + argCount;
-     size_t alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
-     size_t parameterSlots = alignedFrameSize - JSStack::CallerFrameAndPCSize;
-
+     size_t parameterSlots = JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + argCount;
      if (parameterSlots > m_parameterSlots)
          m_parameterSlots = parameterSlots;
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
r189774 → r189848:

  void compileCallOrConstruct()
  {
-     int numArgs = m_node->numChildren() - 1;
+     int numPassedArgs = m_node->numChildren() - 1;
+     int numArgs = numPassedArgs;

      LValue jsCallee = lowJSValue(m_graph.varArgChild(m_node, 0));

      unsigned stackmapID = m_stackmapIDs++;
-
-     unsigned frameSize = JSStack::CallFrameHeaderSize + numArgs;
-     unsigned alignedFrameSize = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), frameSize);
-     unsigned padding = alignedFrameSize - frameSize;

      Vector<LValue> arguments;
      arguments.append(m_out.constInt64(stackmapID));
      arguments.append(m_out.constInt32(sizeOfCall()));
      arguments.append(constNull(m_out.ref8));
-     arguments.append(m_out.constInt32(1 + alignedFrameSize - JSStack::CallerFrameAndPCSize));
+     arguments.append(m_out.constInt32(1 + JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + numArgs));
      arguments.append(jsCallee); // callee -> %rax
      arguments.append(getUndef(m_out.int64)); // code block
      arguments.append(jsCallee); // callee -> stack
      arguments.append(m_out.constInt64(numArgs)); // argument count and zeros for the tag
-     for (int i = 0; i < numArgs; ++i)
+     for (int i = 0; i < numPassedArgs; ++i)
          arguments.append(lowJSValue(m_graph.varArgChild(m_node, 1 + i)));
-     for (unsigned i = 0; i < padding; ++i)
-         arguments.append(getUndef(m_out.int64));

      callPreflight();
trunk/Source/JavaScriptCore/interpreter/Interpreter.h
r189774 r189848 254 254 void dumpRegisters(CallFrame*); 255 255 256 bool isCallBytecode(Opcode opcode) { return opcode == getOpcode(op_call) || opcode == getOpcode(op_construct) || opcode == getOpcode(op_call_eval) || opcode == getOpcode(op_tail_call); }256 bool isCallBytecode(Opcode opcode) { return opcode == getOpcode(op_call) || opcode == getOpcode(op_construct) || opcode == getOpcode(op_call_eval); } 257 257 258 258 void enableSampler(); -
trunk/Source/JavaScriptCore/jit/CCallHelpers.h
r189775 r189848 31 31 #include "AssemblyHelpers.h" 32 32 #include "GPRInfo.h" 33 #include "StackAlignment.h"34 33 35 34 namespace JSC { … … 2084 2083 jump(GPRInfo::regT1); 2085 2084 } 2086 2087 void prepareForTailCallSlow(GPRReg calleeGPR = InvalidGPRReg)2088 {2089 GPRReg temp1 = calleeGPR == GPRInfo::regT0 ? GPRInfo::regT3 : GPRInfo::regT0;2090 GPRReg temp2 = calleeGPR == GPRInfo::regT1 ? GPRInfo::regT3 : GPRInfo::regT1;2091 GPRReg temp3 = calleeGPR == GPRInfo::regT2 ? GPRInfo::regT3 : GPRInfo::regT2;2092 2093 GPRReg newFramePointer = temp1;2094 GPRReg newFrameSizeGPR = temp2;2095 {2096 // The old frame size is its number of arguments (or number of2097 // parameters in case of arity fixup), plus the frame header size,2098 // aligned2099 GPRReg oldFrameSizeGPR = temp2;2100 {2101 GPRReg argCountGPR = oldFrameSizeGPR;2102 load32(Address(framePointerRegister, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), argCountGPR);2103 2104 {2105 GPRReg numParametersGPR = temp1;2106 {2107 GPRReg codeBlockGPR = numParametersGPR;2108 loadPtr(Address(framePointerRegister, JSStack::CodeBlock * static_cast<int>(sizeof(Register))), codeBlockGPR);2109 load32(Address(codeBlockGPR, CodeBlock::offsetOfNumParameters()), numParametersGPR);2110 }2111 2112 ASSERT(numParametersGPR != argCountGPR);2113 Jump argumentCountWasNotFixedUp = branch32(BelowOrEqual, numParametersGPR, argCountGPR);2114 move(numParametersGPR, argCountGPR);2115 argumentCountWasNotFixedUp.link(this);2116 }2117 2118 add32(TrustedImm32(stackAlignmentRegisters() + JSStack::CallFrameHeaderSize - 1), argCountGPR, oldFrameSizeGPR);2119 and32(TrustedImm32(-stackAlignmentRegisters()), oldFrameSizeGPR);2120 // We assume < 2^28 arguments2121 mul32(TrustedImm32(sizeof(Register)), oldFrameSizeGPR, oldFrameSizeGPR);2122 }2123 2124 // The new frame pointer is at framePointer + oldFrameSize - newFrameSize2125 ASSERT(newFramePointer != oldFrameSizeGPR);2126 move(framePointerRegister, newFramePointer);2127 
addPtr(oldFrameSizeGPR, newFramePointer);2128 2129 // The new frame size is just the number of arguments plus the2130 // frame header size, aligned2131 ASSERT(newFrameSizeGPR != newFramePointer);2132 load32(Address(stackPointerRegister, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)),2133 newFrameSizeGPR);2134 add32(TrustedImm32(stackAlignmentRegisters() + JSStack::CallFrameHeaderSize - 1), newFrameSizeGPR);2135 and32(TrustedImm32(-stackAlignmentRegisters()), newFrameSizeGPR);2136 // We assume < 2^28 arguments2137 mul32(TrustedImm32(sizeof(Register)), newFrameSizeGPR, newFrameSizeGPR);2138 }2139 2140 GPRReg tempGPR = temp3;2141 ASSERT(tempGPR != newFramePointer && tempGPR != newFrameSizeGPR);2142 2143 // We don't need the current frame beyond this point. Masquerade as our2144 // caller.2145 #if CPU(ARM) || CPU(SH4) || CPU(ARM64)2146 loadPtr(Address(framePointerRegister, sizeof(void*)), linkRegister);2147 subPtr(TrustedImm32(2 * sizeof(void*)), newFrameSizeGPR);2148 #elif CPU(MIPS)2149 loadPtr(Address(framePointerRegister, sizeof(void*)), returnAddressRegister);2150 subPtr(TrustedImm32(2 * sizeof(void*)), newFrameSizeGPR);2151 #elif CPU(X86) || CPU(X86_64)2152 loadPtr(Address(framePointerRegister, sizeof(void*)), tempGPR);2153 push(tempGPR);2154 subPtr(TrustedImm32(sizeof(void*)), newFrameSizeGPR);2155 #else2156 UNREACHABLE_FOR_PLATFORM();2157 #endif2158 subPtr(newFrameSizeGPR, newFramePointer);2159 loadPtr(Address(framePointerRegister), framePointerRegister);2160 2161 2162 // We need to move the newFrameSizeGPR slots above the stack pointer by2163 // newFramePointer registers. 
We use pointer-sized chunks.2164 MacroAssembler::Label copyLoop(label());2165 2166 subPtr(TrustedImm32(sizeof(void*)), newFrameSizeGPR);2167 loadPtr(BaseIndex(stackPointerRegister, newFrameSizeGPR, TimesOne), tempGPR);2168 storePtr(tempGPR, BaseIndex(newFramePointer, newFrameSizeGPR, TimesOne));2169 2170 branchTest32(MacroAssembler::NonZero, newFrameSizeGPR).linkTo(copyLoop, this);2171 2172 // Ready for a jump!2173 move(newFramePointer, stackPointerRegister);2174 }2175 2085 }; 2176 2086 -
trunk/Source/JavaScriptCore/jit/JIT.cpp
r189774 r189848 198 198 DEFINE_OP(op_bitxor) 199 199 DEFINE_OP(op_call) 200 DEFINE_OP(op_tail_call)201 200 DEFINE_OP(op_call_eval) 202 201 DEFINE_OP(op_call_varargs) 203 DEFINE_OP(op_tail_call_varargs)204 202 DEFINE_OP(op_construct_varargs) 205 203 DEFINE_OP(op_catch) … … 374 372 DEFINE_SLOWCASE_OP(op_bitxor) 375 373 DEFINE_SLOWCASE_OP(op_call) 376 DEFINE_SLOWCASE_OP(op_tail_call)377 374 DEFINE_SLOWCASE_OP(op_call_eval) 378 375 DEFINE_SLOWCASE_OP(op_call_varargs) 379 DEFINE_SLOWCASE_OP(op_tail_call_varargs)380 376 DEFINE_SLOWCASE_OP(op_construct_varargs) 381 377 DEFINE_SLOWCASE_OP(op_construct) -
trunk/Source/JavaScriptCore/jit/JIT.h
r189774 r189848 488 488 void emit_op_bitxor(Instruction*); 489 489 void emit_op_call(Instruction*); 490 void emit_op_tail_call(Instruction*);491 490 void emit_op_call_eval(Instruction*); 492 491 void emit_op_call_varargs(Instruction*); 493 void emit_op_tail_call_varargs(Instruction*);494 492 void emit_op_construct_varargs(Instruction*); 495 493 void emit_op_catch(Instruction*); … … 603 601 void emitSlow_op_bitxor(Instruction*, Vector<SlowCaseEntry>::iterator&); 604 602 void emitSlow_op_call(Instruction*, Vector<SlowCaseEntry>::iterator&); 605 void emitSlow_op_tail_call(Instruction*, Vector<SlowCaseEntry>::iterator&);606 603 void emitSlow_op_call_eval(Instruction*, Vector<SlowCaseEntry>::iterator&); 607 604 void emitSlow_op_call_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&); 608 void emitSlow_op_tail_call_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&);609 605 void emitSlow_op_construct_varargs(Instruction*, Vector<SlowCaseEntry>::iterator&); 610 606 void emitSlow_op_construct(Instruction*, Vector<SlowCaseEntry>::iterator&); … … 829 825 830 826 Call emitNakedCall(CodePtr function = CodePtr()); 831 Call emitNakedTailCall(CodePtr function = CodePtr());832 827 833 828 // Loads the character value of a single character string into dst. -
trunk/Source/JavaScriptCore/jit/JITCall.cpp
r189774 r189848 146 146 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length); 147 147 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length); 148 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call), call_and_tail_call_opcodes_must_be_same_length);149 COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_varargs), call_and_tail_call_varargs_opcodes_must_be_same_length);150 148 CallLinkInfo* info; 151 149 if (opcodeID != op_call_eval) 152 150 info = m_codeBlock->addCallLinkInfo(); 153 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs)151 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs) 154 152 compileSetupVarargsFrame(instruction, info); 155 153 else { … … 175 173 176 174 store64(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); 177 175 178 176 if (opcodeID == op_call_eval) { 179 177 compileCallEval(instruction); 180 178 return; 181 179 } 182 183 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs)184 emitRestoreCalleeSaves();185 180 186 181 DataLabelPtr addressOfLinkedFunctionCheck; … … 194 189 m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info; 195 190 196 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {197 prepareForTailCallSlow();198 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall();199 return;200 }201 202 191 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); 203 192 … … 220 209 221 210 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 222 223 211 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code()); 224 212 225 if (opcodeID == op_tail_call || opcodeID 
== op_tail_call_varargs) {226 abortWithReason(JITDidReturnFromTailCall);227 return;228 }229 230 213 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); 231 214 checkStackPointerAlignment(); … … 241 224 } 242 225 243 void JIT::emit_op_tail_call(Instruction* currentInstruction)244 {245 compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++);246 }247 248 226 void JIT::emit_op_call_eval(Instruction* currentInstruction) 249 227 { … … 255 233 compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); 256 234 } 257 258 void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction) 259 { 260 compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++); 261 } 262 235 263 236 void JIT::emit_op_construct_varargs(Instruction* currentInstruction) 264 237 { … … 274 247 { 275 248 compileOpCallSlowCase(op_call, currentInstruction, iter, m_callLinkInfoIndex++); 276 }277 278 void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)279 {280 compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++);281 249 } 282 250 … … 290 258 compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 291 259 } 292 293 void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)294 {295 compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++);296 }297 260 298 261 void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) -
trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp
r189774 r189848 70 70 } 71 71 72 void JIT::emitSlow_op_tail_call(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)73 {74 compileOpCallSlowCase(op_tail_call, currentInstruction, iter, m_callLinkInfoIndex++);75 }76 77 72 void JIT::emitSlow_op_call_eval(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) 78 73 { … … 84 79 compileOpCallSlowCase(op_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++); 85 80 } 86 87 void JIT::emitSlow_op_tail_call_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)88 {89 compileOpCallSlowCase(op_tail_call_varargs, currentInstruction, iter, m_callLinkInfoIndex++);90 }91 81 92 82 void JIT::emitSlow_op_construct_varargs(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter) … … 105 95 } 106 96 107 void JIT::emit_op_tail_call(Instruction* currentInstruction)108 {109 compileOpCall(op_tail_call, currentInstruction, m_callLinkInfoIndex++);110 }111 112 97 void JIT::emit_op_call_eval(Instruction* currentInstruction) 113 98 { … … 118 103 { 119 104 compileOpCall(op_call_varargs, currentInstruction, m_callLinkInfoIndex++); 120 }121 122 void JIT::emit_op_tail_call_varargs(Instruction* currentInstruction)123 {124 compileOpCall(op_tail_call_varargs, currentInstruction, m_callLinkInfoIndex++);125 105 } 126 106 … … 231 211 if (opcodeID != op_call_eval) 232 212 info = m_codeBlock->addCallLinkInfo(); 233 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs)213 if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs) 234 214 compileSetupVarargsFrame(instruction, info); 235 215 else { … … 262 242 } 263 243 264 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs)265 emitRestoreCalleeSaves();266 267 244 addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::CellTag))); 268 245 … … 279 256 280 257 checkStackPointerAlignment(); 281 if (opcodeID == op_tail_call || opcodeID == 
op_tail_call_varargs) {282 prepareForTailCallSlow();283 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedTailCall();284 return;285 }286 287 258 m_callCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(); 288 259 … … 305 276 306 277 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 307 308 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs)309 emitRestoreCalleeSaves();310 311 278 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code()); 312 313 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {314 abortWithReason(JITDidReturnFromTailCall);315 return;316 }317 279 318 280 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); -
trunk/Source/JavaScriptCore/jit/JITInlines.h
r189774 r189848 126 126 } 127 127 128 ALWAYS_INLINE JIT::Call JIT::emitNakedTailCall(CodePtr function)129 {130 ASSERT(m_bytecodeOffset != std::numeric_limits<unsigned>::max()); // This method should only be called during hot/cold path generation, so that m_bytecodeOffset is set.131 Call nakedCall = nearTailCall();132 m_calls.append(CallRecord(nakedCall, m_bytecodeOffset, function.executableAddress()));133 return nakedCall;134 }135 136 128 ALWAYS_INLINE void JIT::updateTopCallFrame() 137 129 { -
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r189805 r189848 683 683 } 684 684 685 static SlowPathReturnType handleHostCall(ExecState* execCallee, JSValue callee, CallLinkInfo* callLinkInfo)685 static void* handleHostCall(ExecState* execCallee, JSValue callee, CodeSpecializationKind kind) 686 686 { 687 687 ExecState* exec = execCallee->callerFrame(); … … 690 690 execCallee->setCodeBlock(0); 691 691 692 if ( callLinkInfo->specializationKind()== CodeForCall) {692 if (kind == CodeForCall) { 693 693 CallData callData; 694 694 CallType callType = getCallData(callee, callData); … … 700 700 execCallee->setCallee(asObject(callee)); 701 701 vm->hostCallReturnValue = JSValue::decode(callData.native.function(execCallee)); 702 if (vm->exception()) { 703 return encodeResult( 704 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 705 reinterpret_cast<void*>(KeepTheFrame)); 706 } 707 708 return encodeResult( 709 bitwise_cast<void*>(getHostCallReturnValue), 710 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame)); 702 if (vm->exception()) 703 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 704 705 return reinterpret_cast<void*>(getHostCallReturnValue); 711 706 } 712 707 713 708 ASSERT(callType == CallTypeNone); 714 709 exec->vm().throwException(exec, createNotAFunctionError(exec, callee)); 715 return encodeResult( 716 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 717 reinterpret_cast<void*>(KeepTheFrame)); 718 } 719 720 ASSERT(callLinkInfo->specializationKind() == CodeForConstruct); 710 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 711 } 712 713 ASSERT(kind == CodeForConstruct); 721 714 722 715 ConstructData constructData; … … 729 722 execCallee->setCallee(asObject(callee)); 730 723 vm->hostCallReturnValue = JSValue::decode(constructData.native.function(execCallee)); 731 if (vm->exception()) { 732 return encodeResult( 733 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 734 reinterpret_cast<void*>(KeepTheFrame)); 735 } 736 737 return encodeResult(bitwise_cast<void*>(getHostCallReturnValue), reinterpret_cast<void*>(KeepTheFrame)); 724 if (vm->exception()) 725 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 726 727 return reinterpret_cast<void*>(getHostCallReturnValue); 738 728 } 739 729 740 730 ASSERT(constructType == ConstructTypeNone); 741 731 exec->vm().throwException(exec, createNotAConstructorError(exec, callee)); 742 return encodeResult( 743 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 744 reinterpret_cast<void*>(KeepTheFrame)); 745 } 746 747 SlowPathReturnType JIT_OPERATION operationLinkCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 732 return vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(); 733 } 734 735 char* JIT_OPERATION operationLinkCall(ExecState* 
execCallee, CallLinkInfo* callLinkInfo) 748 736 { 749 737 ExecState* exec = execCallee->callerFrame(); … … 758 746 // expensive. 759 747 // https://bugs.webkit.org/show_bug.cgi?id=144458 760 return handleHostCall(execCallee, calleeAsValue, callLinkInfo);748 return reinterpret_cast<char*>(handleHostCall(execCallee, calleeAsValue, kind)); 761 749 } 762 750 … … 787 775 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) { 788 776 exec->vm().throwException(exec, createNotAConstructorError(exec, callee)); 789 return encodeResult( 790 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 791 reinterpret_cast<void*>(KeepTheFrame)); 777 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 792 778 } 793 779 … … 795 781 if (error) { 796 782 exec->vm().throwException(exec, error); 797 return encodeResult( 798 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 799 reinterpret_cast<void*>(KeepTheFrame)); 783 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 800 784 } 801 785 codeBlock = functionExecutable->codeBlockFor(kind); 802 786 ArityCheckMode arity; 803 if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo-> isVarargs())787 if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->callType() == CallLinkInfo::CallVarargs || callLinkInfo->callType() == CallLinkInfo::ConstructVarargs) 804 788 arity = MustCheckArity; 805 789 else … … 812 796 linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr); 813 797 814 return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame));815 } 816 817 inline SlowPathReturnTypevirtualForWithFunction(798 return reinterpret_cast<char*>(codePtr.executableAddress()); 799 } 800 801 inline char* virtualForWithFunction( 818 802 ExecState* execCallee, CallLinkInfo* callLinkInfo, JSCell*& calleeAsFunctionCell) 819 803 { … … 826 810 calleeAsFunctionCell = getJSFunction(calleeAsValue); 827 811 if (UNLIKELY(!calleeAsFunctionCell)) 828 return handleHostCall(execCallee, calleeAsValue, callLinkInfo);812 return reinterpret_cast<char*>(handleHostCall(execCallee, calleeAsValue, kind)); 829 813 830 814 JSFunction* function = jsCast<JSFunction*>(calleeAsFunctionCell); … … 841 825 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) { 842 826 exec->vm().throwException(exec, createNotAConstructorError(exec, function)); 843 return encodeResult( 844 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 845 reinterpret_cast<void*>(KeepTheFrame)); 827 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 846 828 } 847 829 … … 849 831 if (error) { 850 832 exec->vm().throwException(exec, error); 851 return encodeResult( 852 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), 853 reinterpret_cast<void*>(KeepTheFrame)); 833 return reinterpret_cast<char*>(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress()); 854 834 } 855 835 } else { … … 867 847 } 868 848 } 869 return encodeResult(executable->entrypointFor( 870 *vm, kind, MustCheckArity, callLinkInfo->registerPreservationMode()).executableAddress(), 871 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? 
ReuseTheFrame : KeepTheFrame)); 872 } 873 874 SlowPathReturnType JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 849 return reinterpret_cast<char*>(executable->entrypointFor( 850 *vm, kind, MustCheckArity, callLinkInfo->registerPreservationMode()).executableAddress()); 851 } 852 853 char* JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 875 854 { 876 855 ASSERT(callLinkInfo->specializationKind() == CodeForCall); 877 856 JSCell* calleeAsFunctionCell; 878 SlowPathReturnTyperesult = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell);857 char* result = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell); 879 858 880 859 linkPolymorphicCall(execCallee, *callLinkInfo, CallVariant(calleeAsFunctionCell)); … … 883 862 } 884 863 885 SlowPathReturnTypeJIT_OPERATION operationVirtualCall(ExecState* execCallee, CallLinkInfo* callLinkInfo)864 char* JIT_OPERATION operationVirtualCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 886 865 { 887 866 JSCell* calleeAsFunctionCellIgnored; -
trunk/Source/JavaScriptCore/jit/JITOperations.h
r189774 r189848 238 238 typedef char* JIT_OPERATION (*P_JITOperation_EStZ)(ExecState*, Structure*, int32_t); 239 239 typedef char* JIT_OPERATION (*P_JITOperation_EZZ)(ExecState*, int32_t, int32_t); 240 typedef SlowPathReturnType JIT_OPERATION (*Sprt_JITOperation_ECli)(ExecState*, CallLinkInfo*);241 240 typedef StringImpl* JIT_OPERATION (*T_JITOperation_EJss)(ExecState*, JSString*); 242 241 typedef JSString* JIT_OPERATION (*Jss_JITOperation_EZ)(ExecState*, int32_t); … … 280 279 void JIT_OPERATION operationDirectPutByValGeneric(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue, ByValInfo*) WTF_INTERNAL; 281 280 EncodedJSValue JIT_OPERATION operationCallEval(ExecState*, ExecState*) WTF_INTERNAL; 282 SlowPathReturnTypeJIT_OPERATION operationLinkCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;283 SlowPathReturnTypeJIT_OPERATION operationLinkPolymorphicCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;284 SlowPathReturnTypeJIT_OPERATION operationVirtualCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;281 char* JIT_OPERATION operationLinkCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 282 char* JIT_OPERATION operationLinkPolymorphicCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 283 char* JIT_OPERATION operationVirtualCall(ExecState*, CallLinkInfo*) WTF_INTERNAL; 285 284 286 285 size_t JIT_OPERATION operationCompareLess(ExecState*, EncodedJSValue, EncodedJSValue) WTF_INTERNAL; -
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r189774 r189848 610 610 CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock(); 611 611 VM* vm = callerCodeBlock->vm(); 612 612 613 613 if (shouldShowDisassemblyFor(callerCodeBlock)) 614 614 dataLog("Linking virtual call at ", *callerCodeBlock, " ", exec->callerFrame()->codeOrigin(), "\n"); … … 681 681 // If we cannot handle a callee, assume that it's better for this whole thing to be a 682 682 // virtual call. 683 if (exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo. isVarargs()) {683 if (exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.callType() == CallLinkInfo::CallVarargs || callLinkInfo.callType() == CallLinkInfo::ConstructVarargs) { 684 684 linkVirtualFor(exec, callLinkInfo); 685 685 return; … … 804 804 CCallHelpers::Address(fastCountsBaseGPR, caseIndex * sizeof(uint32_t))); 805 805 } 806 if (callLinkInfo.isTailCall()) { 807 stubJit.prepareForTailCallSlow(); 808 calls[caseIndex].call = stubJit.nearTailCall(); 809 } else 810 calls[caseIndex].call = stubJit.nearCall(); 806 calls[caseIndex].call = stubJit.nearCall(); 811 807 calls[caseIndex].codePtr = codePtr; 812 808 done.append(stubJit.jump()); -
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r189774 r189848 80 80 81 81 static void slowPathFor( 82 CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction)82 CCallHelpers& jit, VM* vm, P_JITOperation_ECli slowPathFunction) 83 83 { 84 84 jit.emitFunctionPrologue(); 85 85 jit.storePtr(GPRInfo::callFrameRegister, &vm->topCallFrame); 86 #if OS(WINDOWS) && CPU(X86_64)87 // Windows X86_64 needs some space pointed to by arg0 for return types larger than 64 bits.88 // Other argument values are shift by 1. Use space on the stack for our two return values.89 // Moving the stack down maxFrameExtentForSlowPathCall bytes gives us room for our 3 arguments90 // and space for the 16 byte return area.91 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);92 jit.move(GPRInfo::regT2, GPRInfo::argumentGPR2);93 jit.addPtr(CCallHelpers::TrustedImm32(32), CCallHelpers::stackPointerRegister, GPRInfo::argumentGPR0);94 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);95 jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0);96 emitPointerValidation(jit, GPRInfo::nonArgGPR0);97 jit.call(GPRInfo::nonArgGPR0);98 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR, 8), GPRInfo::returnValueGPR2);99 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR), GPRInfo::returnValueGPR);100 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);101 #else102 86 if (maxFrameExtentForSlowPathCall) 103 87 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); … … 108 92 if (maxFrameExtentForSlowPathCall) 109 93 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 110 #endif 111 94 112 95 // This slow call will return the address of one of the following: 113 96 // 1) Exception throwing thunk. 114 97 // 2) Host call return value returner thingy. 
115 98 // 3) The function to call. 116 // The second return value GPR will hold a non-zero value for tail calls.117 118 99 emitPointerValidation(jit, GPRInfo::returnValueGPR); 119 100 jit.emitFunctionEpilogue(); 120 121 RELEASE_ASSERT(reinterpret_cast<void*>(KeepTheFrame) == reinterpret_cast<void*>(0));122 CCallHelpers::Jump doNotTrash = jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::returnValueGPR2);123 124 jit.preserveReturnAddressAfterCall(GPRInfo::nonPreservedNonReturnGPR);125 jit.prepareForTailCallSlow(GPRInfo::returnValueGPR);126 127 doNotTrash.link(&jit);128 101 jit.jump(GPRInfo::returnValueGPR); 129 102 } … … 136 109 // to be in regT0/regT1 (payload/tag), the CallFrame to have already 137 110 // been adjusted, and all other registers to be available for use. 111 138 112 CCallHelpers jit(vm); 139 113 … … 212 186 // Make a tail call. This will return back to JIT code. 213 187 emitPointerValidation(jit, GPRInfo::regT4); 214 if (callLinkInfo.isTailCall()) {215 jit.preserveReturnAddressAfterCall(GPRInfo::regT0);216 jit.prepareForTailCallSlow(GPRInfo::regT4);217 }218 188 jit.jump(GPRInfo::regT4); 219 189 … … 227 197 return FINALIZE_CODE( 228 198 patchBuffer, 229 ("Virtual %s slow path thunk", 230 callLinkInfo.callMode() == CallMode::Regular ? "call" : callLinkInfo.callMode() == CallMode::Tail ? "tail call" : "construct")); 199 ("Virtual %s%s slow path thunk", 200 callLinkInfo.specializationKind() == CodeForCall ? "call" : "construct", 201 callLinkInfo.registerPreservationMode() == MustPreserveRegisters ? 
" that preserves registers" : "")); 231 202 } 232 203 … … 397 368 JSInterfaceJIT jit(vm); 398 369 399 // We enter with fixup count in argumentGPR0370 // We enter with fixup count, in aligned stack units, in argumentGPR0 and the return thunk in argumentGPR1 400 371 // We have the guarantee that a0, a1, a2, t3, t4 and t5 (or t0 for Windows) are all distinct :-) 401 372 #if USE(JSVALUE64) … … 408 379 jit.pop(JSInterfaceJIT::regT4); 409 380 # endif 381 jit.lshift32(JSInterfaceJIT::TrustedImm32(logStackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0); 382 jit.neg64(JSInterfaceJIT::argumentGPR0); 410 383 jit.move(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::regT3); 411 384 jit.load32(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, JSStack::ArgumentCount * sizeof(Register)), JSInterfaceJIT::argumentGPR2); 412 385 jit.add32(JSInterfaceJIT::TrustedImm32(JSStack::CallFrameHeaderSize), JSInterfaceJIT::argumentGPR2); 413 414 // Check to see if we have extra slots we can use415 jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR1);416 jit.and32(JSInterfaceJIT::TrustedImm32(stackAlignmentRegisters() - 1), JSInterfaceJIT::argumentGPR1);417 JSInterfaceJIT::Jump noExtraSlot = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR1);418 jit.move(JSInterfaceJIT::TrustedImm64(ValueUndefined), extraTemp);419 JSInterfaceJIT::Label fillExtraSlots(jit.label());420 jit.store64(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight));421 jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);422 jit.branchSub32(JSInterfaceJIT::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR1).linkTo(fillExtraSlots, &jit);423 jit.and32(JSInterfaceJIT::TrustedImm32(-stackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0);424 JSInterfaceJIT::Jump done = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR0);425 
-    noExtraSlot.link(&jit);
-
-    jit.neg64(JSInterfaceJIT::argumentGPR0);
 
     // Move current frame down argumentGPR0 number of slots
…
     jit.branchSub32(MacroAssembler::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2).linkTo(copyLoop, &jit);
 
-    // Fill in argumentGPR0 missing arg slots with undefined
+    // Fill in argumentGPR0 - 1 missing arg slots with undefined
     jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR2);
     jit.move(JSInterfaceJIT::TrustedImm64(ValueUndefined), extraTemp);
+    jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);
     JSInterfaceJIT::Label fillUndefinedLoop(jit.label());
     jit.store64(extraTemp, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));
…
     jit.addPtr(extraTemp, JSInterfaceJIT::stackPointerRegister);
 
-    done.link(&jit);
-
 #if CPU(X86_64)
     jit.push(JSInterfaceJIT::regT4);
…
     jit.pop(JSInterfaceJIT::regT4);
 #endif
+    jit.lshift32(JSInterfaceJIT::TrustedImm32(logStackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0);
+    jit.neg32(JSInterfaceJIT::argumentGPR0);
     jit.move(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::regT3);
     jit.load32(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, JSStack::ArgumentCount * sizeof(Register)), JSInterfaceJIT::argumentGPR2);
     jit.add32(JSInterfaceJIT::TrustedImm32(JSStack::CallFrameHeaderSize), JSInterfaceJIT::argumentGPR2);
 
-    // Check to see if we have extra slots we can use
-    jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR1);
-    jit.and32(JSInterfaceJIT::TrustedImm32(stackAlignmentRegisters() - 1), JSInterfaceJIT::argumentGPR1);
-    JSInterfaceJIT::Jump noExtraSlot = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR1);
-    JSInterfaceJIT::Label fillExtraSlots(jit.label());
-    jit.move(JSInterfaceJIT::TrustedImm32(0), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight, PayloadOffset));
-    jit.move(JSInterfaceJIT::TrustedImm32(JSValue::UndefinedTag), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::argumentGPR2, JSInterfaceJIT::TimesEight, TagOffset));
-    jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);
-    jit.branchSub32(JSInterfaceJIT::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR1).linkTo(fillExtraSlots, &jit);
-    jit.and32(JSInterfaceJIT::TrustedImm32(-stackAlignmentRegisters()), JSInterfaceJIT::argumentGPR0);
-    JSInterfaceJIT::Jump done = jit.branchTest32(MacroAssembler::Zero, JSInterfaceJIT::argumentGPR0);
-    noExtraSlot.link(&jit);
-
-    jit.neg32(JSInterfaceJIT::argumentGPR0);
-
     // Move current frame down argumentGPR0 number of slots
     JSInterfaceJIT::Label copyLoop(jit.label());
-    jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, PayloadOffset), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, PayloadOffset));
-    jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, TagOffset), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, TagOffset));
+    jit.load32(JSInterfaceJIT::regT3, JSInterfaceJIT::regT5);
+    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));
+    jit.load32(MacroAssembler::Address(JSInterfaceJIT::regT3, 4), JSInterfaceJIT::regT5);
+    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, 4));
     jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::regT3);
     jit.branchSub32(MacroAssembler::NonZero, JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2).linkTo(copyLoop, &jit);
 
-    // Fill in argumentGPR0 missing arg slots with undefined
+    // Fill in argumentGPR0 - 1 missing arg slots with undefined
     jit.move(JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::argumentGPR2);
+    jit.add32(JSInterfaceJIT::TrustedImm32(1), JSInterfaceJIT::argumentGPR2);
     JSInterfaceJIT::Label fillUndefinedLoop(jit.label());
     jit.move(JSInterfaceJIT::TrustedImm32(0), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, PayloadOffset));
+    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight));
     jit.move(JSInterfaceJIT::TrustedImm32(JSValue::UndefinedTag), JSInterfaceJIT::regT5);
-    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, TagOffset));
+    jit.store32(JSInterfaceJIT::regT5, MacroAssembler::BaseIndex(JSInterfaceJIT::regT3, JSInterfaceJIT::argumentGPR0, JSInterfaceJIT::TimesEight, 4));
 
     jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::regT3);
…
     jit.addPtr(JSInterfaceJIT::regT5, JSInterfaceJIT::stackPointerRegister);
 
-    done.link(&jit);
-
 #if CPU(X86)
     jit.push(JSInterfaceJIT::regT4);
…
     LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
     return FINALIZE_CODE(patchBuffer, ("fixup arity"));
-}
-
-MacroAssemblerCodeRef unreachableGenerator(VM* vm)
-{
-    JSInterfaceJIT jit(vm);
-
-    jit.breakpoint();
-
-    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
-    return FINALIZE_CODE(patchBuffer, ("unreachable thunk"));
 }
 
-
trunk/Source/JavaScriptCore/jit/ThunkGenerators.h
r189774 r189848
 
 class CallLinkInfo;
-class CCallHelpers;
 
 MacroAssemblerCodeRef throwExceptionFromCallSlowPathGenerator(VM*);
 
-MacroAssemblerCodeRef linkCallThunk(VM*, CallLinkInfo&, CodeSpecializationKind, RegisterPreservationMode);
 MacroAssemblerCodeRef linkCallThunkGenerator(VM*);
 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM*);
…
 MacroAssemblerCodeRef nativeTailCallGenerator(VM*);
 MacroAssemblerCodeRef arityFixupGenerator(VM*);
-MacroAssemblerCodeRef unreachableGenerator(VM*);
 
 MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm);
-
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r189774 r189848
 
 const StackAlignment = 16
-const StackAlignmentSlots = 2
 const StackAlignmentMask = StackAlignment - 1
…
 end
 
-macro callTargetFunction(callee)
+macro callTargetFunction(callLinkInfo, calleeFramePtr)
+    move calleeFramePtr, sp
     if C_LOOP
-        cloopCallJSFunction callee
+        cloopCallJSFunction LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
     else
-        call callee
+        call LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
     end
     restoreStackPointerAfterCall()
…
 end
 
-macro prepareForRegularCall(callee, temp1, temp2, temp3)
-    addp CallerFrameAndPCSize, sp
-end
-
-# sp points to the new frame
-macro prepareForTailCall(callee, temp1, temp2, temp3)
-    restoreCalleeSavesUsedByLLInt()
-
-    loadi PayloadOffset + ArgumentCount[cfr], temp2
-    loadp CodeBlock[cfr], temp1
-    loadp CodeBlock::m_numParameters[temp1], temp1
-    bilteq temp1, temp2, .noArityFixup
-    move temp1, temp2
-
-.noArityFixup:
-    # We assume < 2^28 arguments
-    muli SlotSize, temp2
-    addi StackAlignment - 1 + CallFrameHeaderSize, temp2
-    andi ~StackAlignmentMask, temp2
-
-    move cfr, temp1
-    addp temp2, temp1
-
-    loadi PayloadOffset + ArgumentCount[sp], temp2
-    # We assume < 2^28 arguments
-    muli SlotSize, temp2
-    addi StackAlignment - 1 + CallFrameHeaderSize, temp2
-    andi ~StackAlignmentMask, temp2
-
-    if ARM or SH4 or ARM64 or C_LOOP or MIPS
-        addp 2 * PtrSize, sp
-        subi 2 * PtrSize, temp2
-        loadp PtrSize[cfr], lr
-    else
-        addp PtrSize, sp
-        subi PtrSize, temp2
-        loadp PtrSize[cfr], temp3
-        storep temp3, [sp]
-    end
-
-    subp temp2, temp1
-    loadp [cfr], cfr
-
-.copyLoop:
-    subi PtrSize, temp2
-    loadp [sp, temp2, 1], temp3
-    storep temp3, [temp1, temp2, 1]
-    btinz temp2, .copyLoop
-
-    move temp1, sp
-    jmp callee
-end
-
-macro slowPathForCall(slowPath, prepareCall)
+macro slowPathForCall(slowPath)
     callCallSlowPath(
         slowPath,
-        # Those are r0 and r1
-        macro (callee, calleeFramePtr)
-            btpz calleeFramePtr, .dontUpdateSP
-            move calleeFramePtr, sp
-            prepareCall(callee, t2, t3, t4)
+        macro (callee, calleeFrame)
+            btpz calleeFrame, .dontUpdateSP
+            if ARMv7
+                addp CallerFrameAndPCSize, calleeFrame, calleeFrame
+                move calleeFrame, sp
+            else
+                addp CallerFrameAndPCSize, calleeFrame, sp
+            end
         .dontUpdateSP:
-            callTargetFunction(callee)
+            if C_LOOP
+                cloopCallJSFunction callee
+            else
+                call callee
+            end
+            restoreStackPointerAfterCall()
+            dispatchAfterCall()
         end)
 end
…
     traceExecution()
     arrayProfileForCall()
-    doCall(_llint_slow_path_call, prepareForRegularCall)
-
-_llint_op_tail_call:
-    traceExecution()
-    arrayProfileForCall()
-    checkSwitchToJITForEpilogue()
-    doCall(_llint_slow_path_call, prepareForTailCall)
+    doCall(_llint_slow_path_call)
+
 
 _llint_op_construct:
     traceExecution()
-    doCall(_llint_slow_path_construct, prepareForRegularCall)
-
-macro doCallVarargs(slowPath, prepareCall)
+    doCall(_llint_slow_path_construct)
+
+
+_llint_op_call_varargs:
+    traceExecution()
     callSlowPath(_llint_slow_path_size_frame_for_varargs)
     branchIfException(_llint_throw_from_slow_path_trampoline)
…
         end
     end
-    slowPathForCall(slowPath, prepareCall)
-end
-
-_llint_op_call_varargs:
-    traceExecution()
-    doCallVarargs(_llint_slow_path_call_varargs, prepareForRegularCall)
-
-_llint_op_tail_call_varargs:
-    traceExecution()
-    checkSwitchToJITForEpilogue()
-    # We lie and perform the tail call instead of preparing it since we can't
-    # prepare the frame for a call opcode
-    doCallVarargs(_llint_slow_path_call_varargs, prepareForTailCall)
+    slowPathForCall(_llint_slow_path_call_varargs)
 
 
 _llint_op_construct_varargs:
     traceExecution()
-    doCallVarargs(_llint_slow_path_construct_varargs, prepareForRegularCall)
+    callSlowPath(_llint_slow_path_size_frame_for_varargs)
+    branchIfException(_llint_throw_from_slow_path_trampoline)
+    # calleeFrame in r1
+    if JSVALUE64
+        move r1, sp
+    else
+        # The calleeFrame is not stack aligned, move down by CallerFrameAndPCSize to align
+        if ARMv7
+            subp r1, CallerFrameAndPCSize, t2
+            move t2, sp
+        else
+            subp r1, CallerFrameAndPCSize, sp
+        end
+    end
+    slowPathForCall(_llint_slow_path_construct_varargs)
 
 
…
     # returns the JS value that the eval returned.
 
-    slowPathForCall(_llint_slow_path_call_eval, prepareForRegularCall)
+    slowPathForCall(_llint_slow_path_call_eval)
 
 
-
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
r189775 r189848
     loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1
     btiz t1, .continue
+
+    // Move frame up "t1 * 2" slots
+    lshiftp 1, t1
+    negi t1
+    move cfr, t3
     loadi PayloadOffset + ArgumentCount[cfr], t2
     addi CallFrameHeaderSlots, t2
-
-    // Check if there are some unaligned slots we can use
-    move t1, t3
-    andi StackAlignmentSlots - 1, t3
-    btiz t3, .noExtraSlot
-.fillExtraSlots:
-    move 0, t0
-    storei t0, PayloadOffset[cfr, t2, 8]
-    move UndefinedTag, t0
-    storei t0, TagOffset[cfr, t2, 8]
-    addi 1, t2
-    bsubinz 1, t3, .fillExtraSlots
-    andi ~(StackAlignmentSlots - 1), t1
-    btiz t1, .continue
-
-.noExtraSlot:
-    // Move frame up t1 slots
-    negi t1
-    move cfr, t3
 .copyLoop:
     loadi PayloadOffset[t3], t0
…
 end
 
-macro doCall(slowPath, prepareCall)
+macro doCall(slowPath)
     loadi 8[PC], t0
     loadi 20[PC], t1
…
     storei t2, ArgumentCount + PayloadOffset[t3]
     storei CellTag, Callee + TagOffset[t3]
-    move t3, sp
-    prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4)
-    callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1])
+    addp CallerFrameAndPCSize, t3
+    callTargetFunction(t1, t3)
 
 .opCallSlow:
-    slowPathForCall(slowPath, prepareCall)
-end
+    slowPathForCall(slowPath)
+end
+
 
 _llint_op_ret:
-
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r189775 r189848
     loadi CommonSlowPaths::ArityCheckData::paddedStackSpace[r1], t1
     btiz t1, .continue
-    loadi PayloadOffset + ArgumentCount[cfr], t2
-    addi CallFrameHeaderSlots, t2
-
-    // Check if there are some unaligned slots we can use
-    move t1, t3
-    andi StackAlignmentSlots - 1, t3
-    btiz t3, .noExtraSlot
-    move ValueUndefined, t0
-.fillExtraSlots:
-    storeq t0, [cfr, t2, 8]
-    addi 1, t2
-    bsubinz 1, t3, .fillExtraSlots
-    andi ~(StackAlignmentSlots - 1), t1
-    btiz t1, .continue
-
-.noExtraSlot:
-    // Move frame up t1 slots
+
+    // Move frame up "t1 * 2" slots
+    lshiftp 1, t1
     negq t1
     move cfr, t3
     subp CalleeSaveSpaceAsVirtualRegisters * 8, t3
-    addi CalleeSaveSpaceAsVirtualRegisters, t2
+    loadi PayloadOffset + ArgumentCount[cfr], t2
+    addi CallFrameHeaderSlots + CalleeSaveSpaceAsVirtualRegisters, t2
 .copyLoop:
     loadq [t3], t0
…
 end
 
-macro doCall(slowPath, prepareCall)
+macro doCall(slowPath)
     loadisFromInstruction(2, t0)
     loadpFromInstruction(5, t1)
…
     storei PC, ArgumentCount + TagOffset[cfr]
     storei t2, ArgumentCount + PayloadOffset[t3]
-    move t3, sp
-    prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4)
-    callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1])
+    addp CallerFrameAndPCSize, t3
+    callTargetFunction(t1, t3)
 
 .opCallSlow:
-    slowPathForCall(slowPath, prepareCall)
-end
+    slowPathForCall(slowPath)
+end
+
 
 _llint_op_ret:
-
trunk/Source/JavaScriptCore/runtime/CommonSlowPaths.h
r189774 r189848
 
     ASSERT(argumentCountIncludingThis < newCodeBlock->numParameters());
-    int frameSize = argumentCountIncludingThis + JSStack::CallFrameHeaderSize;
-    int alignedFrameSizeForParameters = WTF::roundUpToMultipleOf(stackAlignmentRegisters(),
-        newCodeBlock->numParameters() + JSStack::CallFrameHeaderSize);
-    int paddedStackSpace = alignedFrameSizeForParameters - frameSize;
-
-    if (!stack->ensureCapacityFor(exec->registers() - paddedStackSpace % stackAlignmentRegisters()))
+    int missingArgumentCount = newCodeBlock->numParameters() - argumentCountIncludingThis;
+    int neededStackSpace = missingArgumentCount + 1; // Allow space to save the original return PC.
+    int paddedStackSpace = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), neededStackSpace);
+
+    if (!stack->ensureCapacityFor(exec->registers() - paddedStackSpace))
         return -1;
-    return paddedStackSpace;
+    return paddedStackSpace / stackAlignmentRegisters();
 