Changeset 190220 in webkit
- Timestamp: Sep 24, 2015, 2:42:59 PM
- Location: trunk/Source/JavaScriptCore
- Files: 33 edited
trunk/Source/JavaScriptCore/ChangeLog
r190217 → r190220 (new entry added; the previous entry, 2015-09-23 Filip Pizlo <fpizlo@apple.com>, follows unchanged)

2015-09-24  Michael Saboff  <msaboff@apple.com>

        [ES6] Implement tail calls in the DFG
        https://bugs.webkit.org/show_bug.cgi?id=148663

        Reviewed by Filip Pizlo.

        jsc-tailcall: Implement the tail call opcodes in the DFG
        https://bugs.webkit.org/show_bug.cgi?id=146850

        This patch adds support for tail calls in the DFG. It requires a fair number of new nodes:

        - TailCall and TailCallVarargs are straightforward. They are terminal
          nodes and have the semantics of an actual tail call.

        - TailCallInlinedCaller and TailCallVarargsInlinedCaller perform a
          tail call inside an inlined function. They are non-terminal nodes
          that perform the call as a regular call after popping an
          appropriate number of inlined tail call frames.

        - TailCallForwardVarargs and TailCallForwardVarargsInlinedCaller extend
          TailCallVarargs and TailCallVarargsInlinedCaller with the varargs
          forwarding optimization, so that we don't lose performance when a
          tail call replaces a regular call.

        This also required two broad kinds of changes:

        - Changes in the JIT itself (DFGSpeculativeJIT) are pretty
          straightforward, since they are just an extension of the baseline JIT
          changes introduced previously.

        - Changes in the runtime are mostly related to handling inline call
          frames. The idea here is that we have a special TailCall type for
          call frames that indicates to the various pieces of code walking the
          inline call frames that they should (recursively) skip the caller in
          their analysis.

        * bytecode/CallMode.h:
        (JSC::specializationKindFor):
        * bytecode/CodeOrigin.cpp:
        (JSC::CodeOrigin::inlineDepthForCallFrame):
        (JSC::CodeOrigin::isApproximatelyEqualTo):
        (JSC::CodeOrigin::approximateHash):
        (JSC::CodeOrigin::inlineStack):
        * bytecode/CodeOrigin.h:
        * bytecode/InlineCallFrame.cpp:
        (JSC::InlineCallFrame::dumpInContext):
        (WTF::printInternal):
        * bytecode/InlineCallFrame.h:
        (JSC::InlineCallFrame::callModeFor):
        (JSC::InlineCallFrame::kindFor):
        (JSC::InlineCallFrame::varargsKindFor):
        (JSC::InlineCallFrame::specializationKindFor):
        (JSC::InlineCallFrame::isVarargs):
        (JSC::InlineCallFrame::isTail):
        (JSC::InlineCallFrame::computeCallerSkippingDeadFrames):
        (JSC::InlineCallFrame::getCallerSkippingDeadFrames):
        (JSC::InlineCallFrame::getCallerInlineFrameSkippingDeadFrames):
        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
        * dfg/DFGArgumentsEliminationPhase.cpp:
        * dfg/DFGBasicBlock.h:
        (JSC::DFG::BasicBlock::findTerminal):
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::inlineCallFrame):
        (JSC::DFG::ByteCodeParser::allInlineFramesAreTailCalls):
        (JSC::DFG::ByteCodeParser::currentCodeOrigin):
        (JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
        (JSC::DFG::ByteCodeParser::addCall):
        (JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
        (JSC::DFG::ByteCodeParser::getPrediction):
        (JSC::DFG::ByteCodeParser::handleCall):
        (JSC::DFG::ByteCodeParser::handleVarargsCall):
        (JSC::DFG::ByteCodeParser::emitArgumentPhantoms):
        (JSC::DFG::ByteCodeParser::inliningCost):
        (JSC::DFG::ByteCodeParser::inlineCall):
        (JSC::DFG::ByteCodeParser::attemptToInlineCall):
        (JSC::DFG::ByteCodeParser::parseBlock):
        (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
        (JSC::DFG::ByteCodeParser::parseCodeBlock):
        * dfg/DFGCapabilities.cpp:
        (JSC::DFG::capabilityLevel):
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize):
        * dfg/DFGDoesGC.cpp:
        (JSC::DFG::doesGC):
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::fixupNode):
        * dfg/DFGGraph.cpp:
        (JSC::DFG::Graph::isLiveInBytecode):
        * dfg/DFGGraph.h:
        (JSC::DFG::Graph::forAllLocalsLiveInBytecode):
        * dfg/DFGInPlaceAbstractState.cpp:
        (JSC::DFG::InPlaceAbstractState::mergeToSuccessors):
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::willCatchExceptionInMachineFrame):
        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
        (JSC::DFG::FlushLiveCatchVariablesInsertionPhase::willCatchException):
        * dfg/DFGNode.h:
        (JSC::DFG::Node::hasCallVarargsData):
        (JSC::DFG::Node::isTerminal):
        (JSC::DFG::Node::hasHeapPrediction):
        * dfg/DFGNodeType.h:
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::handleExitCounts):
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::osrWriteBarrier):
        * dfg/DFGOSRExitPreparation.cpp:
        (JSC::DFG::prepareCodeOriginForOSRExit):
        * dfg/DFGOperations.cpp:
        * dfg/DFGPreciseLocalClobberize.h:
        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
        * dfg/DFGPredictionPropagationPhase.cpp:
        (JSC::DFG::PredictionPropagationPhase::propagate):
        * dfg/DFGSafeToExecute.h:
        (JSC::DFG::safeToExecute):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::emitCall):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::emitCall):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGValidate.cpp:
        (JSC::DFG::Validate::validateSSA):
        * dfg/DFGVarargsForwardingPhase.cpp:
        * interpreter/CallFrame.cpp:
        (JSC::CallFrame::bytecodeOffset):
        * interpreter/StackVisitor.cpp:
        (JSC::StackVisitor::gotoNextFrame):

2015-09-23  Filip Pizlo  <fpizlo@apple.com>
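Before the per-file hunks, a minimal sketch of the node selection described above (a reading aid distilled from the ByteCodeParser::addCall() change below; the free function itself is hypothetical, the node names are the patch's):

    // Which node a tail-call opcode compiles to. A real, frame-reusing
    // TailCall is only legal when every enclosing inline frame is itself
    // in tail position; otherwise some caller frame is still live, so the
    // call must behave like a regular call once the dead inlined tail
    // frames have been popped.
    static NodeType tailCallNodeFor(bool allInlineFramesAreTailCalls)
    {
        if (allInlineFramesAreTailCalls)
            return TailCall; // terminal node
        return TailCallInlinedCaller; // non-terminal, emulated tail call
    }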
trunk/Source/JavaScriptCore/bytecode/CallMode.h
r189774 → r190220

  #define CallMode_h

+ #include "CodeSpecializationKind.h"
+
  namespace JSC {
  …
  enum FrameAction { KeepTheFrame = 0, ReuseTheFrame };
+
+ inline CodeSpecializationKind specializationKindFor(CallMode callMode)
+ {
+     if (callMode == CallMode::Construct)
+         return CodeForConstruct;
+
+     return CodeForCall;
+ }

  } // namespace JSC
trunk/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
r188585 → r190220

  {
      unsigned result = 1;
-     for (InlineCallFrame* current = inlineCallFrame; current; current = current->caller.inlineCallFrame)
+     for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
          result++;
      return result;
  …
          return false;

-         a = a.inlineCallFrame->caller;
-         b = b.inlineCallFrame->caller;
+         a = a.inlineCallFrame->directCaller;
+         b = b.inlineCallFrame->directCaller;
      }
  }
  …
          result += WTF::PtrHash<JSCell*>::hash(codeOrigin.inlineCallFrame->executable.get());

-         codeOrigin = codeOrigin.inlineCallFrame->caller;
+         codeOrigin = codeOrigin.inlineCallFrame->directCaller;
      }
  }
  …
      result.last() = *this;
      unsigned index = result.size() - 2;
-     for (InlineCallFrame* current = inlineCallFrame; current; current = current->caller.inlineCallFrame)
-         result[index--] = current->caller;
+     for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
+         result[index--] = current->directCaller;
      RELEASE_ASSERT(!result[0].inlineCallFrame);
      return result;
trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h
r190073 → r190220

  #define CodeOrigin_h

+ #include "CallMode.h"
  #include "CodeBlockHash.h"
  #include "CodeSpecializationKind.h"
trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp
r189518 → r190220

      if (executable->isStrictMode())
          out.print(" (StrictMode)");
-     out.print(", bc#", caller.bytecodeIndex, ", ", kind);
+     out.print(", bc#", directCaller.bytecodeIndex, ", ", static_cast<Kind>(kind));
      if (isClosureCall)
          out.print(", closure call");
  …
          out.print("Construct");
          return;
+     case JSC::InlineCallFrame::TailCall:
+         out.print("TailCall");
+         return;
      case JSC::InlineCallFrame::CallVarargs:
          out.print("CallVarargs");
  …
      case JSC::InlineCallFrame::ConstructVarargs:
          out.print("ConstructVarargs");
          return;
+     case JSC::InlineCallFrame::TailCallVarargs:
+         out.print("TailCallVarargs");
+         return;
      case JSC::InlineCallFrame::GetterCall:
trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h
r189518 → r190220

      Call,
      Construct,
+     TailCall,
      CallVarargs,
      ConstructVarargs,
+     TailCallVarargs,

      // For these, the stackOffset incorporates the argument count plus the true return PC
  …
      SetterCall
  };

- static Kind kindFor(CodeSpecializationKind kind)
- {
-     switch (kind) {
-     case CodeForCall:
-         return Call;
-     case CodeForConstruct:
-         return Construct;
-     }
-     RELEASE_ASSERT_NOT_REACHED();
-     return Call;
- }
-
- static Kind varargsKindFor(CodeSpecializationKind kind)
- {
-     switch (kind) {
-     case CodeForCall:
-         return CallVarargs;
-     case CodeForConstruct:
-         return ConstructVarargs;
-     }
-     RELEASE_ASSERT_NOT_REACHED();
-     return Call;
- }
-
- static CodeSpecializationKind specializationKindFor(Kind kind)
+ static CallMode callModeFor(Kind kind)
  {
      switch (kind) {
      case Call:
      case CallVarargs:
+     case GetterCall:
+     case SetterCall:
+         return CallMode::Regular;
+     case TailCall:
+     case TailCallVarargs:
+         return CallMode::Tail;
+     case Construct:
+     case ConstructVarargs:
+         return CallMode::Construct;
+     }
+     RELEASE_ASSERT_NOT_REACHED();
+ }
+
+ static Kind kindFor(CallMode callMode)
+ {
+     switch (callMode) {
+     case CallMode::Regular:
+         return Call;
+     case CallMode::Construct:
+         return Construct;
+     case CallMode::Tail:
+         return TailCall;
+     }
+     RELEASE_ASSERT_NOT_REACHED();
+ }
+
+ static Kind varargsKindFor(CallMode callMode)
+ {
+     switch (callMode) {
+     case CallMode::Regular:
+         return CallVarargs;
+     case CallMode::Construct:
+         return ConstructVarargs;
+     case CallMode::Tail:
+         return TailCallVarargs;
+     }
+     RELEASE_ASSERT_NOT_REACHED();
+ }
+
+ static CodeSpecializationKind specializationKindFor(Kind kind)
+ {
+     switch (kind) {
+     case Call:
+     case CallVarargs:
+     case TailCall:
+     case TailCallVarargs:
      case GetterCall:
      case SetterCall:
  …
      }
      RELEASE_ASSERT_NOT_REACHED();
-     return CodeForCall;
  }
  …
      switch (kind) {
      case CallVarargs:
+     case TailCallVarargs:
      case ConstructVarargs:
          return true;
  …
      }
  }
+
+ static bool isTail(Kind kind)
+ {
+     switch (kind) {
+     case TailCall:
+     case TailCallVarargs:
+         return true;
+     default:
+         return false;
+     }
+ }
+ bool isTail() const
+ {
+     return isTail(static_cast<Kind>(kind));
+ }
+
+ static CodeOrigin* computeCallerSkippingDeadFrames(InlineCallFrame* inlineCallFrame)
+ {
+     CodeOrigin* codeOrigin;
+     bool tailCallee;
+     do {
+         tailCallee = inlineCallFrame->isTail();
+         codeOrigin = &inlineCallFrame->directCaller;
+         inlineCallFrame = codeOrigin->inlineCallFrame;
+     } while (inlineCallFrame && tailCallee);
+     if (tailCallee)
+         return nullptr;
+     return codeOrigin;
+ }
+
+ CodeOrigin* getCallerSkippingDeadFrames()
+ {
+     return computeCallerSkippingDeadFrames(this);
+ }
+
+ InlineCallFrame* getCallerInlineFrameSkippingDeadFrames()
+ {
+     CodeOrigin* caller = getCallerSkippingDeadFrames();
+     return caller ? caller->inlineCallFrame : nullptr;
+ }

  Vector<ValueRecovery> arguments; // Includes 'this'.
  WriteBarrier<ScriptExecutable> executable;
  ValueRecovery calleeRecovery;
- CodeOrigin caller;
+ CodeOrigin directCaller;

  signed stackOffset : 28;
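As a usage sketch (the function below is hypothetical; the helper is the one introduced above), this is how code that walks inline call frames honors tail-call deadness; the DFGPreciseLocalClobberize.h hunk later in this changeset adopts exactly this loop shape:

    // Count the inline frames that are still live for this code origin.
    // getCallerInlineFrameSkippingDeadFrames() returns nullptr once every
    // remaining caller has been reclaimed by a tail call, so frames made
    // dead by tail calls are never visited.
    static unsigned liveInlineDepth(InlineCallFrame* inlineCallFrame)
    {
        unsigned depth = 0;
        for (; inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingDeadFrames())
            depth++;
        return depth;
    }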
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
r190076 → r190220

      case Return:
          m_state.setIsValid(false);
          break;

+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
+         clobberWorld(node->origin.semantic, clobberLimit);
+         m_state.setIsValid(false);
+         break;
  …
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
      case CallForwardVarargs:
+     case TailCallVarargsInlinedCaller:
      case ConstructVarargs:
      case ConstructForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
          clobberWorld(node->origin.semantic, clobberLimit);
          forNode(node).makeHeapTop();
trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
r188979 → r190220

      case CallVarargs:
      case ConstructVarargs:
+     case TailCallVarargs:
+     case TailCallVarargsInlinedCaller:
          escape(node->child1());
          escape(node->child3());
  …
      case CallVarargs:
-     case ConstructVarargs: {
+     case ConstructVarargs:
+     case TailCallVarargs:
+     case TailCallVarargsInlinedCaller: {
          Node* candidate = node->child2().node();
          if (!m_candidates.contains(candidate))
  …
          for (Node* argument : arguments)
              m_graph.m_varArgChildren.append(Edge(argument));
-         node->setOpAndDefaultFlags(
-             node->op() == CallVarargs ? Call : Construct);
+         switch (node->op()) {
+         case CallVarargs:
+             node->setOpAndDefaultFlags(Call);
+             break;
+         case ConstructVarargs:
+             node->setOpAndDefaultFlags(Construct);
+             break;
+         case TailCallVarargs:
+             node->setOpAndDefaultFlags(TailCall);
+             break;
+         case TailCallVarargsInlinedCaller:
+             node->setOpAndDefaultFlags(TailCallInlinedCaller);
+             break;
+         default:
+             RELEASE_ASSERT_NOT_REACHED();
+         }
          node->children = AdjacencyList(
              AdjacencyList::Variable,
  …
      }

-     node->setOpAndDefaultFlags(
-         node->op() == CallVarargs ? CallForwardVarargs : ConstructForwardVarargs);
+     switch (node->op()) {
+     case CallVarargs:
+         node->setOpAndDefaultFlags(CallForwardVarargs);
+         break;
+     case ConstructVarargs:
+         node->setOpAndDefaultFlags(ConstructForwardVarargs);
+         break;
+     case TailCallVarargs:
+         node->setOpAndDefaultFlags(TailCallForwardVarargs);
+         break;
+     case TailCallVarargsInlinedCaller:
+         node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller);
+         break;
+     default:
+         RELEASE_ASSERT_NOT_REACHED();
+     }
      break;
  }
trunk/Source/JavaScriptCore/dfg/DFGBasicBlock.h
r189531 → r190220

      case Switch:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
      case Unreachable:
          return NodeAndIndex(node, i);
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
r190076 → r190220

          SpeculatedType prediction);
      void handleCall(
-         int result, NodeType op, InlineCallFrame::Kind, unsigned instructionSize,
+         int result, NodeType op, CallMode, unsigned instructionSize,
          Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
-     void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
-     void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
-     void handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind);
+     void handleCall(int result, NodeType op, CallMode, unsigned instructionSize, int callee, int argCount, int registerOffset);
+     void handleCall(Instruction* pc, NodeType op, CallMode);
+     void handleVarargsCall(Instruction* pc, NodeType op, CallMode);
      void emitFunctionChecks(CallVariant, Node* callTarget, VirtualRegister thisArgumnt);
      void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis);
-     unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
+     unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CallMode); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
      // Handle inlining. Return true if it succeeded, false if we need to plant a call.
      bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
  …
      }

+     bool allInlineFramesAreTailCalls()
+     {
+         return !inlineCallFrame() || !inlineCallFrame()->getCallerSkippingDeadFrames();
+     }
+
      CodeOrigin currentCodeOrigin()
      {
  …
      Node* addCallWithoutSettingResult(
          NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
-         SpeculatedType prediction)
+         OpInfo prediction)
      {
          addVarArgChild(callee);
  …
              addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));

-         return addToGraph(Node::VarArg, op, opInfo, OpInfo(prediction));
+         return addToGraph(Node::VarArg, op, opInfo, prediction);
      }
  …
          SpeculatedType prediction)
      {
+         if (op == TailCall) {
+             if (allInlineFramesAreTailCalls())
+                 return addCallWithoutSettingResult(op, OpInfo(), callee, argCount, registerOffset, OpInfo());
+             op = TailCallInlinedCaller;
+         }
+
          Node* call = addCallWithoutSettingResult(
-             op, opInfo, callee, argCount, registerOffset, prediction);
+             op, opInfo, callee, argCount, registerOffset, OpInfo(prediction));
          VirtualRegister resultReg(result);
          if (resultReg.isValid())
  …
      SpeculatedType getPredictionWithoutOSRExit(unsigned bytecodeIndex)
      {
-         ConcurrentJITLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-         return m_inlineStackTop->m_profiledBlock->valueProfilePredictionForBytecodeOffset(locker, bytecodeIndex);
+         SpeculatedType prediction;
+         CodeBlock* profiledBlock = nullptr;
+
+         {
+             ConcurrentJITLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
+             prediction = m_inlineStackTop->m_profiledBlock->valueProfilePredictionForBytecodeOffset(locker, bytecodeIndex);
+
+             if (prediction == SpecNone) {
+                 // If we have no information about the values this
+                 // node generates, we check if by any chance it is
+                 // a tail call opcode. In that case, we walk up the
+                 // inline frames to find a call higher in the call
+                 // chain and use its prediction. If we only have
+                 // inlined tail call frames, we use SpecFullTop
+                 // to avoid a spurious OSR exit.
+                 Instruction* instruction = m_inlineStackTop->m_profiledBlock->instructions().begin() + bytecodeIndex;
+                 OpcodeID opcodeID = m_vm->interpreter->getOpcodeID(instruction->u.opcode);
+
+                 switch (opcodeID) {
+                 case op_tail_call:
+                 case op_tail_call_varargs: {
+                     if (!inlineCallFrame()) {
+                         prediction = SpecFullTop;
+                         break;
+                     }
+                     CodeOrigin* codeOrigin = inlineCallFrame()->getCallerSkippingDeadFrames();
+                     if (!codeOrigin) {
+                         prediction = SpecFullTop;
+                         break;
+                     }
+                     InlineStackEntry* stack = m_inlineStackTop;
+                     while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame)
+                         stack = stack->m_caller;
+                     bytecodeIndex = codeOrigin->bytecodeIndex;
+                     profiledBlock = stack->m_profiledBlock;
+                     break;
+                 }
+
+                 default:
+                     break;
+                 }
+             }
+         }
+
+         if (profiledBlock) {
+             ConcurrentJITLocker locker(profiledBlock->m_lock);
+             prediction = profiledBlock->valueProfilePredictionForBytecodeOffset(locker, bytecodeIndex);
+         }
+
+         return prediction;
      }
  …
      {
          SpeculatedType prediction = getPredictionWithoutOSRExit(bytecodeIndex);

          if (prediction == SpecNone) {
              // We have no information about what values this node generates. Give up
  …
      return shouldContinueParsing

- void ByteCodeParser::handleCall(Instruction* pc, NodeType op, CodeSpecializationKind kind)
+ void ByteCodeParser::handleCall(Instruction* pc, NodeType op, CallMode callMode)
  {
      ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct));
+     ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call));
      handleCall(
-         pc[1].u.operand, op, kind, OPCODE_LENGTH(op_call),
+         pc[1].u.operand, op, callMode, OPCODE_LENGTH(op_call),
          pc[2].u.operand, pc[3].u.operand, -pc[4].u.operand);
  }

  void ByteCodeParser::handleCall(
-     int result, NodeType op, CodeSpecializationKind kind, unsigned instructionSize,
+     int result, NodeType op, CallMode callMode, unsigned instructionSize,
      int callee, int argumentCountIncludingThis, int registerOffset)
  {
  …
      handleCall(
-         result, op, InlineCallFrame::kindFor(kind), instructionSize, callTarget,
+         result, op, callMode, instructionSize, callTarget,
          argumentCountIncludingThis, registerOffset, callLinkStatus);
  }

  void ByteCodeParser::handleCall(
-     int result, NodeType op, InlineCallFrame::Kind kind, unsigned instructionSize,
+     int result, NodeType op, CallMode callMode, unsigned instructionSize,
      Node* callTarget, int argumentCountIncludingThis, int registerOffset,
      CallLinkStatus callLinkStatus)
  {
      handleCall(
-         result, op, kind, instructionSize, callTarget, argumentCountIncludingThis,
+         result, op, InlineCallFrame::kindFor(callMode), instructionSize, callTarget, argumentCountIncludingThis,
          registerOffset, callLinkStatus, getPrediction());
  }
  …
      // Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
      // that we cannot optimize them.

      addCall(result, op, OpInfo(), callTarget, argumentCountIncludingThis, registerOffset, prediction);
      return;
  …
  }

- void ByteCodeParser::handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind kind)
+ void ByteCodeParser::handleVarargsCall(Instruction* pc, NodeType op, CallMode callMode)
  {
      ASSERT(OPCODE_LENGTH(op_call_varargs) == OPCODE_LENGTH(op_construct_varargs));
+     ASSERT(OPCODE_LENGTH(op_call_varargs) == OPCODE_LENGTH(op_tail_call_varargs));

      int result = pc[1].u.operand;
  …
      if (callLinkStatus.canOptimize()
-         && handleInlining(callTarget, result, callLinkStatus, firstFreeReg, VirtualRegister(thisReg), VirtualRegister(arguments), firstVarArgOffset, 0, m_currentIndex + OPCODE_LENGTH(op_call_varargs), op, InlineCallFrame::varargsKindFor(kind), prediction)) {
+         && handleInlining(callTarget, result, callLinkStatus, firstFreeReg, VirtualRegister(thisReg), VirtualRegister(arguments), firstVarArgOffset, 0, m_currentIndex + OPCODE_LENGTH(op_call_varargs), op, InlineCallFrame::varargsKindFor(callMode), prediction)) {
          if (m_graph.compilation())
              m_graph.compilation()->noticeInlinedCall();
  …
      Node* thisChild = get(VirtualRegister(thisReg));

+     if (op == TailCallVarargs) {
+         if (allInlineFramesAreTailCalls()) {
+             addToGraph(op, OpInfo(data), OpInfo(), callTarget, get(VirtualRegister(arguments)), thisChild);
+             return;
+         }
+         op = TailCallVarargsInlinedCaller;
+     }
+
      Node* call = addToGraph(op, OpInfo(data), OpInfo(prediction), callTarget, get(VirtualRegister(arguments)), thisChild);
      VirtualRegister resultReg(result);
  …
  }

- unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountIncludingThis, CodeSpecializationKind kind)
+ unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountIncludingThis, CallMode callMode)
  {
+     CodeSpecializationKind kind = specializationKindFor(callMode);
      if (verbose)
          dataLog("Considering inlining ", callee, " into ", currentCodeOrigin(), "\n");
  …
          codeBlock, kind, callee.isClosureCall());
      if (verbose) {
-         dataLog("    Kind: ", kind, "\n");
+         dataLog("    Call mode: ", callMode, "\n");
          dataLog("    Is closure call: ", callee.isClosureCall(), "\n");
          dataLog("    Capability level: ", capabilityLevel, "\n");
  …
      CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);

-     ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
+     ASSERT(inliningCost(callee, argumentCountIncludingThis, InlineCallFrame::callModeFor(kind)) != UINT_MAX);

      CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
  …
          return;
      }

      if (Options::verboseDFGByteCodeParsing())
          dataLog("    Creating new block after inlining.\n");
  …
      }

-     unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
+     unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, InlineCallFrame::callModeFor(kind));
      if (myInliningCost > inliningBalance)
          return false;
  …
      // be true anyway except for op_loop_hint, which emits a Phantom to force this
      // to be true.
-     if (!m_currentBlock->isEmpty())
+     // We also don't insert a jump if the block already has a terminal,
+     // which could happen after a tail call.
+     ASSERT(m_currentBlock->isEmpty() || !m_currentBlock->terminal()
+         || m_currentBlock->terminal()->op() == TailCall || m_currentBlock->terminal()->op() == TailCallVarargs);
+     if (!m_currentBlock->isEmpty() && !m_currentBlock->terminal())
          addToGraph(Jump, OpInfo(m_currentIndex));
      return shouldContinueParsing;
  …
      case op_jmp: {
+         if (m_currentBlock->terminal()) {
+             // We could be the dummy jump to a return after a non-inlined, non-emulated tail call in a ternary operator
+             Node* terminal = m_currentBlock->terminal();
+             ASSERT_UNUSED(terminal, terminal->op() == TailCall || terminal->op() == TailCallVarargs);
+             LAST_OPCODE(op_ret);
+         }
          int relativeOffset = currentInstruction[1].u.operand;
          addToGraph(Jump, OpInfo(m_currentIndex + relativeOffset));
  …
      case op_ret:
+         if (m_currentBlock->terminal()) {
+             // We could be the dummy return after a non-inlined, non-emulated tail call
+             Node* terminal = m_currentBlock->terminal();
+             ASSERT_UNUSED(terminal, terminal->op() == TailCall || terminal->op() == TailCallVarargs);
+             LAST_OPCODE(op_ret);
+         }
          if (inlineCallFrame()) {
              flushForReturn();
  …
      case op_call:
-         handleCall(currentInstruction, Call, CodeForCall);
+         handleCall(currentInstruction, Call, CallMode::Regular);
          // Verify that handleCall(), which could have inlined the callee, didn't trash m_currentInstruction
          ASSERT(m_currentInstruction == currentInstruction);
          NEXT_OPCODE(op_call);

+     case op_tail_call:
+         flushForReturn();
+         handleCall(currentInstruction, TailCall, CallMode::Tail);
+         // Verify that handleCall(), which could have inlined the callee, didn't trash m_currentInstruction
+         ASSERT(m_currentInstruction == currentInstruction);
+         // We let the following op_ret handle cases related to
+         // inlining to keep things simple.
+         NEXT_OPCODE(op_tail_call);
+
      case op_construct:
-         handleCall(currentInstruction, Construct, CodeForConstruct);
+         handleCall(currentInstruction, Construct, CallMode::Construct);
          NEXT_OPCODE(op_construct);

      case op_call_varargs: {
-         handleVarargsCall(currentInstruction, CallVarargs, CodeForCall);
+         handleVarargsCall(currentInstruction, CallVarargs, CallMode::Regular);
          NEXT_OPCODE(op_call_varargs);
      }

+     case op_tail_call_varargs: {
+         flushForReturn();
+         handleVarargsCall(currentInstruction, TailCallVarargs, CallMode::Tail);
+         NEXT_OPCODE(op_tail_call_varargs);
+     }
+
      case op_construct_varargs: {
-         handleVarargsCall(currentInstruction, ConstructVarargs, CodeForConstruct);
+         handleVarargsCall(currentInstruction, ConstructVarargs, CallMode::Construct);
          NEXT_OPCODE(op_construct_varargs);
      }
  …
      } else
          m_inlineCallFrame->isClosureCall = true;
-     m_inlineCallFrame->caller = byteCodeParser->currentCodeOrigin();
+     m_inlineCallFrame->directCaller = byteCodeParser->currentCodeOrigin();
      m_inlineCallFrame->arguments.resizeToFit(argumentCountIncludingThis); // Set the number of arguments including this, but don't configure the value recoveries, yet.
      m_inlineCallFrame->kind = kind;
  …
      Vector<DeferredSourceDump>& deferredSourceDump = m_graph.m_plan.callback->ensureDeferredSourceDump();
      if (inlineCallFrame()) {
-         DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->caller);
+         DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller);
          deferredSourceDump.append(dump);
      } else
  …
          dataLog(
              " for inlining at ", CodeBlockWithJITType(m_codeBlock, JITCode::DFGJIT),
-             " ", inlineCallFrame()->caller);
+             " ", inlineCallFrame()->directCaller);
      }
      dataLog(
  …
      }

-     m_currentBlock = 0;
+     m_currentBlock = nullptr;
  } while (m_currentIndex < limit);
trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
r189995 → r190220

      case op_throw_static_error:
      case op_call:
+     case op_tail_call:
      case op_construct:
      case op_call_varargs:
+     case op_tail_call_varargs:
      case op_construct_varargs:
      case op_create_direct_arguments:
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
r189279 → r190220

      case ArrayPop:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
      case CallForwardVarargs:
+     case TailCallVarargsInlinedCaller:
+     case TailCallForwardVarargsInlinedCaller:
      case ConstructVarargs:
      case ConstructForwardVarargs:
  …
          read(World);
          write(Heap);
          return;
+
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
+         read(World);
+         write(SideState);
+         return;
trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
r189279 → r190220

      case CompareStrictEq:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargsInlinedCaller:
      case ConstructVarargs:
      case LoadVarargs:
      case CallForwardVarargs:
      case ConstructForwardVarargs:
+     case TailCallForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
      case Breakpoint:
      case ProfileWillCall:
  …
      case Switch:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
      case Throw:
      case CountExecution:
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
r190076 → r190220

      case VarInjectionWatchpoint:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargsInlinedCaller:
      case ConstructVarargs:
      case CallForwardVarargs:
      case ConstructForwardVarargs:
+     case TailCallForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
      case LoadVarargs:
      case ProfileControlFlow:
  …
      case Jump:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
      case Throw:
      case ThrowReferenceError:
trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp
r190076 → r190220

  bool Graph::isLiveInBytecode(VirtualRegister operand, CodeOrigin codeOrigin)
  {
+     CodeOrigin* codeOriginPtr = &codeOrigin;
      for (;;) {
          VirtualRegister reg = VirtualRegister(
-             operand.offset() - codeOrigin.stackOffset());
-
-         if (operand.offset() < codeOrigin.stackOffset() + JSStack::CallFrameHeaderSize) {
+             operand.offset() - codeOriginPtr->stackOffset());
+
+         if (operand.offset() < codeOriginPtr->stackOffset() + JSStack::CallFrameHeaderSize) {
              if (reg.isArgument()) {
                  RELEASE_ASSERT(reg.offset() < JSStack::CallFrameHeaderSize);

-                 if (codeOrigin.inlineCallFrame->isClosureCall
+                 if (codeOriginPtr->inlineCallFrame->isClosureCall
                      && reg.offset() == JSStack::Callee)
                      return true;

-                 if (codeOrigin.inlineCallFrame->isVarargs()
+                 if (codeOriginPtr->inlineCallFrame->isVarargs()
                      && reg.offset() == JSStack::ArgumentCount)
                      return true;
  …
              }

-             return livenessFor(codeOrigin.inlineCallFrame).operandIsLive(
-                 reg.offset(), codeOrigin.bytecodeIndex);
-         }
-
-         InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
+             return livenessFor(codeOriginPtr->inlineCallFrame).operandIsLive(
+                 reg.offset(), codeOriginPtr->bytecodeIndex);
+         }
+
+         InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
          if (!inlineCallFrame)
              break;
  …
              return true;

-         codeOrigin = inlineCallFrame->caller;
+         codeOriginPtr = inlineCallFrame->getCallerSkippingDeadFrames();
+
+         // The first inline call frame could be an inline tail call
+         if (!codeOriginPtr)
+             break;
      }
trunk/Source/JavaScriptCore/dfg/DFGGraph.h
r190076 → r190220

      VirtualRegister exclusionStart;
      VirtualRegister exclusionEnd;
+
+     CodeOrigin* codeOriginPtr = &codeOrigin;

      for (;;) {
-         InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
+         InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
          VirtualRegister stackOffset(inlineCallFrame ? inlineCallFrame->stackOffset : 0);
  …
          CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
          FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
-         const FastBitVector& liveness = fullLiveness.getLiveness(codeOrigin.bytecodeIndex);
+         const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex);
          for (unsigned relativeLocal = codeBlock->m_numCalleeRegisters; relativeLocal--;) {
              VirtualRegister reg = stackOffset + virtualRegisterForLocal(relativeLocal);
  …
              functor(reg);

-         codeOrigin = inlineCallFrame->caller;
+         codeOriginPtr = inlineCallFrame->getCallerSkippingDeadFrames();
+
+         // The first inline call frame could be an inline tail call
+         if (!codeOriginPtr)
+             break;
      }
  }
trunk/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
r189138 → r190220

      case Return:
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
      case Unreachable:
          ASSERT(basicBlock->cfaBranchDirection == InvalidBranchDirection);
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r189995 → r190220

          return false;

-     bytecodeIndexToCheck = inlineCallFrame->caller.bytecodeIndex;
-     codeOrigin = codeOrigin.inlineCallFrame->caller;
+     bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
+     codeOrigin = codeOrigin.inlineCallFrame->directCaller;
  }
trunk/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
r189995 → r190220

          return false;

-     bytecodeIndexToCheck = inlineCallFrame->caller.bytecodeIndex;
-     origin = inlineCallFrame->caller;
+     bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
+     origin = inlineCallFrame->directCaller;
  }
trunk/Source/JavaScriptCore/dfg/DFGNode.h
r190076 → r190220

      case CallVarargs:
      case CallForwardVarargs:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
+     case TailCallVarargsInlinedCaller:
+     case TailCallForwardVarargsInlinedCaller:
      case ConstructVarargs:
      case ConstructForwardVarargs:
  …
      case Switch:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
      case Unreachable:
          return true;
  …
      case GetByVal:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargsInlinedCaller:
      case ConstructVarargs:
      case CallForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
      case GetByOffset:
      case MultiGetByOffset:
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
r189279 → r190220

      macro(ConstructVarargs, NodeResultJS | NodeMustGenerate) \
      macro(ConstructForwardVarargs, NodeResultJS | NodeMustGenerate) \
+     macro(TailCallInlinedCaller, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
+     macro(TailCallVarargsInlinedCaller, NodeResultJS | NodeMustGenerate) \
+     macro(TailCallForwardVarargsInlinedCaller, NodeResultJS | NodeMustGenerate) \
      \
      /* Allocations. */\
  …
      macro(Switch, NodeMustGenerate) \
      macro(Return, NodeMustGenerate) \
+     macro(TailCall, NodeMustGenerate | NodeHasVarArgs) \
+     macro(TailCallVarargs, NodeMustGenerate) \
+     macro(TailCallForwardVarargs, NodeMustGenerate) \
      macro(Unreachable, NodeMustGenerate) \
      \
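A reading aid, not part of the patch: the six new node types group as follows (terminal status matches the DFGNode.h isTerminal() hunk above; the NodeHasVarArgs flag matches the macros here):

    // Terminal nodes: end the basic block, like Return, and perform an
    // actual machine-level tail call.
    //   TailCall                              (NodeHasVarArgs)
    //   TailCallVarargs
    //   TailCallForwardVarargs
    //
    // Non-terminal nodes: tail call position inside an inlined function;
    // executed as a regular call after popping the inlined tail frames.
    //   TailCallInlinedCaller                 (NodeHasVarArgs)
    //   TailCallVarargsInlinedCaller
    //   TailCallForwardVarargsInlinedCaller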
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r190129 → r190220

      AssemblyHelpers::JumpList loopThreshold;

-     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->caller.inlineCallFrame) {
+     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
          loopThreshold.append(
              jit.branchTest8(
  …
  void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
  {
+     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
+     // in presence of inlined tail calls.
+     // https://bugs.webkit.org/show_bug.cgi?id=147511
      ASSERT(jit.baselineCodeBlock()->jitType() == JITCode::BaselineJIT);
      jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)JSStack::CodeBlock));

-     CodeOrigin codeOrigin;
-     for (codeOrigin = exit.m_codeOrigin; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->caller) {
-         InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
-         CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
-         CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(inlineCallFrame->caller);
-         void* jumpTarget = nullptr;
+     const CodeOrigin* codeOrigin;
+     for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingDeadFrames()) {
+         InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+         CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(*codeOrigin);
+         CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingDeadFrames();
          void* trueReturnPC = nullptr;
-
-         unsigned callBytecodeIndex = inlineCallFrame->caller.bytecodeIndex;
-
-         switch (inlineCallFrame->kind) {
-         case InlineCallFrame::Call:
-         case InlineCallFrame::Construct:
-         case InlineCallFrame::CallVarargs:
-         case InlineCallFrame::ConstructVarargs: {
-             CallLinkInfo* callLinkInfo =
-                 baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-             RELEASE_ASSERT(callLinkInfo);
-
-             jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
-             break;
-         }
-
-         case InlineCallFrame::GetterCall:
-         case InlineCallFrame::SetterCall: {
-             StructureStubInfo* stubInfo =
-                 baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-             RELEASE_ASSERT(stubInfo);
-
-             switch (inlineCallFrame->kind) {
-             case InlineCallFrame::GetterCall:
-                 jumpTarget = jit.vm()->getCTIStub(baselineGetterReturnThunkGenerator).code().executableAddress();
-                 break;
-             case InlineCallFrame::SetterCall:
-                 jumpTarget = jit.vm()->getCTIStub(baselineSetterReturnThunkGenerator).code().executableAddress();
-                 break;
-             default:
-                 RELEASE_ASSERT_NOT_REACHED();
-                 break;
-             }
-
-             trueReturnPC = stubInfo->callReturnLocation.labelAtOffset(
-                 stubInfo->patch.deltaCallToDone).executableAddress();
-             break;
-         } }
-
-         GPRReg callerFrameGPR;
-         if (inlineCallFrame->caller.inlineCallFrame) {
-             jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->caller.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister, GPRInfo::regT3);
-             callerFrameGPR = GPRInfo::regT3;
-         } else
-             callerFrameGPR = GPRInfo::callFrameRegister;
-
-         jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
+         GPRReg callerFrameGPR = GPRInfo::callFrameRegister;
+
+         if (!trueCaller) {
+             ASSERT(inlineCallFrame->isTail());
+             jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT3);
+             jit.storePtr(GPRInfo::regT3, AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
+             jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::callerFrameOffset()), GPRInfo::regT3);
+             callerFrameGPR = GPRInfo::regT3;
+         } else {
+             CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
+             unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+             void* jumpTarget = nullptr;
+
+             switch (inlineCallFrame->kind) {
+             case InlineCallFrame::Call:
+             case InlineCallFrame::Construct:
+             case InlineCallFrame::CallVarargs:
+             case InlineCallFrame::ConstructVarargs:
+             case InlineCallFrame::TailCall:
+             case InlineCallFrame::TailCallVarargs: {
+                 CallLinkInfo* callLinkInfo =
+                     baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
+                 RELEASE_ASSERT(callLinkInfo);
+
+                 jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
+                 break;
+             }
+
+             case InlineCallFrame::GetterCall:
+             case InlineCallFrame::SetterCall: {
+                 StructureStubInfo* stubInfo =
+                     baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
+                 RELEASE_ASSERT(stubInfo);
+
+                 switch (inlineCallFrame->kind) {
+                 case InlineCallFrame::GetterCall:
+                     jumpTarget = jit.vm()->getCTIStub(baselineGetterReturnThunkGenerator).code().executableAddress();
+                     break;
+                 case InlineCallFrame::SetterCall:
+                     jumpTarget = jit.vm()->getCTIStub(baselineSetterReturnThunkGenerator).code().executableAddress();
+                     break;
+                 default:
+                     RELEASE_ASSERT_NOT_REACHED();
+                     break;
+                 }
+
+                 trueReturnPC = stubInfo->callReturnLocation.labelAtOffset(
+                     stubInfo->patch.deltaCallToDone).executableAddress();
+                 break;
+             } }
+
+             if (trueCaller->inlineCallFrame) {
+                 jit.addPtr(
+                     AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue)),
+                     GPRInfo::callFrameRegister,
+                     GPRInfo::regT3);
+                 callerFrameGPR = GPRInfo::regT3;
+             }
+
+             jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
+         }
+
          if (trueReturnPC)
              jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
  …
  #if USE(JSVALUE64)
          jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
-         uint32_t locationBits = CallSiteIndex(codeOrigin.bytecodeIndex).bits();
+         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
          jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
          if (!inlineCallFrame->isClosureCall)
  …
  #else // USE(JSVALUE64) // so this is the 32-bit part
          jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
-         Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin.bytecodeIndex;
+         Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
          uint32_t locationBits = CallSiteIndex(instruction).bits();
          jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
  …
      }

+     // Don't need to set the toplevel code origin if we only did inline tail calls
+     if (codeOrigin) {
  #if USE(JSVALUE64)
-     uint32_t locationBits = CallSiteIndex(codeOrigin.bytecodeIndex).bits();
+         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
  #else
-     Instruction* instruction = jit.baselineCodeBlock()->instructions().begin() + codeOrigin.bytecodeIndex;
+         Instruction* instruction = jit.baselineCodeBlock()->instructions().begin() + codeOrigin->bytecodeIndex;
          uint32_t locationBits = CallSiteIndex(instruction).bits();
  #endif
-     jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(JSStack::ArgumentCount)));
+         jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(JSStack::ArgumentCount)));
+     }
  }
trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
r189889 → r190220

      DeferGC deferGC(vm.heap);

-     for (; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->caller) {
+     for (; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->directCaller) {
          CodeBlock* codeBlock = codeOrigin.inlineCallFrame->baselineCodeBlock();
          if (codeBlock->jitType() == JSC::JITCode::BaselineJIT)
trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp
r189523 → r190220

      bool didTryToEnterIntoInlinedLoops = false;
-     for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->caller.inlineCallFrame) {
+     for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
          if (inlineCallFrame->executable->didTryToEnterInLoop()) {
              didTryToEnterIntoInlinedLoops = true;
trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
r184776 → r190220

      case ForwardVarargs:
      case CallForwardVarargs:
-     case ConstructForwardVarargs: {
+     case ConstructForwardVarargs:
+     case TailCallForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller: {
          InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
          if (!inlineCallFrame) {
  …
      // Read all of the inline arguments and call frame headers that we didn't already capture.
-     for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->caller.inlineCallFrame) {
+     for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingDeadFrames()) {
          for (unsigned i = inlineCallFrame->arguments.size(); i-- > 1;)
              m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
r189279 → r190220

      case GetDirectPname:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargsInlinedCaller:
      case ConstructVarargs:
      case CallForwardVarargs:
      case ConstructForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
      case GetGlobalVar:
      case GetGlobalLexicalVariable:
  …
      case PutToArguments:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
      case Throw:
      case PutById:
trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
r190076 → r190220

      case CompareStrictEq:
      case Call:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargsInlinedCaller:
+     case TailCallForwardVarargsInlinedCaller:
      case ConstructVarargs:
      case LoadVarargs:
  …
      case Switch:
      case Return:
+     case TailCall:
+     case TailCallVarargs:
+     case TailCallForwardVarargs:
      case Throw:
      case ThrowReferenceError:
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r190129 → r190220

  #include "ArrayPrototype.h"
+ #include "CallFrameShuffler.h"
  #include "DFGAbstractInterpreterInlines.h"
  #include "DFGCallArrayAllocatorSlowPathGenerator.h"
  …
      bool isVarargs = false;
      bool isForwardVarargs = false;
+     bool isTail = false;
+     bool isEmulatedTail = false;
      switch (node->op()) {
      case Call:
          callType = CallLinkInfo::Call;
          break;
+     case TailCall:
+         callType = CallLinkInfo::TailCall;
+         isTail = true;
+         break;
+     case TailCallInlinedCaller:
+         callType = CallLinkInfo::Call;
+         isEmulatedTail = true;
+         break;
      case Construct:
          callType = CallLinkInfo::Construct;
  …
          isVarargs = true;
          break;
+     case TailCallVarargs:
+         callType = CallLinkInfo::TailCallVarargs;
+         isVarargs = true;
+         isTail = true;
+         break;
+     case TailCallVarargsInlinedCaller:
+         callType = CallLinkInfo::CallVarargs;
+         isVarargs = true;
+         isEmulatedTail = true;
+         break;
      case ConstructVarargs:
          callType = CallLinkInfo::ConstructVarargs;
  …
          isForwardVarargs = true;
          break;
+     case TailCallForwardVarargs:
+         callType = CallLinkInfo::TailCallVarargs;
+         isTail = true;
+         isForwardVarargs = true;
+         break;
+     case TailCallForwardVarargsInlinedCaller:
+         callType = CallLinkInfo::CallVarargs;
+         isEmulatedTail = true;
+         isForwardVarargs = true;
+         break;
      case ConstructForwardVarargs:
          callType = CallLinkInfo::ConstructVarargs;
  …
      Edge calleeEdge = m_jit.graph().child(node, 0);
+     GPRReg calleeTagGPR;
+     GPRReg calleePayloadGPR;
+     CallFrameShuffleData shuffleData;

      // Gotta load the arguments somehow. Varargs is trickier.
  …
      int numPassedArgs = node->numChildren() - 1;

-     m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
-
-     for (int i = 0; i < numPassedArgs; i++) {
-         Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
-         JSValueOperand arg(this, argEdge);
-         GPRReg argTagGPR = arg.tagGPR();
-         GPRReg argPayloadGPR = arg.payloadGPR();
-         use(argEdge);
-
-         m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i));
-         m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i));
-     }
- }
-
- JSValueOperand callee(this, calleeEdge);
- GPRReg calleeTagGPR = callee.tagGPR();
- GPRReg calleePayloadGPR = callee.payloadGPR();
- use(calleeEdge);
- m_jit.store32(calleePayloadGPR, m_jit.calleeFramePayloadSlot(JSStack::Callee));
- m_jit.store32(calleeTagGPR, m_jit.calleeFrameTagSlot(JSStack::Callee));
-
- flushRegisters();
+     if (node->op() == TailCall) {
+         JSValueOperand callee(this, calleeEdge);
+         calleeTagGPR = callee.tagGPR();
+         calleePayloadGPR = callee.payloadGPR();
+         use(calleeEdge);
+
+         shuffleData.numLocals = m_jit.graph().frameRegisterCount();
+         shuffleData.callee = ValueRecovery::inPair(calleeTagGPR, calleePayloadGPR);
+         shuffleData.args.resize(numPassedArgs);
+
+         for (int i = 0; i < numPassedArgs; ++i) {
+             Edge argEdge = m_jit.graph().varArgChild(node, i + 1);
+             GenerationInfo& info = generationInfo(argEdge.node());
+             use(argEdge);
+             shuffleData.args[i] = info.recovery(argEdge->virtualRegister());
+         }
+     } else {
+         m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
+
+         for (int i = 0; i < numPassedArgs; i++) {
+             Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+             JSValueOperand arg(this, argEdge);
+             GPRReg argTagGPR = arg.tagGPR();
+             GPRReg argPayloadGPR = arg.payloadGPR();
+             use(argEdge);
+
+             m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i));
+             m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i));
+         }
+     }
+ }
+
+ if (node->op() != TailCall) {
+     JSValueOperand callee(this, calleeEdge);
+     calleeTagGPR = callee.tagGPR();
+     calleePayloadGPR = callee.payloadGPR();
+     use(calleeEdge);
+     m_jit.store32(calleePayloadGPR, m_jit.calleeFramePayloadSlot(JSStack::Callee));
+     m_jit.store32(calleeTagGPR, m_jit.calleeFrameTagSlot(JSStack::Callee));
+
+     if (!isTail)
+         flushRegisters();
+ }

  GPRFlushedCallResult resultPayload(this);
  …
  JITCompiler::JumpList slowPath;

- CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(node->origin.semantic, m_stream->size());
+ CodeOrigin staticOrigin = node->origin.semantic;
+ ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames());
+ ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames()));
+ CodeOrigin dynamicOrigin =
+     isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames() : staticOrigin;
+ CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
  m_jit.emitStoreCallSiteIndex(callSite);

  CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();

- slowPath.append(m_jit.branchIfNotCell(callee.jsValueRegs()));
+ slowPath.append(m_jit.branchIfNotCell(JSValueRegs(calleeTagGPR, calleePayloadGPR)));
  slowPath.append(m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleePayloadGPR, targetToCheck));

- JITCompiler::Call fastCall = m_jit.nearCall();
+ if (isTail) {
+     if (node->op() == TailCall) {
+         info->setFrameShuffleData(shuffleData);
+         CallFrameShuffler(m_jit, shuffleData).prepareForTailCall();
+     } else {
+         m_jit.emitRestoreCalleeSaves();
+         m_jit.prepareForTailCallSlow();
+     }
+ }
+
+ JITCompiler::Call fastCall = isTail ? m_jit.nearTailCall() : m_jit.nearCall();

  JITCompiler::Jump done = m_jit.jump();

  slowPath.link(&m_jit);

- // Callee payload needs to be in regT0, tag in regT1
- if (calleeTagGPR == GPRInfo::regT0) {
-     if (calleePayloadGPR == GPRInfo::regT1)
-         m_jit.swap(GPRInfo::regT1, GPRInfo::regT0);
-     else {
-         m_jit.move(calleeTagGPR, GPRInfo::regT1);
-         m_jit.move(calleePayloadGPR, GPRInfo::regT0);
-     }
- } else {
-     m_jit.move(calleePayloadGPR, GPRInfo::regT0);
-     m_jit.move(calleeTagGPR, GPRInfo::regT1);
- }
+ if (node->op() == TailCall) {
+     CallFrameShuffler callFrameShuffler(m_jit, shuffleData);
+     callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(
+         GPRInfo::regT1, GPRInfo::regT0));
+     callFrameShuffler.prepareForSlowPath();
+ } else {
+     // Callee payload needs to be in regT0, tag in regT1
+     if (calleeTagGPR == GPRInfo::regT0) {
+         if (calleePayloadGPR == GPRInfo::regT1)
+             m_jit.swap(GPRInfo::regT1, GPRInfo::regT0);
+         else {
+             m_jit.move(calleeTagGPR, GPRInfo::regT1);
+             m_jit.move(calleePayloadGPR, GPRInfo::regT0);
+         }
+     } else {
+         m_jit.move(calleePayloadGPR, GPRInfo::regT0);
+         m_jit.move(calleeTagGPR, GPRInfo::regT1);
+     }
+
+     if (isTail)
+         m_jit.emitRestoreCalleeSaves();
+ }
+
  m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::regT2);
  JITCompiler::Call slowCall = m_jit.nearCall();

  done.link(&m_jit);

- m_jit.setupResults(resultPayloadGPR, resultTagGPR);
-
- jsValueResult(resultTagGPR, resultPayloadGPR, node, DataFormatJS, UseChildrenCalledExplicitly);
+ if (isTail)
+     m_jit.abortWithReason(JITDidReturnFromTailCall);
+ else {
+     m_jit.setupResults(resultPayloadGPR, resultTagGPR);
+
+     jsValueResult(resultTagGPR, resultPayloadGPR, node, DataFormatJS, UseChildrenCalledExplicitly);
+     // After the calls are done, we need to reestablish our stack
+     // pointer. We rely on this for varargs calls, calls with arity
+     // mismatch (the callframe is slided) and tail calls.
+     m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
+ }

  info->setUpCall(callType, node->origin.semantic, calleePayloadGPR);
  m_jit.addJSCall(fastCall, slowCall, targetToCheck, info);
-
- // After the calls are done, we need to reestablish our stack
- // pointer. We rely on this for varargs calls, calls with arity
- // mismatch (the callframe is slided) and tail calls.
- m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
  …
      case Call:
+     case TailCall:
+     case TailCallInlinedCaller:
      case Construct:
      case CallVarargs:
+     case TailCallVarargs:
+     case TailCallVarargsInlinedCaller:
+     case ConstructVarargs:
      case CallForwardVarargs:
-     case ConstructVarargs:
+     case TailCallForwardVarargs:
+     case TailCallForwardVarargsInlinedCaller:
      case ConstructForwardVarargs:
          emitCall(node);
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r190129 r190220

  #include "ArrayPrototype.h"
+ #include "CallFrameShuffler.h"
  #include "DFGAbstractInterpreterInlines.h"
  #include "DFGCallArrayAllocatorSlowPathGenerator.h"
…
  bool isVarargs = false;
  bool isForwardVarargs = false;
+ bool isTail = false;
+ bool isEmulatedTail = false;
  switch (node->op()) {
  case Call:
      callType = CallLinkInfo::Call;
      break;
+ case TailCall:
+     callType = CallLinkInfo::TailCall;
+     isTail = true;
+     break;
+ case TailCallInlinedCaller:
+     callType = CallLinkInfo::Call;
+     isEmulatedTail = true;
+     break;
  case Construct:
      callType = CallLinkInfo::Construct;
…
      isVarargs = true;
      break;
+ case TailCallVarargs:
+     callType = CallLinkInfo::TailCallVarargs;
+     isVarargs = true;
+     isTail = true;
+     break;
+ case TailCallVarargsInlinedCaller:
+     callType = CallLinkInfo::CallVarargs;
+     isVarargs = true;
+     isEmulatedTail = true;
+     break;
  case ConstructVarargs:
      callType = CallLinkInfo::ConstructVarargs;
…
      isForwardVarargs = true;
      break;
+ case TailCallForwardVarargs:
+     callType = CallLinkInfo::TailCallVarargs;
+     isTail = true;
+     isForwardVarargs = true;
+     break;
+ case TailCallForwardVarargsInlinedCaller:
+     callType = CallLinkInfo::CallVarargs;
+     isEmulatedTail = true;
+     isForwardVarargs = true;
+     break;
  default:
      DFG_CRASH(m_jit.graph(), node, "bad node type");
…
  }

- Edge calleeEdge = m_jit.graph().child(node, 0);
+ GPRReg calleeGPR;
+ CallFrameShuffleData shuffleData;

  // Gotta load the arguments somehow. Varargs is trickier.
…

  m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), JITCompiler::calleeFramePayloadSlot(JSStack::ArgumentCount));
-
- for (int i = 0; i < numPassedArgs; i++) {
-     Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
-     JSValueOperand arg(this, argEdge);
-     GPRReg argGPR = arg.gpr();
-     use(argEdge);
-
-     m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i));
- }
- }
-
- JSValueOperand callee(this, calleeEdge);
- GPRReg calleeGPR = callee.gpr();
- callee.use();
- m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(JSStack::Callee));
-
- flushRegisters();
-
- GPRFlushedCallResult result(this);
- GPRReg resultGPR = result.gpr();
+
+ if (node->op() == TailCall) {
+     Edge calleeEdge = m_jit.graph().child(node, 0);
+     JSValueOperand callee(this, calleeEdge);
+     calleeGPR = callee.gpr();
+     callee.use();
+
+     shuffleData.numLocals = m_jit.graph().frameRegisterCount();
+     shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatJS);
+     shuffleData.args.resize(numPassedArgs);
+
+     for (int i = 0; i < numPassedArgs; ++i) {
+         Edge argEdge = m_jit.graph().varArgChild(node, i + 1);
+         GenerationInfo& info = generationInfo(argEdge.node());
+         use(argEdge);
+         shuffleData.args[i] = info.recovery(argEdge->virtualRegister());
+     }
+
+     shuffleData.setupCalleeSaveRegisters(m_jit.codeBlock());
+ } else {
+     m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), JITCompiler::calleeFramePayloadSlot(JSStack::ArgumentCount));
+
+     for (int i = 0; i < numPassedArgs; i++) {
+         Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+         JSValueOperand arg(this, argEdge);
+         GPRReg argGPR = arg.gpr();
+         use(argEdge);
+
+         m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i));
+     }
+ }
+ }
+
+ if (node->op() != TailCall) {
+     Edge calleeEdge = m_jit.graph().child(node, 0);
+     JSValueOperand callee(this, calleeEdge);
+     calleeGPR = callee.gpr();
+     callee.use();
+     m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(JSStack::Callee));
+
+     flushRegisters();
+ }
+
+ CodeOrigin staticOrigin = node->origin.semantic;
+ ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames());
+ ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames()));
+ CodeOrigin dynamicOrigin =
+     isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingDeadFrames() : staticOrigin;
+
+ CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
+ m_jit.emitStoreCallSiteIndex(callSite);
+
+ CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();

  JITCompiler::DataLabelPtr targetToCheck;
- JITCompiler::Jump slowPath;
-
- CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(node->origin.semantic, m_stream->size());
- m_jit.emitStoreCallSiteIndex(callSite);
-
- CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
-
- slowPath = m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleeGPR, targetToCheck, MacroAssembler::TrustedImmPtr(0));
-
- JITCompiler::Call fastCall = m_jit.nearCall();
+ JITCompiler::Jump slowPath = m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleeGPR, targetToCheck, MacroAssembler::TrustedImmPtr(0));
+
+ if (isTail) {
+     if (node->op() == TailCall) {
+         callLinkInfo->setFrameShuffleData(shuffleData);
+         CallFrameShuffler(m_jit, shuffleData).prepareForTailCall();
+     } else {
+         m_jit.emitRestoreCalleeSaves();
+         m_jit.prepareForTailCallSlow();
+     }
+ }
+
+ JITCompiler::Call fastCall = isTail ? m_jit.nearTailCall() : m_jit.nearCall();

  JITCompiler::Jump done = m_jit.jump();

  slowPath.link(&m_jit);
-
- m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0
+
+ if (node->op() == TailCall) {
+     CallFrameShuffler callFrameShuffler(m_jit, shuffleData);
+     callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
+     callFrameShuffler.prepareForSlowPath();
+ } else {
+     m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0
+
+     if (isTail)
+         m_jit.emitRestoreCalleeSaves(); // This needs to happen after we moved calleeGPR to regT0
+ }
+
  m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2
  JITCompiler::Call slowCall = m_jit.nearCall();

  done.link(&m_jit);
-
- m_jit.move(GPRInfo::returnValueGPR, resultGPR);
-
- jsValueResult(resultGPR, m_currentNode, DataFormatJS, UseChildrenCalledExplicitly);
-
+
+ if (isTail)
+     m_jit.abortWithReason(JITDidReturnFromTailCall);
+ else {
+     GPRFlushedCallResult result(this);
+     GPRReg resultGPR = result.gpr();
+     m_jit.move(GPRInfo::returnValueGPR, resultGPR);
+
+     jsValueResult(resultGPR, m_currentNode, DataFormatJS, UseChildrenCalledExplicitly);
+
+     // After the calls are done, we need to reestablish our stack
+     // pointer. We rely on this for varargs calls, calls with arity
+     // mismatch (the callframe is slided) and tail calls.
+     m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
+ }
+
  callLinkInfo->setUpCall(callType, m_currentNode->origin.semantic, calleeGPR);
  m_jit.addJSCall(fastCall, slowCall, targetToCheck, callLinkInfo);
-
- // After the calls are done, we need to reestablish our stack
- // pointer. We rely on this for varargs calls, calls with arity
- // mismatch (the callframe is slided) and tail calls.
- m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
  }
…

  case Call:
+ case TailCall:
+ case TailCallInlinedCaller:
  case Construct:
  case CallVarargs:
+ case TailCallVarargs:
+ case TailCallVarargsInlinedCaller:
  case CallForwardVarargs:
  case ConstructVarargs:
  case ConstructForwardVarargs:
+ case TailCallForwardVarargs:
+ case TailCallForwardVarargsInlinedCaller:
      emitCall(node);
      break;

  case LoadVarargs: {
      LoadVarargsData* data = node->loadVarargsData();
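The heart of both emitCall() changes is the classification switch at the top: each new node type is expressed in terms of the existing CallLinkInfo kinds plus two flags, isTail (a real, terminal tail call) and isEmulatedTail (a tail call from an inlined frame, performed as a regular call whose call site is recorded against the dynamic origin, that is, the inlined caller's own caller, which is what the staticOrigin/dynamicOrigin split above computes). Below is a compilable summary of that mapping; NodeOp, CallPlan and classify are names invented for this sketch, and the Construct and plain CallForwardVarargs cases elided by the hunks above are omitted:

    #include <cassert>

    // Which CallLinkInfo kind the runtime will see, per the switch above.
    enum class CallType { Call, CallVarargs, TailCall, TailCallVarargs };

    enum class NodeOp {
        Call, TailCall, TailCallInlinedCaller,
        CallVarargs, TailCallVarargs, TailCallVarargsInlinedCaller,
        TailCallForwardVarargs, TailCallForwardVarargsInlinedCaller
    };

    struct CallPlan {
        CallType callType = CallType::Call;
        bool isVarargs = false;
        bool isForwardVarargs = false;
        bool isTail = false;         // real tail call: terminal node, reuses the caller's frame
        bool isEmulatedTail = false; // tail call inside an inlined frame: emitted as a regular call
    };

    CallPlan classify(NodeOp op)
    {
        CallPlan p;
        switch (op) {
        case NodeOp::Call:
            break;
        case NodeOp::TailCall:
            p.callType = CallType::TailCall;
            p.isTail = true;
            break;
        case NodeOp::TailCallInlinedCaller:
            p.callType = CallType::Call; // links like a normal call...
            p.isEmulatedTail = true;     // ...but its call site reports the caller's caller
            break;
        case NodeOp::CallVarargs:
            p.callType = CallType::CallVarargs;
            p.isVarargs = true;
            break;
        case NodeOp::TailCallVarargs:
            p.callType = CallType::TailCallVarargs;
            p.isVarargs = true;
            p.isTail = true;
            break;
        case NodeOp::TailCallVarargsInlinedCaller:
            p.callType = CallType::CallVarargs;
            p.isVarargs = true;
            p.isEmulatedTail = true;
            break;
        case NodeOp::TailCallForwardVarargs:
            p.callType = CallType::TailCallVarargs;
            p.isTail = true;
            p.isForwardVarargs = true;
            break;
        case NodeOp::TailCallForwardVarargsInlinedCaller:
            p.callType = CallType::CallVarargs;
            p.isEmulatedTail = true;
            p.isForwardVarargs = true;
            break;
        }
        assert(!(p.isTail && p.isEmulatedTail)); // the two flavors are mutually exclusive
        return p;
    }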
trunk/Source/JavaScriptCore/dfg/DFGValidate.cpp
r190076 r190220

  case ForwardVarargs:
  case CallForwardVarargs:
+ case TailCallForwardVarargs:
+ case TailCallForwardVarargsInlinedCaller:
  case ConstructForwardVarargs:
  case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
r184288 r190220

  case CallVarargs:
  case ConstructVarargs:
+ case TailCallVarargs:
+ case TailCallVarargsInlinedCaller:
  if (node->child1() == candidate || node->child3() == candidate) {
      if (verbose)
…
      node->setOpAndDefaultFlags(ConstructForwardVarargs);
      break;

+ case TailCallVarargs:
+     if (node->child2() != candidate)
+         break;
+     node->setOpAndDefaultFlags(TailCallForwardVarargs);
+     break;
+
+ case TailCallVarargsInlinedCaller:
+     if (node->child2() != candidate)
+         break;
+     node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller);
+     break;
+
  case SetLocal:
      // This is super odd. We don't have to do anything here, since in DFG IR, the phantom
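The forwarding-phase change is mechanical: if the arguments object feeding a varargs tail call (its second child) is the very allocation being eliminated, the node is rewritten to its ForwardVarargs twin, exactly as CallVarargs and ConstructVarargs already were, so tail calls keep the varargs forwarding optimization. A standalone sketch of the rewrite follows; Node, Op and tryConvertToForwarding are invented for illustration and are not the phase's real types:

    enum class Op {
        TailCallVarargs, TailCallForwardVarargs,
        TailCallVarargsInlinedCaller, TailCallForwardVarargsInlinedCaller
    };

    struct Node {
        Op op;
        int child2; // id of the node supplying the arguments
    };

    // candidate is the id of the phantom arguments allocation being eliminated.
    void tryConvertToForwarding(Node& node, int candidate)
    {
        if (node.child2 != candidate)
            return; // the arguments flowing into this call come from elsewhere
        switch (node.op) {
        case Op::TailCallVarargs:
            node.op = Op::TailCallForwardVarargs;
            break;
        case Op::TailCallVarargsInlinedCaller:
            node.op = Op::TailCallForwardVarargsInlinedCaller;
            break;
        default:
            break;
        }
    }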
trunk/Source/JavaScriptCore/interpreter/CallFrame.cpp
r188932 r190220

  CodeOrigin codeOrigin = this->codeOrigin();
  for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame; inlineCallFrame;) {
-     codeOrigin = inlineCallFrame->caller;
+     codeOrigin = inlineCallFrame->directCaller;
      inlineCallFrame = codeOrigin.inlineCallFrame;
  }
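The rename from caller to directCaller matters here: with tail calls, an InlineCallFrame's recorded caller is only its textual (direct) caller, which may already be dead at run time, and this loop wants exactly that direct chain in order to reach the machine code origin. A self-contained model of the walk, with deliberately simplified CodeOrigin and InlineCallFrame types rather than the real JSC headers:

    struct InlineCallFrame;

    struct CodeOrigin {
        unsigned bytecodeIndex { 0 };
        InlineCallFrame* inlineCallFrame { nullptr };
    };

    struct InlineCallFrame {
        CodeOrigin directCaller; // renamed from `caller` by this patch
    };

    // Follows directCaller links until reaching an origin with no inline
    // call frame, i.e. the machine code block's own origin.
    CodeOrigin outermostOrigin(CodeOrigin codeOrigin)
    {
        for (InlineCallFrame* frame = codeOrigin.inlineCallFrame; frame;) {
            codeOrigin = frame->directCaller;
            frame = codeOrigin.inlineCallFrame;
        }
        return codeOrigin;
    }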
trunk/Source/JavaScriptCore/interpreter/StackVisitor.cpp
r189995 r190220

  if (m_frame.isInlinedFrame()) {
      InlineCallFrame* inlineCallFrame = m_frame.inlineCallFrame();
-     CodeOrigin* callerCodeOrigin = &inlineCallFrame->caller;
-     readInlinedFrame(m_frame.callFrame(), callerCodeOrigin);
+     CodeOrigin* callerCodeOrigin = inlineCallFrame->getCallerSkippingDeadFrames();
+     if (!callerCodeOrigin) {
+         while (inlineCallFrame) {
+             readInlinedFrame(m_frame.callFrame(), &inlineCallFrame->directCaller);
+             inlineCallFrame = m_frame.inlineCallFrame();
+         }
+         m_frame.m_VMEntryFrame = m_frame.m_CallerVMEntryFrame;
+         readFrame(m_frame.callerFrame());
+     } else
+         readInlinedFrame(m_frame.callFrame(), callerCodeOrigin);
      return;
  }
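This is the runtime half of the dead-frame skipping: a tail call destroys its caller's frame, so getCallerSkippingDeadFrames() can return null, meaning no live inlined caller remains and the visitor must read out the leftover inlined frames and then continue at the physical caller. A rough standalone model of the skipping rule, with hypothetical types that only approximate the JSC implementation:

    #include <optional>
    #include <vector>

    struct InlineFrame {
        bool enteredByTailCall; // true if this frame was created by a tail call
    };

    // caller(f): if frame f was entered through a tail call, its direct
    // caller's frame is already gone, so report that frame's caller instead,
    // recursively. nullopt means no live caller remains inside this physical
    // frame at all, and the visitor continues at the physical caller frame.
    std::optional<int> callerSkippingDeadFrames(const std::vector<InlineFrame>& inlineStack, int f)
    {
        if (!inlineStack[f].enteredByTailCall)
            return f - 1; // live; -1 stands for the machine code block itself
        if (f == 0)
            return std::nullopt; // the machine frame was consumed by the tail call too
        return callerSkippingDeadFrames(inlineStack, f - 1);
    }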