Changeset 209121 in WebKit
- Timestamp: Nov 29, 2016 10:24:44 PM
- Location: trunk
- Files: 12 added, 30 edited
trunk/JSTests/ChangeLog
r209113 → r209121

2016-11-29  Saam Barati  <sbarati@apple.com>

        We should be able to optimize the pattern where we spread a function's rest parameter to another call
        https://bugs.webkit.org/show_bug.cgi?id=163865

        Reviewed by Filip Pizlo.

        * microbenchmarks/default-derived-constructor.js: Added.
        (createClassHierarchy.let.currentClass):
        (createClassHierarchy):
        * stress/call-varargs-spread.js: Added.
        (assert):
        (bar):
        (foo):
        * stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js: Added.
        (assert):
        (baz):
        (bar):
        (foo):
        * stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js: Added.
        (assert):
        (foo):
        (escape):
        (bar):
        * stress/phantom-new-array-with-spread-osr-exit.js: Added.
        (assert):
        (baz):
        (bar):
        (effects):
        (foo):
        * stress/phantom-spread-forward-varargs.js: Added.
        (assert):
        (test1.bar):
        (test1.foo):
        (test1):
        (test2.bar):
        (test2.foo):
        (test3.baz):
        (test3.bar):
        (test3.foo):
        (test4.baz):
        (test4.bar):
        (test4.foo):
        (test5.baz):
        (test5.bar):
        (test5.foo):
        * stress/phantom-spread-osr-exit.js: Added.
        (assert):
        (baz):
        (bar):
        (effects):
        (foo):
        * stress/spread-call-convert-to-static-call.js: Added.
        (assert):
        (baz):
        (bar):
        (foo):
        * stress/spread-forward-call-varargs-stack-overflow.js: Added.
        (assert):
        (identity):
        (bar):
        (foo):
        * stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js: Added.
        (assert):
        (baz.Array.prototype.Symbol.iterator):
        (baz):
        (bar):
        (foo):
        (test):
        * stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js: Added.
        (assert):
        (baz.Array.prototype.Symbol.iterator):
        (baz):
        (bar):
        (foo):
        * stress/spread-forward-varargs-stack-overflow.js: Added.
        (assert):
        (bar):
        (foo):

2016-11-29  Caitlin Potter  <caitp@igalia.com>
trunk/Source/JavaScriptCore/ChangeLog
r209120 → r209121

2016-11-29  Saam Barati  <sbarati@apple.com>

        We should be able to optimize the pattern where we spread a function's rest parameter to another call
        https://bugs.webkit.org/show_bug.cgi?id=163865

        Reviewed by Filip Pizlo.

        This patch optimizes the following patterns to prevent both the allocation
        of the rest parameter and the execution of the iterator protocol:

        ```
        function foo(...args) {
            let arr = [...args];
        }

        and

        function foo(...args) {
            bar(...args);
        }
        ```

        To do this, I've extended the arguments elimination phase to reason
        about Spread and NewArrayWithSpread. I've added two new nodes, PhantomSpread
        and PhantomNewArrayWithSpread. PhantomSpread is only allowed over rest
        parameters that don't escape. If the rest parameter *does* escape, we can't
        convert the spread into a phantom, because that would not be sound with
        respect to JS semantics: we would be reading from the call frame even though
        the rest array may have changed.

        Note that NewArrayWithSpread also understands what to do when one of its
        arguments is PhantomSpread(@PhantomCreateRest), even if it itself is escaped.

        PhantomNewArrayWithSpread is only allowed over a series of
        PhantomSpread(@PhantomCreateRest) nodes. Like PhantomSpread, PhantomNewArrayWithSpread
        is only allowed if none of its arguments that are being spread are escaped
        and if it itself is not escaped.

        Because there is a dependency between a node being a candidate and
        the escaped state of the node's children, I've extended the notion
        of escaping a node inside the arguments elimination phase. Now, when
        any node is escaped, we must reconsider all other candidates that may
        no longer be valid.

        For example:

        ```
        function foo(...args) {
            escape(args);
            bar(...args);
        }
        ```

        In the above program, we don't know whether the call to escape()
        modifies args; therefore, the spread cannot become a phantom, because
        executing the spread may not be as simple as reading the arguments
        from the call frame.

        Unfortunately, the arguments elimination phase does not consider control
        flow when doing its escape analysis. It would be good to integrate this
        phase with the object allocation sinking phase. To see why, consider an
        example where we don't eliminate the spread and the allocation of the
        rest parameter even though we could:

        ```
        function foo(rareCondition, ...args) {
            bar(...args);
            if (rareCondition)
                baz(args);
        }
        ```

        There are only a few users of the PhantomSpread and PhantomNewArrayWithSpread
        nodes. PhantomSpread is only used by PhantomNewArrayWithSpread and NewArrayWithSpread.
        PhantomNewArrayWithSpread is only used by ForwardVarargs and the various
        *Call*ForwardVarargs nodes. The users of these phantoms know how to produce
        what the phantom node would have produced. For example, NewArrayWithSpread
        knows how to produce the values that would have been produced by
        PhantomSpread(@PhantomCreateRest) by reading directly from the call frame.

        This patch is a 6% speedup on my MBP on ES6SampleBench.
        * b3/B3LowerToAir.cpp:
        (JSC::B3::Air::LowerToAir::tryAppendLea):
        * b3/B3ValueRep.h:
        * builtins/BuiltinExecutables.cpp:
        (JSC::BuiltinExecutables::createDefaultConstructor):
        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
        * dfg/DFGArgumentsEliminationPhase.cpp:
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize):
        * dfg/DFGDoesGC.cpp:
        (JSC::DFG::doesGC):
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::fixupNode):
        * dfg/DFGForAllKills.h:
        (JSC::DFG::forAllKillsInBlock):
        * dfg/DFGNode.h:
        (JSC::DFG::Node::hasConstant):
        (JSC::DFG::Node::constant):
        (JSC::DFG::Node::bitVector):
        (JSC::DFG::Node::isPhantomAllocation):
        * dfg/DFGNodeType.h:
        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
        (JSC::DFG::OSRAvailabilityAnalysisPhase::run):
        (JSC::DFG::LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator):
        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
        * dfg/DFGOSRAvailabilityAnalysisPhase.h:
        * dfg/DFGObjectAllocationSinkingPhase.cpp:
        * dfg/DFGPreciseLocalClobberize.h:
        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
        * dfg/DFGPredictionPropagationPhase.cpp:
        * dfg/DFGPromotedHeapLocation.cpp:
        (WTF::printInternal):
        * dfg/DFGPromotedHeapLocation.h:
        * dfg/DFGSafeToExecute.h:
        (JSC::DFG::safeToExecute):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGValidate.cpp:
        * ftl/FTLCapabilities.cpp:
        (JSC::FTL::canCompile):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::LowerDFGToB3):
        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
        (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
        (JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
        * ftl/FTLOperations.cpp:
        (JSC::FTL::operationPopulateObjectInOSR):
        (JSC::FTL::operationMaterializeObjectInOSR):
        * jit/SetupVarargsFrame.cpp:
        (JSC::emitSetupVarargsFrameFastCase):
        * jsc.cpp:
        (GlobalObject::finishCreation):
        (functionMaxArguments):
        * runtime/JSFixedArray.h:
        (JSC::JSFixedArray::createFromArray):

2016-11-29  Commit Queue  <commit-queue@webkit.org>
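The ArrayUse/watchpoint restriction described above is what keeps the transformation unobservable: once `Array.prototype[Symbol.iterator]` is replaced, a spread has to run the user's iterator again. A minimal JavaScript sketch of the behavior the new iterator-protocol stress tests guard (function names here are illustrative, not taken from the patch):

```
function sum(...values) {
    let total = 0;
    for (let value of values)
        total += value;
    return total;
}

function forward(...args) {
    // args never escapes, so this spread is a candidate for
    // PhantomSpread(@PhantomCreateRest) once forward() tiers up.
    return sum(...args);
}

for (let i = 0; i < 100000; ++i) {
    if (forward(1, 2, 3) !== 6)
        throw new Error("bad result");
}

// Redefining the array iterator fires the "having a bad time" watchpoint;
// from now on the spread must run the (user-visible) iterator protocol.
Array.prototype[Symbol.iterator] = function* () { yield 42; };

if (forward(1, 2, 3) !== 42)
    throw new Error("iterator protocol change was not observed");
```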
trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp
r208985 → r209121

                && value->child(0)->opcode() == Add) {
                innerAdd = value->child(0);
-               offset = value->child(1)->asInt32();
+               offset = static_cast<int32_t>(value->child(1)->asInt());
                value = value->child(0);
            }
trunk/Source/JavaScriptCore/b3/B3ValueRep.h
r206525 → r209121

        // the register is used late. This means that the register is used after the result
        // is defined (i.e, the result will interfere with this as an input).
-       // It's not valid for this to be used as a result kind.
+       // It's not a valid output representation.
        LateRegister,
trunk/Source/JavaScriptCore/builtins/BuiltinExecutables.cpp
r208063 → r209121

    {
        static NeverDestroyed<const String> baseConstructorCode(ASCIILiteral("(function () { })"));
-       static NeverDestroyed<const String> derivedConstructorCode(ASCIILiteral("(function () { super(...arguments); })"));
+       static NeverDestroyed<const String> derivedConstructorCode(ASCIILiteral("(function (...args) { super(...args); })"));

        switch (constructorKind) {
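Rewriting the default derived constructor in terms of a rest parameter presumably lets its `super(...)` call take the new PhantomSpread path, since the candidate check requires the spread to be over a CreateRest rather than over `arguments`; the added microbenchmarks/default-derived-constructor.js exercises this. A rough JavaScript sketch of the pattern that should now avoid a per-construction rest-array allocation (class names are illustrative assumptions, not from the patch):

```
class Base {
    constructor(a, b, c) {
        this.sum = a + b + c;
    }
}

// No explicit constructor: the engine synthesizes the default derived
// constructor, which after this change reads "(...args) { super(...args); }".
class Derived extends Base { }

let total = 0;
for (let i = 0; i < 100000; ++i)
    total += new Derived(1, 2, i).sum;
if (total !== 3 * 100000 + (100000 * 99999) / 2)
    throw new Error("bad total");
```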
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
r208985 → r209121

    case PhantomClonedArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case BottomValue:
        m_state.setDidClobber(true); // Prevent constant folding.
trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
r208311 r209121 92 92 void identifyCandidates() 93 93 { 94 for (BasicBlock* block : m_graph.blocksIn NaturalOrder()) {94 for (BasicBlock* block : m_graph.blocksInPreOrder()) { 95 95 for (Node* node : *block) { 96 96 switch (node->op()) { … … 110 110 } 111 111 break; 112 113 case Spread: 114 if (m_graph.isWatchingHavingABadTimeWatchpoint(node)) { 115 // We check ArrayUse here because ArrayUse indicates that the iterator 116 // protocol for Arrays is non-observable by user code (e.g, it hasn't 117 // been changed). 118 if (node->child1().useKind() == ArrayUse && node->child1()->op() == CreateRest && m_candidates.contains(node->child1().node())) 119 m_candidates.add(node); 120 } 121 break; 122 123 case NewArrayWithSpread: { 124 if (m_graph.isWatchingHavingABadTimeWatchpoint(node)) { 125 BitVector* bitVector = node->bitVector(); 126 // We only allow for Spreads to be of rest nodes for now. 127 bool isOK = true; 128 for (unsigned i = 0; i < node->numChildren(); i++) { 129 if (bitVector->get(i)) { 130 Node* child = m_graph.varArgChild(node, i).node(); 131 isOK = child->op() == Spread && child->child1()->op() == CreateRest && m_candidates.contains(child); 132 if (!isOK) 133 break; 134 } 135 } 136 137 if (!isOK) 138 break; 139 140 m_candidates.add(node); 141 } 142 break; 143 } 112 144 113 145 case CreateScopedArguments: … … 126 158 if (verbose) 127 159 dataLog("Candidates: ", listDump(m_candidates), "\n"); 160 } 161 162 bool isStillValidCandidate(Node* candidate) 163 { 164 switch (candidate->op()) { 165 case Spread: 166 return m_candidates.contains(candidate->child1().node()); 167 168 case NewArrayWithSpread: { 169 BitVector* bitVector = candidate->bitVector(); 170 for (unsigned i = 0; i < candidate->numChildren(); i++) { 171 if (bitVector->get(i)) { 172 if (!m_candidates.contains(m_graph.varArgChild(candidate, i).node())) 173 return false; 174 } 175 } 176 return true; 177 } 178 179 default: 180 return true; 181 } 182 183 RELEASE_ASSERT_NOT_REACHED(); 184 return false; 185 } 186 187 void removeInvalidCandidates() 188 { 189 bool changed; 190 do { 191 changed = false; 192 Vector<Node*, 1> toRemove; 193 194 for (Node* candidate : m_candidates) { 195 if (!isStillValidCandidate(candidate)) 196 toRemove.append(candidate); 197 } 198 199 if (toRemove.size()) { 200 changed = true; 201 for (Node* node : toRemove) 202 m_candidates.remove(node); 203 } 204 205 } while (changed); 206 } 207 208 void transitivelyRemoveCandidate(Node* node, Node* source = nullptr) 209 { 210 bool removed = m_candidates.remove(node); 211 if (removed && verbose && source) 212 dataLog("eliminating candidate: ", node, " because it escapes from: ", source, "\n"); 213 214 if (removed) 215 removeInvalidCandidates(); 128 216 } 129 217 … … 134 222 if (!edge) 135 223 return; 136 bool removed = m_candidates.remove(edge.node()); 137 if (removed && verbose) 138 dataLog("eliminating candidate: ", edge.node(), " because it escapes from: ", source, "\n"); 224 transitivelyRemoveCandidate(edge.node(), source); 139 225 }; 140 226 … … 188 274 } 189 275 }; 276 277 removeInvalidCandidates(); 190 278 191 279 for (BasicBlock* block : m_graph.blocksInNaturalOrder()) { … … 202 290 203 291 case GetArrayLength: 204 escapeBasedOnArrayMode(node->arrayMode(), node->child1(), node);292 // FIXME: It would not be hard to support NewArrayWithSpread here if it is only over Spread(CreateRest) nodes. 
205 293 escape(node->child2(), node); 206 294 break; 295 296 case NewArrayWithSpread: { 297 BitVector* bitVector = node->bitVector(); 298 bool isWatchingHavingABadTimeWatchpoint = m_graph.isWatchingHavingABadTimeWatchpoint(node); 299 for (unsigned i = 0; i < node->numChildren(); i++) { 300 Edge child = m_graph.varArgChild(node, i); 301 bool dontEscape; 302 if (bitVector->get(i)) { 303 dontEscape = child->op() == Spread 304 && child->child1().useKind() == ArrayUse 305 && child->child1()->op() == CreateRest 306 && isWatchingHavingABadTimeWatchpoint; 307 } else 308 dontEscape = false; 309 310 if (!dontEscape) 311 escape(child, node); 312 } 313 314 break; 315 } 316 317 case Spread: { 318 bool isOK = node->child1().useKind() == ArrayUse && node->child1()->op() == CreateRest; 319 if (!isOK) 320 escape(node->child1(), node); 321 break; 322 } 323 207 324 208 325 case LoadVarargs: 326 if (node->loadVarargsData()->offset && node->child1()->op() == NewArrayWithSpread) 327 escape(node->child1(), node); 209 328 break; 210 329 … … 215 334 escape(node->child1(), node); 216 335 escape(node->child2(), node); 336 if (node->callVarargsData()->firstVarArgOffset && node->child3()->op() == NewArrayWithSpread) 337 escape(node->child3(), node); 217 338 break; 218 339 … … 264 385 structure = globalObject->restParameterStructure(); 265 386 break; 387 case NewArrayWithSpread: 388 ASSERT(m_graph.isWatchingHavingABadTimeWatchpoint(node)); 389 structure = globalObject->originalArrayStructureForIndexingType(ArrayWithContiguous); 390 break; 266 391 default: 267 392 RELEASE_ASSERT_NOT_REACHED(); … … 391 516 if (verbose) 392 517 dataLog("eliminating candidate: ", candidate, " because it is clobbered by: ", block->at(nodeIndex), "\n"); 393 m_candidates.remove(candidate);518 transitivelyRemoveCandidate(candidate); 394 519 return; 395 520 } … … 418 543 if (verbose) 419 544 dataLog("eliminating candidate: ", candidate, " because it is clobbered by ", block->at(nodeIndex), "\n"); 420 m_candidates.remove(candidate);545 transitivelyRemoveCandidate(candidate); 421 546 return; 422 547 } … … 461 586 // Therefore, we should have already transformed the allocation before the use 462 587 // of an allocation. 463 ASSERT(candidate->op() == PhantomCreateRest || candidate->op() == PhantomDirectArguments || candidate->op() == PhantomClonedArguments); 588 ASSERT(candidate->op() == PhantomCreateRest || candidate->op() == PhantomDirectArguments || candidate->op() == PhantomClonedArguments 589 || candidate->op() == PhantomSpread || candidate->op() == PhantomNewArrayWithSpread); 464 590 return true; 465 591 }; 466 592 467 593 switch (node->op()) { 468 594 case CreateDirectArguments: … … 488 614 489 615 node->setOpAndDefaultFlags(PhantomClonedArguments); 616 break; 617 618 case Spread: 619 if (!m_candidates.contains(node)) 620 break; 621 622 node->setOpAndDefaultFlags(PhantomSpread); 623 break; 624 625 case NewArrayWithSpread: 626 if (!m_candidates.contains(node)) 627 break; 628 629 node->setOpAndDefaultFlags(PhantomNewArrayWithSpread); 490 630 break; 491 631 … … 600 740 break; 601 741 742 // LoadVarargs can exit, so it better be exitOK. 
743 DFG_ASSERT(m_graph, node, node->origin.exitOK); 744 bool canExit = true; 602 745 LoadVarargsData* varargsData = node->loadVarargsData(); 603 unsigned numberOfArgumentsToSkip = 0; 604 if (candidate->op() == PhantomCreateRest) 605 numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip(); 606 varargsData->offset += numberOfArgumentsToSkip; 607 608 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame; 609 610 if (inlineCallFrame 611 && !inlineCallFrame->isVarargs()) { 612 613 unsigned argumentCountIncludingThis = inlineCallFrame->arguments.size(); 614 if (argumentCountIncludingThis > varargsData->offset) 615 argumentCountIncludingThis -= varargsData->offset; 616 else 617 argumentCountIncludingThis = 1; 618 RELEASE_ASSERT(argumentCountIncludingThis >= 1); 619 620 if (argumentCountIncludingThis <= varargsData->limit) { 621 // LoadVarargs can exit, so it better be exitOK. 622 DFG_ASSERT(m_graph, node, node->origin.exitOK); 623 bool canExit = true; 624 625 Node* argumentCountIncludingThisNode = insertionSet.insertConstant( 626 nodeIndex, node->origin.withExitOK(canExit), 627 jsNumber(argumentCountIncludingThis)); 628 insertionSet.insertNode( 629 nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit), 630 OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode)); 631 insertionSet.insertNode( 632 nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit), 633 OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)), 634 Edge(argumentCountIncludingThisNode, KnownInt32Use)); 635 636 DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum); 637 // Define our limit to exclude "this", since that's a bit easier to reason about. 638 unsigned limit = varargsData->limit - 1; 639 Node* undefined = nullptr; 640 for (unsigned storeIndex = 0; storeIndex < limit; ++storeIndex) { 641 // First determine if we have an element we can load, and load it if 642 // possible. 643 644 unsigned loadIndex = storeIndex + varargsData->offset; 645 646 Node* value; 647 if (loadIndex + 1 < inlineCallFrame->arguments.size()) { 648 VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset; 649 StackAccessData* data = m_graph.m_stackAccessData.add( 650 reg, FlushedJSValue); 651 652 value = insertionSet.insertNode( 653 nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit), 654 OpInfo(data)); 655 } else { 656 // FIXME: We shouldn't have to store anything if 657 // storeIndex >= varargsData->mandatoryMinimum, but we will still 658 // have GetStacks in that range. So if we don't do the stores, we'll 659 // have degenerate IR: we'll have GetStacks of something that didn't 660 // have PutStacks. 
661 // https://bugs.webkit.org/show_bug.cgi?id=147434 662 746 747 auto storeArgumentCountIncludingThis = [&] (unsigned argumentCountIncludingThis) { 748 Node* argumentCountIncludingThisNode = insertionSet.insertConstant( 749 nodeIndex, node->origin.withExitOK(canExit), 750 jsNumber(argumentCountIncludingThis)); 751 insertionSet.insertNode( 752 nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit), 753 OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode)); 754 insertionSet.insertNode( 755 nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit), 756 OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)), 757 Edge(argumentCountIncludingThisNode, KnownInt32Use)); 758 }; 759 760 auto storeValue = [&] (Node* value, unsigned storeIndex) { 761 VirtualRegister reg = varargsData->start + storeIndex; 762 StackAccessData* data = 763 m_graph.m_stackAccessData.add(reg, FlushedJSValue); 764 765 insertionSet.insertNode( 766 nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit), 767 OpInfo(reg.offset()), Edge(value)); 768 insertionSet.insertNode( 769 nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit), 770 OpInfo(data), Edge(value)); 771 }; 772 773 if (candidate->op() == PhantomNewArrayWithSpread) { 774 bool canConvertToStaticLoadStores = true; 775 BitVector* bitVector = candidate->bitVector(); 776 777 for (unsigned i = 0; i < candidate->numChildren(); i++) { 778 if (bitVector->get(i)) { 779 Node* child = m_graph.varArgChild(candidate, i).node(); 780 ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest); 781 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 782 if (!inlineCallFrame || inlineCallFrame->isVarargs()) { 783 canConvertToStaticLoadStores = false; 784 break; 785 } 786 } 787 } 788 789 if (canConvertToStaticLoadStores) { 790 unsigned argumentCountIncludingThis = 1; // |this| 791 for (unsigned i = 0; i < candidate->numChildren(); i++) { 792 if (bitVector->get(i)) { 793 Node* child = m_graph.varArgChild(candidate, i).node(); 794 ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest); 795 unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip(); 796 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 797 unsigned numberOfSpreadArguments; 798 unsigned frameArgumentCount = inlineCallFrame->arguments.size() - 1; 799 if (frameArgumentCount >= numberOfArgumentsToSkip) 800 numberOfSpreadArguments = frameArgumentCount - numberOfArgumentsToSkip; 801 else 802 numberOfSpreadArguments = 0; 803 804 argumentCountIncludingThis += numberOfSpreadArguments; 805 } else 806 ++argumentCountIncludingThis; 807 } 808 809 if (argumentCountIncludingThis <= varargsData->limit) { 810 storeArgumentCountIncludingThis(argumentCountIncludingThis); 811 812 DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum); 813 // Define our limit to exclude "this", since that's a bit easier to reason about. 
814 unsigned limit = varargsData->limit - 1; 815 unsigned storeIndex = 0; 816 for (unsigned i = 0; i < candidate->numChildren(); i++) { 817 if (bitVector->get(i)) { 818 Node* child = m_graph.varArgChild(candidate, i).node(); 819 ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest); 820 unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip(); 821 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 822 unsigned frameArgumentCount = inlineCallFrame->arguments.size() - 1; 823 for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) { 824 VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset; 825 StackAccessData* data = m_graph.m_stackAccessData.add(reg, FlushedJSValue); 826 Node* value = insertionSet.insertNode( 827 nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit), 828 OpInfo(data)); 829 storeValue(value, storeIndex); 830 ++storeIndex; 831 } 832 } else { 833 Node* value = m_graph.varArgChild(candidate, i).node(); 834 storeValue(value, storeIndex); 835 ++storeIndex; 836 } 837 } 838 839 RELEASE_ASSERT(storeIndex <= limit); 840 Node* undefined = nullptr; 841 for (; storeIndex < limit; ++storeIndex) { 663 842 if (!undefined) { 664 843 undefined = insertionSet.insertConstant( 665 844 nodeIndex, node->origin.withExitOK(canExit), jsUndefined()); 666 845 } 667 value = undefined;846 storeValue(undefined, storeIndex); 668 847 } 669 670 // Now that we have a value, store it.671 672 VirtualRegister reg = varargsData->start + storeIndex;673 StackAccessData* data =674 m_graph.m_stackAccessData.add(reg, FlushedJSValue);675 676 insertionSet.insertNode(677 nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),678 OpInfo(reg.offset()), Edge(value));679 insertionSet.insertNode(680 nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),681 OpInfo(data), Edge(value));682 848 } 683 849 684 850 node->remove(); 685 851 node->origin.exitOK = canExit; 686 852 break; 687 853 } 688 } 689 854 } else { 855 unsigned numberOfArgumentsToSkip = 0; 856 if (candidate->op() == PhantomCreateRest) 857 numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip(); 858 varargsData->offset += numberOfArgumentsToSkip; 859 860 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame; 861 862 if (inlineCallFrame 863 && !inlineCallFrame->isVarargs()) { 864 865 unsigned argumentCountIncludingThis = inlineCallFrame->arguments.size(); 866 if (argumentCountIncludingThis > varargsData->offset) 867 argumentCountIncludingThis -= varargsData->offset; 868 else 869 argumentCountIncludingThis = 1; 870 RELEASE_ASSERT(argumentCountIncludingThis >= 1); 871 872 if (argumentCountIncludingThis <= varargsData->limit) { 873 874 storeArgumentCountIncludingThis(argumentCountIncludingThis); 875 876 DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum); 877 // Define our limit to exclude "this", since that's a bit easier to reason about. 878 unsigned limit = varargsData->limit - 1; 879 Node* undefined = nullptr; 880 for (unsigned storeIndex = 0; storeIndex < limit; ++storeIndex) { 881 // First determine if we have an element we can load, and load it if 882 // possible. 
883 884 Node* value = nullptr; 885 unsigned loadIndex = storeIndex + varargsData->offset; 886 887 if (loadIndex + 1 < inlineCallFrame->arguments.size()) { 888 VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset; 889 StackAccessData* data = m_graph.m_stackAccessData.add( 890 reg, FlushedJSValue); 891 892 value = insertionSet.insertNode( 893 nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit), 894 OpInfo(data)); 895 } else { 896 // FIXME: We shouldn't have to store anything if 897 // storeIndex >= varargsData->mandatoryMinimum, but we will still 898 // have GetStacks in that range. So if we don't do the stores, we'll 899 // have degenerate IR: we'll have GetStacks of something that didn't 900 // have PutStacks. 901 // https://bugs.webkit.org/show_bug.cgi?id=147434 902 903 if (!undefined) { 904 undefined = insertionSet.insertConstant( 905 nodeIndex, node->origin.withExitOK(canExit), jsUndefined()); 906 } 907 value = undefined; 908 } 909 910 // Now that we have a value, store it. 911 storeValue(value, storeIndex); 912 } 913 914 node->remove(); 915 node->origin.exitOK = canExit; 916 break; 917 } 918 } 919 } 920 690 921 node->setOpAndDefaultFlags(ForwardVarargs); 691 922 break; … … 699 930 if (!isEliminatedAllocation(candidate)) 700 931 break; 701 702 unsigned numberOfArgumentsToSkip = 0; 703 if (candidate->op() == PhantomCreateRest) 704 numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip(); 705 CallVarargsData* varargsData = node->callVarargsData(); 706 varargsData->firstVarArgOffset += numberOfArgumentsToSkip; 707 708 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame; 709 if (inlineCallFrame && !inlineCallFrame->isVarargs()) { 710 Vector<Node*> arguments; 711 for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->arguments.size(); ++i) { 712 StackAccessData* data = m_graph.m_stackAccessData.add( 713 virtualRegisterForArgument(i) + inlineCallFrame->stackOffset, 714 FlushedJSValue); 715 716 Node* value = insertionSet.insertNode( 717 nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data)); 718 719 arguments.append(value); 720 } 721 932 933 auto convertToStaticArgumentCountCall = [&] (const Vector<Node*>& arguments) { 722 934 unsigned firstChild = m_graph.m_varArgChildren.size(); 723 935 m_graph.m_varArgChildren.append(node->child1()); … … 744 956 AdjacencyList::Variable, 745 957 firstChild, m_graph.m_varArgChildren.size() - firstChild); 746 break; 747 } 748 749 switch (node->op()) { 750 case CallVarargs: 751 node->setOpAndDefaultFlags(CallForwardVarargs); 752 break; 753 case ConstructVarargs: 754 node->setOpAndDefaultFlags(ConstructForwardVarargs); 755 break; 756 case TailCallVarargs: 757 node->setOpAndDefaultFlags(TailCallForwardVarargs); 758 break; 759 case TailCallVarargsInlinedCaller: 760 node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller); 761 break; 762 default: 763 RELEASE_ASSERT_NOT_REACHED(); 764 } 958 }; 959 960 auto convertToForwardsCall = [&] () { 961 switch (node->op()) { 962 case CallVarargs: 963 node->setOpAndDefaultFlags(CallForwardVarargs); 964 break; 965 case ConstructVarargs: 966 node->setOpAndDefaultFlags(ConstructForwardVarargs); 967 break; 968 case TailCallVarargs: 969 node->setOpAndDefaultFlags(TailCallForwardVarargs); 970 break; 971 case TailCallVarargsInlinedCaller: 972 node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller); 973 break; 974 default: 975 RELEASE_ASSERT_NOT_REACHED(); 976 } 977 }; 978 979 if (candidate->op() == 
PhantomNewArrayWithSpread) { 980 bool canTransformToStaticArgumentCountCall = true; 981 BitVector* bitVector = candidate->bitVector(); 982 for (unsigned i = 0; i < candidate->numChildren(); i++) { 983 if (bitVector->get(i)) { 984 Node* node = m_graph.varArgChild(candidate, i).node(); 985 ASSERT(node->op() == PhantomSpread); 986 ASSERT(node->child1()->op() == PhantomCreateRest); 987 InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame; 988 if (!inlineCallFrame || inlineCallFrame->isVarargs()) { 989 canTransformToStaticArgumentCountCall = false; 990 break; 991 } 992 } 993 } 994 995 if (canTransformToStaticArgumentCountCall) { 996 Vector<Node*> arguments; 997 for (unsigned i = 0; i < candidate->numChildren(); i++) { 998 Node* child = m_graph.varArgChild(candidate, i).node(); 999 if (bitVector->get(i)) { 1000 ASSERT(child->op() == PhantomSpread); 1001 ASSERT(child->child1()->op() == PhantomCreateRest); 1002 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 1003 unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip(); 1004 for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->arguments.size(); ++i) { 1005 StackAccessData* data = m_graph.m_stackAccessData.add( 1006 virtualRegisterForArgument(i) + inlineCallFrame->stackOffset, 1007 FlushedJSValue); 1008 1009 Node* value = insertionSet.insertNode( 1010 nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data)); 1011 1012 arguments.append(value); 1013 } 1014 } else 1015 arguments.append(child); 1016 } 1017 1018 convertToStaticArgumentCountCall(arguments); 1019 } else 1020 convertToForwardsCall(); 1021 } else { 1022 unsigned numberOfArgumentsToSkip = 0; 1023 if (candidate->op() == PhantomCreateRest) 1024 numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip(); 1025 CallVarargsData* varargsData = node->callVarargsData(); 1026 varargsData->firstVarArgOffset += numberOfArgumentsToSkip; 1027 1028 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame; 1029 if (inlineCallFrame && !inlineCallFrame->isVarargs()) { 1030 Vector<Node*> arguments; 1031 for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->arguments.size(); ++i) { 1032 StackAccessData* data = m_graph.m_stackAccessData.add( 1033 virtualRegisterForArgument(i) + inlineCallFrame->stackOffset, 1034 FlushedJSValue); 1035 1036 Node* value = insertionSet.insertNode( 1037 nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data)); 1038 1039 arguments.append(value); 1040 } 1041 1042 convertToStaticArgumentCountCall(arguments); 1043 } else 1044 convertToForwardsCall(); 1045 } 1046 765 1047 break; 766 1048 } -
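As a concrete illustration of the new static-loads conversion in this phase, here is a hedged JavaScript sketch modeled on the added stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js test (names are illustrative): when `bar` is inlined into a caller with a fixed argument count, both the rest array and the spread array can be replaced by static stack loads and stores.

```
function assert(b) {
    if (!b)
        throw new Error("bad assertion");
}

function baz(a, b, c, d) {
    return a + b + c + d;
}

function bar(...args) {
    // [...args, 4] is NewArrayWithSpread(Spread(CreateRest), 4). Once bar is
    // inlined into foo (a call site with a fixed argument count), loading the
    // varargs for this call can become static loads and stores.
    return baz(...[...args, 4]);
}

function foo() {
    return bar(1, 2, 3);
}

for (let i = 0; i < 100000; ++i)
    assert(foo() === 10);
```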
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
r208720 → r209121

        return;

+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomCreateRest:
        // Even though it's phantom, it still has the property that one can't be replaced with another.
…
        // This also reads from JSFixedArray's data store, but we don't have any way of describing that yet.
        read(HeapObjectCount);
+       for (unsigned i = 0; i < node->numChildren(); i++) {
+           Node* child = graph.varArgChild(node, i).node();
+           if (child->op() == PhantomSpread) {
+               read(Stack);
+               break;
+           }
+       }
        write(HeapObjectCount);
        return;
trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
r208704 → r209121

    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomNewArrayWithSpread:
+   case PhantomSpread:
    case PhantomClonedArguments:
    case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
r208761 → r209121

    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomClonedArguments:
    case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/dfg/DFGForAllKills.h
r206525 → r209121

        functor(block->size(), node);

-       LocalOSRAvailabilityCalculator localAvailability;
+       LocalOSRAvailabilityCalculator localAvailability(graph);
        localAvailability.beginBlock(block);
        // Start at the second node, because the functor is expected to only inspect nodes from the start of
trunk/Source/JavaScriptCore/dfg/DFGNode.h
r208704 → r209121

    case PhantomDirectArguments:
    case PhantomClonedArguments:
-   case PhantomCreateRest:
        // These pretend to be the empty value constant for the benefit of the DFG backend, which
        // otherwise wouldn't take kindly to a node that doesn't compute a value.
…
        ASSERT(hasConstant());

-       if (op() == PhantomDirectArguments || op() == PhantomClonedArguments || op() == PhantomCreateRest) {
+       if (op() == PhantomDirectArguments || op() == PhantomClonedArguments) {
            // These pretend to be the empty value constant for the benefit of the DFG backend, which
            // otherwise wouldn't take kindly to a node that doesn't compute a value.
…
    BitVector* bitVector()
    {
-       ASSERT(op() == NewArrayWithSpread);
+       ASSERT(op() == NewArrayWithSpread || op() == PhantomNewArrayWithSpread);
        return m_opInfo.as<BitVector*>();
    }
…
    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomClonedArguments:
    case PhantomNewFunction:
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
r208704 → r209121

    macro(PhantomDirectArguments, NodeResultJS | NodeMustGenerate) \
    macro(PhantomCreateRest, NodeResultJS | NodeMustGenerate) \
+   macro(PhantomSpread, NodeResultJS | NodeMustGenerate) \
+   macro(PhantomNewArrayWithSpread, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
    macro(CreateScopedArguments, NodeResultJS) \
    macro(CreateClonedArguments, NodeResultJS) \
trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
r208235 → r209121

    // This could be made more efficient by processing blocks in reverse postorder.

-   LocalOSRAvailabilityCalculator calculator;
+   LocalOSRAvailabilityCalculator calculator(m_graph);
    bool changed;
    do {
…
    }

-   LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator()
+   LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator(Graph& graph)
+       : m_graph(graph)
    {
    }
…
        break;
    }

    case PhantomCreateRest:
    case PhantomDirectArguments:
…
        break;
    }

+   case PhantomSpread:
+       m_availability.m_heap.set(PromotedHeapLocation(SpreadPLoc, node), Availability(node->child1().node()));
+       break;
+
+   case PhantomNewArrayWithSpread:
+       for (unsigned i = 0; i < node->numChildren(); i++) {
+           Node* child = m_graph.varArgChild(node, i).node();
+           m_availability.m_heap.set(PromotedHeapLocation(NewArrayWithSpreadArgumentPLoc, node, i), Availability(child));
+       }
+       break;

    default:
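The new SpreadPLoc and NewArrayWithSpreadArgumentPLoc availabilities exist so that OSR exit can reconstruct what the phantoms stood for. A hedged JavaScript sketch in the spirit of the added stress/phantom-spread-osr-exit.js test (the exact exit trigger is an assumption; names are illustrative):

```
function assert(b) {
    if (!b)
        throw new Error("bad assertion");
}

function baz(...args) {
    return args.length;
}

let shouldExit = false;
function effects() {
    // Switching the return type later may invalidate a speculation made while
    // this call site only ever saw numbers.
    return shouldExit ? "string" : 42;
}

function bar(...args) {
    let x = effects();
    // While bar is optimized, args and the spread below are phantoms; if the
    // call above causes an OSR exit, the exit must rematerialize their values.
    return baz(...args) + (typeof x === "number" ? 1 : 0);
}

for (let i = 0; i < 100000; ++i)
    assert(bar(1, 2, 3) === 4);

shouldExit = true;
assert(bar(1, 2, 3) === 3);
```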
trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h
r206525 → r209121

class LocalOSRAvailabilityCalculator {
public:
-   LocalOSRAvailabilityCalculator();
+   LocalOSRAvailabilityCalculator(Graph&);
    ~LocalOSRAvailabilityCalculator();
…
    AvailabilityMap m_availability;
+   Graph& m_graph;
};
trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
r208761 → r209121

        // Place Phis in the right places, replace all uses of any load with the appropriate
        // value, and create the materialization nodes.
-       LocalOSRAvailabilityCalculator availabilityCalculator;
+       LocalOSRAvailabilityCalculator availabilityCalculator(m_graph);
        m_graph.clearReplacements();
        for (BasicBlock* block : m_graph.blocksInPreOrder()) {
trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
r208524 r209121 107 107 void readTop() 108 108 { 109 switch (m_node->op()) { 110 case GetMyArgumentByVal: 111 case GetMyArgumentByValOutOfBounds: 112 case ForwardVarargs: 113 case CallForwardVarargs: 114 case ConstructForwardVarargs: 115 case TailCallForwardVarargs: 116 case TailCallForwardVarargsInlinedCaller: { 117 118 InlineCallFrame* inlineCallFrame; 119 if (m_node->hasArgumentsChild() && m_node->argumentsChild()) 120 inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame; 121 else 122 inlineCallFrame = m_node->origin.semantic.inlineCallFrame; 123 124 unsigned numberOfArgumentsToSkip = 0; 125 if (m_node->op() == GetMyArgumentByVal || m_node->op() == GetMyArgumentByValOutOfBounds) { 126 // The value of numberOfArgumentsToSkip guarantees that GetMyArgumentByVal* will never 127 // read any arguments below the number of arguments to skip. For example, if numberOfArgumentsToSkip is 2, 128 // we will never read argument 0 or argument 1. 129 numberOfArgumentsToSkip = m_node->numberOfArgumentsToSkip(); 130 } 131 109 auto readFrame = [&] (InlineCallFrame* inlineCallFrame, unsigned numberOfArgumentsToSkip) { 132 110 if (!inlineCallFrame) { 133 111 // Read the outermost arguments and argument count. … … 135 113 m_read(virtualRegisterForArgument(i)); 136 114 m_read(VirtualRegister(CallFrameSlot::argumentCount)); 137 break;115 return; 138 116 } 139 117 … … 142 120 if (inlineCallFrame->isVarargs()) 143 121 m_read(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)); 122 }; 123 124 auto readNewArrayWithSpreadNode = [&] (Node* arrayWithSpread) { 125 ASSERT(arrayWithSpread->op() == NewArrayWithSpread || arrayWithSpread->op() == PhantomNewArrayWithSpread); 126 BitVector* bitVector = arrayWithSpread->bitVector(); 127 for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) { 128 if (bitVector->get(i)) { 129 Node* child = m_graph.varArgChild(arrayWithSpread, i).node(); 130 if (child->op() == PhantomSpread) { 131 ASSERT(child->child1()->op() == PhantomCreateRest); 132 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 133 unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip(); 134 readFrame(inlineCallFrame, numberOfArgumentsToSkip); 135 } 136 } 137 } 138 }; 139 140 bool isForwardingNode = false; 141 switch (m_node->op()) { 142 case ForwardVarargs: 143 case CallForwardVarargs: 144 case ConstructForwardVarargs: 145 case TailCallForwardVarargs: 146 case TailCallForwardVarargsInlinedCaller: 147 isForwardingNode = true; 148 FALLTHROUGH; 149 case GetMyArgumentByVal: 150 case GetMyArgumentByValOutOfBounds: { 151 152 if (isForwardingNode && m_node->hasArgumentsChild() && m_node->argumentsChild() && m_node->argumentsChild()->op() == PhantomNewArrayWithSpread) { 153 Node* arrayWithSpread = m_node->argumentsChild().node(); 154 readNewArrayWithSpreadNode(arrayWithSpread); 155 } else { 156 InlineCallFrame* inlineCallFrame; 157 if (m_node->hasArgumentsChild() && m_node->argumentsChild()) 158 inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame; 159 else 160 inlineCallFrame = m_node->origin.semantic.inlineCallFrame; 161 162 unsigned numberOfArgumentsToSkip = 0; 163 if (m_node->op() == GetMyArgumentByVal || m_node->op() == GetMyArgumentByValOutOfBounds) { 164 // The value of numberOfArgumentsToSkip guarantees that GetMyArgumentByVal* will never 165 // read any arguments below the number of arguments to skip. 
For example, if numberOfArgumentsToSkip is 2, 166 // we will never read argument 0 or argument 1. 167 numberOfArgumentsToSkip = m_node->numberOfArgumentsToSkip(); 168 } 169 170 readFrame(inlineCallFrame, numberOfArgumentsToSkip); 171 } 172 173 break; 174 } 175 176 case NewArrayWithSpread: { 177 readNewArrayWithSpreadNode(m_node); 144 178 break; 145 179 } -
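The numberOfArgumentsToSkip guarantee that readTop() relies on falls directly out of how rest parameters are declared. A small illustrative JavaScript example (assumed, not part of the patch):

```
// Two named parameters precede the rest parameter, so the CreateRest produced
// for `rest` has numberOfArgumentsToSkip == 2: reads through the (phantom)
// rest array can never touch argument 0 or argument 1 of the frame, which is
// what allows readTop() to avoid reporting those slots as read.
function tail(first, second, ...rest) {
    return rest[0];
}

if (tail("a", "b", "c") !== "c") // only argument index 2 is actually read
    throw new Error("bad result");
```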
trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
r208704 → r209121

    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomClonedArguments:
    case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/dfg/DFGPromotedHeapLocation.cpp
r199075 → r209121

        out.print("VectorLengthPLoc");
        return;
+
+   case SpreadPLoc:
+       out.print("SpreadPLoc");
+       return;
+
+   case NewArrayWithSpreadArgumentPLoc:
+       out.print("NewArrayWithSpreadArgumentPLoc");
+       return;
    }
trunk/Source/JavaScriptCore/dfg/DFGPromotedHeapLocation.h
r206525 → r209121

    PublicLengthPLoc,
    StructurePLoc,
-   VectorLengthPLoc
+   VectorLengthPLoc,
+   SpreadPLoc,
+   NewArrayWithSpreadArgumentPLoc,
};
trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
r208704 → r209121

    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomClonedArguments:
    case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r208985 → r209121

    case GetMyArgumentByValOutOfBounds:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
        DFG_CRASH(m_jit.graph(), node, "unexpected node in DFG backend");
        break;
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r208985 → r209121

    case GetStack:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
        DFG_CRASH(m_jit.graph(), node, "Unexpected node");
        break;
trunk/Source/JavaScriptCore/dfg/DFGValidate.cpp
r208704 → r209121

        break;

+   case PhantomSpread:
+       VALIDATE((node), m_graph.m_form == SSA);
+       // We currently only support PhantomSpread over PhantomCreateRest.
+       VALIDATE((node), node->child1()->op() == PhantomCreateRest);
+       break;
+
+   case PhantomNewArrayWithSpread: {
+       VALIDATE((node), m_graph.m_form == SSA);
+       BitVector* bitVector = node->bitVector();
+       for (unsigned i = 0; i < node->numChildren(); i++) {
+           Node* child = m_graph.varArgChild(node, i).node();
+           if (bitVector->get(i)) {
+               // We currently only support PhantomSpread over PhantomCreateRest.
+               VALIDATE((node), child->op() == PhantomSpread);
+           } else
+               VALIDATE((node), !child->isPhantomAllocation());
+       }
+       break;
+   }
+
+   case NewArrayWithSpread: {
+       BitVector* bitVector = node->bitVector();
+       for (unsigned i = 0; i < node->numChildren(); i++) {
+           Node* child = m_graph.varArgChild(node, i).node();
+           if (child->isPhantomAllocation()) {
+               VALIDATE((node), bitVector->get(i));
+               VALIDATE((node), m_graph.m_form == SSA);
+               VALIDATE((node), child->op() == PhantomSpread);
+           }
+       }
+       break;
+   }
+
    default:
        m_graph.doToChildren(
trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
r209112 → r209121

    case PhantomDirectArguments:
    case PhantomCreateRest:
+   case PhantomSpread:
+   case PhantomNewArrayWithSpread:
    case PhantomClonedArguments:
    case GetMyArgumentByVal:
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r209112 r209121 135 135 , m_out(state) 136 136 , m_proc(*state.proc) 137 , m_availabilityCalculator(m_graph) 137 138 , m_state(state.graph) 138 139 , m_interpreter(state.graph, m_state) … … 1090 1091 case PhantomDirectArguments: 1091 1092 case PhantomCreateRest: 1093 case PhantomSpread: 1094 case PhantomNewArrayWithSpread: 1092 1095 case PhantomClonedArguments: 1093 1096 case PutHint: … … 4332 4335 unsigned startLength = 0; 4333 4336 BitVector* bitVector = m_node->bitVector(); 4337 HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths; 4338 4334 4339 for (unsigned i = 0; i < m_node->numChildren(); ++i) { 4335 4340 if (!bitVector->get(i)) … … 4342 4347 if (bitVector->get(i)) { 4343 4348 Edge use = m_graph.varArgChild(m_node, i); 4344 LValue fixedArray = lowCell(use); 4345 length = m_out.add(length, m_out.load32(fixedArray, m_heaps.JSFixedArray_size)); 4349 if (use->op() == PhantomSpread) { 4350 RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest); 4351 InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame; 4352 unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip(); 4353 LValue spreadLength = cachedSpreadLengths.ensure(inlineCallFrame, [&] () { 4354 return getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip); 4355 }).iterator->value; 4356 length = m_out.add(length, spreadLength); 4357 } else { 4358 LValue fixedArray = lowCell(use); 4359 length = m_out.add(length, m_out.load32(fixedArray, m_heaps.JSFixedArray_size)); 4360 } 4346 4361 } 4347 4362 } … … 4356 4371 Edge use = m_graph.varArgChild(m_node, i); 4357 4372 if (bitVector->get(i)) { 4358 LBasicBlock loopStart = m_out.newBlock(); 4359 LBasicBlock continuation = m_out.newBlock(); 4360 4361 LValue fixedArray = lowCell(use); 4362 4363 ValueFromBlock fixedIndexStart = m_out.anchor(m_out.constIntPtr(0)); 4364 ValueFromBlock arrayIndexStart = m_out.anchor(index); 4365 ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index); 4366 4367 LValue fixedArraySize = m_out.zeroExtPtr(m_out.load32(fixedArray, m_heaps.JSFixedArray_size)); 4368 4369 m_out.branch( 4370 m_out.isZero64(fixedArraySize), 4371 unsure(continuation), unsure(loopStart)); 4372 4373 LBasicBlock lastNext = m_out.appendTo(loopStart, continuation); 4374 4375 LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart); 4376 LValue fixedArrayIndex = m_out.phi(pointerType(), fixedIndexStart); 4377 4378 LValue item = m_out.load64(m_out.baseIndex(m_heaps.JSFixedArray_buffer, fixedArray, fixedArrayIndex)); 4379 m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex)); 4380 4381 LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1)); 4382 LValue nextFixedArrayIndex = m_out.add(fixedArrayIndex, m_out.constIntPtr(1)); 4383 ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex); 4384 4385 m_out.addIncomingToPhi(fixedArrayIndex, m_out.anchor(nextFixedArrayIndex)); 4386 m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex)); 4387 4388 m_out.branch( 4389 m_out.below(nextFixedArrayIndex, fixedArraySize), 4390 unsure(loopStart), unsure(continuation)); 4391 4392 m_out.appendTo(continuation, lastNext); 4393 index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish); 4373 if (use->op() == PhantomSpread) { 4374 RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest); 4375 InlineCallFrame* inlineCallFrame = 
use->child1()->origin.semantic.inlineCallFrame; 4376 unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip(); 4377 4378 LValue length = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame)); 4379 LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip); 4380 4381 LBasicBlock loopStart = m_out.newBlock(); 4382 LBasicBlock continuation = m_out.newBlock(); 4383 4384 ValueFromBlock loadIndexStart = m_out.anchor(m_out.constIntPtr(0)); 4385 ValueFromBlock arrayIndexStart = m_out.anchor(index); 4386 ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index); 4387 4388 m_out.branch( 4389 m_out.isZero64(length), 4390 unsure(continuation), unsure(loopStart)); 4391 4392 LBasicBlock lastNext = m_out.appendTo(loopStart, continuation); 4393 4394 LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart); 4395 LValue loadIndex = m_out.phi(pointerType(), loadIndexStart); 4396 4397 LValue item = m_out.load64(m_out.baseIndex(m_heaps.variables, sourceStart, loadIndex)); 4398 m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex)); 4399 4400 LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1)); 4401 LValue nextLoadIndex = m_out.add(loadIndex, m_out.constIntPtr(1)); 4402 ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex); 4403 4404 m_out.addIncomingToPhi(loadIndex, m_out.anchor(nextLoadIndex)); 4405 m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex)); 4406 4407 m_out.branch( 4408 m_out.below(nextLoadIndex, length), 4409 unsure(loopStart), unsure(continuation)); 4410 4411 m_out.appendTo(continuation, lastNext); 4412 index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish); 4413 } else { 4414 LBasicBlock loopStart = m_out.newBlock(); 4415 LBasicBlock continuation = m_out.newBlock(); 4416 4417 LValue fixedArray = lowCell(use); 4418 4419 ValueFromBlock fixedIndexStart = m_out.anchor(m_out.constIntPtr(0)); 4420 ValueFromBlock arrayIndexStart = m_out.anchor(index); 4421 ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index); 4422 4423 LValue fixedArraySize = m_out.zeroExtPtr(m_out.load32(fixedArray, m_heaps.JSFixedArray_size)); 4424 4425 m_out.branch( 4426 m_out.isZero64(fixedArraySize), 4427 unsure(continuation), unsure(loopStart)); 4428 4429 LBasicBlock lastNext = m_out.appendTo(loopStart, continuation); 4430 4431 LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart); 4432 LValue fixedArrayIndex = m_out.phi(pointerType(), fixedIndexStart); 4433 4434 LValue item = m_out.load64(m_out.baseIndex(m_heaps.JSFixedArray_buffer, fixedArray, fixedArrayIndex)); 4435 m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex)); 4436 4437 LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1)); 4438 LValue nextFixedArrayIndex = m_out.add(fixedArrayIndex, m_out.constIntPtr(1)); 4439 ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex); 4440 4441 m_out.addIncomingToPhi(fixedArrayIndex, m_out.anchor(nextFixedArrayIndex)); 4442 m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex)); 4443 4444 m_out.branch( 4445 m_out.below(nextFixedArrayIndex, fixedArraySize), 4446 unsure(loopStart), unsure(continuation)); 4447 4448 m_out.appendTo(continuation, lastNext); 4449 index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish); 4450 } 4394 4451 } else { 4395 4452 IndexedAbstractHeap& heap = m_heaps.indexedContiguousProperties; … … 4429 4486 void compileSpread() 
4430 4487 { 4488 // It would be trivial to support this, but for now, we never create 4489 // IR that would necessitate this. The reason is that Spread is only 4490 // consumed by NewArrayWithSpread and never anything else. Also, any 4491 // Spread(PhantomCreateRest) will turn into PhantomSpread(PhantomCreateRest). 4492 RELEASE_ASSERT(m_node->child1()->op() != PhantomCreateRest); 4493 4431 4494 LValue argument = lowCell(m_node->child1()); 4432 4495 … … 6132 6195 } 6133 6196 6197 void compileCallOrConstructVarargsSpread() 6198 { 6199 Node* node = m_node; 6200 LValue jsCallee = lowJSValue(m_node->child1()); 6201 LValue thisArg = lowJSValue(m_node->child2()); 6202 6203 RELEASE_ASSERT(node->child3()->op() == PhantomNewArrayWithSpread); 6204 Node* arrayWithSpread = node->child3().node(); 6205 BitVector* bitVector = arrayWithSpread->bitVector(); 6206 unsigned numNonSpreadParameters = 0; 6207 Vector<LValue, 2> spreadLengths; 6208 Vector<LValue, 8> patchpointArguments; 6209 HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths; 6210 6211 for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) { 6212 if (bitVector->get(i)) { 6213 Node* spread = m_graph.varArgChild(arrayWithSpread, i).node(); 6214 RELEASE_ASSERT(spread->op() == PhantomSpread); 6215 RELEASE_ASSERT(spread->child1()->op() == PhantomCreateRest); 6216 InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame; 6217 unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip(); 6218 LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () { 6219 return m_out.zeroExtPtr(getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip)); 6220 }).iterator->value; 6221 patchpointArguments.append(length); 6222 spreadLengths.append(length); 6223 } else { 6224 ++numNonSpreadParameters; 6225 LValue argument = lowJSValue(m_graph.varArgChild(arrayWithSpread, i)); 6226 patchpointArguments.append(argument); 6227 } 6228 } 6229 6230 LValue argumentCountIncludingThis = m_out.constIntPtr(numNonSpreadParameters + 1); 6231 for (LValue length : spreadLengths) 6232 argumentCountIncludingThis = m_out.add(length, argumentCountIncludingThis); 6233 6234 PatchpointValue* patchpoint = m_out.patchpoint(Int64); 6235 6236 patchpoint->append(jsCallee, ValueRep::reg(GPRInfo::regT0)); 6237 patchpoint->append(thisArg, ValueRep::WarmAny); 6238 patchpoint->append(argumentCountIncludingThis, ValueRep::WarmAny); 6239 patchpoint->appendVectorWithRep(patchpointArguments, ValueRep::WarmAny); 6240 patchpoint->append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister)); 6241 patchpoint->append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister)); 6242 6243 RefPtr<PatchpointExceptionHandle> exceptionHandle = preparePatchpointForExceptions(patchpoint); 6244 6245 patchpoint->clobber(RegisterSet::macroScratchRegisters()); 6246 patchpoint->clobber(RegisterSet::volatileRegistersForJSCall()); // No inputs will be in a volatile register. 6247 patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR); 6248 6249 patchpoint->numGPScratchRegisters = 0; 6250 6251 // This is the minimum amount of call arg area stack space that all JS->JS calls always have. 
6252 unsigned minimumJSCallAreaSize = 6253 sizeof(CallerFrameAndPC) + 6254 WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(EncodedJSValue)); 6255 6256 m_proc.requestCallArgAreaSizeInBytes(minimumJSCallAreaSize); 6257 6258 CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite(); 6259 State* state = &m_ftlState; 6260 patchpoint->setGenerator( 6261 [=] (CCallHelpers& jit, const StackmapGenerationParams& params) { 6262 AllowMacroScratchRegisterUsage allowScratch(jit); 6263 CallSiteIndex callSiteIndex = 6264 state->jitCode->common.addUniqueCallSiteIndex(codeOrigin); 6265 6266 Box<CCallHelpers::JumpList> exceptions = 6267 exceptionHandle->scheduleExitCreation(params)->jumps(jit); 6268 6269 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex); 6270 6271 jit.store32( 6272 CCallHelpers::TrustedImm32(callSiteIndex.bits()), 6273 CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount))); 6274 6275 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo(); 6276 6277 RegisterSet usedRegisters = RegisterSet::allRegisters(); 6278 usedRegisters.exclude(RegisterSet::volatileRegistersForJSCall()); 6279 GPRReg calleeGPR = params[1].gpr(); 6280 usedRegisters.set(calleeGPR); 6281 6282 ScratchRegisterAllocator allocator(usedRegisters); 6283 GPRReg scratchGPR1 = allocator.allocateScratchGPR(); 6284 GPRReg scratchGPR2 = allocator.allocateScratchGPR(); 6285 GPRReg scratchGPR3 = allocator.allocateScratchGPR(); 6286 GPRReg scratchGPR4 = allocator.allocateScratchGPR(); 6287 RELEASE_ASSERT(!allocator.numberOfReusedRegisters()); 6288 6289 auto getValueFromRep = [&] (B3::ValueRep rep, GPRReg result) { 6290 ASSERT(!usedRegisters.get(result)); 6291 6292 if (rep.isConstant()) { 6293 jit.move(CCallHelpers::Imm64(rep.value()), result); 6294 return; 6295 } 6296 6297 // Note: in this function, we only request 64 bit values. 
6298 if (rep.isStack()) { 6299 jit.load64( 6300 CCallHelpers::Address(GPRInfo::callFrameRegister, rep.offsetFromFP()), 6301 result); 6302 return; 6303 } 6304 6305 RELEASE_ASSERT(rep.isGPR()); 6306 ASSERT(usedRegisters.get(rep.gpr())); 6307 jit.move(rep.gpr(), result); 6308 }; 6309 6310 auto callWithExceptionCheck = [&] (void* callee) { 6311 jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR); 6312 jit.call(GPRInfo::nonPreservedNonArgumentGPR); 6313 exceptions->append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth)); 6314 }; 6315 6316 auto adjustStack = [&] (GPRReg amount) { 6317 jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), amount, CCallHelpers::stackPointerRegister); 6318 }; 6319 6320 CCallHelpers::JumpList slowCase; 6321 unsigned originalStackHeight = params.proc().frameSize(); 6322 6323 { 6324 unsigned numUsedSlots = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), originalStackHeight / sizeof(EncodedJSValue)); 6325 B3::ValueRep argumentCountIncludingThisRep = params[3]; 6326 getValueFromRep(argumentCountIncludingThisRep, scratchGPR2); 6327 slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR2, CCallHelpers::TrustedImm32(JSC::maxArguments + 1))); 6328 6329 jit.move(scratchGPR2, scratchGPR1); 6330 jit.addPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(numUsedSlots + CallFrame::headerSizeInRegisters)), scratchGPR1); 6331 // scratchGPR1 now has the required frame size in Register units 6332 // Round scratchGPR1 to next multiple of stackAlignmentRegisters() 6333 jit.addPtr(CCallHelpers::TrustedImm32(stackAlignmentRegisters() - 1), scratchGPR1); 6334 jit.andPtr(CCallHelpers::TrustedImm32(~(stackAlignmentRegisters() - 1)), scratchGPR1); 6335 jit.negPtr(scratchGPR1); 6336 jit.lshiftPtr(CCallHelpers::Imm32(3), scratchGPR1); 6337 jit.addPtr(GPRInfo::callFrameRegister, scratchGPR1); 6338 6339 jit.store32(scratchGPR2, CCallHelpers::Address(scratchGPR1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset)); 6340 6341 int storeOffset = CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register)); 6342 6343 for (unsigned i = arrayWithSpread->numChildren(); i--; ) { 6344 unsigned paramsOffset = 4; 6345 6346 if (bitVector->get(i)) { 6347 Node* spread = state->graph.varArgChild(arrayWithSpread, i).node(); 6348 RELEASE_ASSERT(spread->op() == PhantomSpread); 6349 RELEASE_ASSERT(spread->child1()->op() == PhantomCreateRest); 6350 InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame; 6351 6352 unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip(); 6353 6354 B3::ValueRep numArgumentsToCopy = params[paramsOffset + i]; 6355 getValueFromRep(numArgumentsToCopy, scratchGPR3); 6356 int loadOffset = (AssemblyHelpers::argumentsStart(inlineCallFrame).offset() + numberOfArgumentsToSkip) * static_cast<int>(sizeof(Register)); 6357 6358 auto done = jit.branchTestPtr(MacroAssembler::Zero, scratchGPR3); 6359 auto loopStart = jit.label(); 6360 jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR3); 6361 jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR2); 6362 jit.load64(CCallHelpers::BaseIndex(GPRInfo::callFrameRegister, scratchGPR3, CCallHelpers::TimesEight, loadOffset), scratchGPR4); 6363 jit.store64(scratchGPR4, 6364 CCallHelpers::BaseIndex(scratchGPR1, scratchGPR2, CCallHelpers::TimesEight, storeOffset)); 6365 jit.branchTestPtr(CCallHelpers::NonZero, 
scratchGPR3).linkTo(loopStart, &jit); 6366 done.link(&jit); 6367 } else { 6368 jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR2); 6369 getValueFromRep(params[paramsOffset + i], scratchGPR3); 6370 jit.store64(scratchGPR3, 6371 CCallHelpers::BaseIndex(scratchGPR1, scratchGPR2, CCallHelpers::TimesEight, storeOffset)); 6372 } 6373 } 6374 } 6375 6376 { 6377 CCallHelpers::Jump dontThrow = jit.jump(); 6378 slowCase.link(&jit); 6379 jit.setupArgumentsExecState(); 6380 callWithExceptionCheck(bitwise_cast<void*>(operationThrowStackOverflowForVarargs)); 6381 jit.abortWithReason(DFGVarargsThrowingPathDidNotThrow); 6382 6383 dontThrow.link(&jit); 6384 } 6385 6386 adjustStack(scratchGPR1); 6387 6388 ASSERT(calleeGPR == GPRInfo::regT0); 6389 jit.store64(calleeGPR, CCallHelpers::calleeFrameSlot(CallFrameSlot::callee)); 6390 getValueFromRep(params[2], scratchGPR3); 6391 jit.store64(scratchGPR3, CCallHelpers::calleeArgumentSlot(0)); 6392 6393 CallLinkInfo::CallType callType; 6394 if (node->op() == ConstructVarargs || node->op() == ConstructForwardVarargs) 6395 callType = CallLinkInfo::ConstructVarargs; 6396 else if (node->op() == TailCallVarargs || node->op() == TailCallForwardVarargs) 6397 callType = CallLinkInfo::TailCallVarargs; 6398 else 6399 callType = CallLinkInfo::CallVarargs; 6400 6401 bool isTailCall = CallLinkInfo::callModeFor(callType) == CallMode::Tail; 6402 6403 CCallHelpers::DataLabelPtr targetToCheck; 6404 CCallHelpers::Jump slowPath = jit.branchPtrWithPatch( 6405 CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck, 6406 CCallHelpers::TrustedImmPtr(nullptr)); 6407 6408 CCallHelpers::Call fastCall; 6409 CCallHelpers::Jump done; 6410 6411 if (isTailCall) { 6412 jit.emitRestoreCalleeSaves(); 6413 jit.prepareForTailCallSlow(); 6414 fastCall = jit.nearTailCall(); 6415 } else { 6416 fastCall = jit.nearCall(); 6417 done = jit.jump(); 6418 } 6419 6420 slowPath.link(&jit); 6421 6422 if (isTailCall) 6423 jit.emitRestoreCalleeSaves(); 6424 ASSERT(!usedRegisters.get(GPRInfo::regT2)); 6425 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); 6426 CCallHelpers::Call slowCall = jit.nearCall(); 6427 6428 if (isTailCall) 6429 jit.abortWithReason(JITDidReturnFromTailCall); 6430 else 6431 done.link(&jit); 6432 6433 callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0); 6434 6435 jit.addPtr( 6436 CCallHelpers::TrustedImm32(-originalStackHeight), 6437 GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister); 6438 6439 jit.addLinkTask( 6440 [=] (LinkBuffer& linkBuffer) { 6441 MacroAssemblerCodePtr linkCall = 6442 linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code(); 6443 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 6444 6445 callLinkInfo->setCallLocations( 6446 CodeLocationLabel(linkBuffer.locationOfNearCall(slowCall)), 6447 CodeLocationLabel(linkBuffer.locationOf(targetToCheck)), 6448 linkBuffer.locationOfNearCall(fastCall)); 6449 }); 6450 }); 6451 6452 switch (node->op()) { 6453 case TailCallForwardVarargs: 6454 m_out.unreachable(); 6455 break; 6456 6457 default: 6458 setJSValue(patchpoint); 6459 break; 6460 } 6461 } 6462 6134 6463 void compileCallOrConstructVarargs() 6135 6464 { … … 6158 6487 break; 6159 6488 } 6489 6490 if (forwarding && m_node->child3() && m_node->child3()->op() == PhantomNewArrayWithSpread) { 6491 compileCallOrConstructVarargsSpread(); 6492 return; 6493 } 6494 6160 6495 6161 6496 PatchpointValue* patchpoint = m_out.patchpoint(Int64); … … 6531 6866 void compileForwardVarargs() 6532 6867 { 6868 
if (m_node->child1() && m_node->child1()->op() == PhantomNewArrayWithSpread) { 6869 compileForwardVarargsWithSpread(); 6870 return; 6871 } 6872 6533 6873 LoadVarargsData* data = m_node->loadVarargsData(); 6534 6874 InlineCallFrame* inlineCallFrame; … … 6616 6956 m_out.branch(m_out.isNull(currentIndex), unsure(continuation), unsure(mainLoop)); 6617 6957 6958 m_out.appendTo(continuation, lastNext); 6959 } 6960 6961 LValue getSpreadLengthFromInlineCallFrame(InlineCallFrame* inlineCallFrame, unsigned numberOfArgumentsToSkip) 6962 { 6963 ArgumentsLength argumentsLength = getArgumentsLength(inlineCallFrame); 6964 if (argumentsLength.isKnown) { 6965 unsigned knownLength = argumentsLength.known; 6966 if (knownLength >= numberOfArgumentsToSkip) 6967 knownLength = knownLength - numberOfArgumentsToSkip; 6968 else 6969 knownLength = 0; 6970 return m_out.constInt32(knownLength); 6971 } 6972 6973 6974 // We need to perform the same logical operation as the code above, but through dynamic operations. 6975 if (!numberOfArgumentsToSkip) 6976 return argumentsLength.value; 6977 6978 LBasicBlock isLarger = m_out.newBlock(); 6979 LBasicBlock continuation = m_out.newBlock(); 6980 6981 ValueFromBlock smallerOrEqualLengthResult = m_out.anchor(m_out.constInt32(0)); 6982 m_out.branch( 6983 m_out.above(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip)), unsure(isLarger), unsure(continuation)); 6984 LBasicBlock lastNext = m_out.appendTo(isLarger, continuation); 6985 ValueFromBlock largerLengthResult = m_out.anchor(m_out.sub(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip))); 6986 m_out.jump(continuation); 6987 6988 m_out.appendTo(continuation, lastNext); 6989 return m_out.phi(Int32, smallerOrEqualLengthResult, largerLengthResult); 6990 } 6991 6992 void compileForwardVarargsWithSpread() 6993 { 6994 HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths; 6995 6996 Node* arrayWithSpread = m_node->child1().node(); 6997 RELEASE_ASSERT(arrayWithSpread->op() == PhantomNewArrayWithSpread); 6998 BitVector* bitVector = arrayWithSpread->bitVector(); 6999 7000 unsigned numberOfStaticArguments = 0; 7001 Vector<LValue, 2> spreadLengths; 7002 for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) { 7003 if (bitVector->get(i)) { 7004 Node* child = m_graph.varArgChild(arrayWithSpread, i).node(); 7005 ASSERT(child->op() == PhantomSpread); 7006 ASSERT(child->child1()->op() == PhantomCreateRest); 7007 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 7008 LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () { 7009 return getSpreadLengthFromInlineCallFrame(inlineCallFrame, child->child1()->numberOfArgumentsToSkip()); 7010 }).iterator->value; 7011 spreadLengths.append(length); 7012 } else 7013 ++numberOfStaticArguments; 7014 } 7015 7016 LValue lengthIncludingThis = m_out.constInt32(1 + numberOfStaticArguments); 7017 for (LValue length : spreadLengths) 7018 lengthIncludingThis = m_out.add(lengthIncludingThis, length); 7019 7020 LoadVarargsData* data = m_node->loadVarargsData(); 7021 speculate( 7022 VarargsOverflow, noValue(), nullptr, 7023 m_out.above(lengthIncludingThis, m_out.constInt32(data->limit))); 7024 7025 m_out.store32(lengthIncludingThis, payloadFor(data->machineCount)); 7026 7027 LValue targetStart = addressFor(data->machineStart).value(); 7028 LValue storeIndex = m_out.constIntPtr(0); 7029 for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) { 7030 if 
(bitVector->get(i)) { 7031 Node* child = m_graph.varArgChild(arrayWithSpread, i).node(); 7032 RELEASE_ASSERT(child->op() == PhantomSpread); 7033 RELEASE_ASSERT(child->child1()->op() == PhantomCreateRest); 7034 InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame; 7035 7036 LValue sourceStart = getArgumentsStart(inlineCallFrame, child->child1()->numberOfArgumentsToSkip()); 7037 LValue spreadLength = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame)); 7038 7039 LBasicBlock loop = m_out.newBlock(); 7040 LBasicBlock continuation = m_out.newBlock(); 7041 ValueFromBlock startLoadIndex = m_out.anchor(m_out.constIntPtr(0)); 7042 ValueFromBlock startStoreIndex = m_out.anchor(storeIndex); 7043 ValueFromBlock startStoreIndexForEnd = m_out.anchor(storeIndex); 7044 7045 m_out.branch(m_out.isZero64(spreadLength), unsure(continuation), unsure(loop)); 7046 7047 LBasicBlock lastNext = m_out.appendTo(loop, continuation); 7048 LValue loopStoreIndex = m_out.phi(Int64, startStoreIndex); 7049 LValue loadIndex = m_out.phi(Int64, startLoadIndex); 7050 LValue value = m_out.load64( 7051 m_out.baseIndex(m_heaps.variables, sourceStart, loadIndex)); 7052 m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, loopStoreIndex)); 7053 LValue nextLoadIndex = m_out.add(m_out.constIntPtr(1), loadIndex); 7054 m_out.addIncomingToPhi(loadIndex, m_out.anchor(nextLoadIndex)); 7055 LValue nextStoreIndex = m_out.add(m_out.constIntPtr(1), loopStoreIndex); 7056 m_out.addIncomingToPhi(loopStoreIndex, m_out.anchor(nextStoreIndex)); 7057 ValueFromBlock loopStoreIndexForEnd = m_out.anchor(nextStoreIndex); 7058 m_out.branch(m_out.below(nextLoadIndex, spreadLength), unsure(loop), unsure(continuation)); 7059 7060 m_out.appendTo(continuation, lastNext); 7061 storeIndex = m_out.phi(Int64, startStoreIndexForEnd, loopStoreIndexForEnd); 7062 } else { 7063 LValue value = lowJSValue(m_graph.varArgChild(arrayWithSpread, i)); 7064 m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, storeIndex)); 7065 storeIndex = m_out.add(m_out.constIntPtr(1), storeIndex); 7066 } 7067 } 7068 7069 LBasicBlock undefinedLoop = m_out.newBlock(); 7070 LBasicBlock continuation = m_out.newBlock(); 7071 7072 ValueFromBlock startStoreIndex = m_out.anchor(storeIndex); 7073 LValue loopBoundValue = m_out.constIntPtr(data->mandatoryMinimum); 7074 m_out.branch(m_out.below(storeIndex, loopBoundValue), 7075 unsure(undefinedLoop), unsure(continuation)); 7076 7077 LBasicBlock lastNext = m_out.appendTo(undefinedLoop, continuation); 7078 LValue loopStoreIndex = m_out.phi(Int64, startStoreIndex); 7079 m_out.store64( 7080 m_out.constInt64(JSValue::encode(jsUndefined())), 7081 m_out.baseIndex(m_heaps.variables, targetStart, loopStoreIndex)); 7082 LValue nextIndex = m_out.add(loopStoreIndex, m_out.constIntPtr(1)); 7083 m_out.addIncomingToPhi(loopStoreIndex, m_out.anchor(nextIndex)); 7084 m_out.branch( 7085 m_out.below(nextIndex, loopBoundValue), unsure(undefinedLoop), unsure(continuation)); 7086 6618 7087 m_out.appendTo(continuation, lastNext); 6619 7088 } -
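For readers tracing the register arithmetic emitted by compileCallOrConstructVarargsSpread above, here is a minimal scalar sketch of the frame-start computation that ends up in scratchGPR1. The function and parameter names are illustrative only and are not part of the patch; it assumes a 64-bit target where sizeof(Register) == sizeof(EncodedJSValue) == 8 and a power-of-two stackAlignmentRegisters().

```
#include <cstddef>
#include <cstdint>

// Scalar model of the new-frame computation emitted into scratchGPR1 above.
// All quantities are in 8-byte Register units until the final scaling; the
// emitted code does the same work with addPtr/andPtr/negPtr/lshiftPtr.
static inline char* modelSpreadCalleeFrameStart(
    char* callFrameRegister,
    size_t originalStackHeightInBytes,  // params.proc().frameSize()
    size_t argumentCountIncludingThis,  // params[3], already range-checked against maxArguments
    size_t headerSizeInRegisters,       // CallFrame::headerSizeInRegisters
    size_t alignmentRegisters)          // stackAlignmentRegisters(), a power of two
{
    auto roundUp = [&] (size_t n) {
        return (n + alignmentRegisters - 1) & ~(alignmentRegisters - 1);
    };

    size_t numUsedSlots = roundUp(originalStackHeightInBytes / sizeof(uint64_t));
    size_t frameSizeInRegisters =
        roundUp(argumentCountIncludingThis + numUsedSlots + headerSizeInRegisters);

    // negPtr + lshiftPtr(3) + addPtr(callFrameRegister): the callee frame sits
    // below the caller frame, scaled from Register units to bytes.
    return callFrameRegister - frameSizeInRegisters * sizeof(uint64_t);
}
```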
trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp
r208761 r209121 36 36 #include "JSAsyncFunction.h" 37 37 #include "JSCInlines.h" 38 #include "JSFixedArray.h" 38 39 #include "JSGeneratorFunction.h" 39 40 #include "JSLexicalEnvironment.h" … … 87 88 case PhantomClonedArguments: 88 89 case PhantomCreateRest: 90 case PhantomSpread: 91 case PhantomNewArrayWithSpread: 89 92 // Those are completely handled by operationMaterializeObjectInOSR 90 93 break; … … 394 397 } 395 398 #endif 396 397 399 return array; 398 400 } 401 399 402 default: 400 403 RELEASE_ASSERT_NOT_REACHED(); … … 402 405 } 403 406 } 407 408 case PhantomSpread: { 409 JSArray* array = nullptr; 410 for (unsigned i = materialization->properties().size(); i--;) { 411 const ExitPropertyValue& property = materialization->properties()[i]; 412 if (property.location().kind() == SpreadPLoc) { 413 array = jsCast<JSArray*>(JSValue::decode(values[i])); 414 break; 415 } 416 } 417 RELEASE_ASSERT(array); 418 419 // Note: it is sound for JSFixedArray::createFromArray to call getDirectIndex here 420 // because we're guaranteed we won't be calling any getters. The reason for this is 421 // that we only support PhantomSpread over CreateRest, which is an array we create. 422 // Any attempts to put a getter on any indices on the rest array will escape the array. 423 JSFixedArray* fixedArray = JSFixedArray::createFromArray(exec, vm, array); 424 return fixedArray; 425 } 426 427 case PhantomNewArrayWithSpread: { 428 CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock( 429 materialization->origin(), exec->codeBlock()); 430 JSGlobalObject* globalObject = codeBlock->globalObject(); 431 Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous); 432 433 unsigned arraySize = 0; 434 unsigned numProperties = 0; 435 for (unsigned i = materialization->properties().size(); i--;) { 436 const ExitPropertyValue& property = materialization->properties()[i]; 437 if (property.location().kind() == NewArrayWithSpreadArgumentPLoc) { 438 ++numProperties; 439 JSValue value = JSValue::decode(values[i]); 440 if (JSFixedArray* fixedArray = jsDynamicCast<JSFixedArray*>(value)) 441 arraySize += fixedArray->size(); 442 else 443 arraySize += 1; 444 } 445 } 446 447 JSArray* result = JSArray::tryCreateUninitialized(vm, structure, arraySize); 448 RELEASE_ASSERT(result); 449 450 #if !ASSERT_DISABLED 451 // Ensure we see indices for everything in the range: [0, numProperties) 452 for (unsigned i = 0; i < numProperties; ++i) { 453 bool found = false; 454 for (unsigned j = 0; j < materialization->properties().size(); ++j) { 455 const ExitPropertyValue& property = materialization->properties()[j]; 456 if (property.location().kind() == NewArrayWithSpreadArgumentPLoc && property.location().info() == i) { 457 found = true; 458 break; 459 } 460 } 461 ASSERT(found); 462 } 463 #endif 464 465 Vector<JSValue, 8> arguments; 466 arguments.grow(numProperties); 467 468 for (unsigned i = materialization->properties().size(); i--;) { 469 const ExitPropertyValue& property = materialization->properties()[i]; 470 if (property.location().kind() == NewArrayWithSpreadArgumentPLoc) { 471 JSValue value = JSValue::decode(values[i]); 472 RELEASE_ASSERT(property.location().info() < numProperties); 473 arguments[property.location().info()] = value; 474 } 475 } 476 477 unsigned arrayIndex = 0; 478 for (JSValue value : arguments) { 479 if (JSFixedArray* fixedArray = jsDynamicCast<JSFixedArray*>(value)) { 480 for (unsigned i = 0; i < fixedArray->size(); i++) { 481 ASSERT(fixedArray->get(i)); 482 
result->initializeIndex(vm, arrayIndex, fixedArray->get(i)); 483 ++arrayIndex; 484 } 485 } else { 486 // We are not spreading. 487 result->initializeIndex(vm, arrayIndex, value); 488 ++arrayIndex; 489 } 490 } 491 492 return result; 493 } 494 404 495 405 496 default: -
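The PhantomNewArrayWithSpread materialization above sizes the result in one pass and flattens it in a second. A minimal model in plain C++, with std:: containers standing in for JSC types (all names here are illustrative, not JSC API):

```
#include <cstddef>
#include <vector>

// Each OSR-exit argument is either a single value or an already-materialized
// spread (the JSFixedArray); values are modeled as int for brevity.
struct ModelArgument {
    bool isSpread = false;
    std::vector<int> values; // exactly one element when !isSpread
};

static std::vector<int> modelMaterializeNewArrayWithSpread(const std::vector<ModelArgument>& arguments)
{
    // First pass: compute the final length, mirroring the arraySize loop above.
    size_t arraySize = 0;
    for (const ModelArgument& argument : arguments)
        arraySize += argument.isSpread ? argument.values.size() : 1;

    // Second pass: copy in argument order, spreading each fixed array in place.
    std::vector<int> result;
    result.reserve(arraySize);
    for (const ModelArgument& argument : arguments) {
        if (argument.isSpread)
            result.insert(result.end(), argument.values.begin(), argument.values.end());
        else
            result.push_back(argument.values.front());
    }
    return result;
}
```

For example, if the exit values arrive as a single value, a three-element fixed array, and another single value, arraySize is 5 and the result is their in-order concatenation, matching the arrayIndex loop above.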
trunk/Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
r208235 r209121 79 79 endVarArgs.link(&jit); 80 80 } 81 slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR1, CCallHelpers::TrustedImm32( maxArguments + 1)));81 slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR1, CCallHelpers::TrustedImm32(JSC::maxArguments + 1))); 82 82 83 83 emitSetVarargsFrame(jit, scratchGPR1, true, numUsedSlotsGPR, scratchGPR2); -
trunk/Source/JavaScriptCore/jsc.cpp
r209083 r209121 991 991 #endif 992 992 993 static EncodedJSValue JSC_HOST_CALL functionMaxArguments(ExecState*); 994 993 995 #if ENABLE(WEBASSEMBLY) 994 996 static EncodedJSValue JSC_HOST_CALL functionTestWasmModuleFunctions(ExecState*); … … 1242 1244 addFunction(vm, "samplingProfilerStackTraces", functionSamplingProfilerStackTraces, 0); 1243 1245 #endif 1246 1247 addFunction(vm, "maxArguments", functionMaxArguments, 0); 1244 1248 1245 1249 #if ENABLE(WEBASSEMBLY) … … 2485 2489 #endif // ENABLE(SAMPLING_PROFILER) 2486 2490 2491 EncodedJSValue JSC_HOST_CALL functionMaxArguments(ExecState*) 2492 { 2493 return JSValue::encode(jsNumber(JSC::maxArguments)); 2494 } 2495 2487 2496 #if ENABLE(WEBASSEMBLY) 2488 2497 -
trunk/Source/JavaScriptCore/runtime/JSFixedArray.h
r208637 r209121 80 80 // however, if we do that, we ensure we're calling in with an array with all self properties between 81 81 // [0, length). 82 ASSERT(array->globalObject()->isArrayIteratorProtocolFastAndNonObservable()); 82 // 83 // We may also call into this during OSR exit to materialize a phantom fixed array. 84 // We may be creating a fixed array during OSR exit even after the iterator protocol changed. 85 // But, when the phantom would have logically been created, the protocol hadn't been 86 // changed. Therefore, it is sound to assume empty indices are jsUndefined(). 83 87 value = jsUndefined(); 84 88 }
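A rough model of the hole handling the amended comment describes, with illustrative std::-style types rather than JSC API: a hole in the rest array's indexed storage is copied as undefined, and no getter can run, because installing a getter on the rest array would have escaped it and prevented the phantom in the first place.

```
#include <vector>

// Illustrative stand-ins: a default-constructed ModelJSValue plays the role of
// undefined, and a false `present` flag plays the role of a hole.
struct ModelJSValue {
    bool isUndefined = true;
    int number = 0;
};

struct MaybeHole {
    bool present = false;
    ModelJSValue value;
};

// Copy a rest array's indexed storage into a flat fixed array. A hole becomes
// undefined directly; there is no prototype walk and no getter invocation.
static std::vector<ModelJSValue> modelCreateFixedArrayFromRestArray(const std::vector<MaybeHole>& restArray)
{
    std::vector<ModelJSValue> fixedArray;
    fixedArray.reserve(restArray.size());
    for (const MaybeHole& slot : restArray)
        fixedArray.push_back(slot.present ? slot.value : ModelJSValue());
    return fixedArray;
}
```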