Changeset 197861 in webkit
- Timestamp: Mar 9, 2016 9:51:38 AM
- Location: trunk/Source/JavaScriptCore
- Files: 18 edited
trunk/Source/JavaScriptCore/ChangeLog
2016-03-09  Benjamin Poulain  <benjamin@webkit.org>

        [JSC] Pick how to OSR Enter to FTL at runtime instead of compile time
        https://bugs.webkit.org/show_bug.cgi?id=155217

        Reviewed by Filip Pizlo.

        This patch addresses two types of problems with tiering up to FTL
        with OSR Entry in a loop:
        - When there are nested loops, it is generally valuable to enter
          an outer loop rather than an inner loop.
        - When tiering up at a point that cannot OSR Enter, we are at
          the mercy of the outer loop frequency to compile the right
          entry point.

        The first case is significant in the test "gaussian-blur".
        That test has 4 nested loops. When we have an OSR Entry,
        the analysis phases have to be pessimistic about where we enter:
        we do not really know what constraints can be proven from
        the DFG code that was running.

        In "gaussian-blur", integer-range analysis removes pretty
        much all overflow checks in the inner loops of where we entered.
        The further out we enter, the better the code we generate.

        Since we spend the most iterations in the inner loop, we naturally
        tend to OSR Enter into the two innermost loops, making the most
        pessimistic assumptions.

        To avoid such problems, I changed how we decide where to OSR Enter.
        Previously, the last CheckTierUpAndOSREnter to cross the threshold
        was where we took the entry point for FTL.

        What happens now is that the entry point is not decided when
        compiling the CheckTierUp variants. Instead, all the information
        we need is gathered during compilation and kept on the JITCode
        to be used at runtime.

        When we try to tier up and decide to OSR Enter, we use the information
        we have to pick a good outer loop for OSR Entry.

        The problem then is that outer loops do not run CheckTierUpAndOSREnter
        often, wasting several milliseconds before entering the newly compiled
        FTL code.

        To solve that, every CheckTierUpAndOSREnter has its own trigger that
        bypasses the counter. When the FTL code is compiled, the trigger is set
        and we enter through the right CheckTierUpAndOSREnter immediately.

        ---

        This new mechanism also solves a problem of ai-astar.
        When we tried to tier up in ai-astar, we had nothing to compile until
        the outer loop was reached.

        To make sure we reached the CheckTierUpAndOSREnter in a reasonable time,
        we had CheckTierUpWithNestedTriggerAndOSREnter with a special trigger.

        With the new mechanism, we can do much better:
        - When we keep hitting CheckTierUpInLoop, we now have all the information
          we need to start compiling the outer loop right away.
          Instead of waiting for the outer loop to be reached a few times, we
          compile it as soon as the inner loop is hammering CheckTierUpInLoop.
        - With the new triggers, the very next time we hit the outer loop, we
          OSR Enter.

        This allows us to compile what we need sooner and enter sooner.

        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects): Deleted.
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize): Deleted.
        * dfg/DFGDoesGC.cpp:
        (JSC::DFG::doesGC): Deleted.
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::fixupNode): Deleted.
        * dfg/DFGJITCode.h:
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::JITCompiler):
        (JSC::DFG::JITCompiler::compileEntryExecutionFlag):
        * dfg/DFGNodeType.h:
        * dfg/DFGOperations.cpp:
        * dfg/DFGOperations.h:
        * dfg/DFGPlan.h:
        (JSC::DFG::Plan::canTierUpAndOSREnter):
        * dfg/DFGPredictionPropagationPhase.cpp:
        (JSC::DFG::PredictionPropagationPhase::propagate): Deleted.
        * dfg/DFGSafeToExecute.h:
        (JSC::DFG::safeToExecute): Deleted.
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::compile): Deleted.
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGTierUpCheckInjectionPhase.cpp:
        (JSC::DFG::TierUpCheckInjectionPhase::run):
        (JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):
        (JSC::DFG::TierUpCheckInjectionPhase::findLoopsContainingLoopHintWithoutOSREnter): Deleted.
        * dfg/DFGToFTLForOSREntryDeferredCompilationCallback.cpp:
        (JSC::DFG::ToFTLForOSREntryDeferredCompilationCallback::ToFTLForOSREntryDeferredCompilationCallback):
        (JSC::DFG::ToFTLForOSREntryDeferredCompilationCallback::create):
        (JSC::DFG::ToFTLForOSREntryDeferredCompilationCallback::compilationDidBecomeReadyAsynchronously):
        (JSC::DFG::ToFTLForOSREntryDeferredCompilationCallback::compilationDidComplete):
        * dfg/DFGToFTLForOSREntryDeferredCompilationCallback.h:

2016-03-08  Filip Pizlo  <fpizlo@apple.com>
        …
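The per-entry-point trigger described above can be sketched as follows. This is a hypothetical, simplified model of the mechanism, not JSC code: `LoopEntrySite`, its fields, and the threshold value are illustrative names standing in for a CheckTierUpAndOSREnter site with its one-byte trigger (`tierUpEntryTriggers` value) and the tier-up execution counter.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical model of a CheckTierUpAndOSREnter site: a one-byte trigger,
// settable by the concurrent compiler thread, bypasses the normal execution
// counter so the loop attempts OSR entry on its very next iteration.
struct LoopEntrySite {
    uint8_t forceEntryTrigger { 0 }; // set when the FTL OSR-entry code is ready
    int counter { 0 };               // normal tier-up execution counter
    int threshold { 1000 };          // illustrative threshold value

    // Returns true when this loop iteration should attempt OSR entry.
    bool shouldTryOSREntry()
    {
        if (forceEntryTrigger) // the trigger bypasses the counter entirely
            return true;
        return ++counter >= threshold;
    }
};
```

Without the trigger, an outer loop that iterates rarely would take many milliseconds to cross its counter threshold; with it, the first iteration after compilation completes enters the FTL code.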
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
(r197833 → r197861)

     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case LoopHint:
     case ZombieHint:
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case LoopHint:
     case Breakpoint:
trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case LoopHint:
     case StoreBarrier:
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case InvalidationPoint:
     case CheckArray:
trunk/Source/JavaScriptCore/dfg/DFGJITCode.h
(r197563 → r197861)

     DFG::MinifiedGraph minifiedDFG;
 #if ENABLE(FTL_JIT)
-    uint8_t nestedTriggerIsSet { 0 };
     uint8_t neverExecutedEntry { 1 };
+
     UpperTierExecutionCounter tierUpCounter;
+
+    // For osrEntryPoints that are in an inner loop, this maps their bytecode to the bytecode
+    // of the outer-loop entry points in order (from innermost to outermost).
+    //
+    // The key may not always be a target for OSR Entry but the list in the value is guaranteed
+    // to be usable for OSR Entry.
+    HashMap<unsigned, Vector<unsigned>> tierUpInLoopHierarchy;
+
+    // Map each bytecode of CheckTierUpAndOSREnter to its stream index.
+    HashMap<unsigned, unsigned, WTF::IntHash<unsigned>, WTF::UnsignedWithZeroKeyHashTraits<unsigned>> bytecodeIndexToStreamIndex;
+
+    // Map each bytecode of CheckTierUpAndOSREnter to its trigger forcing OSR Entry.
+    // This can never be modified after it has been initialized since the addresses of the triggers
+    // are used by the JIT.
+    HashMap<unsigned, uint8_t> tierUpEntryTriggers;
+
+    // Set of bytecodes that were the target of a TierUp operation.
+    HashSet<unsigned, WTF::IntHash<unsigned>, WTF::UnsignedWithZeroKeyHashTraits<unsigned>> tierUpEntrySeen;
+
     WriteBarrier<CodeBlock> m_osrEntryBlock;
     unsigned osrEntryRetry;
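The new JITCode side tables above work together at runtime: given the bytecode index of the loop we are spinning in, walk its outer-loop candidates (ordered innermost to outermost) and keep the outermost one whose CheckTierUpAndOSREnter has actually executed. The sketch below is an assumed simplification using std containers in place of WTF::HashMap/HashSet; `pickOSREntryBytecode` is an illustrative name for the candidate walk done inside the tier-up operation, not an actual JSC function.

```cpp
#include <map>
#include <set>
#include <vector>

// Model of the outer-loop candidate selection: tierUpInLoopHierarchy maps an
// inner-loop bytecode to its outer-loop entry points (innermost -> outermost);
// tierUpEntrySeen records which entry points have actually been reached.
unsigned pickOSREntryBytecode(
    unsigned originBytecodeIndex,
    const std::map<unsigned, std::vector<unsigned>>& tierUpInLoopHierarchy,
    const std::set<unsigned>& tierUpEntrySeen)
{
    unsigned entryBytecode = originBytecodeIndex;
    auto it = tierUpInLoopHierarchy.find(originBytecodeIndex);
    if (it != tierUpInLoopHierarchy.end()) {
        for (unsigned candidate : it->second) {
            // Later candidates are further out; prefer the outermost one
            // that has been seen executing, since entering an outer loop
            // lets the FTL prove stronger invariants for the inner loops.
            if (tierUpEntrySeen.count(candidate))
                entryBytecode = candidate;
        }
    }
    return entryBytecode;
}
```

Preferring the outermost seen candidate is what avoids the "gaussian-blur" problem of entering the two innermost loops with the most pessimistic assumptions.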
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
(r197159 → r197861)

     if (shouldDumpDisassembly() || m_graph.m_vm.m_perBytecodeProfiler)
         m_disassembler = std::make_unique<Disassembler>(dfg);
+#if ENABLE(FTL_JIT)
+    m_jitCode->tierUpInLoopHierarchy = WTFMove(m_graph.m_plan.tierUpInLoopHierarchy);
+    for (unsigned tierUpBytecode : m_graph.m_plan.tierUpAndOSREnterBytecodes)
+        m_jitCode->tierUpEntryTriggers.add(tierUpBytecode, 0);
+#endif
 }
…
 {
 #if ENABLE(FTL_JIT)
-    if (m_graph.m_plan.canTierUpAndOSREnter)
+    if (m_graph.m_plan.canTierUpAndOSREnter())
         store8(TrustedImm32(0), &m_jitCode->neverExecutedEntry);
 #endif // ENABLE(FTL_JIT)
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
(r197833 → r197861)

     macro(CheckTierUpInLoop, NodeMustGenerate) \
     macro(CheckTierUpAndOSREnter, NodeMustGenerate) \
-    macro(CheckTierUpWithNestedTriggerAndOSREnter, NodeMustGenerate) \
     macro(CheckTierUpAtReturn, NodeMustGenerate) \
trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp
(r197796 → r197861)

 }
 
-static void triggerTierUpNowCommon(ExecState* exec, bool inLoop)
+void JIT_OPERATION triggerTierUpNow(ExecState* exec)
 {
     VM* vm = &exec->vm();
…
             jitCode->tierUpCounter, "\n");
     }
-    if (inLoop)
-        jitCode->nestedTriggerIsSet = 1;
 
     if (shouldTriggerFTLCompile(codeBlock, jitCode))
         triggerFTLReplacementCompile(vm, codeBlock, jitCode);
-}
-
-void JIT_OPERATION triggerTierUpNow(ExecState* exec)
-{
-    triggerTierUpNowCommon(exec, false);
-}
-
-void JIT_OPERATION triggerTierUpNowInLoop(ExecState* exec)
-{
-    triggerTierUpNowCommon(exec, true);
-}
-
-char* JIT_OPERATION triggerOSREntryNow(
-    ExecState* exec, int32_t bytecodeIndex, int32_t streamIndex)
-{
-    VM* vm = &exec->vm();
-    NativeCallFrameTracer tracer(vm, exec);
-    DeferGC deferGC(vm->heap);
+
+    if (codeBlock->hasOptimizedReplacement()) {
+        if (jitCode->tierUpEntryTriggers.isEmpty()) {
+            // There is nothing more we can do, the only way this will be entered
+            // is through the function entry point.
+            jitCode->dontOptimizeAnytimeSoon(codeBlock);
+            return;
+        }
+        if (jitCode->osrEntryBlock() && jitCode->tierUpEntryTriggers.size() == 1) {
+            // There is only one outer loop and its trigger must have been set
+            // when the plan completed.
+            // Exiting the inner loop is useless, we can ignore the counter and let
+            // the trigger do its job.
+            jitCode->dontOptimizeAnytimeSoon(codeBlock);
+            return;
+        }
+    }
+}
+
+static char* tierUpCommon(ExecState* exec, unsigned originBytecodeIndex, unsigned osrEntryBytecodeIndex)
+{
+    VM* vm = &exec->vm();
     CodeBlock* codeBlock = exec->codeBlock();
-
-    if (codeBlock->jitType() != JITCode::DFGJIT) {
-        dataLog("Unexpected code block in DFG->FTL tier-up: ", *codeBlock, "\n");
-        RELEASE_ASSERT_NOT_REACHED();
-    }
-
-    JITCode* jitCode = codeBlock->jitCode()->dfg();
-    jitCode->nestedTriggerIsSet = 0;
-
-    if (Options::verboseOSR()) {
-        dataLog(
-            *codeBlock, ": Entered triggerOSREntryNow with executeCounter = ",
-            jitCode->tierUpCounter, "\n");
-    }
-
-    // - If we don't have an FTL code block, then try to compile one.
-    // - If we do have an FTL code block, then try to enter for a while.
-    // - If we couldn't enter for a while, then trigger OSR entry.
-
-    if (!shouldTriggerFTLCompile(codeBlock, jitCode))
-        return nullptr;
-
-    if (!jitCode->neverExecutedEntry) {
-        triggerFTLReplacementCompile(vm, codeBlock, jitCode);
-
-        if (!codeBlock->hasOptimizedReplacement())
-            return nullptr;
-
-        if (jitCode->osrEntryRetry < Options::ftlOSREntryRetryThreshold()) {
-            jitCode->osrEntryRetry++;
-            return nullptr;
-        }
-    }
-
-    // It's time to try to compile code for OSR entry.
+
+    // Resolve any pending plan for OSR Enter on this function.
     Worklist::State worklistState;
     if (Worklist* worklist = existingGlobalFTLWorklistOrNull()) {
…
     } else
         worklistState = Worklist::NotKnown;
 
+    JITCode* jitCode = codeBlock->jitCode()->dfg();
     if (worklistState == Worklist::Compiling) {
         jitCode->setOptimizationThresholdBasedOnCompilationResult(
…
         return nullptr;
     }
-
-    if (CodeBlock* entryBlock = jitCode->osrEntryBlock()) {
-        void* address = FTL::prepareOSREntry(
-            exec, codeBlock, entryBlock, bytecodeIndex, streamIndex);
-        if (address)
-            return static_cast<char*>(address);
-
-        if (jitCode->osrEntryRetry < Options::ftlOSREntryRetryThreshold()) {
-            jitCode->osrEntryRetry++;
-            return nullptr;
-        }
-
-        FTL::ForOSREntryJITCode* entryCode = entryBlock->jitCode()->ftlForOSREntry();
-        entryCode->countEntryFailure();
-        if (entryCode->entryFailureCount() <
-            Options::ftlOSREntryFailureCountForReoptimization()) {
-            jitCode->optimizeSoon(codeBlock);
-            return nullptr;
-        }
-
-        // OSR entry failed. Oh no! This implies that we need to retry. We retry
-        // without exponential backoff and we only do this for the entry code block.
-        jitCode->clearOSREntryBlock();
-        jitCode->osrEntryRetry = 0;
-        return nullptr;
-    }
-
+
     if (worklistState == Worklist::Compiled) {
         // This means that compilation failed and we already set the thresholds.
…
     }
 
+    // If we can OSR Enter, do it right away.
+    if (originBytecodeIndex == osrEntryBytecodeIndex) {
+        unsigned streamIndex = jitCode->bytecodeIndexToStreamIndex.get(originBytecodeIndex);
+        if (CodeBlock* entryBlock = jitCode->osrEntryBlock()) {
+            if (void* address = FTL::prepareOSREntry(exec, codeBlock, entryBlock, originBytecodeIndex, streamIndex))
+                return static_cast<char*>(address);
+        }
+    }
+
+    // - If we don't have an FTL code block, then try to compile one.
+    // - If we do have an FTL code block, then try to enter for a while.
+    // - If we couldn't enter for a while, then trigger OSR entry.
+
+    if (!shouldTriggerFTLCompile(codeBlock, jitCode))
+        return nullptr;
+
+    if (!jitCode->neverExecutedEntry) {
+        triggerFTLReplacementCompile(vm, codeBlock, jitCode);
+
+        if (!codeBlock->hasOptimizedReplacement())
+            return nullptr;
+
+        if (jitCode->osrEntryRetry < Options::ftlOSREntryRetryThreshold()) {
+            jitCode->osrEntryRetry++;
+            return nullptr;
+        }
+    }
+
+    // It's time to try to compile code for OSR entry.
+    if (CodeBlock* entryBlock = jitCode->osrEntryBlock()) {
+        if (jitCode->osrEntryRetry < Options::ftlOSREntryRetryThreshold()) {
+            jitCode->osrEntryRetry++;
+            jitCode->setOptimizationThresholdBasedOnCompilationResult(
+                codeBlock, CompilationDeferred);
+            return nullptr;
+        }
+
+        FTL::ForOSREntryJITCode* entryCode = entryBlock->jitCode()->ftlForOSREntry();
+        entryCode->countEntryFailure();
+        if (entryCode->entryFailureCount() <
+            Options::ftlOSREntryFailureCountForReoptimization()) {
+            jitCode->setOptimizationThresholdBasedOnCompilationResult(
+                codeBlock, CompilationDeferred);
+            return nullptr;
+        }
+
+        // OSR entry failed. Oh no! This implies that we need to retry. We retry
+        // without exponential backoff and we only do this for the entry code block.
+        unsigned osrEntryBytecode = entryBlock->jitCode()->ftlForOSREntry()->bytecodeIndex();
+        jitCode->clearOSREntryBlock();
+        jitCode->osrEntryRetry = 0;
+        jitCode->tierUpEntryTriggers.set(osrEntryBytecode, 0);
+        jitCode->setOptimizationThresholdBasedOnCompilationResult(
+            codeBlock, CompilationDeferred);
+        return nullptr;
+    }
+
+    unsigned streamIndex = jitCode->bytecodeIndexToStreamIndex.get(osrEntryBytecodeIndex);
+    auto tierUpHierarchyEntry = jitCode->tierUpInLoopHierarchy.find(osrEntryBytecodeIndex);
+    if (tierUpHierarchyEntry != jitCode->tierUpInLoopHierarchy.end()) {
+        for (unsigned osrEntryCandidate : tierUpHierarchyEntry->value) {
+            if (jitCode->tierUpEntrySeen.contains(osrEntryCandidate)) {
+                osrEntryBytecodeIndex = osrEntryCandidate;
+                streamIndex = jitCode->bytecodeIndexToStreamIndex.get(osrEntryBytecodeIndex);
+            }
+        }
+    }
+
     // We aren't compiling and haven't compiled anything for OSR entry. So, try to compile
     // something.
+    auto triggerIterator = jitCode->tierUpEntryTriggers.find(osrEntryBytecodeIndex);
+    RELEASE_ASSERT(triggerIterator != jitCode->tierUpEntryTriggers.end());
+    uint8_t* triggerAddress = &(triggerIterator->value);
+
     Operands<JSValue> mustHandleValues;
     jitCode->reconstruct(
-        exec, codeBlock, CodeOrigin(bytecodeIndex), streamIndex, mustHandleValues);
+        exec, codeBlock, CodeOrigin(osrEntryBytecodeIndex), streamIndex, mustHandleValues);
     CodeBlock* replacementCodeBlock = codeBlock->newReplacement();
+
     CompilationResult forEntryResult = compile(
-        *vm, replacementCodeBlock, codeBlock, FTLForOSREntryMode, bytecodeIndex,
-        mustHandleValues, ToFTLForOSREntryDeferredCompilationCallback::create());
+        *vm, replacementCodeBlock, codeBlock, FTLForOSREntryMode, osrEntryBytecodeIndex,
+        mustHandleValues, ToFTLForOSREntryDeferredCompilationCallback::create(triggerAddress));
 
     if (jitCode->neverExecutedEntry)
…
     // We signal to try again after a while if that happens.
     void* address = FTL::prepareOSREntry(
-        exec, codeBlock, jitCode->osrEntryBlock(), bytecodeIndex, streamIndex);
+        exec, codeBlock, jitCode->osrEntryBlock(), originBytecodeIndex, streamIndex);
     return static_cast<char*>(address);
+}
+
+void JIT_OPERATION triggerTierUpNowInLoop(ExecState* exec, unsigned bytecodeIndex)
+{
+    VM* vm = &exec->vm();
+    NativeCallFrameTracer tracer(vm, exec);
+    DeferGC deferGC(vm->heap);
+    CodeBlock* codeBlock = exec->codeBlock();
+
+    if (codeBlock->jitType() != JITCode::DFGJIT) {
+        dataLog("Unexpected code block in DFG->FTL tier-up: ", *codeBlock, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
+    JITCode* jitCode = codeBlock->jitCode()->dfg();
+
+    if (Options::verboseOSR()) {
+        dataLog(
+            *codeBlock, ": Entered triggerTierUpNowInLoop with executeCounter = ",
+            jitCode->tierUpCounter, "\n");
+    }
+
+    auto tierUpHierarchyEntry = jitCode->tierUpInLoopHierarchy.find(bytecodeIndex);
+    if (tierUpHierarchyEntry != jitCode->tierUpInLoopHierarchy.end()
+        && !tierUpHierarchyEntry->value.isEmpty()) {
+        tierUpCommon(exec, bytecodeIndex, tierUpHierarchyEntry->value.first());
+    } else if (shouldTriggerFTLCompile(codeBlock, jitCode))
+        triggerFTLReplacementCompile(vm, codeBlock, jitCode);
+
+    // Since we cannot OSR Enter here, the default "optimizeSoon()" is not useful.
+    if (codeBlock->hasOptimizedReplacement())
+        jitCode->setOptimizationThresholdBasedOnCompilationResult(codeBlock, CompilationDeferred);
+}
+
+char* JIT_OPERATION triggerOSREntryNow(ExecState* exec, unsigned bytecodeIndex)
+{
+    VM* vm = &exec->vm();
+    NativeCallFrameTracer tracer(vm, exec);
+    DeferGC deferGC(vm->heap);
+    CodeBlock* codeBlock = exec->codeBlock();
+
+    if (codeBlock->jitType() != JITCode::DFGJIT) {
+        dataLog("Unexpected code block in DFG->FTL tier-up: ", *codeBlock, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
+    JITCode* jitCode = codeBlock->jitCode()->dfg();
+    jitCode->tierUpEntrySeen.add(bytecodeIndex);
+
+    if (Options::verboseOSR()) {
+        dataLog(
+            *codeBlock, ": Entered triggerOSREntryNow with executeCounter = ",
+            jitCode->tierUpCounter, "\n");
+    }
+
+    return tierUpCommon(exec, bytecodeIndex, bytecodeIndex);
 }
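The retry policy in tierUpCommon above (wait a bounded number of attempts, then count entry failures, then discard the entry block) can be modeled in isolation. This is a hypothetical state machine, not JSC code: the struct, method name, and the two threshold values (standing in for Options::ftlOSREntryRetryThreshold() and Options::ftlOSREntryFailureCountForReoptimization()) are assumptions made for illustration.

```cpp
// Model of the OSR-entry retry policy: while an entry block exists but we
// keep failing to enter through it, first retry for a while, then count
// failures, and finally throw the entry block away so a fresh compilation
// (possibly targeting a different loop) can be triggered.
struct OSREntryRetryState {
    int osrEntryRetry { 0 };
    int entryFailureCount { 0 };
    int retryThreshold { 100 };               // assumed value
    int failureCountForReoptimization { 15 }; // assumed value
    bool hasEntryBlock { true };

    // Called when we had an entry block but failed to OSR Enter through it.
    // Returns true when the entry block should be discarded and recompiled.
    bool recordFailedEntryAttempt()
    {
        if (osrEntryRetry < retryThreshold) {
            osrEntryRetry++; // keep the entry block and try again later
            return false;
        }
        entryFailureCount++;
        if (entryFailureCount < failureCountForReoptimization)
            return false;
        // Too many failures: discard the entry block and reset the retry
        // counter; there is no exponential backoff for the entry code block.
        hasEntryBlock = false;
        osrEntryRetry = 0;
        return true;
    }
};
```

Note that in the patch, discarding the entry block also resets that loop's entry trigger, so the trigger can be re-armed by the next compilation.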
trunk/Source/JavaScriptCore/dfg/DFGOperations.h
(r197648 → r197861)

 #if ENABLE(FTL_JIT)
 void JIT_OPERATION triggerTierUpNow(ExecState*) WTF_INTERNAL;
-void JIT_OPERATION triggerTierUpNowInLoop(ExecState*) WTF_INTERNAL;
-char* JIT_OPERATION triggerOSREntryNow(ExecState*, int32_t bytecodeIndex, int32_t streamIndex) WTF_INTERNAL;
+void JIT_OPERATION triggerTierUpNowInLoop(ExecState*, unsigned bytecodeIndex) WTF_INTERNAL;
+char* JIT_OPERATION triggerOSREntryNow(ExecState*, unsigned bytecodeIndex) WTF_INTERNAL;
 #endif // ENABLE(FTL_JIT)
trunk/Source/JavaScriptCore/dfg/DFGPlan.h
(r197159 → r197861)

     bool isKnownToBeLiveDuringGC();
     void cancel();
+
+    bool canTierUpAndOSREnter() const { return !tierUpAndOSREnterBytecodes.isEmpty(); }
 
     VM& vm;
…
     bool willTryToTierUp { false };
-    bool canTierUpAndOSREnter { false };
+
+    HashMap<unsigned, Vector<unsigned>> tierUpInLoopHierarchy;
+    Vector<unsigned> tierUpAndOSREnterBytecodes;
 
     enum Stage { Preparing, Compiling, Compiled, Ready, Cancelled };
trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case InvalidationPoint:
     case CheckInBounds:
trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case LoopHint:
     case StoreBarrier:
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
(r197833 → r197861)

     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
     case Int52Rep:
     case FiatInt52:
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
(r197833 → r197861)

         silentSpillAllRegisters(InvalidGPRReg);
-        m_jit.setupArgumentsExecState();
+        m_jit.setupArgumentsWithExecState(
+            TrustedImm32(node->origin.semantic.bytecodeIndex));
         appendCall(triggerTierUpNowInLoop);
         silentFillAllRegisters(InvalidGPRReg);
…
     }
 
-    case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter: {
+    case CheckTierUpAndOSREnter: {
         ASSERT(!node->origin.semantic.inlineCallFrame);
 
         GPRTemporary temp(this);
         GPRReg tempGPR = temp.gpr();
 
-        MacroAssembler::Jump forceOSREntry;
-        if (op == CheckTierUpWithNestedTriggerAndOSREnter)
-            forceOSREntry = m_jit.branchTest8(MacroAssembler::NonZero, MacroAssembler::AbsoluteAddress(&m_jit.jitCode()->nestedTriggerIsSet));
+        unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+        auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
+        RELEASE_ASSERT(triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
+        uint8_t* forceEntryTrigger = &(m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex)->value);
+        MacroAssembler::Jump forceOSREntry = m_jit.branchTest8(MacroAssembler::NonZero, MacroAssembler::AbsoluteAddress(forceEntryTrigger));
 
         MacroAssembler::Jump done = m_jit.branchAdd32(
…
             MacroAssembler::AbsoluteAddress(&m_jit.jitCode()->tierUpCounter.m_counter));
 
-        if (forceOSREntry.isSet())
-            forceOSREntry.link(&m_jit);
+        forceOSREntry.link(&m_jit);
         silentSpillAllRegisters(tempGPR);
-        m_jit.setupArgumentsWithExecState(
-            TrustedImm32(node->origin.semantic.bytecodeIndex),
-            TrustedImm32(m_stream->size()));
+        unsigned streamIndex = m_stream->size();
+        m_jit.jitCode()->bytecodeIndexToStreamIndex.add(bytecodeIndex, streamIndex);
+        m_jit.setupArgumentsWithExecState(TrustedImm32(bytecodeIndex));
         appendCallSetResult(triggerOSREntryNow, tempGPR);
 
         MacroAssembler::Jump dontEnter = m_jit.branchTestPtr(MacroAssembler::Zero, tempGPR);
…
     case CheckTierUpAtReturn:
     case CheckTierUpAndOSREnter:
-    case CheckTierUpWithNestedTriggerAndOSREnter:
         DFG_CRASH(m_jit.graph(), node, "Unexpected tier-up node");
         break;
trunk/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
(r197159 → r197861)

         level = FTL::CanCompile;
 
-    // First we find all the loops that contain a LoopHint for which we cannot OSR enter.
-    // We use that information to decide if we need CheckTierUpAndOSREnter or CheckTierUpWithNestedTriggerAndOSREnter.
     m_graph.ensureNaturalLoops();
     NaturalLoops& naturalLoops = *m_graph.m_naturalLoops;
-
-    HashSet<const NaturalLoop*> loopsContainingLoopHintWithoutOSREnter = findLoopsContainingLoopHintWithoutOSREnter(naturalLoops, level);
-
-    bool canTierUpAndOSREnter = false;
-
+    HashMap<const NaturalLoop*, unsigned> naturalLoopToLoopHint = buildNaturalLoopToLoopHintMap(naturalLoops);
+
+    HashMap<unsigned, LoopHintDescriptor> tierUpHierarchy;
+
     InsertionSet insertionSet(m_graph);
     for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
…
         if (!block)
             continue;
 
         for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
             Node* node = block->at(nodeIndex);
…
             NodeOrigin origin = node->origin;
-            if (canOSREnterAtLoopHint(level, block, nodeIndex)) {
-                canTierUpAndOSREnter = true;
-                const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block);
-                if (loop && loopsContainingLoopHintWithoutOSREnter.contains(loop))
-                    insertionSet.insertNode(nodeIndex + 1, SpecNone, CheckTierUpWithNestedTriggerAndOSREnter, origin);
-                else
-                    insertionSet.insertNode(nodeIndex + 1, SpecNone, CheckTierUpAndOSREnter, origin);
-            } else
-                insertionSet.insertNode(nodeIndex + 1, SpecNone, CheckTierUpInLoop, origin);
+            bool canOSREnter = canOSREnterAtLoopHint(level, block, nodeIndex);
+
+            NodeType tierUpType = CheckTierUpAndOSREnter;
+            if (!canOSREnter)
+                tierUpType = CheckTierUpInLoop;
+            insertionSet.insertNode(nodeIndex + 1, SpecNone, tierUpType, origin);
+
+            unsigned bytecodeIndex = origin.semantic.bytecodeIndex;
+            if (canOSREnter)
+                m_graph.m_plan.tierUpAndOSREnterBytecodes.append(bytecodeIndex);
+
+            if (const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block)) {
+                LoopHintDescriptor descriptor;
+                descriptor.canOSREnter = canOSREnter;
+
+                const NaturalLoop* outerLoop = loop;
+                while ((outerLoop = naturalLoops.innerMostOuterLoop(*outerLoop))) {
+                    auto it = naturalLoopToLoopHint.find(outerLoop);
+                    if (it != naturalLoopToLoopHint.end())
+                        descriptor.osrEntryCandidates.append(it->value);
+                }
+                if (!descriptor.osrEntryCandidates.isEmpty())
+                    tierUpHierarchy.add(bytecodeIndex, WTFMove(descriptor));
+            }
             break;
         }
 
         NodeAndIndex terminal = block->findTerminal();
         if (terminal.node->isFunctionTerminal()) {
…
                 terminal.index, SpecNone, CheckTierUpAtReturn, terminal.node->origin);
         }
 
         insertionSet.execute(block);
     }
 
-    m_graph.m_plan.canTierUpAndOSREnter = canTierUpAndOSREnter;
+    // Add all the candidates that can be OSR Entered.
+    for (auto entry : tierUpHierarchy) {
+        Vector<unsigned> tierUpCandidates;
+        for (unsigned bytecodeIndex : entry.value.osrEntryCandidates) {
+            auto descriptorIt = tierUpHierarchy.find(bytecodeIndex);
+            if (descriptorIt != tierUpHierarchy.end()
+                && descriptorIt->value.canOSREnter)
+                tierUpCandidates.append(bytecodeIndex);
+        }
+
+        if (!tierUpCandidates.isEmpty())
+            m_graph.m_plan.tierUpInLoopHierarchy.add(entry.key, WTFMove(tierUpCandidates));
+    }
     m_graph.m_plan.willTryToTierUp = true;
     return true;
…
 private:
 #if ENABLE(FTL_JIT)
+    struct LoopHintDescriptor {
+        Vector<unsigned> osrEntryCandidates;
+        bool canOSREnter;
+    };
+
     bool canOSREnterAtLoopHint(FTL::CapabilityLevel level, const BasicBlock* block, unsigned nodeIndex)
     {
…
     }
 
-    HashSet<const NaturalLoop*> findLoopsContainingLoopHintWithoutOSREnter(const NaturalLoops& naturalLoops, FTL::CapabilityLevel level)
-    {
-        HashSet<const NaturalLoop*> loopsContainingLoopHintWithoutOSREnter;
+    HashMap<const NaturalLoop*, unsigned> buildNaturalLoopToLoopHintMap(const NaturalLoops& naturalLoops)
+    {
+        HashMap<const NaturalLoop*, unsigned> naturalLoopsToLoopHint;
+
         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
…
                     continue;
 
-                if (!canOSREnterAtLoopHint(level, block, nodeIndex)) {
-                    const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block);
-                    while (loop) {
-                        loopsContainingLoopHintWithoutOSREnter.add(loop);
-                        loop = naturalLoops.innerMostOuterLoop(*loop);
-                    }
+                if (const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block)) {
+                    unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+                    naturalLoopsToLoopHint.add(loop, bytecodeIndex);
                 }
-            }
-        }
-        return loopsContainingLoopHintWithoutOSREnter;
+                break;
+            }
+        }
+        return naturalLoopsToLoopHint;
     }
 #endif
trunk/Source/JavaScriptCore/dfg/DFGToFTLForOSREntryDeferredCompilationCallback.cpp
(r190827 → r197861)

 #include "DFGJITCode.h"
 #include "Executable.h"
+#include "FTLForOSREntryJITCode.h"
 #include "JSCInlines.h"
 
 namespace JSC { namespace DFG {
 
-ToFTLForOSREntryDeferredCompilationCallback::ToFTLForOSREntryDeferredCompilationCallback()
+ToFTLForOSREntryDeferredCompilationCallback::ToFTLForOSREntryDeferredCompilationCallback(uint8_t* forcedOSREntryTrigger)
+    : m_forcedOSREntryTrigger(forcedOSREntryTrigger)
 {
 }
…
 
-Ref<ToFTLForOSREntryDeferredCompilationCallback> ToFTLForOSREntryDeferredCompilationCallback::create()
+Ref<ToFTLForOSREntryDeferredCompilationCallback> ToFTLForOSREntryDeferredCompilationCallback::create(uint8_t* forcedOSREntryTrigger)
 {
-    return adoptRef(*new ToFTLForOSREntryDeferredCompilationCallback());
+    return adoptRef(*new ToFTLForOSREntryDeferredCompilationCallback(forcedOSREntryTrigger));
 }
…
         ") did become ready.\n");
     }
-
-    profiledDFGCodeBlock->jitCode()->dfg()->forceOptimizationSlowPathConcurrently(
-        profiledDFGCodeBlock);
+
+    *m_forcedOSREntryTrigger = 1;
 }
…
     switch (result) {
-    case CompilationSuccessful:
+    case CompilationSuccessful: {
         jitCode->setOSREntryBlock(*codeBlock->vm(), profiledDFGCodeBlock, codeBlock);
+        unsigned osrEntryBytecode = codeBlock->jitCode()->ftlForOSREntry()->bytecodeIndex();
+        jitCode->tierUpEntryTriggers.set(osrEntryBytecode, 1);
         break;
+    }
     case CompilationFailed:
         jitCode->osrEntryRetry = 0;
         jitCode->abandonOSREntry = true;
+        profiledDFGCodeBlock->jitCode()->dfg()->setOptimizationThresholdBasedOnCompilationResult(
+            profiledDFGCodeBlock, result);
         break;
     case CompilationDeferred:
trunk/Source/JavaScriptCore/dfg/DFGToFTLForOSREntryDeferredCompilationCallback.h
(r190827 → r197861)

 class ToFTLForOSREntryDeferredCompilationCallback : public DeferredCompilationCallback {
 protected:
-    ToFTLForOSREntryDeferredCompilationCallback();
+    ToFTLForOSREntryDeferredCompilationCallback(uint8_t* forcedOSREntryTrigger);
 
 public:
     virtual ~ToFTLForOSREntryDeferredCompilationCallback();
 
-    static Ref<ToFTLForOSREntryDeferredCompilationCallback> create();
+    static Ref<ToFTLForOSREntryDeferredCompilationCallback> create(uint8_t* forcedOSREntryTrigger);
 
     virtual void compilationDidBecomeReadyAsynchronously(CodeBlock*, CodeBlock* profiledDFGCodeBlock);
     virtual void compilationDidComplete(CodeBlock*, CodeBlock* profiledDFGCodeBlock, CompilationResult);
+
+private:
+    uint8_t* m_forcedOSREntryTrigger;
 };