Changeset 243232 in webkit
- Timestamp: Mar 20, 2019 1:24:36 PM
- Location: trunk/Source/JavaScriptCore
- Files: 54 edited
trunk/Source/JavaScriptCore/ChangeLog
r243207 → r243232

2019-03-20  Robin Morisset  <rmorisset@apple.com>

        Compress CodeOrigin into a single word in the common case
        https://bugs.webkit.org/show_bug.cgi?id=195928

        Reviewed by Saam Barati.

        The trick is that pointers only take 48 bits on x86_64 in practice (and we can even use the
        bottom three bits of that thanks to alignment), and even less on ARM64. So we can shove the
        bytecode index into the top bits almost all the time. If the bytecodeIndex is too ginormous
        (at least 1 << 16 on x86_64, where 16 bits are free at the top), we instead set one bit at
        the bottom and store a pointer to some out-of-line storage. Finally, we represent an invalid
        bytecodeIndex (which used to be represented by UINT_MAX) by setting the second least
        significant bit.

        The patch looks very long, but most of it is just replacing direct accesses to
        inlineCallFrame and bytecodeIndex with the relevant getters.

        End result: in the common case, CodeOrigin shrinks from 16 bytes (8 for InlineCallFrame*,
        4 for unsigned bytecodeIndex, 4 of padding) to 8. As a reference, a run of JetStream2
        allocates more than 35M CodeOrigins. They won't all be alive at the same time, but that is
        still quite a lot of objects, so I am hoping for some small improvement to RAMification
        from this work.

        The one slightly tricky part is that we must implement copy and move assignment operators
        and constructors to make sure that any out-of-line storage belongs to a single CodeOrigin
        and is destroyed exactly once.

        * bytecode/ByValInfo.h:
        * bytecode/CallLinkStatus.cpp:
        (JSC::CallLinkStatus::computeFor):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::globalObjectFor):
        (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
        (JSC::CodeBlock::bytecodeOffsetFromCallSiteIndex):
        * bytecode/CodeOrigin.cpp:
        (JSC::CodeOrigin::inlineDepth const):
        (JSC::CodeOrigin::isApproximatelyEqualTo const):
        (JSC::CodeOrigin::approximateHash const):
        (JSC::CodeOrigin::inlineStack const):
        (JSC::CodeOrigin::codeOriginOwner const):
        (JSC::CodeOrigin::stackOffset const):
        (JSC::CodeOrigin::dump const):
        (JSC::CodeOrigin::inlineDepthForCallFrame): Deleted.
        * bytecode/CodeOrigin.h:
        (JSC::OutOfLineCodeOrigin::OutOfLineCodeOrigin):
        (JSC::CodeOrigin::CodeOrigin):
        (JSC::CodeOrigin::~CodeOrigin):
        (JSC::CodeOrigin::isSet const):
        (JSC::CodeOrigin::isHashTableDeletedValue const):
        (JSC::CodeOrigin::bytecodeIndex const):
        (JSC::CodeOrigin::inlineCallFrame const):
        (JSC::CodeOrigin::buildCompositeValue):
        (JSC::CodeOrigin::hash const):
        (JSC::CodeOrigin::operator== const):
        (JSC::CodeOrigin::exitingInlineKind const): Deleted.
        * bytecode/DeferredSourceDump.h:
        * bytecode/GetByIdStatus.cpp:
        (JSC::GetByIdStatus::computeForStubInfo):
        (JSC::GetByIdStatus::computeFor):
        * bytecode/ICStatusMap.cpp:
        (JSC::ICStatusContext::isInlined const):
        * bytecode/InByIdStatus.cpp:
        (JSC::InByIdStatus::computeFor):
        (JSC::InByIdStatus::computeForStubInfo):
        * bytecode/InlineCallFrame.cpp:
        (JSC::InlineCallFrame::dumpInContext const):
        * bytecode/InlineCallFrame.h:
        (JSC::InlineCallFrame::computeCallerSkippingTailCalls):
        (JSC::InlineCallFrame::getCallerInlineFrameSkippingTailCalls):
        (JSC::baselineCodeBlockForOriginAndBaselineCodeBlock):
        (JSC::CodeOrigin::walkUpInlineStack):
        * bytecode/InstanceOfStatus.h:
        * bytecode/PutByIdStatus.cpp:
        (JSC::PutByIdStatus::computeForStubInfo):
        (JSC::PutByIdStatus::computeFor):
        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
        * dfg/DFGArgumentsEliminationPhase.cpp:
        * dfg/DFGArgumentsUtilities.cpp:
        (JSC::DFG::argumentsInvolveStackSlot):
        (JSC::DFG::emitCodeToGetArgumentsArrayLength):
        * dfg/DFGArrayMode.h:
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
        (JSC::DFG::ByteCodeParser::setLocal):
        (JSC::DFG::ByteCodeParser::setArgument):
        (JSC::DFG::ByteCodeParser::flushForTerminalImpl):
        (JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
        (JSC::DFG::ByteCodeParser::parseBlock):
        (JSC::DFG::ByteCodeParser::parseCodeBlock):
        (JSC::DFG::ByteCodeParser::handlePutByVal):
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize):
        * dfg/DFGConstantFoldingPhase.cpp:
        (JSC::DFG::ConstantFoldingPhase::foldConstants):
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::attemptToMakeGetArrayLength):
        * dfg/DFGForAllKills.h:
        (JSC::DFG::forAllKilledOperands):
        * dfg/DFGGraph.cpp:
        (JSC::DFG::Graph::dumpCodeOrigin):
        (JSC::DFG::Graph::dump):
        (JSC::DFG::Graph::isLiveInBytecode):
        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
        (JSC::DFG::Graph::willCatchExceptionInMachineFrame):
        * dfg/DFGGraph.h:
        (JSC::DFG::Graph::executableFor):
        (JSC::DFG::Graph::isStrictModeFor):
        (JSC::DFG::Graph::hasExitSite):
        (JSC::DFG::Graph::forAllLocalsLiveInBytecode):
        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
        * dfg/DFGMinifiedNode.cpp:
        (JSC::DFG::MinifiedNode::fromNode):
        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
        * dfg/DFGOSRExit.cpp:
        (JSC::DFG::OSRExit::executeOSRExit):
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::adjustAndJumpToTarget):
        (JSC::DFG::printOSRExit):
        (JSC::DFG::OSRExit::compileExit):
        * dfg/DFGOSRExitBase.cpp:
        (JSC::DFG::OSRExitBase::considerAddingAsFrequentExitSiteSlow):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::handleExitCounts):
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::adjustAndJumpToTarget):
        * dfg/DFGOSRExitPreparation.cpp:
        (JSC::DFG::prepareCodeOriginForOSRExit):
        * dfg/DFGObjectAllocationSinkingPhase.cpp:
        * dfg/DFGOperations.cpp:
        * dfg/DFGPreciseLocalClobberize.h:
        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
        * dfg/DFGSpeculativeJIT.cpp:
        (JSC::DFG::SpeculativeJIT::emitGetLength):
        (JSC::DFG::SpeculativeJIT::emitGetCallee):
        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
        (JSC::DFG::SpeculativeJIT::compileValueAdd):
        (JSC::DFG::SpeculativeJIT::compileValueSub):
        (JSC::DFG::SpeculativeJIT::compileValueNegate):
        (JSC::DFG::SpeculativeJIT::compileValueMul):
        (JSC::DFG::SpeculativeJIT::compileForwardVarargs):
        (JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::emitCall):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::emitCall):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGTierUpCheckInjectionPhase.cpp:
        (JSC::DFG::TierUpCheckInjectionPhase::run):
        (JSC::DFG::TierUpCheckInjectionPhase::canOSREnterAtLoopHint):
        (JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):
        * dfg/DFGTypeCheckHoistingPhase.cpp:
        (JSC::DFG::TypeCheckHoistingPhase::run):
        * dfg/DFGVariableEventStream.cpp:
        (JSC::DFG::VariableEventStream::reconstruct const):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
        (JSC::FTL::DFG::LowerDFGToB3::compileValueSub):
        (JSC::FTL::DFG::LowerDFGToB3::compileValueMul):
        (JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
        (JSC::FTL::DFG::LowerDFGToB3::compileValueNegate):
        (JSC::FTL::DFG::LowerDFGToB3::compileGetMyArgumentByVal):
        (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
        (JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsStart):
        (JSC::FTL::DFG::LowerDFGToB3::codeOriginDescriptionOfCallSite const):
        * ftl/FTLOSRExitCompiler.cpp:
        (JSC::FTL::compileStub):
        * ftl/FTLOperations.cpp:
        (JSC::FTL::operationMaterializeObjectInOSR):
        * interpreter/CallFrame.cpp:
        (JSC::CallFrame::bytecodeOffset):
        * interpreter/StackVisitor.cpp:
        (JSC::StackVisitor::unwindToMachineCodeBlockFrame):
        (JSC::StackVisitor::readFrame):
        (JSC::StackVisitor::readNonInlinedFrame):
        (JSC::inlinedFrameOffset):
        (JSC::StackVisitor::readInlinedFrame):
        * interpreter/StackVisitor.h:
        * jit/AssemblyHelpers.cpp:
        (JSC::AssemblyHelpers::executableFor):
        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::isStrictModeFor):
        (JSC::AssemblyHelpers::argumentsStart):
        (JSC::AssemblyHelpers::argumentCount):
        * jit/PCToCodeOriginMap.cpp:
        (JSC::PCToCodeOriginMap::PCToCodeOriginMap):
        (JSC::PCToCodeOriginMap::findPC const):
        * profiler/ProfilerOriginStack.cpp:
        (JSC::Profiler::OriginStack::OriginStack):
        * profiler/ProfilerOriginStack.h:
        * runtime/ErrorInstance.cpp:
        (JSC::appendSourceToError):
        * runtime/SamplingProfiler.cpp:
        (JSC::SamplingProfiler::processUnverifiedStackTraces):
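Before the per-file hunks, a standalone sketch of the packing scheme the ChangeLog describes. This is not the shipped code (the real thing lives in CodeOrigin.h below); it hard-codes the x86_64 constants from the patch, assumes a 64-bit uintptr_t, and ignores the out-of-line and invalid-index paths to show only the common-case round trip:

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    // x86_64 constants from the patch: 16 free bits at the top, pointer payload in bits 3-47.
    constexpr unsigned freeBitsAtTop = 16;
    constexpr uintptr_t maskIsOutOfLine = 1;            // bit 0: bytecode index stored out of line
    constexpr uintptr_t maskIsBytecodeIndexInvalid = 2; // bit 1: unset origin or hash-table deleted value
    constexpr uintptr_t maskPointer = 0x0000fffffffffff8;

    uintptr_t encode(void* inlineCallFrame, unsigned bytecodeIndex)
    {
        assert(bytecodeIndex < (1u << freeBitsAtTop)); // bigger indices take the out-of-line path
        uintptr_t frameBits;
        std::memcpy(&frameBits, &inlineCallFrame, sizeof frameBits);
        assert(!(frameBits & ~maskPointer)); // 48-bit, 8-byte-aligned pointer
        return (static_cast<uintptr_t>(bytecodeIndex) << (64 - freeBitsAtTop)) | frameBits;
    }

    unsigned bytecodeIndexOf(uintptr_t composite)
    {
        return static_cast<unsigned>(composite >> (64 - freeBitsAtTop));
    }

    void* inlineCallFrameOf(uintptr_t composite)
    {
        uintptr_t frameBits = composite & maskPointer;
        void* frame;
        std::memcpy(&frame, &frameBits, sizeof frame);
        return frame;
    }

    int main()
    {
        alignas(8) static int dummyFrame; // stands in for an InlineCallFrame
        uintptr_t word = encode(&dummyFrame, 1234);
        assert(bytecodeIndexOf(word) == 1234 && inlineCallFrameOf(word) == &dummyFrame);
        assert(!(word & (maskIsOutOfLine | maskIsBytecodeIndexInvalid))); // common case: both tag bits clear
    }

On ARM64 the patch frees 28 top bits instead of 16, so the out-of-line path is hit correspondingly more rarely.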
trunk/Source/JavaScriptCore/bytecode/ByValInfo.h
r236587 → r243232

     #include "ClassInfo.h"
     #include "CodeLocation.h"
    -#include "CodeOrigin.h"
     #include "IndexingType.h"
     #include "JITStubRoutine.h"
trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
r240041 → r243232

     if (CallLinkStatusInternal::verbose)
         dataLog("Figuring out call profiling for ", codeOrigin, "\n");
    -ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
    +ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex());
     if (CallLinkStatusInternal::verbose) {
         dataLog("takesSlowPath = ", exitSiteData.takesSlowPath, "\n");
…
     auto bless = [&] (CallLinkStatus& result) {
         if (!context->isInlined(codeOrigin))
    -        result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData));
    +        result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData));
     };
…
    -return computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData);
    +return computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData);
 }
 #endif
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r242928 → r243232

 JSGlobalObject* CodeBlock::globalObjectFor(CodeOrigin codeOrigin)
 {
    -if (!codeOrigin.inlineCallFrame)
    +auto* inlineCallFrame = codeOrigin.inlineCallFrame();
    +if (!inlineCallFrame)
         return globalObject();
    -return codeOrigin.inlineCallFrame->baselineCodeBlock->globalObject();
    +return inlineCallFrame->baselineCodeBlock->globalObject();
 }
…
     bool didTryToEnterInLoop = false;
    -for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
    +for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
         if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
             didTryToEnterInLoop = true;
…
     RELEASE_ASSERT(canGetCodeOrigin(callSiteIndex));
     CodeOrigin origin = codeOrigin(callSiteIndex);
    -bytecodeOffset = origin.bytecodeIndex;
    +bytecodeOffset = origin.bytecodeIndex();
 #else
     RELEASE_ASSERT_NOT_REACHED();
trunk/Source/JavaScriptCore/bytecode/CodeOrigin.cpp
r234086 → r243232

 namespace JSC {

-unsigned CodeOrigin::inlineDepthForCallFrame(InlineCallFrame* inlineCallFrame)
+unsigned CodeOrigin::inlineDepth() const
 {
     unsigned result = 1;
-    for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
+    for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
         result++;
     return result;
-}
-
-unsigned CodeOrigin::inlineDepth() const
-{
-    return inlineDepthForCallFrame(inlineCallFrame);
 }
…
     ASSERT(b.isSet());

-    if (a.bytecodeIndex != b.bytecodeIndex)
+    if (a.bytecodeIndex() != b.bytecodeIndex())
         return false;

-    bool aHasInlineCallFrame = !!a.inlineCallFrame && a.inlineCallFrame != terminal;
-    bool bHasInlineCallFrame = !!b.inlineCallFrame;
+    auto* aInlineCallFrame = a.inlineCallFrame();
+    auto* bInlineCallFrame = b.inlineCallFrame();
+    bool aHasInlineCallFrame = !!aInlineCallFrame && aInlineCallFrame != terminal;
+    bool bHasInlineCallFrame = !!bInlineCallFrame;
     if (aHasInlineCallFrame != bHasInlineCallFrame)
         return false;
…
         return true;

-    if (a.inlineCallFrame->baselineCodeBlock.get() != b.inlineCallFrame->baselineCodeBlock.get())
+    if (aInlineCallFrame->baselineCodeBlock.get() != bInlineCallFrame->baselineCodeBlock.get())
         return false;

-    a = a.inlineCallFrame->directCaller;
-    b = b.inlineCallFrame->directCaller;
+    a = aInlineCallFrame->directCaller;
+    b = bInlineCallFrame->directCaller;
 }
…
     CodeOrigin codeOrigin = *this;
     for (;;) {
-        result += codeOrigin.bytecodeIndex;
+        result += codeOrigin.bytecodeIndex();

-        if (!codeOrigin.inlineCallFrame)
+        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+
+        if (!inlineCallFrame)
             return result;

-        if (codeOrigin.inlineCallFrame == terminal)
+        if (inlineCallFrame == terminal)
             return result;

-        result += WTF::PtrHash<JSCell*>::hash(codeOrigin.inlineCallFrame->baselineCodeBlock.get());
+        result += WTF::PtrHash<JSCell*>::hash(inlineCallFrame->baselineCodeBlock.get());

-        codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+        codeOrigin = inlineCallFrame->directCaller;
     }
 }
…
     result.last() = *this;
     unsigned index = result.size() - 2;
-    for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
+    for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
         result[index--] = current->directCaller;
-    RELEASE_ASSERT(!result[0].inlineCallFrame);
+    RELEASE_ASSERT(!result[0].inlineCallFrame());
     return result;
 }
…
 CodeBlock* CodeOrigin::codeOriginOwner() const
 {
+    auto* inlineCallFrame = this->inlineCallFrame();
     if (!inlineCallFrame)
-        return 0;
+        return nullptr;
     return inlineCallFrame->baselineCodeBlock.get();
 }

 int CodeOrigin::stackOffset() const
 {
+    auto* inlineCallFrame = this->inlineCallFrame();
     if (!inlineCallFrame)
         return 0;

     return inlineCallFrame->stackOffset;
 }
…
         out.print(" --> ");

-        if (InlineCallFrame* frame = stack[i].inlineCallFrame) {
+        if (InlineCallFrame* frame = stack[i].inlineCallFrame()) {
             out.print(frame->briefFunctionInformation(), ":<", RawPointer(frame->baselineCodeBlock.get()), "> ");
             if (frame->isClosureCall)
…
         }

-        out.print("bc#", stack[i].bytecodeIndex);
+        out.print("bc#", stack[i].bytecodeIndex());
     }
 }
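One behavioral detail worth keeping in mind when reading the new inlineDepth(): the count starts at 1 for the machine frame, so a non-inlined origin has depth 1, not 0. A hypothetical check using the types from this patch:

    CodeOrigin origin(7); // bc#7 in the machine code block, no InlineCallFrame
    ASSERT(origin.inlineDepth() == 1);
    ASSERT(!origin.inlineCallFrame());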
trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h
r234086 → r243232

 #pragma once

-#include "CodeBlockHash.h"
-#include "ExitingInlineKind.h"
 #include <limits.h>
 #include <wtf/HashMap.h>
…
 struct InlineCallFrame;

-struct CodeOrigin {
-    static const unsigned invalidBytecodeIndex = UINT_MAX;
-
-    // Bytecode offset that you'd use to re-execute this instruction, and the
-    // bytecode index of the bytecode instruction that produces some result that
-    // you're interested in (used for mapping Nodes whose values you're using
-    // to bytecode instructions that have the appropriate value profile).
-    unsigned bytecodeIndex;
-
-    InlineCallFrame* inlineCallFrame;
-
+class CodeOrigin {
+public:
     CodeOrigin()
-        : bytecodeIndex(invalidBytecodeIndex)
-        , inlineCallFrame(0)
+#if CPU(ADDRESS64)
+        : m_compositeValue(buildCompositeValue(nullptr, s_invalidBytecodeIndex))
+#else
+        : m_bytecodeIndex(s_invalidBytecodeIndex)
+        , m_inlineCallFrame(nullptr)
+#endif
     {
     }

     CodeOrigin(WTF::HashTableDeletedValueType)
-        : bytecodeIndex(invalidBytecodeIndex)
-        , inlineCallFrame(deletedMarker())
-    {
-    }
-
-    explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = 0)
-        : bytecodeIndex(bytecodeIndex)
-        , inlineCallFrame(inlineCallFrame)
-    {
-        ASSERT(bytecodeIndex < invalidBytecodeIndex);
-    }
-
-    bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
+#if CPU(ADDRESS64)
+        : m_compositeValue(buildCompositeValue(deletedMarker(), s_invalidBytecodeIndex))
+#else
+        : m_bytecodeIndex(s_invalidBytecodeIndex)
+        , m_inlineCallFrame(deletedMarker())
+#endif
+    {
+    }
+
+    explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = nullptr)
+#if CPU(ADDRESS64)
+        : m_compositeValue(buildCompositeValue(inlineCallFrame, bytecodeIndex))
+#else
+        : m_bytecodeIndex(bytecodeIndex)
+        , m_inlineCallFrame(inlineCallFrame)
+#endif
+    {
+        ASSERT(bytecodeIndex < s_invalidBytecodeIndex);
+#if CPU(ADDRESS64)
+        ASSERT(!(bitwise_cast<uintptr_t>(inlineCallFrame) & ~s_maskCompositeValueForPointer));
+#endif
+    }
+
+#if CPU(ADDRESS64)
+    CodeOrigin& operator=(const CodeOrigin& other)
+    {
+        if (this != &other) {
+            if (UNLIKELY(isOutOfLine()))
+                delete outOfLineCodeOrigin();
+
+            if (UNLIKELY(other.isOutOfLine()))
+                m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
+            else
+                m_compositeValue = other.m_compositeValue;
+        }
+        return *this;
+    }
+    CodeOrigin& operator=(CodeOrigin&& other)
+    {
+        if (this != &other) {
+            if (UNLIKELY(isOutOfLine()))
+                delete outOfLineCodeOrigin();
+
+            m_compositeValue = std::exchange(other.m_compositeValue, 0);
+        }
+        return *this;
+    }
+
+    CodeOrigin(const CodeOrigin& other)
+    {
+        // We don't use the member initializer list because it would not let us optimize the common case where
+        // there is no out-of-line storage (in which case we don't have to extract the components of the composite
+        // value just to reassemble it).
+        if (UNLIKELY(other.isOutOfLine()))
+            m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
+        else
+            m_compositeValue = other.m_compositeValue;
+    }
+    CodeOrigin(CodeOrigin&& other)
+        : m_compositeValue(std::exchange(other.m_compositeValue, 0))
+    {
+    }
+
+    ~CodeOrigin()
+    {
+        if (UNLIKELY(isOutOfLine()))
+            delete outOfLineCodeOrigin();
+    }
+#endif
+
+    bool isSet() const
+    {
+#if CPU(ADDRESS64)
+        return !(m_compositeValue & s_maskIsBytecodeIndexInvalid);
+#else
+        return m_bytecodeIndex != s_invalidBytecodeIndex;
+#endif
+    }
     explicit operator bool() const { return isSet(); }

     bool isHashTableDeletedValue() const
     {
-        return bytecodeIndex == invalidBytecodeIndex && !!inlineCallFrame;
+#if CPU(ADDRESS64)
+        return !isSet() && (m_compositeValue & s_maskCompositeValueForPointer);
+#else
+        return m_bytecodeIndex == s_invalidBytecodeIndex && !!m_inlineCallFrame;
+#endif
     }
…
     int stackOffset() const;

-    static unsigned inlineDepthForCallFrame(InlineCallFrame*);
-
-    ExitingInlineKind exitingInlineKind() const
-    {
-        return inlineCallFrame ? ExitFromInlined : ExitFromNotInlined;
-    }
-
     unsigned hash() const;
     bool operator==(const CodeOrigin& other) const;
…
     JS_EXPORT_PRIVATE void dump(PrintStream&) const;
     void dumpInContext(PrintStream&, DumpContext*) const;

+    unsigned bytecodeIndex() const
+    {
+#if CPU(ADDRESS64)
+        if (!isSet())
+            return s_invalidBytecodeIndex;
+        if (UNLIKELY(isOutOfLine()))
+            return outOfLineCodeOrigin()->bytecodeIndex;
+        return m_compositeValue >> (64 - s_freeBitsAtTop);
+#else
+        return m_bytecodeIndex;
+#endif
+    }
+
+    InlineCallFrame* inlineCallFrame() const
+    {
+#if CPU(ADDRESS64)
+        if (UNLIKELY(isOutOfLine()))
+            return outOfLineCodeOrigin()->inlineCallFrame;
+        return bitwise_cast<InlineCallFrame*>(m_compositeValue & s_maskCompositeValueForPointer);
+#else
+        return m_inlineCallFrame;
+#endif
+    }
+
 private:
+    static constexpr unsigned s_invalidBytecodeIndex = UINT_MAX;
+
+#if CPU(ADDRESS64)
+    static constexpr uintptr_t s_maskIsOutOfLine = 1;
+    static constexpr uintptr_t s_maskIsBytecodeIndexInvalid = 2;
+
+    struct OutOfLineCodeOrigin {
+        WTF_MAKE_FAST_ALLOCATED;
+    public:
+        InlineCallFrame* inlineCallFrame;
+        unsigned bytecodeIndex;
+
+        OutOfLineCodeOrigin(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
+            : inlineCallFrame(inlineCallFrame)
+            , bytecodeIndex(bytecodeIndex)
+        {
+        }
+    };
+
+    bool isOutOfLine() const
+    {
+        return m_compositeValue & s_maskIsOutOfLine;
+    }
+    OutOfLineCodeOrigin* outOfLineCodeOrigin() const
+    {
+        ASSERT(isOutOfLine());
+        return bitwise_cast<OutOfLineCodeOrigin*>(m_compositeValue & s_maskCompositeValueForPointer);
+    }
+#endif
+
     static InlineCallFrame* deletedMarker()
     {
-        return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(1));
-    }
+        auto value = static_cast<uintptr_t>(1 << 3);
+#if CPU(ADDRESS64)
+        ASSERT(value & s_maskCompositeValueForPointer);
+        ASSERT(!(value & ~s_maskCompositeValueForPointer));
+#endif
+        return bitwise_cast<InlineCallFrame*>(value);
+    }
+
+#if CPU(X86_64) && CPU(ADDRESS64)
+    static constexpr unsigned s_freeBitsAtTop = 16;
+    static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000fffffffffff8;
+#elif CPU(ARM64) && CPU(ADDRESS64)
+    static constexpr unsigned s_freeBitsAtTop = 28;
+    static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000000ffffffff8;
+#endif
+#if CPU(ADDRESS64)
+    static uintptr_t buildCompositeValue(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
+    {
+        if (bytecodeIndex == s_invalidBytecodeIndex)
+            return bitwise_cast<uintptr_t>(inlineCallFrame) | s_maskIsBytecodeIndexInvalid;
+
+        if (UNLIKELY(bytecodeIndex >= 1 << s_freeBitsAtTop)) {
+            auto* outOfLine = new OutOfLineCodeOrigin(inlineCallFrame, bytecodeIndex);
+            return bitwise_cast<uintptr_t>(outOfLine) | s_maskIsOutOfLine;
+        }
+
+        uintptr_t encodedBytecodeIndex = static_cast<uintptr_t>(bytecodeIndex) << (64 - s_freeBitsAtTop);
+        ASSERT(!(encodedBytecodeIndex & bitwise_cast<uintptr_t>(inlineCallFrame)));
+        return encodedBytecodeIndex | bitwise_cast<uintptr_t>(inlineCallFrame);
+    }
+
+    // The bottom bit indicates whether to look at an out-of-line implementation (because of a bytecode index
+    // which is too big for us to store).
+    // The next bit indicates whether this is an invalid bytecode index (which, depending on the
+    // InlineCallFrame*, can either indicate an unset CodeOrigin or a deletion marker for a hash table).
+    // The next bit is free.
+    // The next 64 - s_freeBitsAtTop - 3 bits are the InlineCallFrame* or the OutOfLineCodeOrigin*.
+    // Finally, the top s_freeBitsAtTop bits are the bytecodeIndex, if it is stored inline.
+    uintptr_t m_compositeValue;
+#else
+    unsigned m_bytecodeIndex;
+    InlineCallFrame* m_inlineCallFrame;
+#endif
 };

 inline unsigned CodeOrigin::hash() const
 {
-    return WTF::IntHash<unsigned>::hash(bytecodeIndex) +
-        WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame);
+    return WTF::IntHash<unsigned>::hash(bytecodeIndex()) +
+        WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame());
 }

 inline bool CodeOrigin::operator==(const CodeOrigin& other) const
 {
+#if CPU(ADDRESS64)
+    if (m_compositeValue == other.m_compositeValue)
+        return true;
+#endif
     return bytecodeIndex() == other.bytecodeIndex()
         && inlineCallFrame() == other.inlineCallFrame();
 }
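The special members above implement single ownership of any OutOfLineCodeOrigin, which is the "slightly tricky part" the ChangeLog mentions. A hypothetical usage sketch of the rules they enforce (the 1 << 20 index forces the out-of-line path on x86_64):

    CodeOrigin small(42);            // packed inline: one word, no allocation
    CodeOrigin big(1 << 20);         // >= 1 << 16, so buildCompositeValue() heap-allocates an OutOfLineCodeOrigin
    CodeOrigin copy = big;           // copy ctor clones the out-of-line storage, so each origin owns its own
    CodeOrigin moved = WTFMove(big); // move ctor steals the storage and zeroes the source
    ASSERT(copy == moved);           // operator== compares the decoded fields when the raw words differ

Because a moved-from origin holds a zero composite value, its destructor sees isOutOfLine() as false and the out-of-line storage is deleted exactly once.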
trunk/Source/JavaScriptCore/bytecode/DeferredSourceDump.h
r235684 → r243232

 #pragma once

-#include "CodeOrigin.h"
 #include "JITCode.h"
 #include "Strong.h"
trunk/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
r242659 → r243232

 GetByIdStatus GetByIdStatus::computeForStubInfo(const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, CodeOrigin codeOrigin, UniquedStringImpl* uid)
 {
+    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
     GetByIdStatus result = GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
         locker, profiledBlock, stubInfo, uid,
-        CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex));
-
-    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
+        CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex));
+
+    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, bytecodeIndex))
         return result.slowVersion();
     return result;
…
     ICStatusContextStack& icContextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
 {
-    CallLinkStatus::ExitSiteData callExitSiteData =
-        CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
-    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
+    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+    CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex);
+    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);

     for (ICStatusContext* context : icContextStack) {
…
         // inlined and not-inlined.
         GetByIdStatus baselineResult = computeFor(
-            profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
+            profiledBlock, baselineMap, bytecodeIndex, uid, didExit,
             callExitSiteData);
         baselineResult.merge(result);
…
-    return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
+    return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
 }
trunk/Source/JavaScriptCore/bytecode/ICStatusMap.cpp
r234086 → r243232

 bool ICStatusContext::isInlined(CodeOrigin codeOrigin) const
 {
-    return codeOrigin.inlineCallFrame && codeOrigin.inlineCallFrame != inlineCallFrame;
+    auto* originInlineCallFrame = codeOrigin.inlineCallFrame();
+    return originInlineCallFrame && originInlineCallFrame != inlineCallFrame;
 }
trunk/Source/JavaScriptCore/bytecode/InByIdStatus.cpp
r242109 → r243232

     ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
 {
-    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
+    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);

     for (ICStatusContext* context : contextStack) {
…
         if (!context->isInlined(codeOrigin)) {
             InByIdStatus baselineResult = computeFor(
-                profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
+                profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
             baselineResult.merge(result);
             return baselineResult;
…
-    return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
+    return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
 }
 #endif // ENABLE(JIT)
…
     InByIdStatus result = InByIdStatus::computeForStubInfoWithoutExitSiteFeedback(locker, stubInfo, uid);

-    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
+    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex()))
         return InByIdStatus(TakesSlowPath);
     return result;
trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp
r221528 → r243232

     if (isStrictMode())
         out.print(" (StrictMode)");
-    out.print(", bc#", directCaller.bytecodeIndex, ", ", static_cast<Kind>(kind));
+    out.print(", bc#", directCaller.bytecodeIndex(), ", ", static_cast<Kind>(kind));
     if (isClosureCall)
         out.print(", closure call");
trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h
r236495 → r243232

         callKind = inlineCallFrame->kind;
         codeOrigin = &inlineCallFrame->directCaller;
-        inlineCallFrame = codeOrigin->inlineCallFrame;
+        inlineCallFrame = codeOrigin->inlineCallFrame();
     } while (inlineCallFrame && tailCallee);
…
 {
     CodeOrigin* caller = getCallerSkippingTailCalls();
-    return caller ? caller->inlineCallFrame : nullptr;
+    return caller ? caller->inlineCallFrame() : nullptr;
 }
…
 {
     ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
-    if (codeOrigin.inlineCallFrame)
-        return baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
+    auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+    if (inlineCallFrame)
+        return baselineCodeBlockForInlineCallFrame(inlineCallFrame);
     return baselineCodeBlock;
 }

+// This function is defined here and not in CodeOrigin because it needs access to the directCaller field in InlineCallFrame.
 template <typename Function>
 inline void CodeOrigin::walkUpInlineStack(const Function& function)
…
     while (true) {
         function(codeOrigin);
-        if (!codeOrigin.inlineCallFrame)
+        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+        if (!inlineCallFrame)
             break;
-        codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+        codeOrigin = inlineCallFrame->directCaller;
     }
 }
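As a reading aid for walkUpInlineStack() above: it visits the starting origin first, then each caller out to the machine frame. A hypothetical traversal, using the dataLog idiom seen elsewhere in this patch:

    unsigned depth = 0;
    codeOrigin.walkUpInlineStack([&] (CodeOrigin origin) {
        dataLog("frame ", depth++, " at bc#", origin.bytecodeIndex(), "\n");
    });
    // The last origin visited has no inlineCallFrame(): it is the machine frame.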
trunk/Source/JavaScriptCore/bytecode/InstanceOfStatus.h
r234086 → r243232

 #pragma once

-#include "CodeOrigin.h"
 #include "ConcurrentJSLock.h"
 #include "ICStatusMap.h"
trunk/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
r240138 → r243232

     return computeForStubInfo(
         locker, baselineBlock, stubInfo, uid,
-        CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex));
+        CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex()));
 }
…
 PutByIdStatus PutByIdStatus::computeFor(CodeBlock* baselineBlock, ICStatusMap& baselineMap, ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
 {
-    CallLinkStatus::ExitSiteData callExitSiteData =
-        CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex);
-    ExitFlag didExit = hasBadCacheExitSite(baselineBlock, codeOrigin.bytecodeIndex);
+    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
+    CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(baselineBlock, bytecodeIndex);
+    ExitFlag didExit = hasBadCacheExitSite(baselineBlock, bytecodeIndex);

     for (ICStatusContext* context : contextStack) {
…
         if (!context->isInlined(codeOrigin)) {
             PutByIdStatus baselineResult = computeFor(
-                baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
+                baselineBlock, baselineMap, bytecodeIndex, uid, didExit,
                 callExitSiteData);
             baselineResult.merge(result);
…
-    return computeFor(baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
+    return computeFor(baselineBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
 }
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
r243206 → r243232

     case GetMyArgumentByValOutOfBounds: {
         JSValue index = forNode(node->child2()).m_value;
-        InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();

         if (index && index.isUInt32()) {
trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
r242954 → r243232

     Operands<bool>& clobberedByThisBlock = clobberedByBlock[block];

-    if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame) {
+    if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame()) {
         if (inlineCallFrame->isVarargs()) {
             isClobberedByBlock |= clobberedByThisBlock.operand(
…
     if (m_graph.varArgChild(node, 1)->isInt32Constant()) {
         unsigned index = m_graph.varArgChild(node, 1)->asUInt32();
-        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
         index += numberOfArgumentsToSkip;
…
     ASSERT(candidate->op() == PhantomCreateRest);
-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     return inlineCallFrame && !inlineCallFrame->isVarargs();
 });
…
     ASSERT(candidate->op() == PhantomCreateRest);
     unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
     if (frameArgumentCount >= numberOfArgumentsToSkip)
…
     ASSERT(candidate->op() == PhantomCreateRest);
     unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
     for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
…
     varargsData->offset += numberOfArgumentsToSkip;

-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();

     if (inlineCallFrame
…
     ASSERT(candidate->op() == PhantomCreateRest);
-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     return inlineCallFrame && !inlineCallFrame->isVarargs();
 });
…
     ASSERT(candidate->op() == PhantomCreateRest);
-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
     for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
…
     varargsData->firstVarArgOffset += numberOfArgumentsToSkip;

-    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
     if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
         Vector<Node*> arguments;
trunk/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
r232070 → r243232

 bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister reg)
 {
-    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame, reg);
+    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), reg);
 }
…
-    InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();

     unsigned numberOfArgumentsToSkip = 0;
trunk/Source/JavaScriptCore/dfg/DFGArrayMode.h
r239951 → r243232

 namespace JSC {

-struct CodeOrigin;
+class CodeOrigin;

 namespace DFG {
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
r243176 → r243232

 {
     ASSERT(node->op() == GetLocal);
-    ASSERT(node->origin.semantic.bytecodeIndex == m_currentIndex);
+    ASSERT(node->origin.semantic.bytecodeIndex() == m_currentIndex);
     ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
     LazyOperandValueProfileKey key(m_currentIndex, node->local());
…
     VariableAccessData* variableAccessData = newVariableAccessData(operand);
     variableAccessData->mergeStructureCheckHoistingFailed(
-        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
+        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
     variableAccessData->mergeCheckArrayHoistingFailed(
-        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
+        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
     Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
     m_currentBlock->variablesAtTail.local(local) = node;
…
     variableAccessData->mergeStructureCheckHoistingFailed(
-        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
+        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
     variableAccessData->mergeCheckArrayHoistingFailed(
-        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
+        m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
     Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
     m_currentBlock->variablesAtTail.argument(argument) = node;
…
     origin.walkUpInlineStack(
         [&] (CodeOrigin origin) {
-            unsigned bytecodeIndex = origin.bytecodeIndex;
-            InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
+            unsigned bytecodeIndex = origin.bytecodeIndex();
+            InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
             flushImpl(inlineCallFrame, addFlushDirect);
…
     InlineStackEntry* stack = m_inlineStackTop;
-    while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame)
+    while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame())
         stack = stack->m_caller;

-    bytecodeIndex = codeOrigin->bytecodeIndex;
+    bytecodeIndex = codeOrigin->bytecodeIndex();
     CodeBlock* profiledBlock = stack->m_profiledBlock;
     ConcurrentJSLocker locker(profiledBlock->m_lock);
…
 {
     ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-    ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
+    ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
     // FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
     // At that time, there is no information.
…
     Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
-    setArgument->origin.forExit.bytecodeIndex = exitBytecodeIndex;
+    setArgument->origin.forExit = CodeOrigin(exitBytecodeIndex, setArgument->origin.forExit.inlineCallFrame());
     m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
     entrypointArguments[argument] = setArgument;
…
     Vector<DeferredSourceDump>& deferredSourceDump = m_graph.m_plan.callback()->ensureDeferredSourceDump();
     if (inlineCallFrame()) {
-        DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex);
+        DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex());
         deferredSourceDump.append(dump);
     } else
…
 {
     ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-    ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
+    ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
     // FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
     // At that time, there is no information.
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
r242715 → r243232

     // to the runtime, and that call may walk stack. Therefore, each node must read() anything that a stack
     // scan would read. That's what this does.
-    for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+    for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
         if (inlineCallFrame->isClosureCall)
             read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::callee));
…
     // a scope which is expected to be flushed to the stack.
     if (graph.hasDebuggerEnabled()) {
-        ASSERT(!node->origin.semantic.inlineCallFrame);
+        ASSERT(!node->origin.semantic.inlineCallFrame());
         read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
     }
…
     case CallEval:
-        ASSERT(!node->origin.semantic.inlineCallFrame);
+        ASSERT(!node->origin.semantic.inlineCallFrame());
         read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
         read(AbstractHeap(Stack, virtualRegisterForArgument(0)));
trunk/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
r241228 → r243232

     unsigned index = checkedIndex.unsafeGet();
     Node* arguments = node->child1().node();
-    InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
+    InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();

     // Don't try to do anything if the index is known to be outside our static bounds. Note
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
r242954 → r243232

     CodeBlock* profiledBlock = m_graph.baselineCodeBlockFor(node->origin.semantic);
     ArrayProfile* arrayProfile =
-        profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex);
+        profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex());
     ArrayMode arrayMode = ArrayMode(Array::SelectUsingPredictions, Array::Read);
     if (arrayProfile) {
trunk/Source/JavaScriptCore/dfg/DFGForAllKills.h
r209121 → r243232

     // It's easier to do this if the inline call frames are the same. This is way faster than the
     // other loop, below.
-    if (before.inlineCallFrame == after.inlineCallFrame) {
-        int stackOffset = before.inlineCallFrame ? before.inlineCallFrame->stackOffset : 0;
-        CodeBlock* codeBlock = graph.baselineCodeBlockFor(before.inlineCallFrame);
+    auto* beforeInlineCallFrame = before.inlineCallFrame();
+    if (beforeInlineCallFrame == after.inlineCallFrame()) {
+        int stackOffset = beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
+        CodeBlock* codeBlock = graph.baselineCodeBlockFor(beforeInlineCallFrame);
         FullBytecodeLiveness& fullLiveness = graph.livenessFor(codeBlock);
-        const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex);
-        const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex);
+        const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex());
+        const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex());

         (liveBefore & ~liveAfter).forEachSetBit(
trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp
r242945 → r243232

         return false;

-    if (previousNode->origin.semantic.inlineCallFrame == currentNode->origin.semantic.inlineCallFrame)
+    if (previousNode->origin.semantic.inlineCallFrame() == currentNode->origin.semantic.inlineCallFrame())
         return false;
…
     unsigned indexOfDivergence = commonSize;
     for (unsigned i = 0; i < commonSize; ++i) {
-        if (previousInlineStack[i].inlineCallFrame != currentInlineStack[i].inlineCallFrame) {
+        if (previousInlineStack[i].inlineCallFrame() != currentInlineStack[i].inlineCallFrame()) {
             indexOfDivergence = i;
             break;
…
         out.print(prefix);
         printWhiteSpace(out, i * 2);
-        out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame, context), "\n");
+        out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame(), context), "\n");
         hasPrinted = true;
…
         out.print(prefix);
         printWhiteSpace(out, i * 2);
-        out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame, context), "\n");
+        out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame(), context), "\n");
         hasPrinted = true;
…
         out.print(comma, "ClobbersExit");
     if (node->origin.isSet()) {
-        out.print(comma, "bc#", node->origin.semantic.bytecodeIndex);
+        out.print(comma, "bc#", node->origin.semantic.bytecodeIndex());
         if (node->origin.semantic != node->origin.forExit && node->origin.forExit.isSet())
             out.print(comma, "exit: ", node->origin.forExit);
…
     if (verbose)
         dataLog("reg = ", reg, "\n");

+    auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
     if (operand.offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
         if (reg.isArgument()) {
             RELEASE_ASSERT(reg.offset() < CallFrame::headerSizeInRegisters);

-            if (codeOriginPtr->inlineCallFrame->isClosureCall
+            if (inlineCallFrame->isClosureCall
                 && reg.offset() == CallFrameSlot::callee) {
                 if (verbose)
…
-            if (codeOriginPtr->inlineCallFrame->isVarargs()
+            if (inlineCallFrame->isVarargs()
                 && reg.offset() == CallFrameSlot::argumentCount) {
                 if (verbose)
…
         if (verbose)
             dataLog("Asking the bytecode liveness.\n");
-        return livenessFor(codeOriginPtr->inlineCallFrame).operandIsLive(
-            reg.offset(), codeOriginPtr->bytecodeIndex);
-    }
-
-    InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
+        return livenessFor(inlineCallFrame).operandIsLive(reg.offset(), codeOriginPtr->bytecodeIndex());
+    }
+
     if (!inlineCallFrame) {
         if (verbose)
…
             profiledBlock,
             LazyOperandValueProfileKey(
-                node->origin.semantic.bytecodeIndex, node->local()));
+                node->origin.semantic.bytecodeIndex(), node->local()));
     }
 }

 if (node->hasHeapPrediction())
-    return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+    return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex());

 if (profiledBlock->hasBaselineJITProfiling()) {
-    if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex))
+    if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex()))
         return result;
 }
…
         return false;

-    unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex;
+    unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex();
     while (1) {
-        InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame();
         CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
         if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
…
             return false;

-        bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
-        codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+        bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
+        codeOrigin = inlineCallFrame->directCaller;
     }
trunk/Source/JavaScriptCore/dfg/DFGGraph.h
r242945 → r243232

 ScriptExecutable* executableFor(const CodeOrigin& codeOrigin)
 {
-    return executableFor(codeOrigin.inlineCallFrame);
+    return executableFor(codeOrigin.inlineCallFrame());
 }
…
 bool isStrictModeFor(CodeOrigin codeOrigin)
 {
-    if (!codeOrigin.inlineCallFrame)
+    if (!codeOrigin.inlineCallFrame())
         return m_codeBlock->isStrictMode();
-    return codeOrigin.inlineCallFrame->isStrictMode();
+    return codeOrigin.inlineCallFrame()->isStrictMode();
 }
…
 bool hasExitSite(const CodeOrigin& codeOrigin, ExitKind exitKind)
 {
-    return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex, exitKind));
+    return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex(), exitKind));
 }
…
     for (;;) {
-        InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame();
         VirtualRegister stackOffset(inlineCallFrame ? inlineCallFrame->stackOffset : 0);
…
         CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
         FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
-        const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex);
+        const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex());
         for (unsigned relativeLocal = codeBlock->numCalleeLocals(); relativeLocal--;) {
             VirtualRegister reg = stackOffset + virtualRegisterForLocal(relativeLocal);
trunk/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
r221225 → r243232

         return cachedHandlerResult;

-    unsigned bytecodeIndexToCheck = origin.bytecodeIndex;
+    unsigned bytecodeIndexToCheck = origin.bytecodeIndex();

     cachedCodeOrigin = origin;

     while (1) {
-        InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
         CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
         if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
…
-        bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
+        bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
         origin = inlineCallFrame->directCaller;
     }
…
     if (currentExceptionHandler && (node->op() == SetLocal || node->op() == SetArgument)) {
-        InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
         if (inlineCallFrame)
             seenInlineCallFrames.add(inlineCallFrame);
trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
r209764 → r243232

     else {
         ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments);
-        result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame);
+        result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame());
     }
     return result;
trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
r240364 → r243232

     case PhantomDirectArguments:
     case PhantomClonedArguments: {
-        InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+        InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
         if (!inlineCallFrame) {
             // We don't need to record anything about how the arguments are to be recovered. It's just a
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
r241927 → r243232

     CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
     CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
-    arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex);
+    arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex());
     if (arrayProfile)
         extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ArrayProfileUpdate);
…
     CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
     const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
-    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex);
+    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
     ASSERT(codeLocation);
…
     const CodeOrigin* codeOrigin;
-    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
-        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
         CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
         InlineCallFrame::Kind trueCallerCallKind;
…
         } else {
             CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
-            unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+            unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
             MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget;
…
-        if (trueCaller->inlineCallFrame)
-            callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+        if (trueCaller->inlineCallFrame())
+            callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);

         void* targetAddress = jumpTarget.executableAddress();
…
         frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
 #if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
         frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
         if (!inlineCallFrame->isClosureCall)
…
     if (codeOrigin) {
 #if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
 #else
         const Instruction* instruction = outermostBaselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex).ptr();
…
-    if (exit.m_codeOrigin.inlineCallFrame)
-        context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+    auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
+    if (exitInlineCallFrame)
+        context.fp() = context.fp<uint8_t*>() + exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue);

     void* jumpTarget = exitState->jumpTarget;
…
     CodeBlock* alternative = codeBlock->alternative();
     ExitKind kind = exit.m_kind;
-    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();

     dataLog("Speculation failure in ", *codeBlock);
…
     debugInfo->codeBlock = jit.codeBlock();
     debugInfo->kind = exit.m_kind;
-    debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+    debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();

     jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
…
     CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
-    if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
+    if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
 #if USE(JSVALUE64)
         GPRReg usedRegister;
trunk/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp
r234086 r243232
…
  if (sourceProfiledCodeBlock) {
  ExitingInlineKind inlineKind;
- if (m_codeOriginForExitProfile.inlineCallFrame)
+ if (m_codeOriginForExitProfile.inlineCallFrame())
  inlineKind = ExitFromInlined;
  else
…
  site = FrequentExitSite(HoistingFailed, jitType, inlineKind);
  else
- site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex, m_kind, jitType, inlineKind);
+ site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex(), m_kind, jitType, inlineKind);
  ExitProfile::add(sourceProfiledCodeBlock, site);
  }
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r241222 r243232
…
  AssemblyHelpers::JumpList loopThreshold;

- for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
  loopThreshold.append(
  jit.branchTest8(
…
  const CodeOrigin* codeOrigin;
- for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
  CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(*codeOrigin);
  InlineCallFrame::Kind trueCallerCallKind;
…
  } else {
  CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
- unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+ unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
  void* jumpTarget = nullptr;
…
  }

- if (trueCaller->inlineCallFrame) {
+ if (trueCaller->inlineCallFrame()) {
  jit.addPtr(
- AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue)),
+ AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue)),
  GPRInfo::callFrameRegister,
  GPRInfo::regT3);
…
  #if USE(JSVALUE64)
  jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
  jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
  if (!inlineCallFrame->isClosureCall)
…
  if (codeOrigin) {
  #if USE(JSVALUE64)
- uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+ uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
  #else
  const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(codeOrigin->bytecodeIndex).ptr();
…
  }

- if (exit.m_codeOrigin.inlineCallFrame)
- jit.addPtr(AssemblyHelpers::TrustedImm32(exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);
+ auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
+ if (exitInlineCallFrame)
+ jit.addPtr(AssemblyHelpers::TrustedImm32(exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);

  CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
  ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
  ASSERT(codeBlockForExit->jitType() == JITCode::BaselineJIT);
- CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex);
+ CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
  ASSERT(codeLocation);
trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp
r241582 r243232
…
  DeferGC deferGC(vm.heap);

- for (; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->directCaller) {
- CodeBlock* codeBlock = codeOrigin.inlineCallFrame->baselineCodeBlock.get();
+ for (; codeOrigin.inlineCallFrame(); codeOrigin = codeOrigin.inlineCallFrame()->directCaller) {
+ CodeBlock* codeBlock = codeOrigin.inlineCallFrame()->baselineCodeBlock.get();
  JITWorklist::ensureGlobalWorklist().compileNow(codeBlock);
  }
trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
r240447 r243232
…
  forEachEscapee([&] (HashMap<Node*, Allocation>& escapees, Node* where) {
  for (Node* allocation : escapees.keys()) {
- InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame();
  if (!inlineCallFrame)
  continue;
- if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame)
+ if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame())
  m_sinkCandidates.remove(allocation);
  }
trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp
r243081 r243232
…
  CodeOrigin origin = exec->codeOrigin();
+ auto* inlineCallFrame = origin.inlineCallFrame();
  bool strictMode;
- if (origin.inlineCallFrame)
- strictMode = origin.inlineCallFrame->baselineCodeBlock->isStrictMode();
+ if (inlineCallFrame)
+ strictMode = inlineCallFrame->baselineCodeBlock->isStrictMode();
  else
  strictMode = exec->codeBlock()->isStrictMode();
…
  bool didTryToEnterIntoInlinedLoops = false;
- for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+ for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
  if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
  didTryToEnterIntoInlinedLoops = true;
trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
r234178 r243232
…
  return;
  }
- InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip();
  readFrame(inlineCallFrame, numberOfArgumentsToSkip);
…
  InlineCallFrame* inlineCallFrame;
  if (m_node->hasArgumentsChild() && m_node->argumentsChild())
- inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->origin.semantic.inlineCallFrame();

  unsigned numberOfArgumentsToSkip = 0;
…
  case GetArgument: {
- InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
  unsigned indexIncludingThis = m_node->argumentIndex();
  if (!inlineCallFrame) {
…
  // Read all of the inline arguments and call frame headers that we didn't already capture.
- for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
+ for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
  if (!inlineCallFrame->isStrictMode()) {
  for (unsigned i = inlineCallFrame->argumentsWithFixup.size(); i--;)
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
r242955 r243232
…
  void SpeculativeJIT::emitGetLength(CodeOrigin origin, GPRReg lengthGPR, bool includeThis)
  {
- emitGetLength(origin.inlineCallFrame, lengthGPR, includeThis);
+ emitGetLength(origin.inlineCallFrame(), lengthGPR, includeThis);
  }

  void SpeculativeJIT::emitGetCallee(CodeOrigin origin, GPRReg calleeGPR)
  {
- if (origin.inlineCallFrame) {
- if (origin.inlineCallFrame->isClosureCall) {
+ auto* inlineCallFrame = origin.inlineCallFrame();
+ if (inlineCallFrame) {
+ if (inlineCallFrame->isClosureCall) {
  m_jit.loadPtr(
- JITCompiler::addressFor(origin.inlineCallFrame->calleeRecovery.virtualRegister()),
+ JITCompiler::addressFor(inlineCallFrame->calleeRecovery.virtualRegister()),
  calleeGPR);
  } else {
  m_jit.move(
- TrustedImmPtr::weakPointer(m_jit.graph(), origin.inlineCallFrame->calleeRecovery.constant().asCell()),
+ TrustedImmPtr::weakPointer(m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
  calleeGPR);
  }
…
  "SpeculativeJIT generating Node @%d (bc#%u) at JIT offset 0x%x",
  (int)m_currentNode->index(),
- m_currentNode->origin.semantic.bytecodeIndex, m_jit.debugOffset());
+ m_currentNode->origin.semantic.bytecodeIndex(), m_jit.debugOffset());
  dataLog("\n");
  }
…
  CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  JITAddIC* addIC = m_jit.codeBlock()->addJITAddIC(arithProfile, instruction);
  auto repatchingFunction = operationValueAddOptimize;
…
  CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  JITSubIC* subIC = m_jit.codeBlock()->addJITSubIC(arithProfile, instruction);
  auto repatchingFunction = operationValueSubOptimize;
…
  {
  CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  JITNegIC* negIC = m_jit.codeBlock()->addJITNegIC(arithProfile, instruction);
  auto repatchingFunction = operationArithNegateOptimize;
…
  CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  JITMulIC* mulIC = m_jit.codeBlock()->addJITMulIC(arithProfile, instruction);
  auto repatchingFunction = operationValueMulOptimize;
…
  InlineCallFrame* inlineCallFrame;
  if (node->child1())
- inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();

  GPRTemporary length(this);
…
  unsigned knownLength;
  bool lengthIsKnown; // if false, lengthGPR will have the length.
- if (node->origin.semantic.inlineCallFrame
- && !node->origin.semantic.inlineCallFrame->isVarargs()) {
- knownLength = node->origin.semantic.inlineCallFrame->argumentCountIncludingThis - 1;
+ auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
+ if (inlineCallFrame
+ && !inlineCallFrame->isVarargs()) {
+ knownLength = inlineCallFrame->argumentCountIncludingThis - 1;
  lengthIsKnown = true;
  } else {
…
  addSlowPathGenerator(WTFMove(generator));
  }

- if (node->origin.semantic.inlineCallFrame) {
- if (node->origin.semantic.inlineCallFrame->isClosureCall) {
+ if (inlineCallFrame) {
+ if (inlineCallFrame->isClosureCall) {
  m_jit.loadPtr(
  JITCompiler::addressFor(
- node->origin.semantic.inlineCallFrame->calleeRecovery.virtualRegister()),
+ inlineCallFrame->calleeRecovery.virtualRegister()),
  scratch1GPR);
  } else {
  m_jit.move(
  TrustedImmPtr::weakPointer(
- m_jit.graph(), node->origin.semantic.inlineCallFrame->calleeRecovery.constant().asCell()),
+ m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
  scratch1GPR);
  }
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r242715 r243232
…
  InlineCallFrame* inlineCallFrame;
  if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
  // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
  emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
…
  CodeOrigin staticOrigin = node->origin.semantic;
- ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
- ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
+ InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
+ ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
+ ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
  CodeOrigin dynamicOrigin =
- isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
+ isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
  CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r242715 r243232
…
  InlineCallFrame* inlineCallFrame;
  if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();
  // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
  emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
…
  CodeOrigin staticOrigin = node->origin.semantic;
- ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
- ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
+ InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
+ ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
+ ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
  CodeOrigin dynamicOrigin =
- isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
+ isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;

  CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
…
  Vector<SilentRegisterSavePlan> savePlans;
  silentSpillAllRegistersImpl(false, savePlans, InvalidGPRReg);
- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();

  addSlowPathGeneratorLambda([=]() {
…
  case CheckTierUpAndOSREnter: {
- ASSERT(!node->origin.semantic.inlineCallFrame);
+ ASSERT(!node->origin.semantic.inlineCallFrame());

  GPRTemporary temp(this);
  GPRReg tempGPR = temp.gpr();

- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
  auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
  DFG_ASSERT(m_jit.graph(), node, triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
trunk/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
r234178 r243232
…
  insertionSet.insertNode(nodeIndex + 1, SpecNone, tierUpType, origin);

- unsigned bytecodeIndex = origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = origin.semantic.bytecodeIndex();
  if (canOSREnter)
  m_graph.m_plan.tierUpAndOSREnterBytecodes().append(bytecodeIndex);
…
  NodeOrigin origin = node->origin;
- if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame)
+ if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame())
  return false;
…
  if (const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block)) {
- unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
+ unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
  naturalLoopsToLoopHint.add(loop, bytecodeIndex);
  }
trunk/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
r242192 r243232
…
  if (SpecCellCheck & SpecEmpty) {
  VirtualRegister local = node->variableAccessData()->local();
- auto* inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
  if ((local - (inlineCallFrame ? inlineCallFrame->stackOffset : 0)) == virtualRegisterForArgument(0)) {
  // |this| can be the TDZ value. The call entrypoint won't have |this| as TDZ,
trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
r233630 r243232
…
  };

- if (codeOrigin.inlineCallFrame)
- numVariables = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame)->numCalleeLocals() + VirtualRegister(codeOrigin.inlineCallFrame->stackOffset).toLocal() + 1;
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (inlineCallFrame)
+ numVariables = baselineCodeBlockForInlineCallFrame(inlineCallFrame)->numCalleeLocals() + VirtualRegister(inlineCallFrame->stackOffset).toLocal() + 1;
  else
  numVariables = baselineCodeBlock->numCalleeLocals();
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r242955 r243232
…
  CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  auto repatchingFunction = operationValueAddOptimize;
  auto nonRepatchingFunction = operationValueAdd;
…
  CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  auto repatchingFunction = operationValueSubOptimize;
  auto nonRepatchingFunction = operationValueSub;
…
  CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  auto repatchingFunction = operationValueMulOptimize;
  auto nonRepatchingFunction = operationValueMul;
…
  CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  auto repatchingFunction = operationValueSubOptimize;
  auto nonRepatchingFunction = operationValueSub;
…
  DFG_ASSERT(m_graph, m_node, m_node->child1().useKind() == UntypedUse);
  CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
- ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
- const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
+ unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
+ ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
+ const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
  auto repatchingFunction = operationArithNegateOptimize;
  auto nonRepatchingFunction = operationArithNegate;
…
  void compileGetMyArgumentByVal()
  {
- InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();

  LValue originalIndex = lowInt32(m_node->child2());
…
  if (use->op() == PhantomSpread) {
  if (use->child1()->op() == PhantomCreateRest) {
- InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
  LValue spreadLength = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
…
  } else {
  RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
…
  LBasicBlock lastNext = m_out.insertNewBlocksBefore(loopHeader);

- InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = m_node->child1()->numberOfArgumentsToSkip();
  LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
…
  RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
  LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
…
  RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();

  unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
…
  InlineCallFrame* inlineCallFrame;
  if (node->child3())
- inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = node->origin.semantic.inlineCallFrame();

  // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
…
  InlineCallFrame* inlineCallFrame;
  if (m_node->child1())
- inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
  else
- inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+ inlineCallFrame = m_node->origin.semantic.inlineCallFrame();

  LValue length = nullptr;
…
  ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
  unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
  spreadLengths.append(cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
…
  RELEASE_ASSERT(target->op() == PhantomCreateRest);
- InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();

  LValue sourceStart = this->getArgumentsStart(inlineCallFrame, target->numberOfArgumentsToSkip());
…
  ArgumentsLength getArgumentsLength()
  {
- return getArgumentsLength(m_node->origin.semantic.inlineCallFrame);
+ return getArgumentsLength(m_node->origin.semantic.inlineCallFrame());
  }

  LValue getCurrentCallee()
  {
- if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame) {
+ if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame()) {
  if (frame->isClosureCall)
  return m_out.loadPtr(addressFor(frame->calleeRecovery.virtualRegister()));
…
  LValue getArgumentsStart()
  {
- return getArgumentsStart(m_node->origin.semantic.inlineCallFrame);
+ return getArgumentsStart(m_node->origin.semantic.inlineCallFrame());
  }
…
  // and jaz is inlined in baz. We want the callframe for jaz to appear to
  // have caller be bar.
- codeOrigin = *codeOrigin.inlineCallFrame->getCallerSkippingTailCalls();
+ codeOrigin = *codeOrigin.inlineCallFrame()->getCallerSkippingTailCalls();
trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
r241927 r243232
…
  if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
  CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
- if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
+ if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
  jit.load32(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureIDOffset()), GPRInfo::regT1);
  jit.store32(GPRInfo::regT1, arrayProfile->addressOfLastSeenStructureID());
trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp
r240740 r243232
…
  case PhantomDirectArguments:
  case PhantomClonedArguments: {
- if (!materialization->origin().inlineCallFrame) {
+ if (!materialization->origin().inlineCallFrame()) {
  switch (materialization->type()) {
  case PhantomDirectArguments:
…
  // First figure out the argument count. If there isn't one then we represent the machine frame.
  unsigned argumentCount = 0;
- if (materialization->origin().inlineCallFrame->isVarargs()) {
+ if (materialization->origin().inlineCallFrame()->isVarargs()) {
  for (unsigned i = materialization->properties().size(); i--;) {
  const ExitPropertyValue& property = materialization->properties()[i];
…
  }
  } else
- argumentCount = materialization->origin().inlineCallFrame->argumentCountIncludingThis;
+ argumentCount = materialization->origin().inlineCallFrame()->argumentCountIncludingThis;
  RELEASE_ASSERT(argumentCount);

  JSFunction* callee = nullptr;
- if (materialization->origin().inlineCallFrame->isClosureCall) {
+ if (materialization->origin().inlineCallFrame()->isClosureCall) {
  for (unsigned i = materialization->properties().size(); i--;) {
  const ExitPropertyValue& property = materialization->properties()[i];
…
  }
  } else
- callee = materialization->origin().inlineCallFrame->calleeConstant();
+ callee = materialization->origin().inlineCallFrame()->calleeConstant();
  RELEASE_ASSERT(callee);
…
  // and PhantomNewArrayBuffer are always bound to a specific op_new_array_buffer.
  CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(materialization->origin(), exec->codeBlock()->baselineAlternative());
- const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex).ptr();
+ const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex()).ptr();
  if (!currentInstruction->is<OpNewArrayBuffer>()) {
  // This case can happen if Object.keys, an OpCall is first converted into a NewArrayBuffer which is then converted into a PhantomNewArrayBuffer.
trunk/Source/JavaScriptCore/interpreter/CallFrame.cpp
r241222 r243232
…
  ASSERT(codeBlock());
  CodeOrigin codeOrigin = this->codeOrigin();
- for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame; inlineCallFrame;) {
+ for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame(); inlineCallFrame;) {
  codeOrigin = inlineCallFrame->directCaller;
- inlineCallFrame = codeOrigin.inlineCallFrame;
+ inlineCallFrame = codeOrigin.inlineCallFrame();
  }
- return codeOrigin.bytecodeIndex;
+ return codeOrigin.bytecodeIndex();
  }
  #endif
trunk/Source/JavaScriptCore/interpreter/StackVisitor.cpp
r241222 r243232
…
  if (m_frame.isInlinedFrame()) {
  CodeOrigin codeOrigin = m_frame.inlineCallFrame()->directCaller;
- while (codeOrigin.inlineCallFrame)
- codeOrigin = codeOrigin.inlineCallFrame->directCaller;
+ while (codeOrigin.inlineCallFrame())
+ codeOrigin = codeOrigin.inlineCallFrame()->directCaller;
  readNonInlinedFrame(m_frame.callFrame(), &codeOrigin);
  }
…
  CodeOrigin codeOrigin = codeBlock->codeOrigin(index);
- if (!codeOrigin.inlineCallFrame) {
+ if (!codeOrigin.inlineCallFrame()) {
  readNonInlinedFrame(callFrame, &codeOrigin);
  return;
…
  m_frame.m_codeBlock = callFrame->codeBlock();
  m_frame.m_bytecodeOffset = !m_frame.codeBlock() ? 0
- : codeOrigin ? codeOrigin->bytecodeIndex
+ : codeOrigin ? codeOrigin->bytecodeIndex()
  : callFrame->bytecodeOffset();
…
  static int inlinedFrameOffset(CodeOrigin* codeOrigin)
  {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
  int frameOffset = inlineCallFrame ? inlineCallFrame->stackOffset : 0;
  return frameOffset;
…
  bool isInlined = !!frameOffset;
  if (isInlined) {
- InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+ InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();

  m_frame.m_callFrame = callFrame;
…
  m_frame.m_argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
  m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock.get();
- m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex;
+ m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex();

  JSFunction* callee = inlineCallFrame->calleeForCallFrame(callFrame);
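The StackVisitor changes keep the same caller walk, with each hop now reading directCaller off the getter's result. A self-contained variant of the earlier sketch types illustrates the loop; directCaller here is a hypothetical stand-in for the real InlineCallFrame member:

    struct InlineCallFrame;

    class CodeOrigin {
    public:
        CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* frame = nullptr)
            : m_bytecodeIndex(bytecodeIndex), m_inlineCallFrame(frame) { }
        unsigned bytecodeIndex() const { return m_bytecodeIndex; }
        InlineCallFrame* inlineCallFrame() const { return m_inlineCallFrame; }
    private:
        unsigned m_bytecodeIndex;
        InlineCallFrame* m_inlineCallFrame;
    };

    struct InlineCallFrame {
        CodeOrigin directCaller { 0, nullptr };
    };

    // Hop outward until no inlined frame remains: the result is the
    // CodeOrigin of the physical (machine) frame, mirroring the loop in
    // the first StackVisitor hunk above.
    CodeOrigin walkToMachineFrame(CodeOrigin codeOrigin)
    {
        while (InlineCallFrame* frame = codeOrigin.inlineCallFrame())
            codeOrigin = frame->directCaller;
        return codeOrigin;
    }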
trunk/Source/JavaScriptCore/interpreter/StackVisitor.h
r241222 r243232
…
  namespace JSC {

- struct CodeOrigin;
  struct EntryFrame;
  struct InlineCallFrame;

  class CodeBlock;
+ class CodeOrigin;
  class ExecState;
  class JSCell;
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r242252 r243232
…
  ExecutableBase* AssemblyHelpers::executableFor(const CodeOrigin& codeOrigin)
  {
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
  return m_codeBlock->ownerExecutable();

- return codeOrigin.inlineCallFrame->baselineCodeBlock->ownerExecutable();
+ return inlineCallFrame->baselineCodeBlock->ownerExecutable();
  }
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r242252 r243232
…
  bool isStrictModeFor(CodeOrigin codeOrigin)
  {
- if (!codeOrigin.inlineCallFrame)
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ if (!inlineCallFrame)
  return codeBlock()->isStrictMode();
- return codeOrigin.inlineCallFrame->isStrictMode();
+ return inlineCallFrame->isStrictMode();
  }
…
  static VirtualRegister argumentsStart(const CodeOrigin& codeOrigin)
  {
- return argumentsStart(codeOrigin.inlineCallFrame);
+ return argumentsStart(codeOrigin.inlineCallFrame());
  }
…
  static VirtualRegister argumentCount(const CodeOrigin& codeOrigin)
  {
- return argumentCount(codeOrigin.inlineCallFrame);
+ return argumentCount(codeOrigin.inlineCallFrame());
  }
trunk/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp
r242776 r243232
…
  CodeOrigin lastCodeOrigin(0, nullptr);
  auto buildCodeOriginTable = [&] (const CodeOrigin& codeOrigin) {
- intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex);
+ intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex()) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex());
  lastCodeOrigin = codeOrigin;
  if (delta > std::numeric_limits<int8_t>::max() || delta < std::numeric_limits<int8_t>::min() || delta == sentinelBytecodeDelta) {
…
  codeOriginCompressor.write<int8_t>(static_cast<int8_t>(delta));

- int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame ? 1 : 0;
+ int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame() ? 1 : 0;
  codeOriginCompressor.write<int8_t>(hasInlineCallFrameByte);
  if (hasInlineCallFrameByte)
- codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame));
+ codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame()));
  };
…
  uintptr_t currentPC = 0;
- CodeOrigin currentCodeOrigin(0, nullptr);
+ unsigned currentBytecodeIndex = 0;
+ InlineCallFrame* currentInlineCallFrame = nullptr;

  DeltaCompresseionReader pcReader(m_compressedPCs, m_compressedPCBufferSize);
…
  }

- CodeOrigin previousOrigin = currentCodeOrigin;
+ CodeOrigin previousOrigin = CodeOrigin(currentBytecodeIndex, currentInlineCallFrame);
  {
  int8_t value = codeOriginReader.read<int8_t>();
…
  delta = static_cast<intptr_t>(value);

- currentCodeOrigin.bytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentCodeOrigin.bytecodeIndex) + delta);
+ currentBytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentBytecodeIndex) + delta);

  int8_t hasInlineFrame = codeOriginReader.read<int8_t>();
  ASSERT(hasInlineFrame == 0 || hasInlineFrame == 1);
  if (hasInlineFrame)
- currentCodeOrigin.inlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
+ currentInlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
  else
- currentCodeOrigin.inlineCallFrame = nullptr;
+ currentInlineCallFrame = nullptr;
  }
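With the getters in place, a CodeOrigin can no longer be updated field by field, which is why the decoder above now carries currentBytecodeIndex and currentInlineCallFrame as plain locals and only materializes a CodeOrigin when one is needed. A hedged sketch of that decode-loop shape, reusing the sketch CodeOrigin/InlineCallFrame from earlier; Entry and decode() are hypothetical stand-ins for the reader state in the real file:

    #include <vector>

    // Hypothetical decoded record: a bytecode-index delta plus the frame
    // pointer recorded for this PC range (nullptr when not inlined).
    struct Entry {
        int delta { 0 };
        InlineCallFrame* frame { nullptr };
    };

    void decode(const std::vector<Entry>& entries)
    {
        unsigned currentBytecodeIndex = 0;
        InlineCallFrame* currentInlineCallFrame = nullptr;
        for (const Entry& entry : entries) {
            // The running state lives in plain locals; a CodeOrigin is only
            // materialized per record, never mutated field by field.
            currentBytecodeIndex = static_cast<unsigned>(static_cast<int>(currentBytecodeIndex) + entry.delta);
            currentInlineCallFrame = entry.frame;
            CodeOrigin origin(currentBytecodeIndex, currentInlineCallFrame);
            (void)origin; // e.g., insert into the PC-to-CodeOrigin table here
        }
    }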
trunk/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp
r208968 r243232
…
  Vector<CodeOrigin> stack = codeOrigin.inlineStack();

- append(Origin(database, codeBlock, stack[0].bytecodeIndex));
+ append(Origin(database, codeBlock, stack[0].bytecodeIndex()));

  for (unsigned i = 1; i < stack.size(); ++i) {
  append(Origin(
- database.ensureBytecodesFor(stack[i].inlineCallFrame->baselineCodeBlock.get()),
- stack[i].bytecodeIndex));
+ database.ensureBytecodesFor(stack[i].inlineCallFrame()->baselineCodeBlock.get()),
+ stack[i].bytecodeIndex()));
  }
  }
trunk/Source/JavaScriptCore/profiler/ProfilerOriginStack.h
r218794 r243232
…
  class CodeBlock;
- struct CodeOrigin;
+ class CodeOrigin;

  namespace Profiler {
trunk/Source/JavaScriptCore/runtime/ErrorInstance.cpp
r241222 r243232
…
  CodeBlock* codeBlock;
  CodeOrigin codeOrigin = callFrame->codeOrigin();
- if (codeOrigin && codeOrigin.inlineCallFrame)
- codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
+ if (codeOrigin && codeOrigin.inlineCallFrame())
+ codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame());
  else
  codeBlock = callFrame->codeBlock();
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
r242812 r243232
…
  origin.walkUpInlineStack([&] (const CodeOrigin& codeOrigin) {
  machineOrigin = codeOrigin;
- appendCodeBlock(codeOrigin.inlineCallFrame ? codeOrigin.inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex);
+ auto* inlineCallFrame = codeOrigin.inlineCallFrame();
+ appendCodeBlock(inlineCallFrame ? inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex());
  });

  if (Options::collectSamplingProfilerDataForJSCShell()) {
  RELEASE_ASSERT(machineOrigin.isSet());
- RELEASE_ASSERT(!machineOrigin.inlineCallFrame);
+ RELEASE_ASSERT(!machineOrigin.inlineCallFrame());

  StackFrame::CodeLocation machineLocation = stackTrace.frames.last().semanticLocation;
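The SamplingProfiler hunk shows the intended consumer pattern end to end: walk the inline stack and, at each level, pick the inlined frame's baseline code block when the getter returns a frame, or fall back to the machine code block. A hedged usage sketch follows; walkUpInlineStack, baselineCodeBlock, and the two getters are taken from the diff, while CodeBlock, SampleSink, and record are hypothetical stand-ins for the profiler's bookkeeping:

    // Attribute one sample to every level of an inline stack.
    template<typename SampleSink>
    void attributeSample(const CodeOrigin& origin, CodeBlock* machineCodeBlock, SampleSink&& record)
    {
        origin.walkUpInlineStack([&](const CodeOrigin& codeOrigin) {
            auto* inlineCallFrame = codeOrigin.inlineCallFrame();
            CodeBlock* block = inlineCallFrame
                ? inlineCallFrame->baselineCodeBlock.get()
                : machineCodeBlock;
            record(block, codeOrigin.bytecodeIndex());
        });
    }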