Changeset 243232 in webkit


Timestamp:
Mar 20, 2019 1:24:36 PM
Author:
rmorisset@apple.com
Message:

Compress CodeOrigin into a single word in the common case
https://bugs.webkit.org/show_bug.cgi?id=195928

Reviewed by Saam Barati.

The trick is that pointers only take 48 bits on x86_64 in practice (and we can even use the bottom three bits of that thanks to alignment), and even less on ARM64.
So we can shove the bytecode index in the top bits almost all the time.
If the bytecodeIndex is too ginormous (1<<16 in practice on x86_64), we just set one bit at the bottom and store a pointer to some out-of-line storage instead.
Finally we represent an invalid bytecodeIndex (which used to be represented by UINT_MAX) by setting the second least significant bit.
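
For illustration, here is a minimal standalone sketch of the packing scheme described above, using the x86_64 constants that appear in the CodeOrigin.h change below (16 free bits at the top, pointer mask 0x0000fffffffffff8). The names are illustrative and the out-of-line fallback is elided, so this is only a sketch, not the actual JSC implementation:

    #include <cassert>
    #include <climits>
    #include <cstdint>

    struct InlineCallFrame; // opaque here; 8-byte aligned in practice, so the low 3 bits are free

    constexpr unsigned invalidBytecodeIndex = UINT_MAX;
    constexpr uintptr_t maskIsOutOfLine = 1;            // bit 0: composite points to out-of-line storage
    constexpr uintptr_t maskIsBytecodeIndexInvalid = 2; // bit 1: bytecode index is invalid (unset or deleted)
    constexpr unsigned freeBitsAtTop = 16;              // x86_64: pointers fit in the low 48 bits
    constexpr uintptr_t maskPointer = 0x0000fffffffffff8;

    inline uintptr_t encode(InlineCallFrame* frame, unsigned bytecodeIndex)
    {
        if (bytecodeIndex == invalidBytecodeIndex)
            return reinterpret_cast<uintptr_t>(frame) | maskIsBytecodeIndexInvalid;
        // The patch falls back to a heap-allocated OutOfLineCodeOrigin tagged with
        // maskIsOutOfLine when the index does not fit; that path is elided here.
        assert(bytecodeIndex < (1u << freeBitsAtTop));
        return (static_cast<uintptr_t>(bytecodeIndex) << (64 - freeBitsAtTop))
            | reinterpret_cast<uintptr_t>(frame);
    }

    inline unsigned decodeBytecodeIndex(uintptr_t composite)
    {
        assert(!(composite & maskIsOutOfLine));
        if (composite & maskIsBytecodeIndexInvalid)
            return invalidBytecodeIndex;
        return static_cast<unsigned>(composite >> (64 - freeBitsAtTop));
    }

    inline InlineCallFrame* decodeInlineCallFrame(uintptr_t composite)
    {
        assert(!(composite & maskIsOutOfLine));
        return reinterpret_cast<InlineCallFrame*>(composite & maskPointer);
    }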

The patch looks very long, but most of it is just replacing direct accesses to inlineCallFrame and bytecodeIndex by the relevant getters.

End result: CodeOrigin in the common case moves from 16 bytes (8 for InlineCallFrame*, 4 for unsigned bytecodeIndex, 4 of padding) to 8.
As a reference, while running JetStream2 we allocate more than 35M CodeOrigins. While they won't all be alive at the same time, it is still quite a lot of objects, so I am hoping for some small improvement to RAMification from this work.
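
To make the size arithmetic concrete, a hypothetical standalone check for a 64-bit platform (struct and member names are illustrative, not the patch's):

    #include <cstdint>

    struct OldStyleCodeOrigin { void* inlineCallFrame; unsigned bytecodeIndex; }; // 8 + 4 + 4 bytes of padding
    struct NewStyleCodeOrigin { uintptr_t compositeValue; };                      // one word

    static_assert(sizeof(OldStyleCodeOrigin) == 16, "pointer + index + alignment padding");
    static_assert(sizeof(NewStyleCodeOrigin) == 8, "a single word in the common case");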

The one slightly tricky part is that we must implement copy and move assignment operators and constructors to make sure that any out-of-line storage belongs to a single CodeOrigin and is destroyed exactly once.
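
The ownership rule, condensed from the CodeOrigin.h hunk below (ADDRESS64 configuration only; UNLIKELY hints and the copy/move constructors omitted): a copy rebuilds the composite value, allocating fresh out-of-line storage if needed, while a move steals the value and zeroes the source.

    CodeOrigin& CodeOrigin::operator=(const CodeOrigin& other)
    {
        if (this != &other) {
            if (isOutOfLine())
                delete outOfLineCodeOrigin();  // release our own out-of-line storage first
            if (other.isOutOfLine())           // never share the allocation: rebuild a fresh composite
                m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
            else
                m_compositeValue = other.m_compositeValue;
        }
        return *this;
    }

    CodeOrigin& CodeOrigin::operator=(CodeOrigin&& other)
    {
        if (this != &other) {
            if (isOutOfLine())
                delete outOfLineCodeOrigin();
            // Steal the composite value; other no longer owns any out-of-line storage.
            m_compositeValue = std::exchange(other.m_compositeValue, 0);
        }
        return *this;
    }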

  • bytecode/ByValInfo.h:
  • bytecode/CallLinkStatus.cpp:

(JSC::CallLinkStatus::computeFor):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::globalObjectFor):
(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
(JSC::CodeBlock::bytecodeOffsetFromCallSiteIndex):

  • bytecode/CodeOrigin.cpp:

(JSC::CodeOrigin::inlineDepth const):
(JSC::CodeOrigin::isApproximatelyEqualTo const):
(JSC::CodeOrigin::approximateHash const):
(JSC::CodeOrigin::inlineStack const):
(JSC::CodeOrigin::codeOriginOwner const):
(JSC::CodeOrigin::stackOffset const):
(JSC::CodeOrigin::dump const):
(JSC::CodeOrigin::inlineDepthForCallFrame): Deleted.

  • bytecode/CodeOrigin.h:

(JSC::OutOfLineCodeOrigin::OutOfLineCodeOrigin):
(JSC::CodeOrigin::CodeOrigin):
(JSC::CodeOrigin::~CodeOrigin):
(JSC::CodeOrigin::isSet const):
(JSC::CodeOrigin::isHashTableDeletedValue const):
(JSC::CodeOrigin::bytecodeIndex const):
(JSC::CodeOrigin::inlineCallFrame const):
(JSC::CodeOrigin::buildCompositeValue):
(JSC::CodeOrigin::hash const):
(JSC::CodeOrigin::operator== const):
(JSC::CodeOrigin::exitingInlineKind const): Deleted.

  • bytecode/DeferredSourceDump.h:
  • bytecode/GetByIdStatus.cpp:

(JSC::GetByIdStatus::computeForStubInfo):
(JSC::GetByIdStatus::computeFor):

  • bytecode/ICStatusMap.cpp:

(JSC::ICStatusContext::isInlined const):

  • bytecode/InByIdStatus.cpp:

(JSC::InByIdStatus::computeFor):
(JSC::InByIdStatus::computeForStubInfo):

  • bytecode/InlineCallFrame.cpp:

(JSC::InlineCallFrame::dumpInContext const):

  • bytecode/InlineCallFrame.h:

(JSC::InlineCallFrame::computeCallerSkippingTailCalls):
(JSC::InlineCallFrame::getCallerInlineFrameSkippingTailCalls):
(JSC::baselineCodeBlockForOriginAndBaselineCodeBlock):
(JSC::CodeOrigin::walkUpInlineStack):

  • bytecode/InstanceOfStatus.h:
  • bytecode/PutByIdStatus.cpp:

(JSC::PutByIdStatus::computeForStubInfo):
(JSC::PutByIdStatus::computeFor):

  • dfg/DFGAbstractInterpreterInlines.h:

(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):

  • dfg/DFGArgumentsEliminationPhase.cpp:
  • dfg/DFGArgumentsUtilities.cpp:

(JSC::DFG::argumentsInvolveStackSlot):
(JSC::DFG::emitCodeToGetArgumentsArrayLength):

  • dfg/DFGArrayMode.h:
  • dfg/DFGByteCodeParser.cpp:

(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::setLocal):
(JSC::DFG::ByteCodeParser::setArgument):
(JSC::DFG::ByteCodeParser::flushForTerminalImpl):
(JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::parseCodeBlock):
(JSC::DFG::ByteCodeParser::handlePutByVal):

  • dfg/DFGClobberize.h:

(JSC::DFG::clobberize):

  • dfg/DFGConstantFoldingPhase.cpp:

(JSC::DFG::ConstantFoldingPhase::foldConstants):

  • dfg/DFGFixupPhase.cpp:

(JSC::DFG::FixupPhase::attemptToMakeGetArrayLength):

  • dfg/DFGForAllKills.h:

(JSC::DFG::forAllKilledOperands):

  • dfg/DFGGraph.cpp:

(JSC::DFG::Graph::dumpCodeOrigin):
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
(JSC::DFG::Graph::willCatchExceptionInMachineFrame):

  • dfg/DFGGraph.h:

(JSC::DFG::Graph::executableFor):
(JSC::DFG::Graph::isStrictModeFor):
(JSC::DFG::Graph::hasExitSite):
(JSC::DFG::Graph::forAllLocalsLiveInBytecode):

  • dfg/DFGLiveCatchVariablePreservationPhase.cpp:

(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):

  • dfg/DFGMinifiedNode.cpp:

(JSC::DFG::MinifiedNode::fromNode):

  • dfg/DFGOSRAvailabilityAnalysisPhase.cpp:

(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::printOSRExit):
(JSC::DFG::OSRExit::compileExit):

  • dfg/DFGOSRExitBase.cpp:

(JSC::DFG::OSRExitBase::considerAddingAsFrequentExitSiteSlow):

  • dfg/DFGOSRExitCompilerCommon.cpp:

(JSC::DFG::handleExitCounts):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):

  • dfg/DFGOSRExitPreparation.cpp:

(JSC::DFG::prepareCodeOriginForOSRExit):

  • dfg/DFGObjectAllocationSinkingPhase.cpp:
  • dfg/DFGOperations.cpp:
  • dfg/DFGPreciseLocalClobberize.h:

(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):

  • dfg/DFGSpeculativeJIT.cpp:

(JSC::DFG::SpeculativeJIT::emitGetLength):
(JSC::DFG::SpeculativeJIT::emitGetCallee):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::compileValueAdd):
(JSC::DFG::SpeculativeJIT::compileValueSub):
(JSC::DFG::SpeculativeJIT::compileValueNegate):
(JSC::DFG::SpeculativeJIT::compileValueMul):
(JSC::DFG::SpeculativeJIT::compileForwardVarargs):
(JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):

  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::emitCall):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGTierUpCheckInjectionPhase.cpp:

(JSC::DFG::TierUpCheckInjectionPhase::run):
(JSC::DFG::TierUpCheckInjectionPhase::canOSREnterAtLoopHint):
(JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):

  • dfg/DFGTypeCheckHoistingPhase.cpp:

(JSC::DFG::TypeCheckHoistingPhase::run):

  • dfg/DFGVariableEventStream.cpp:

(JSC::DFG::VariableEventStream::reconstruct const):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
(JSC::FTL::DFG::LowerDFGToB3::compileValueSub):
(JSC::FTL::DFG::LowerDFGToB3::compileValueMul):
(JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
(JSC::FTL::DFG::LowerDFGToB3::compileValueNegate):
(JSC::FTL::DFG::LowerDFGToB3::compileGetMyArgumentByVal):
(JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
(JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsStart):
(JSC::FTL::DFG::LowerDFGToB3::codeOriginDescriptionOfCallSite const):

  • ftl/FTLOSRExitCompiler.cpp:

(JSC::FTL::compileStub):

  • ftl/FTLOperations.cpp:

(JSC::FTL::operationMaterializeObjectInOSR):

  • interpreter/CallFrame.cpp:

(JSC::CallFrame::bytecodeOffset):

  • interpreter/StackVisitor.cpp:

(JSC::StackVisitor::unwindToMachineCodeBlockFrame):
(JSC::StackVisitor::readFrame):
(JSC::StackVisitor::readNonInlinedFrame):
(JSC::inlinedFrameOffset):
(JSC::StackVisitor::readInlinedFrame):

  • interpreter/StackVisitor.h:
  • jit/AssemblyHelpers.cpp:

(JSC::AssemblyHelpers::executableFor):

  • jit/AssemblyHelpers.h:

(JSC::AssemblyHelpers::isStrictModeFor):
(JSC::AssemblyHelpers::argumentsStart):
(JSC::AssemblyHelpers::argumentCount):

  • jit/PCToCodeOriginMap.cpp:

(JSC::PCToCodeOriginMap::PCToCodeOriginMap):
(JSC::PCToCodeOriginMap::findPC const):

  • profiler/ProfilerOriginStack.cpp:

(JSC::Profiler::OriginStack::OriginStack):

  • profiler/ProfilerOriginStack.h:
  • runtime/ErrorInstance.cpp:

(JSC::appendSourceToError):

  • runtime/SamplingProfiler.cpp:

(JSC::SamplingProfiler::processUnverifiedStackTraces):

Location:
trunk/Source/JavaScriptCore
Files:
54 edited

  • trunk/Source/JavaScriptCore/ChangeLog

    r243207 r243232  
     12019-03-20  Robin Morisset  <rmorisset@apple.com>
     2
     3        Compress CodeOrigin into a single word in the common case
     4        https://bugs.webkit.org/show_bug.cgi?id=195928
     5
     6        Reviewed by Saam Barati.
     7
     8        The trick is that pointers only take 48 bits on x86_64 in practice (and we can even use the bottom three bits of that thanks to alignment), and even less on ARM64.
     9        So we can shove the bytecode index in the top bits almost all the time.
     10        If the bytecodeIndex is too ginormous (1<<16 in practice on x86_64), we just set one bit at the bottom and store a pointer to some out-of-line storage instead.
      11        Finally we represent an invalid bytecodeIndex (which used to be represented by UINT_MAX) by setting the second least significant bit.
     12
     13        The patch looks very long, but most of it is just replacing direct accesses to inlineCallFrame and bytecodeIndex by the relevant getters.
     14
     15        End result: CodeOrigin in the common case moves from 16 bytes (8 for InlineCallFrame*, 4 for unsigned bytecodeIndex, 4 of padding) to 8.
      16        As a reference, while running JetStream2 we allocate more than 35M CodeOrigins. While they won't all be alive at the same time, it is still quite a lot of objects, so I am hoping for some small
     17        improvement to RAMification from this work.
     18
     19        The one slightly tricky part is that we must implement copy and move assignment operators and constructors to make sure that any out-of-line storage belongs to a single CodeOrigin and is destroyed exactly once.
     20
     21        * bytecode/ByValInfo.h:
     22        * bytecode/CallLinkStatus.cpp:
     23        (JSC::CallLinkStatus::computeFor):
     24        * bytecode/CodeBlock.cpp:
     25        (JSC::CodeBlock::globalObjectFor):
     26        (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
     27        (JSC::CodeBlock::bytecodeOffsetFromCallSiteIndex):
     28        * bytecode/CodeOrigin.cpp:
     29        (JSC::CodeOrigin::inlineDepth const):
     30        (JSC::CodeOrigin::isApproximatelyEqualTo const):
     31        (JSC::CodeOrigin::approximateHash const):
     32        (JSC::CodeOrigin::inlineStack const):
     33        (JSC::CodeOrigin::codeOriginOwner const):
     34        (JSC::CodeOrigin::stackOffset const):
     35        (JSC::CodeOrigin::dump const):
     36        (JSC::CodeOrigin::inlineDepthForCallFrame): Deleted.
     37        * bytecode/CodeOrigin.h:
     38        (JSC::OutOfLineCodeOrigin::OutOfLineCodeOrigin):
     39        (JSC::CodeOrigin::CodeOrigin):
     40        (JSC::CodeOrigin::~CodeOrigin):
     41        (JSC::CodeOrigin::isSet const):
     42        (JSC::CodeOrigin::isHashTableDeletedValue const):
     43        (JSC::CodeOrigin::bytecodeIndex const):
     44        (JSC::CodeOrigin::inlineCallFrame const):
     45        (JSC::CodeOrigin::buildCompositeValue):
     46        (JSC::CodeOrigin::hash const):
     47        (JSC::CodeOrigin::operator== const):
     48        (JSC::CodeOrigin::exitingInlineKind const): Deleted.
     49        * bytecode/DeferredSourceDump.h:
     50        * bytecode/GetByIdStatus.cpp:
     51        (JSC::GetByIdStatus::computeForStubInfo):
     52        (JSC::GetByIdStatus::computeFor):
     53        * bytecode/ICStatusMap.cpp:
     54        (JSC::ICStatusContext::isInlined const):
     55        * bytecode/InByIdStatus.cpp:
     56        (JSC::InByIdStatus::computeFor):
     57        (JSC::InByIdStatus::computeForStubInfo):
     58        * bytecode/InlineCallFrame.cpp:
     59        (JSC::InlineCallFrame::dumpInContext const):
     60        * bytecode/InlineCallFrame.h:
     61        (JSC::InlineCallFrame::computeCallerSkippingTailCalls):
     62        (JSC::InlineCallFrame::getCallerInlineFrameSkippingTailCalls):
     63        (JSC::baselineCodeBlockForOriginAndBaselineCodeBlock):
     64        (JSC::CodeOrigin::walkUpInlineStack):
     65        * bytecode/InstanceOfStatus.h:
     66        * bytecode/PutByIdStatus.cpp:
     67        (JSC::PutByIdStatus::computeForStubInfo):
     68        (JSC::PutByIdStatus::computeFor):
     69        * dfg/DFGAbstractInterpreterInlines.h:
     70        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
     71        * dfg/DFGArgumentsEliminationPhase.cpp:
     72        * dfg/DFGArgumentsUtilities.cpp:
     73        (JSC::DFG::argumentsInvolveStackSlot):
     74        (JSC::DFG::emitCodeToGetArgumentsArrayLength):
     75        * dfg/DFGArrayMode.h:
     76        * dfg/DFGByteCodeParser.cpp:
     77        (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
     78        (JSC::DFG::ByteCodeParser::setLocal):
     79        (JSC::DFG::ByteCodeParser::setArgument):
     80        (JSC::DFG::ByteCodeParser::flushForTerminalImpl):
     81        (JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
     82        (JSC::DFG::ByteCodeParser::parseBlock):
     83        (JSC::DFG::ByteCodeParser::parseCodeBlock):
     84        (JSC::DFG::ByteCodeParser::handlePutByVal):
     85        * dfg/DFGClobberize.h:
     86        (JSC::DFG::clobberize):
     87        * dfg/DFGConstantFoldingPhase.cpp:
     88        (JSC::DFG::ConstantFoldingPhase::foldConstants):
     89        * dfg/DFGFixupPhase.cpp:
     90        (JSC::DFG::FixupPhase::attemptToMakeGetArrayLength):
     91        * dfg/DFGForAllKills.h:
     92        (JSC::DFG::forAllKilledOperands):
     93        * dfg/DFGGraph.cpp:
     94        (JSC::DFG::Graph::dumpCodeOrigin):
     95        (JSC::DFG::Graph::dump):
     96        (JSC::DFG::Graph::isLiveInBytecode):
     97        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
     98        (JSC::DFG::Graph::willCatchExceptionInMachineFrame):
     99        * dfg/DFGGraph.h:
     100        (JSC::DFG::Graph::executableFor):
     101        (JSC::DFG::Graph::isStrictModeFor):
     102        (JSC::DFG::Graph::hasExitSite):
     103        (JSC::DFG::Graph::forAllLocalsLiveInBytecode):
     104        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
     105        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
     106        * dfg/DFGMinifiedNode.cpp:
     107        (JSC::DFG::MinifiedNode::fromNode):
     108        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
     109        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
     110        * dfg/DFGOSRExit.cpp:
     111        (JSC::DFG::OSRExit::executeOSRExit):
     112        (JSC::DFG::reifyInlinedCallFrames):
     113        (JSC::DFG::adjustAndJumpToTarget):
     114        (JSC::DFG::printOSRExit):
     115        (JSC::DFG::OSRExit::compileExit):
     116        * dfg/DFGOSRExitBase.cpp:
     117        (JSC::DFG::OSRExitBase::considerAddingAsFrequentExitSiteSlow):
     118        * dfg/DFGOSRExitCompilerCommon.cpp:
     119        (JSC::DFG::handleExitCounts):
     120        (JSC::DFG::reifyInlinedCallFrames):
     121        (JSC::DFG::adjustAndJumpToTarget):
     122        * dfg/DFGOSRExitPreparation.cpp:
     123        (JSC::DFG::prepareCodeOriginForOSRExit):
     124        * dfg/DFGObjectAllocationSinkingPhase.cpp:
     125        * dfg/DFGOperations.cpp:
     126        * dfg/DFGPreciseLocalClobberize.h:
     127        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
     128        * dfg/DFGSpeculativeJIT.cpp:
     129        (JSC::DFG::SpeculativeJIT::emitGetLength):
     130        (JSC::DFG::SpeculativeJIT::emitGetCallee):
     131        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
     132        (JSC::DFG::SpeculativeJIT::compileValueAdd):
     133        (JSC::DFG::SpeculativeJIT::compileValueSub):
     134        (JSC::DFG::SpeculativeJIT::compileValueNegate):
     135        (JSC::DFG::SpeculativeJIT::compileValueMul):
     136        (JSC::DFG::SpeculativeJIT::compileForwardVarargs):
     137        (JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
     138        * dfg/DFGSpeculativeJIT32_64.cpp:
     139        (JSC::DFG::SpeculativeJIT::emitCall):
     140        * dfg/DFGSpeculativeJIT64.cpp:
     141        (JSC::DFG::SpeculativeJIT::emitCall):
     142        (JSC::DFG::SpeculativeJIT::compile):
     143        * dfg/DFGTierUpCheckInjectionPhase.cpp:
     144        (JSC::DFG::TierUpCheckInjectionPhase::run):
     145        (JSC::DFG::TierUpCheckInjectionPhase::canOSREnterAtLoopHint):
     146        (JSC::DFG::TierUpCheckInjectionPhase::buildNaturalLoopToLoopHintMap):
     147        * dfg/DFGTypeCheckHoistingPhase.cpp:
     148        (JSC::DFG::TypeCheckHoistingPhase::run):
     149        * dfg/DFGVariableEventStream.cpp:
     150        (JSC::DFG::VariableEventStream::reconstruct const):
     151        * ftl/FTLLowerDFGToB3.cpp:
     152        (JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
     153        (JSC::FTL::DFG::LowerDFGToB3::compileValueSub):
     154        (JSC::FTL::DFG::LowerDFGToB3::compileValueMul):
     155        (JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
     156        (JSC::FTL::DFG::LowerDFGToB3::compileValueNegate):
     157        (JSC::FTL::DFG::LowerDFGToB3::compileGetMyArgumentByVal):
     158        (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
     159        (JSC::FTL::DFG::LowerDFGToB3::compileSpread):
     160        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
     161        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
     162        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
     163        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
     164        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
     165        (JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
     166        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsStart):
     167        (JSC::FTL::DFG::LowerDFGToB3::codeOriginDescriptionOfCallSite const):
     168        * ftl/FTLOSRExitCompiler.cpp:
     169        (JSC::FTL::compileStub):
     170        * ftl/FTLOperations.cpp:
     171        (JSC::FTL::operationMaterializeObjectInOSR):
     172        * interpreter/CallFrame.cpp:
     173        (JSC::CallFrame::bytecodeOffset):
     174        * interpreter/StackVisitor.cpp:
     175        (JSC::StackVisitor::unwindToMachineCodeBlockFrame):
     176        (JSC::StackVisitor::readFrame):
     177        (JSC::StackVisitor::readNonInlinedFrame):
     178        (JSC::inlinedFrameOffset):
     179        (JSC::StackVisitor::readInlinedFrame):
     180        * interpreter/StackVisitor.h:
     181        * jit/AssemblyHelpers.cpp:
     182        (JSC::AssemblyHelpers::executableFor):
     183        * jit/AssemblyHelpers.h:
     184        (JSC::AssemblyHelpers::isStrictModeFor):
     185        (JSC::AssemblyHelpers::argumentsStart):
     186        (JSC::AssemblyHelpers::argumentCount):
     187        * jit/PCToCodeOriginMap.cpp:
     188        (JSC::PCToCodeOriginMap::PCToCodeOriginMap):
     189        (JSC::PCToCodeOriginMap::findPC const):
     190        * profiler/ProfilerOriginStack.cpp:
     191        (JSC::Profiler::OriginStack::OriginStack):
     192        * profiler/ProfilerOriginStack.h:
     193        * runtime/ErrorInstance.cpp:
     194        (JSC::appendSourceToError):
     195        * runtime/SamplingProfiler.cpp:
     196        (JSC::SamplingProfiler::processUnverifiedStackTraces):
     197
    11982019-03-20  Devin Rousso  <drousso@apple.com>
    2199
  • trunk/Source/JavaScriptCore/bytecode/ByValInfo.h

    r236587 r243232  
    2828#include "ClassInfo.h"
    2929#include "CodeLocation.h"
    30 #include "CodeOrigin.h"
    3130#include "IndexingType.h"
    3231#include "JITStubRoutine.h"
  • trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp

    r240041 r243232  
    303303    if (CallLinkStatusInternal::verbose)
    304304        dataLog("Figuring out call profiling for ", codeOrigin, "\n");
    305     ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
     305    ExitSiteData exitSiteData = computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex());
    306306    if (CallLinkStatusInternal::verbose) {
    307307        dataLog("takesSlowPath = ", exitSiteData.takesSlowPath, "\n");
     
    347347        auto bless = [&] (CallLinkStatus& result) {
    348348            if (!context->isInlined(codeOrigin))
    349                 result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData));
     349                result.merge(computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData));
    350350        };
    351351       
     
    394394    }
    395395   
    396     return computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap, exitSiteData);
     396    return computeFor(profiledBlock, codeOrigin.bytecodeIndex(), baselineMap, exitSiteData);
    397397}
    398398#endif
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r242928 r243232  
    20572057JSGlobalObject* CodeBlock::globalObjectFor(CodeOrigin codeOrigin)
    20582058{
    2059     if (!codeOrigin.inlineCallFrame)
     2059    auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     2060    if (!inlineCallFrame)
    20602061        return globalObject();
    2061     return codeOrigin.inlineCallFrame->baselineCodeBlock->globalObject();
     2062    return inlineCallFrame->baselineCodeBlock->globalObject();
    20622063}
    20632064
     
    24042405
    24052406    bool didTryToEnterInLoop = false;
    2406     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
     2407    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
    24072408        if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
    24082409            didTryToEnterInLoop = true;
     
    31763177        RELEASE_ASSERT(canGetCodeOrigin(callSiteIndex));
    31773178        CodeOrigin origin = codeOrigin(callSiteIndex);
    3178         bytecodeOffset = origin.bytecodeIndex;
     3179        bytecodeOffset = origin.bytecodeIndex();
    31793180#else
    31803181        RELEASE_ASSERT_NOT_REACHED();
  • trunk/Source/JavaScriptCore/bytecode/CodeOrigin.cpp

    r234086 r243232  
    3434namespace JSC {
    3535
    36 unsigned CodeOrigin::inlineDepthForCallFrame(InlineCallFrame* inlineCallFrame)
     36unsigned CodeOrigin::inlineDepth() const
    3737{
    3838    unsigned result = 1;
    39     for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
     39    for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
    4040        result++;
    4141    return result;
    42 }
    43 
    44 unsigned CodeOrigin::inlineDepth() const
    45 {
    46     return inlineDepthForCallFrame(inlineCallFrame);
    4742}
    4843
     
    6661        ASSERT(b.isSet());
    6762       
    68         if (a.bytecodeIndex != b.bytecodeIndex)
     63        if (a.bytecodeIndex() != b.bytecodeIndex())
    6964            return false;
    70        
    71         bool aHasInlineCallFrame = !!a.inlineCallFrame && a.inlineCallFrame != terminal;
    72         bool bHasInlineCallFrame = !!b.inlineCallFrame;
     65
     66        auto* aInlineCallFrame = a.inlineCallFrame();
     67        auto* bInlineCallFrame = b.inlineCallFrame();
     68        bool aHasInlineCallFrame = !!aInlineCallFrame && aInlineCallFrame != terminal;
     69        bool bHasInlineCallFrame = !!bInlineCallFrame;
    7370        if (aHasInlineCallFrame != bHasInlineCallFrame)
    7471            return false;
     
    7774            return true;
    7875       
    79         if (a.inlineCallFrame->baselineCodeBlock.get() != b.inlineCallFrame->baselineCodeBlock.get())
     76        if (aInlineCallFrame->baselineCodeBlock.get() != bInlineCallFrame->baselineCodeBlock.get())
    8077            return false;
    8178       
    82         a = a.inlineCallFrame->directCaller;
    83         b = b.inlineCallFrame->directCaller;
     79        a = aInlineCallFrame->directCaller;
     80        b = bInlineCallFrame->directCaller;
    8481    }
    8582}
     
    9592    CodeOrigin codeOrigin = *this;
    9693    for (;;) {
    97         result += codeOrigin.bytecodeIndex;
    98        
    99         if (!codeOrigin.inlineCallFrame)
     94        result += codeOrigin.bytecodeIndex();
     95
     96        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     97
     98        if (!inlineCallFrame)
    10099            return result;
    101100       
    102         if (codeOrigin.inlineCallFrame == terminal)
     101        if (inlineCallFrame == terminal)
    103102            return result;
    104103       
    105         result += WTF::PtrHash<JSCell*>::hash(codeOrigin.inlineCallFrame->baselineCodeBlock.get());
     104        result += WTF::PtrHash<JSCell*>::hash(inlineCallFrame->baselineCodeBlock.get());
    106105       
    107         codeOrigin = codeOrigin.inlineCallFrame->directCaller;
     106        codeOrigin = inlineCallFrame->directCaller;
    108107    }
    109108}
     
    114113    result.last() = *this;
    115114    unsigned index = result.size() - 2;
    116     for (InlineCallFrame* current = inlineCallFrame; current; current = current->directCaller.inlineCallFrame)
     115    for (InlineCallFrame* current = inlineCallFrame(); current; current = current->directCaller.inlineCallFrame())
    117116        result[index--] = current->directCaller;
    118     RELEASE_ASSERT(!result[0].inlineCallFrame);
     117    RELEASE_ASSERT(!result[0].inlineCallFrame());
    119118    return result;
    120119}
     
    122121CodeBlock* CodeOrigin::codeOriginOwner() const
    123122{
     123    auto* inlineCallFrame = this->inlineCallFrame();
    124124    if (!inlineCallFrame)
    125         return 0;
     125        return nullptr;
    126126    return inlineCallFrame->baselineCodeBlock.get();
    127127}
     
    129129int CodeOrigin::stackOffset() const
    130130{
     131    auto* inlineCallFrame = this->inlineCallFrame();
    131132    if (!inlineCallFrame)
    132133        return 0;
    133    
    134134    return inlineCallFrame->stackOffset;
    135135}
     
    147147            out.print(" --> ");
    148148       
    149         if (InlineCallFrame* frame = stack[i].inlineCallFrame) {
     149        if (InlineCallFrame* frame = stack[i].inlineCallFrame()) {
    150150            out.print(frame->briefFunctionInformation(), ":<", RawPointer(frame->baselineCodeBlock.get()), "> ");
    151151            if (frame->isClosureCall)
     
    153153        }
    154154       
    155         out.print("bc#", stack[i].bytecodeIndex);
     155        out.print("bc#", stack[i].bytecodeIndex());
    156156    }
    157157}
  • trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h

    r234086 r243232  
    2626#pragma once
    2727
    28 #include "CodeBlockHash.h"
    29 #include "ExitingInlineKind.h"
    3028#include <limits.h>
    3129#include <wtf/HashMap.h>
     
    4038struct InlineCallFrame;
    4139
    42 struct CodeOrigin {
    43     static const unsigned invalidBytecodeIndex = UINT_MAX;
    44    
    45     // Bytecode offset that you'd use to re-execute this instruction, and the
    46     // bytecode index of the bytecode instruction that produces some result that
    47     // you're interested in (used for mapping Nodes whose values you're using
    48     // to bytecode instructions that have the appropriate value profile).
    49     unsigned bytecodeIndex;
    50    
    51     InlineCallFrame* inlineCallFrame;
    52    
     40class CodeOrigin {
     41public:
    5342    CodeOrigin()
    54         : bytecodeIndex(invalidBytecodeIndex)
    55         , inlineCallFrame(0)
     43#if CPU(ADDRESS64)
     44        : m_compositeValue(buildCompositeValue(nullptr, s_invalidBytecodeIndex))
     45#else
     46        : m_bytecodeIndex(s_invalidBytecodeIndex)
     47        , m_inlineCallFrame(nullptr)
     48#endif
    5649    {
    5750    }
    5851   
    5952    CodeOrigin(WTF::HashTableDeletedValueType)
    60         : bytecodeIndex(invalidBytecodeIndex)
    61         , inlineCallFrame(deletedMarker())
    62     {
    63     }
    64    
    65     explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = 0)
    66         : bytecodeIndex(bytecodeIndex)
    67         , inlineCallFrame(inlineCallFrame)
    68     {
    69         ASSERT(bytecodeIndex < invalidBytecodeIndex);
    70     }
    71    
    72     bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
     53#if CPU(ADDRESS64)
     54        : m_compositeValue(buildCompositeValue(deletedMarker(), s_invalidBytecodeIndex))
     55#else
     56        : m_bytecodeIndex(s_invalidBytecodeIndex)
     57        , m_inlineCallFrame(deletedMarker())
     58#endif
     59    {
     60    }
     61   
     62    explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = nullptr)
     63#if CPU(ADDRESS64)
     64        : m_compositeValue(buildCompositeValue(inlineCallFrame, bytecodeIndex))
     65#else
     66        : m_bytecodeIndex(bytecodeIndex)
     67        , m_inlineCallFrame(inlineCallFrame)
     68#endif
     69    {
     70        ASSERT(bytecodeIndex < s_invalidBytecodeIndex);
     71#if CPU(ADDRESS64)
     72        ASSERT(!(bitwise_cast<uintptr_t>(inlineCallFrame) & ~s_maskCompositeValueForPointer));
     73#endif
     74    }
     75   
     76#if CPU(ADDRESS64)
     77    CodeOrigin& operator=(const CodeOrigin& other)
     78    {
     79        if (this != &other) {
     80            if (UNLIKELY(isOutOfLine()))
     81                delete outOfLineCodeOrigin();
     82           
     83            if (UNLIKELY(other.isOutOfLine()))
     84                m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
     85            else
     86                m_compositeValue = other.m_compositeValue;
     87        }
     88        return *this;
     89    }
     90    CodeOrigin& operator=(CodeOrigin&& other)
     91    {
     92        if (this != &other) {
     93            if (UNLIKELY(isOutOfLine()))
     94                delete outOfLineCodeOrigin();
     95
     96            m_compositeValue = std::exchange(other.m_compositeValue, 0);
     97        }
     98        return *this;
     99    }
     100
     101    CodeOrigin(const CodeOrigin& other)
     102    {
     103        // We don't use the member initializer list because it would not let us optimize the common case where there is no out-of-line storage
     104        // (in which case we don't have to extract the components of the composite value just to reassemble it).
     105        if (UNLIKELY(other.isOutOfLine()))
     106            m_compositeValue = buildCompositeValue(other.inlineCallFrame(), other.bytecodeIndex());
     107        else
     108            m_compositeValue = other.m_compositeValue;
     109    }
     110    CodeOrigin(CodeOrigin&& other)
     111        : m_compositeValue(std::exchange(other.m_compositeValue, 0))
     112    {
     113    }
     114
     115    ~CodeOrigin()
     116    {
     117        if (UNLIKELY(isOutOfLine()))
     118            delete outOfLineCodeOrigin();
     119    }
     120#endif
     121   
     122    bool isSet() const
     123    {
     124#if CPU(ADDRESS64)
     125        return !(m_compositeValue & s_maskIsBytecodeIndexInvalid);
     126#else
     127        return m_bytecodeIndex != s_invalidBytecodeIndex;
     128#endif
     129    }
    73130    explicit operator bool() const { return isSet(); }
    74131   
    75132    bool isHashTableDeletedValue() const
    76133    {
    77         return bytecodeIndex == invalidBytecodeIndex && !!inlineCallFrame;
     134#if CPU(ADDRESS64)
     135        return !isSet() && (m_compositeValue & s_maskCompositeValueForPointer);
     136#else
     137        return m_bytecodeIndex == s_invalidBytecodeIndex && !!m_inlineCallFrame;
     138#endif
    78139    }
    79140   
     
    88149    int stackOffset() const;
    89150   
    90     static unsigned inlineDepthForCallFrame(InlineCallFrame*);
    91    
    92     ExitingInlineKind exitingInlineKind() const
    93     {
    94         return inlineCallFrame ? ExitFromInlined : ExitFromNotInlined;
    95     }
    96    
    97151    unsigned hash() const;
    98152    bool operator==(const CodeOrigin& other) const;
     
    113167    JS_EXPORT_PRIVATE void dump(PrintStream&) const;
    114168    void dumpInContext(PrintStream&, DumpContext*) const;
    115    
     169
     170    unsigned bytecodeIndex() const
     171    {
     172#if CPU(ADDRESS64)
     173        if (!isSet())
     174            return s_invalidBytecodeIndex;
     175        if (UNLIKELY(isOutOfLine()))
     176            return outOfLineCodeOrigin()->bytecodeIndex;
     177        return m_compositeValue >> (64 - s_freeBitsAtTop);
     178#else
     179        return m_bytecodeIndex;
     180#endif
     181    }
     182
     183    InlineCallFrame* inlineCallFrame() const
     184    {
     185#if CPU(ADDRESS64)
     186        if (UNLIKELY(isOutOfLine()))
     187            return outOfLineCodeOrigin()->inlineCallFrame;
     188        return bitwise_cast<InlineCallFrame*>(m_compositeValue & s_maskCompositeValueForPointer);
     189#else
     190        return m_inlineCallFrame;
     191#endif
     192    }
     193
    116194private:
     195    static constexpr unsigned s_invalidBytecodeIndex = UINT_MAX;
     196
     197#if CPU(ADDRESS64)
     198    static constexpr uintptr_t s_maskIsOutOfLine = 1;
     199    static constexpr uintptr_t s_maskIsBytecodeIndexInvalid = 2;
     200
     201    struct OutOfLineCodeOrigin {
     202        WTF_MAKE_FAST_ALLOCATED;
     203    public:
     204        InlineCallFrame* inlineCallFrame;
     205        unsigned bytecodeIndex;
     206       
     207        OutOfLineCodeOrigin(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
     208            : inlineCallFrame(inlineCallFrame)
     209            , bytecodeIndex(bytecodeIndex)
     210        {
     211        }
     212    };
     213   
     214    bool isOutOfLine() const
     215    {
     216        return m_compositeValue & s_maskIsOutOfLine;
     217    }
     218    OutOfLineCodeOrigin* outOfLineCodeOrigin() const
     219    {
     220        ASSERT(isOutOfLine());
     221        return bitwise_cast<OutOfLineCodeOrigin*>(m_compositeValue & s_maskCompositeValueForPointer);
     222    }
     223#endif
     224
    117225    static InlineCallFrame* deletedMarker()
    118226    {
    119         return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(1));
    120     }
     227        auto value = static_cast<uintptr_t>(1 << 3);
     228#if CPU(ADDRESS64)
     229        ASSERT(value & s_maskCompositeValueForPointer);
     230        ASSERT(!(value & ~s_maskCompositeValueForPointer));
     231#endif
     232        return bitwise_cast<InlineCallFrame*>(value);
     233    }
     234
     235#if CPU(X86_64) && CPU(ADDRESS64)
     236    static constexpr unsigned s_freeBitsAtTop = 16;
     237    static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000fffffffffff8;
     238#elif CPU(ARM64) && CPU(ADDRESS64)
     239    static constexpr unsigned s_freeBitsAtTop = 28;
     240    static constexpr uintptr_t s_maskCompositeValueForPointer = 0x0000000ffffffff8;
     241#endif
     242#if CPU(ADDRESS64)
     243    static uintptr_t buildCompositeValue(InlineCallFrame* inlineCallFrame, unsigned bytecodeIndex)
     244    {
     245        if (bytecodeIndex == s_invalidBytecodeIndex)
     246            return bitwise_cast<uintptr_t>(inlineCallFrame) | s_maskIsBytecodeIndexInvalid;
     247
     248        if (UNLIKELY(bytecodeIndex >= 1 << s_freeBitsAtTop)) {
     249            auto* outOfLine = new OutOfLineCodeOrigin(inlineCallFrame, bytecodeIndex);
     250            return bitwise_cast<uintptr_t>(outOfLine) | s_maskIsOutOfLine;
     251        }
     252
     253        uintptr_t encodedBytecodeIndex = static_cast<uintptr_t>(bytecodeIndex) << (64 - s_freeBitsAtTop);
     254        ASSERT(!(encodedBytecodeIndex & bitwise_cast<uintptr_t>(inlineCallFrame)));
     255        return encodedBytecodeIndex | bitwise_cast<uintptr_t>(inlineCallFrame);
     256    }
     257
     258    // The bottom bit indicates whether to look at an out-of-line implementation (because of a bytecode index which is too big for us to store).
     259    // The next bit indicates whether this is an invalid bytecode (which depending on the InlineCallFrame* can either indicate an unset CodeOrigin,
     260    // or a deletion marker for a hash table).
     261    // The next bit is free
     262    // The next 64-s_freeBitsAtTop-3 are the InlineCallFrame* or the OutOfLineCodeOrigin*
     263    // Finally the last s_freeBitsAtTop are the bytecodeIndex if it is inline
     264    uintptr_t m_compositeValue;
     265#else
     266    unsigned m_bytecodeIndex;
     267    InlineCallFrame* m_inlineCallFrame;
     268#endif
    121269};
    122270
    123271inline unsigned CodeOrigin::hash() const
    124272{
    125     return WTF::IntHash<unsigned>::hash(bytecodeIndex) +
    126         WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame);
     273    return WTF::IntHash<unsigned>::hash(bytecodeIndex()) +
     274        WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame());
    127275}
    128276
    129277inline bool CodeOrigin::operator==(const CodeOrigin& other) const
    130278{
    131     return bytecodeIndex == other.bytecodeIndex
    132         && inlineCallFrame == other.inlineCallFrame;
     279#if CPU(ADDRESS64)
     280    if (m_compositeValue == other.m_compositeValue)
     281        return true;
     282#endif
     283    return bytecodeIndex() == other.bytecodeIndex()
     284        && inlineCallFrame() == other.inlineCallFrame();
    133285}
    134286
  • trunk/Source/JavaScriptCore/bytecode/DeferredSourceDump.h

    r235684 r243232  
    2626#pragma once
    2727
    28 #include "CodeOrigin.h"
    2928#include "JITCode.h"
    3029#include "Strong.h"
  • trunk/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp

    r242659 r243232  
    131131GetByIdStatus GetByIdStatus::computeForStubInfo(const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, CodeOrigin codeOrigin, UniquedStringImpl* uid)
    132132{
     133    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
    133134    GetByIdStatus result = GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
    134135        locker, profiledBlock, stubInfo, uid,
    135         CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex));
    136 
    137     if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
     136        CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex));
     137
     138    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, bytecodeIndex))
    138139        return result.slowVersion();
    139140    return result;
     
    303304    ICStatusContextStack& icContextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
    304305{
    305     CallLinkStatus::ExitSiteData callExitSiteData =
    306         CallLinkStatus::computeExitSiteData(profiledBlock, codeOrigin.bytecodeIndex);
    307     ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
     306    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
     307    CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex);
     308    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);
    308309   
    309310    for (ICStatusContext* context : icContextStack) {
     
    315316                // inlined and not-inlined.
    316317                GetByIdStatus baselineResult = computeFor(
    317                     profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
     318                    profiledBlock, baselineMap, bytecodeIndex, uid, didExit,
    318319                    callExitSiteData);
    319320                baselineResult.merge(result);
     
    340341    }
    341342   
    342     return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
     343    return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
    343344}
    344345
  • trunk/Source/JavaScriptCore/bytecode/ICStatusMap.cpp

    r234086 r243232  
    4040bool ICStatusContext::isInlined(CodeOrigin codeOrigin) const
    4141{
    42     return codeOrigin.inlineCallFrame && codeOrigin.inlineCallFrame != inlineCallFrame;
     42    auto* originInlineCallFrame = codeOrigin.inlineCallFrame();
     43    return originInlineCallFrame && originInlineCallFrame != inlineCallFrame;
    4344}
    4445
  • trunk/Source/JavaScriptCore/bytecode/InByIdStatus.cpp

    r242109 r243232  
    7474    ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
    7575{
    76     ExitFlag didExit = hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex);
     76    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
     77    ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);
    7778   
    7879    for (ICStatusContext* context : contextStack) {
     
    8283            if (!context->isInlined(codeOrigin)) {
    8384                InByIdStatus baselineResult = computeFor(
    84                     profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
     85                    profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
    8586                baselineResult.merge(result);
    8687                return baselineResult;
     
    107108    }
    108109   
    109     return computeFor(profiledBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit);
     110    return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit);
    110111}
    111112#endif // ENABLE(JIT)
     
    116117    InByIdStatus result = InByIdStatus::computeForStubInfoWithoutExitSiteFeedback(locker, stubInfo, uid);
    117118
    118     if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex))
     119    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, codeOrigin.bytecodeIndex()))
    119120        return InByIdStatus(TakesSlowPath);
    120121    return result;
  • trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.cpp

    r221528 r243232  
    7070    if (isStrictMode())
    7171        out.print(" (StrictMode)");
    72     out.print(", bc#", directCaller.bytecodeIndex, ", ", static_cast<Kind>(kind));
     72    out.print(", bc#", directCaller.bytecodeIndex(), ", ", static_cast<Kind>(kind));
    7373    if (isClosureCall)
    7474        out.print(", closure call");
  • trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h

    r236495 r243232  
    153153            callKind = inlineCallFrame->kind;
    154154            codeOrigin = &inlineCallFrame->directCaller;
    155             inlineCallFrame = codeOrigin->inlineCallFrame;
     155            inlineCallFrame = codeOrigin->inlineCallFrame();
    156156        } while (inlineCallFrame && tailCallee);
    157157
     
    173173    {
    174174        CodeOrigin* caller = getCallerSkippingTailCalls();
    175         return caller ? caller->inlineCallFrame : nullptr;
     175        return caller ? caller->inlineCallFrame() : nullptr;
    176176    }
    177177   
     
    242242{
    243243    ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
    244     if (codeOrigin.inlineCallFrame)
    245         return baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
     244    auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     245    if (inlineCallFrame)
     246        return baselineCodeBlockForInlineCallFrame(inlineCallFrame);
    246247    return baselineCodeBlock;
    247248}
    248249
     250// This function is defined here and not in CodeOrigin because it needs access to the directCaller field in InlineCallFrame
    249251template <typename Function>
    250252inline void CodeOrigin::walkUpInlineStack(const Function& function)
     
    253255    while (true) {
    254256        function(codeOrigin);
    255         if (!codeOrigin.inlineCallFrame)
     257        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     258        if (!inlineCallFrame)
    256259            break;
    257         codeOrigin = codeOrigin.inlineCallFrame->directCaller;
     260        codeOrigin = inlineCallFrame->directCaller;
    258261    }
    259262}
  • trunk/Source/JavaScriptCore/bytecode/InstanceOfStatus.h

    r234086 r243232  
    2626#pragma once
    2727
    28 #include "CodeOrigin.h"
    2928#include "ConcurrentJSLock.h"
    3029#include "ICStatusMap.h"
  • trunk/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp

    r240138 r243232  
    123123    return computeForStubInfo(
    124124        locker, baselineBlock, stubInfo, uid,
    125         CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex));
     125        CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex()));
    126126}
    127127
     
    238238PutByIdStatus PutByIdStatus::computeFor(CodeBlock* baselineBlock, ICStatusMap& baselineMap, ICStatusContextStack& contextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
    239239{
    240     CallLinkStatus::ExitSiteData callExitSiteData =
    241         CallLinkStatus::computeExitSiteData(baselineBlock, codeOrigin.bytecodeIndex);
    242     ExitFlag didExit = hasBadCacheExitSite(baselineBlock, codeOrigin.bytecodeIndex);
     240    unsigned bytecodeIndex = codeOrigin.bytecodeIndex();
     241    CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(baselineBlock, bytecodeIndex);
     242    ExitFlag didExit = hasBadCacheExitSite(baselineBlock, bytecodeIndex);
    243243
    244244    for (ICStatusContext* context : contextStack) {
     
    248248            if (!context->isInlined(codeOrigin)) {
    249249                PutByIdStatus baselineResult = computeFor(
    250                     baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit,
     250                    baselineBlock, baselineMap, bytecodeIndex, uid, didExit,
    251251                    callExitSiteData);
    252252                baselineResult.merge(result);
     
    273273    }
    274274   
    275     return computeFor(baselineBlock, baselineMap, codeOrigin.bytecodeIndex, uid, didExit, callExitSiteData);
     275    return computeFor(baselineBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
    276276}
    277277
  • trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h

    r243206 r243232  
    22142214    case GetMyArgumentByValOutOfBounds: {
    22152215        JSValue index = forNode(node->child2()).m_value;
    2216         InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
     2216        InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
    22172217
    22182218        if (index && index.isUInt32()) {
  • trunk/Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp

    r242954 r243232  
    530530                    Operands<bool>& clobberedByThisBlock = clobberedByBlock[block];
    531531                   
    532                     if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame) {
     532                    if (InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame()) {
    533533                        if (inlineCallFrame->isVarargs()) {
    534534                            isClobberedByBlock |= clobberedByThisBlock.operand(
     
    760760                    if (m_graph.varArgChild(node, 1)->isInt32Constant()) {
    761761                        unsigned index = m_graph.varArgChild(node, 1)->asUInt32();
    762                         InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     762                        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    763763                        index += numberOfArgumentsToSkip;
    764764                       
     
    862862
    863863                            ASSERT(candidate->op() == PhantomCreateRest);
    864                             InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     864                            InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    865865                            return inlineCallFrame && !inlineCallFrame->isVarargs();
    866866                        });
     
    888888                                ASSERT(candidate->op() == PhantomCreateRest);
    889889                                unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
    890                                 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     890                                InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    891891                                unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
    892892                                if (frameArgumentCount >= numberOfArgumentsToSkip)
     
    936936                                    ASSERT(candidate->op() == PhantomCreateRest);
    937937                                    unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
    938                                     InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     938                                    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    939939                                    unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
    940940                                    for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
     
    971971                        varargsData->offset += numberOfArgumentsToSkip;
    972972
    973                         InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     973                        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    974974
    975975                        if (inlineCallFrame
     
    11121112
    11131113                            ASSERT(candidate->op() == PhantomCreateRest);
    1114                             InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     1114                            InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    11151115                            return inlineCallFrame && !inlineCallFrame->isVarargs();
    11161116                        });
     
    11511151
    11521152                                ASSERT(candidate->op() == PhantomCreateRest);
    1153                                 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     1153                                InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    11541154                                unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
    11551155                                for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->argumentCountIncludingThis; ++i) {
     
    11761176                        varargsData->firstVarArgOffset += numberOfArgumentsToSkip;
    11771177
    1178                         InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
     1178                        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
    11791179                        if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
    11801180                            Vector<Node*> arguments;
  • trunk/Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp

    r232070 r243232  
    5555bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister reg)
    5656{
    57     return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame, reg);
     57    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), reg);
    5858}
    5959
     
    107107    }
    108108   
    109     InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
     109    InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
    110110
    111111    unsigned numberOfArgumentsToSkip = 0;
  • trunk/Source/JavaScriptCore/dfg/DFGArrayMode.h

    r239951 r243232  
    3333namespace JSC {
    3434
    35 struct CodeOrigin;
     35class CodeOrigin;
    3636
    3737namespace DFG {
  • trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp

    r243176 r243232  
    386386    {
    387387        ASSERT(node->op() == GetLocal);
    388         ASSERT(node->origin.semantic.bytecodeIndex == m_currentIndex);
     388        ASSERT(node->origin.semantic.bytecodeIndex() == m_currentIndex);
    389389        ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
    390390        LazyOperandValueProfileKey key(m_currentIndex, node->local());
     
    443443        VariableAccessData* variableAccessData = newVariableAccessData(operand);
    444444        variableAccessData->mergeStructureCheckHoistingFailed(
    445             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
     445            m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
    446446        variableAccessData->mergeCheckArrayHoistingFailed(
    447             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
     447            m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
    448448        Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
    449449        m_currentBlock->variablesAtTail.local(local) = node;
     
    499499       
    500500        variableAccessData->mergeStructureCheckHoistingFailed(
    501             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadCache));
     501            m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadCache));
    502502        variableAccessData->mergeCheckArrayHoistingFailed(
    503             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex, BadIndexingType));
     503            m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
    504504        Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
    505505        m_currentBlock->variablesAtTail.argument(argument) = node;
     
    564564        origin.walkUpInlineStack(
    565565            [&] (CodeOrigin origin) {
    566                 unsigned bytecodeIndex = origin.bytecodeIndex;
    567                 InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
     566                unsigned bytecodeIndex = origin.bytecodeIndex();
     567                InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
    568568                flushImpl(inlineCallFrame, addFlushDirect);
    569569
     
    869869
    870870            InlineStackEntry* stack = m_inlineStackTop;
    871             while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame)
     871            while (stack->m_inlineCallFrame != codeOrigin->inlineCallFrame())
    872872                stack = stack->m_caller;
    873873
    874             bytecodeIndex = codeOrigin->bytecodeIndex;
     874            bytecodeIndex = codeOrigin->bytecodeIndex();
    875875            CodeBlock* profiledBlock = stack->m_profiledBlock;
    876876            ConcurrentJSLocker locker(profiledBlock->m_lock);
     
    53975397            {
    53985398                ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
    5399                 ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
     5399                ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
    54005400                // FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
    54015401                // At that time, there is no information.
     
    60276027
    60286028                    Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
    6029                     setArgument->origin.forExit.bytecodeIndex = exitBytecodeIndex;
     6029                    setArgument->origin.forExit = CodeOrigin(exitBytecodeIndex, setArgument->origin.forExit.inlineCallFrame());
    60306030                    m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
    60316031                    entrypointArguments[argument] = setArgument;
     
    70897089        Vector<DeferredSourceDump>& deferredSourceDump = m_graph.m_plan.callback()->ensureDeferredSourceDump();
    70907090        if (inlineCallFrame()) {
    7091             DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex);
     7091            DeferredSourceDump dump(codeBlock->baselineVersion(), m_codeBlock, JITCode::DFGJIT, inlineCallFrame()->directCaller.bytecodeIndex());
    70927092            deferredSourceDump.append(dump);
    70937093        } else
     
    71777177        {
    71787178            ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
    7179             ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex)).byValInfo;
     7179            ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
    71807180            // FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
    71817181            // At that time, there is no information.
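One DFGByteCodeParser.cpp hunk above is not a pure accessor substitution: the old code assigned origin.forExit.bytecodeIndex in place, which is no longer possible, so the new code rebuilds the CodeOrigin from its parts. A sketch of that shape, assuming only the bytecode index needs to change (the helper name is hypothetical):

    // Hypothetical helper mirroring the SetArgument hunk above: replace the exit
    // origin's bytecode index while preserving its InlineCallFrame*.
    static void retargetExitOrigin(JSC::DFG::Node* node, unsigned newBytecodeIndex)
    {
        node->origin.forExit = JSC::CodeOrigin(newBytecodeIndex, node->origin.forExit.inlineCallFrame());
    }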
  • trunk/Source/JavaScriptCore/dfg/DFGClobberize.h

    r242715 r243232  
    111111    // to the runtime, and that call may walk stack. Therefore, each node must read() anything that a stack
    112112    // scan would read. That's what this does.
    113     for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
     113    for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
    114114        if (inlineCallFrame->isClosureCall)
    115115            read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::callee));
     
    124124    // a scope which is expected to be flushed to the stack.
    125125    if (graph.hasDebuggerEnabled()) {
    126         ASSERT(!node->origin.semantic.inlineCallFrame);
     126        ASSERT(!node->origin.semantic.inlineCallFrame());
    127127        read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
    128128    }
     
    711711
    712712    case CallEval:
    713         ASSERT(!node->origin.semantic.inlineCallFrame);
     713        ASSERT(!node->origin.semantic.inlineCallFrame());
    714714        read(AbstractHeap(Stack, graph.m_codeBlock->scopeRegister()));
    715715        read(AbstractHeap(Stack, virtualRegisterForArgument(0)));
  • trunk/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp

    r241228 r243232  
    364364                unsigned index = checkedIndex.unsafeGet();
    365365                Node* arguments = node->child1().node();
    366                 InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame;
     366                InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
    367367               
    368368                // Don't try to do anything if the index is known to be outside our static bounds. Note
  • trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp

    r242954 r243232  
    34313431        CodeBlock* profiledBlock = m_graph.baselineCodeBlockFor(node->origin.semantic);
    34323432        ArrayProfile* arrayProfile =
    3433             profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex);
     3433            profiledBlock->getArrayProfile(node->origin.semantic.bytecodeIndex());
    34343434        ArrayMode arrayMode = ArrayMode(Array::SelectUsingPredictions, Array::Read);
    34353435        if (arrayProfile) {
  • trunk/Source/JavaScriptCore/dfg/DFGForAllKills.h

    r209121 r243232  
    6969    // It's easier to do this if the inline call frames are the same. This is way faster than the
    7070    // other loop, below.
    71     if (before.inlineCallFrame == after.inlineCallFrame) {
    72         int stackOffset = before.inlineCallFrame ? before.inlineCallFrame->stackOffset : 0;
    73         CodeBlock* codeBlock = graph.baselineCodeBlockFor(before.inlineCallFrame);
     71    auto* beforeInlineCallFrame = before.inlineCallFrame();
     72    if (beforeInlineCallFrame == after.inlineCallFrame()) {
     73        int stackOffset = beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
     74        CodeBlock* codeBlock = graph.baselineCodeBlockFor(beforeInlineCallFrame);
    7475        FullBytecodeLiveness& fullLiveness = graph.livenessFor(codeBlock);
    75         const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex);
    76         const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex);
     76        const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex());
     77        const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex());
    7778       
    7879        (liveBefore & ~liveAfter).forEachSetBit(
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp

    r242945 r243232  
    121121        return false;
    122122   
    123     if (previousNode->origin.semantic.inlineCallFrame == currentNode->origin.semantic.inlineCallFrame)
     123    if (previousNode->origin.semantic.inlineCallFrame() == currentNode->origin.semantic.inlineCallFrame())
    124124        return false;
    125125   
     
    129129    unsigned indexOfDivergence = commonSize;
    130130    for (unsigned i = 0; i < commonSize; ++i) {
    131         if (previousInlineStack[i].inlineCallFrame != currentInlineStack[i].inlineCallFrame) {
     131        if (previousInlineStack[i].inlineCallFrame() != currentInlineStack[i].inlineCallFrame()) {
    132132            indexOfDivergence = i;
    133133            break;
     
    141141        out.print(prefix);
    142142        printWhiteSpace(out, i * 2);
    143         out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame, context), "\n");
     143        out.print("<-- ", inContext(*previousInlineStack[i].inlineCallFrame(), context), "\n");
    144144        hasPrinted = true;
    145145    }
     
    149149        out.print(prefix);
    150150        printWhiteSpace(out, i * 2);
    151         out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame, context), "\n");
     151        out.print("--> ", inContext(*currentInlineStack[i].inlineCallFrame(), context), "\n");
    152152        hasPrinted = true;
    153153    }
     
    395395        out.print(comma, "ClobbersExit");
    396396    if (node->origin.isSet()) {
    397         out.print(comma, "bc#", node->origin.semantic.bytecodeIndex);
     397        out.print(comma, "bc#", node->origin.semantic.bytecodeIndex());
    398398        if (node->origin.semantic != node->origin.forExit && node->origin.forExit.isSet())
    399399            out.print(comma, "exit: ", node->origin.forExit);
     
    11241124        if (verbose)
    11251125            dataLog("reg = ", reg, "\n");
    1126        
     1126
     1127        auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
    11271128        if (operand.offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
    11281129            if (reg.isArgument()) {
    11291130                RELEASE_ASSERT(reg.offset() < CallFrame::headerSizeInRegisters);
    1130                
    1131                 if (codeOriginPtr->inlineCallFrame->isClosureCall
     1131
     1132
     1133                if (inlineCallFrame->isClosureCall
    11321134                    && reg.offset() == CallFrameSlot::callee) {
    11331135                    if (verbose)
     
    11361138                }
    11371139               
    1138                 if (codeOriginPtr->inlineCallFrame->isVarargs()
     1140                if (inlineCallFrame->isVarargs()
    11391141                    && reg.offset() == CallFrameSlot::argumentCount) {
    11401142                    if (verbose)
     
    11481150            if (verbose)
    11491151                dataLog("Asking the bytecode liveness.\n");
    1150             return livenessFor(codeOriginPtr->inlineCallFrame).operandIsLive(
    1151                 reg.offset(), codeOriginPtr->bytecodeIndex);
    1152         }
    1153        
    1154         InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
     1152            return livenessFor(inlineCallFrame).operandIsLive(reg.offset(), codeOriginPtr->bytecodeIndex());
     1153        }
     1154
    11551155        if (!inlineCallFrame) {
    11561156            if (verbose)
     
    16231623                        profiledBlock,
    16241624                        LazyOperandValueProfileKey(
    1625                             node->origin.semantic.bytecodeIndex, node->local()));
     1625                            node->origin.semantic.bytecodeIndex(), node->local()));
    16261626                }
    16271627            }
    16281628
    16291629            if (node->hasHeapPrediction())
    1630                 return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
     1630                return &profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex());
    16311631
    16321632            if (profiledBlock->hasBaselineJITProfiling()) {
    1633                 if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex))
     1633                if (ArithProfile* result = profiledBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex()))
    16341634                    return result;
    16351635            }
     
    17291729        return false;
    17301730
    1731     unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex;
     1731    unsigned bytecodeIndexToCheck = codeOrigin.bytecodeIndex();
    17321732    while (1) {
    1733         InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
     1733        InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame();
    17341734        CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
    17351735        if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
     
    17421742            return false;
    17431743
    1744         bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
    1745         codeOrigin = codeOrigin.inlineCallFrame->directCaller;
     1744        bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
     1745        codeOrigin = inlineCallFrame->directCaller;
    17461746    }
    17471747
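Where the same CodeOrigin is consulted several times in a row (as in the DFGForAllKills.h and DFGGraph.cpp hunks above), the patch reads inlineCallFrame() once into a local and reuses it rather than calling the accessor repeatedly. A sketch of that pattern (the helper is hypothetical):

    // Hypothetical helper showing the hoist-into-a-local pattern from the hunks above.
    static int stackOffsetFor(const JSC::CodeOrigin& origin)
    {
        JSC::InlineCallFrame* inlineCallFrame = origin.inlineCallFrame(); // decode once
        return inlineCallFrame ? inlineCallFrame->stackOffset : 0;        // reuse the local
    }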
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.h

    r242945 r243232  
    425425    ScriptExecutable* executableFor(const CodeOrigin& codeOrigin)
    426426    {
    427         return executableFor(codeOrigin.inlineCallFrame);
     427        return executableFor(codeOrigin.inlineCallFrame());
    428428    }
    429429   
     
    442442    bool isStrictModeFor(CodeOrigin codeOrigin)
    443443    {
    444         if (!codeOrigin.inlineCallFrame)
     444        if (!codeOrigin.inlineCallFrame())
    445445            return m_codeBlock->isStrictMode();
    446         return codeOrigin.inlineCallFrame->isStrictMode();
     446        return codeOrigin.inlineCallFrame()->isStrictMode();
    447447    }
    448448   
     
    464464    bool hasExitSite(const CodeOrigin& codeOrigin, ExitKind exitKind)
    465465    {
    466         return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex, exitKind));
     466        return baselineCodeBlockFor(codeOrigin)->unlinkedCodeBlock()->hasExitSite(FrequentExitSite(codeOrigin.bytecodeIndex(), exitKind));
    467467    }
    468468   
     
    828828       
    829829        for (;;) {
    830             InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame;
     830            InlineCallFrame* inlineCallFrame = codeOriginPtr->inlineCallFrame();
    831831            VirtualRegister stackOffset(inlineCallFrame ? inlineCallFrame->stackOffset : 0);
    832832           
     
    840840            CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
    841841            FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
    842             const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex);
     842            const FastBitVector& liveness = fullLiveness.getLiveness(codeOriginPtr->bytecodeIndex());
    843843            for (unsigned relativeLocal = codeBlock->numCalleeLocals(); relativeLocal--;) {
    844844                VirtualRegister reg = stackOffset + virtualRegisterForLocal(relativeLocal);
  • trunk/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp

    r221225 r243232  
    126126                return cachedHandlerResult;
    127127
    128             unsigned bytecodeIndexToCheck = origin.bytecodeIndex;
     128            unsigned bytecodeIndexToCheck = origin.bytecodeIndex();
    129129
    130130            cachedCodeOrigin = origin;
    131131
    132132            while (1) {
    133                 InlineCallFrame* inlineCallFrame = origin.inlineCallFrame;
     133                InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
    134134                CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
    135135                if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeIndexToCheck)) {
     
    150150                }
    151151
    152                 bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex;
     152                bytecodeIndexToCheck = inlineCallFrame->directCaller.bytecodeIndex();
    153153                origin = inlineCallFrame->directCaller;
    154154            }
     
    200200
    201201            if (currentExceptionHandler && (node->op() == SetLocal || node->op() == SetArgument)) {
    202                 InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
     202                InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
    203203                if (inlineCallFrame)
    204204                    seenInlineCallFrames.add(inlineCallFrame);
  • trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp

    r209764 r243232  
    4444    else {
    4545        ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments);
    46         result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame);
     46        result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame());
    4747    }
    4848    return result;
  • trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp

    r240364 r243232  
    259259    case PhantomDirectArguments:
    260260    case PhantomClonedArguments: {
    261         InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
     261        InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame();
    262262        if (!inlineCallFrame) {
    263263            // We don't need to record anything about how the arguments are to be recovered. It's just a
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp

    r241927 r243232  
    394394                CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
    395395                CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
    396                 arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex);
     396                arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex());
    397397                if (arrayProfile)
    398398                    extraInitializationLevel = std::max(extraInitializationLevel, ExtraInitializationLevel::ArrayProfileUpdate);
     
    407407        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
    408408        const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
    409         CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex);
     409        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
    410410        ASSERT(codeLocation);
    411411
     
    749749
    750750    const CodeOrigin* codeOrigin;
    751     for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
    752         InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     751    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
     752        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
    753753        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
    754754        InlineCallFrame::Kind trueCallerCallKind;
     
    768768        } else {
    769769            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
    770             unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
     770            unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
    771771            MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget;
    772772
     
    800800            }
    801801
    802             if (trueCaller->inlineCallFrame)
    803                 callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     802            if (trueCaller->inlineCallFrame())
     803                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);
    804804
    805805            void* targetAddress = jumpTarget.executableAddress();
     
    823823        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
    824824#if USE(JSVALUE64)
    825         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     825        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
    826826        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
    827827        if (!inlineCallFrame->isClosureCall)
     
    840840    if (codeOrigin) {
    841841#if USE(JSVALUE64)
    842         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     842        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
    843843#else
    844844        const Instruction* instruction = outermostBaselineCodeBlock->instructions().at(codeOrigin->bytecodeIndex).ptr();
     
    869869    }
    870870
    871     if (exit.m_codeOrigin.inlineCallFrame)
    872         context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
     871    auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
     872    if (exitInlineCallFrame)
     873        context.fp() = context.fp<uint8_t*>() + exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue);
    873874
    874875    void* jumpTarget = exitState->jumpTarget;
     
    890891    CodeBlock* alternative = codeBlock->alternative();
    891892    ExitKind kind = exit.m_kind;
    892     unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
     893    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
    893894
    894895    dataLog("Speculation failure in ", *codeBlock);
     
    11001101        debugInfo->codeBlock = jit.codeBlock();
    11011102        debugInfo->kind = exit.m_kind;
    1102         debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
     1103        debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
    11031104
    11041105        jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
     
    11491150
    11501151            CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
    1151             if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
     1152            if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
    11521153#if USE(JSVALUE64)
    11531154                GPRReg usedRegister;
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitBase.cpp

    r234086 r243232  
    4444    if (sourceProfiledCodeBlock) {
    4545        ExitingInlineKind inlineKind;
    46         if (m_codeOriginForExitProfile.inlineCallFrame)
     46        if (m_codeOriginForExitProfile.inlineCallFrame())
    4747            inlineKind = ExitFromInlined;
    4848        else
     
    5353            site = FrequentExitSite(HoistingFailed, jitType, inlineKind);
    5454        else
    55             site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex, m_kind, jitType, inlineKind);
     55            site = FrequentExitSite(m_codeOriginForExitProfile.bytecodeIndex(), m_kind, jitType, inlineKind);
    5656        ExitProfile::add(sourceProfiledCodeBlock, site);
    5757    }
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

    r241222 r243232  
    7474    AssemblyHelpers::JumpList loopThreshold;
    7575   
    76     for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
     76    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
    7777        loopThreshold.append(
    7878            jit.branchTest8(
     
    146146
    147147    const CodeOrigin* codeOrigin;
    148     for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
    149         InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     148    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
     149        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
    150150        CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(*codeOrigin);
    151151        InlineCallFrame::Kind trueCallerCallKind;
     
    167167        } else {
    168168            CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
    169             unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
     169            unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
    170170            void* jumpTarget = nullptr;
    171171
     
    199199            }
    200200
    201             if (trueCaller->inlineCallFrame) {
     201            if (trueCaller->inlineCallFrame()) {
    202202                jit.addPtr(
    203                     AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue)),
     203                    AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue)),
    204204                    GPRInfo::callFrameRegister,
    205205                    GPRInfo::regT3);
     
    232232#if USE(JSVALUE64)
    233233        jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
    234         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     234        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
    235235        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
    236236        if (!inlineCallFrame->isClosureCall)
     
    250250    if (codeOrigin) {
    251251#if USE(JSVALUE64)
    252         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     252        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex()).bits();
    253253#else
    254254        const Instruction* instruction = jit.baselineCodeBlock()->instructions().at(codeOrigin->bytecodeIndex).ptr();
     
    305305    }
    306306
    307     if (exit.m_codeOrigin.inlineCallFrame)
    308         jit.addPtr(AssemblyHelpers::TrustedImm32(exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);
     307    auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
     308    if (exitInlineCallFrame)
     309        jit.addPtr(AssemblyHelpers::TrustedImm32(exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);
    309310
    310311    CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
    311312    ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
    312313    ASSERT(codeBlockForExit->jitType() == JITCode::BaselineJIT);
    313     CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex);
     314    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
    314315    ASSERT(codeLocation);
    315316
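The OSR-exit hunks above walk the inline stack through the same accessors: the loop condition now calls inlineCallFrame(), and each caller is reached via directCaller or getCallerSkippingTailCalls(). A condensed sketch of that walk (the counting helper is hypothetical):

    // Hypothetical helper: count how many inlined frames sit above a code origin,
    // following the loop shape used in DFGOSRExit.cpp / DFGOSRExitCompilerCommon.cpp.
    static unsigned inlineDepthAbove(const JSC::CodeOrigin& codeOrigin)
    {
        unsigned depth = 0;
        for (JSC::InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame())
            ++depth;
        return depth;
    }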
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp

    r241582 r243232  
    4242    DeferGC deferGC(vm.heap);
    4343   
    44     for (; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->directCaller) {
    45         CodeBlock* codeBlock = codeOrigin.inlineCallFrame->baselineCodeBlock.get();
     44    for (; codeOrigin.inlineCallFrame(); codeOrigin = codeOrigin.inlineCallFrame()->directCaller) {
     45        CodeBlock* codeBlock = codeOrigin.inlineCallFrame()->baselineCodeBlock.get();
    4646        JITWorklist::ensureGlobalWorklist().compileNow(codeBlock);
    4747    }
  • trunk/Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp

    r240447 r243232  
    12691269            forEachEscapee([&] (HashMap<Node*, Allocation>& escapees, Node* where) {
    12701270                for (Node* allocation : escapees.keys()) {
    1271                     InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame;
     1271                    InlineCallFrame* inlineCallFrame = allocation->origin.semantic.inlineCallFrame();
    12721272                    if (!inlineCallFrame)
    12731273                        continue;
    1274                     if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame)
     1274                    if ((inlineCallFrame->isClosureCall || inlineCallFrame->isVarargs()) && inlineCallFrame != where->origin.semantic.inlineCallFrame())
    12751275                        m_sinkCandidates.remove(allocation);
    12761276                }
  • trunk/Source/JavaScriptCore/dfg/DFGOperations.cpp

    r243081 r243232  
    28962896
    28972897    CodeOrigin origin = exec->codeOrigin();
     2898    auto* inlineCallFrame = origin.inlineCallFrame();
    28982899    bool strictMode;
    2899     if (origin.inlineCallFrame)
    2900         strictMode = origin.inlineCallFrame->baselineCodeBlock->isStrictMode();
     2900    if (inlineCallFrame)
     2901        strictMode = inlineCallFrame->baselineCodeBlock->isStrictMode();
    29012902    else
    29022903        strictMode = exec->codeBlock()->isStrictMode();
     
    30453046   
    30463047    bool didTryToEnterIntoInlinedLoops = false;
    3047     for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
     3048    for (InlineCallFrame* inlineCallFrame = exit->m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
    30483049        if (inlineCallFrame->baselineCodeBlock->ownerExecutable()->didTryToEnterInLoop()) {
    30493050            didTryToEnterIntoInlinedLoops = true;
  • trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h

    r234178 r243232  
    131131                return;
    132132            }
    133             InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame;
     133            InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame();
    134134            unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip();
    135135            readFrame(inlineCallFrame, numberOfArgumentsToSkip);
     
    193193                InlineCallFrame* inlineCallFrame;
    194194                if (m_node->hasArgumentsChild() && m_node->argumentsChild())
    195                     inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame;
     195                    inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame();
    196196                else
    197                     inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
     197                    inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
    198198
    199199                unsigned numberOfArgumentsToSkip = 0;
     
    221221
    222222        case GetArgument: {
    223             InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
     223            InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
    224224            unsigned indexIncludingThis = m_node->argumentIndex();
    225225            if (!inlineCallFrame) {
     
    249249       
    250250            // Read all of the inline arguments and call frame headers that we didn't already capture.
    251             for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
     251            for (InlineCallFrame* inlineCallFrame = m_node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->getCallerInlineFrameSkippingTailCalls()) {
    252252                if (!inlineCallFrame->isStrictMode()) {
    253253                    for (unsigned i = inlineCallFrame->argumentsWithFixup.size(); i--;)
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp

    r242955 r243232  
    188188void SpeculativeJIT::emitGetLength(CodeOrigin origin, GPRReg lengthGPR, bool includeThis)
    189189{
    190     emitGetLength(origin.inlineCallFrame, lengthGPR, includeThis);
     190    emitGetLength(origin.inlineCallFrame(), lengthGPR, includeThis);
    191191}
    192192
    193193void SpeculativeJIT::emitGetCallee(CodeOrigin origin, GPRReg calleeGPR)
    194194{
    195     if (origin.inlineCallFrame) {
    196         if (origin.inlineCallFrame->isClosureCall) {
     195    auto* inlineCallFrame = origin.inlineCallFrame();
     196    if (inlineCallFrame) {
     197        if (inlineCallFrame->isClosureCall) {
    197198            m_jit.loadPtr(
    198                 JITCompiler::addressFor(origin.inlineCallFrame->calleeRecovery.virtualRegister()),
     199                JITCompiler::addressFor(inlineCallFrame->calleeRecovery.virtualRegister()),
    199200                calleeGPR);
    200201        } else {
    201202            m_jit.move(
    202                 TrustedImmPtr::weakPointer(m_jit.graph(), origin.inlineCallFrame->calleeRecovery.constant().asCell()),
     203                TrustedImmPtr::weakPointer(m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
    203204                calleeGPR);
    204205        }
     
    18641865                "SpeculativeJIT generating Node @%d (bc#%u) at JIT offset 0x%x",
    18651866                (int)m_currentNode->index(),
    1866                 m_currentNode->origin.semantic.bytecodeIndex, m_jit.debugOffset());
     1867                m_currentNode->origin.semantic.bytecodeIndex(), m_jit.debugOffset());
    18671868            dataLog("\n");
    18681869        }
     
    39383939
    39393940    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
    3940     ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
    3941     const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
     3941    unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
     3942    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     3943    const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    39423944    JITAddIC* addIC = m_jit.codeBlock()->addJITAddIC(arithProfile, instruction);
    39433945    auto repatchingFunction = operationValueAddOptimize;
     
    39623964
    39633965        CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
    3964         ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
    3965         const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
     3966        unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
     3967        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     3968        const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    39663969        JITSubIC* subIC = m_jit.codeBlock()->addJITSubIC(arithProfile, instruction);
    39673970        auto repatchingFunction = operationValueSubOptimize;
     
    45544557{
    45554558    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
    4556     ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
    4557     const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
     4559    unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
     4560    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     4561    const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    45584562    JITNegIC* negIC = m_jit.codeBlock()->addJITNegIC(arithProfile, instruction);
    45594563    auto repatchingFunction = operationArithNegateOptimize;
     
    47774781
    47784782    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
    4779     ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
    4780     const Instruction* instruction = baselineCodeBlock->instructions().at(node->origin.semantic.bytecodeIndex).ptr();
     4783    unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
     4784    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     4785    const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    47814786    JITMulIC* mulIC = m_jit.codeBlock()->addJITMulIC(arithProfile, instruction);
    47824787    auto repatchingFunction = operationValueMulOptimize;
     
    72327237    InlineCallFrame* inlineCallFrame;
    72337238    if (node->child1())
    7234         inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
     7239        inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame();
    72357240    else
    7236         inlineCallFrame = node->origin.semantic.inlineCallFrame;
     7241        inlineCallFrame = node->origin.semantic.inlineCallFrame();
    72377242
    72387243    GPRTemporary length(this);
     
    73977402    unsigned knownLength;
    73987403    bool lengthIsKnown; // if false, lengthGPR will have the length.
    7399     if (node->origin.semantic.inlineCallFrame
    7400         && !node->origin.semantic.inlineCallFrame->isVarargs()) {
    7401         knownLength = node->origin.semantic.inlineCallFrame->argumentCountIncludingThis - 1;
     7404    auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
     7405    if (inlineCallFrame
     7406        && !inlineCallFrame->isVarargs()) {
     7407        knownLength = inlineCallFrame->argumentCountIncludingThis - 1;
    74027408        lengthIsKnown = true;
    74037409    } else {
     
    74727478        addSlowPathGenerator(WTFMove(generator));
    74737479    }
    7474        
    7475     if (node->origin.semantic.inlineCallFrame) {
    7476         if (node->origin.semantic.inlineCallFrame->isClosureCall) {
     7480
     7481    if (inlineCallFrame) {
     7482        if (inlineCallFrame->isClosureCall) {
    74777483            m_jit.loadPtr(
    74787484                JITCompiler::addressFor(
    7479                     node->origin.semantic.inlineCallFrame->calleeRecovery.virtualRegister()),
     7485                    inlineCallFrame->calleeRecovery.virtualRegister()),
    74807486                scratch1GPR);
    74817487        } else {
    74827488            m_jit.move(
    74837489                TrustedImmPtr::weakPointer(
    7484                     m_jit.graph(), node->origin.semantic.inlineCallFrame->calleeRecovery.constant().asCell()),
     7490                    m_jit.graph(), inlineCallFrame->calleeRecovery.constant().asCell()),
    74857491                scratch1GPR);
    74867492        }
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp

    r242715 r243232  
    619619            InlineCallFrame* inlineCallFrame;
    620620            if (node->child3())
    621                 inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
     621                inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
    622622            else
    623                 inlineCallFrame = node->origin.semantic.inlineCallFrame;
     623                inlineCallFrame = node->origin.semantic.inlineCallFrame();
    624624            // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
    625625            emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
     
    769769
    770770    CodeOrigin staticOrigin = node->origin.semantic;
    771     ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
    772     ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
     771    InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
     772    ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
     773    ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
    773774    CodeOrigin dynamicOrigin =
    774         isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
     775        isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
    775776    CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
    776777   
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

    r242715 r243232  
    580580            InlineCallFrame* inlineCallFrame;
    581581            if (node->child3())
    582                 inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
     582                inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
    583583            else
    584                 inlineCallFrame = node->origin.semantic.inlineCallFrame;
     584                inlineCallFrame = node->origin.semantic.inlineCallFrame();
    585585            // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
    586586            emitSetupVarargsFrameFastCase(*m_jit.vm(), m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, inlineCallFrame, data->firstVarArgOffset, slowCase);
     
    721721
    722722    CodeOrigin staticOrigin = node->origin.semantic;
    723     ASSERT(!isTail || !staticOrigin.inlineCallFrame || !staticOrigin.inlineCallFrame->getCallerSkippingTailCalls());
    724     ASSERT(!isEmulatedTail || (staticOrigin.inlineCallFrame && staticOrigin.inlineCallFrame->getCallerSkippingTailCalls()));
     723    InlineCallFrame* staticInlineCallFrame = staticOrigin.inlineCallFrame();
     724    ASSERT(!isTail || !staticInlineCallFrame || !staticInlineCallFrame->getCallerSkippingTailCalls());
     725    ASSERT(!isEmulatedTail || (staticInlineCallFrame && staticInlineCallFrame->getCallerSkippingTailCalls()));
    725726    CodeOrigin dynamicOrigin =
    726         isEmulatedTail ? *staticOrigin.inlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
     727        isEmulatedTail ? *staticInlineCallFrame->getCallerSkippingTailCalls() : staticOrigin;
    727728
    728729    CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
     
    50155016        Vector<SilentRegisterSavePlan> savePlans;
    50165017        silentSpillAllRegistersImpl(false, savePlans, InvalidGPRReg);
    5017         unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
     5018        unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
    50185019
    50195020        addSlowPathGeneratorLambda([=]() {
     
    50445045       
    50455046    case CheckTierUpAndOSREnter: {
    5046         ASSERT(!node->origin.semantic.inlineCallFrame);
     5047        ASSERT(!node->origin.semantic.inlineCallFrame());
    50475048
    50485049        GPRTemporary temp(this);
    50495050        GPRReg tempGPR = temp.gpr();
    50505051
    5051         unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
     5052        unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
    50525053        auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
    50535054        DFG_ASSERT(m_jit.graph(), node, triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
  • trunk/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp

    r234178 r243232  
    109109                insertionSet.insertNode(nodeIndex + 1, SpecNone, tierUpType, origin);
    110110
    111                 unsigned bytecodeIndex = origin.semantic.bytecodeIndex;
     111                unsigned bytecodeIndex = origin.semantic.bytecodeIndex();
    112112                if (canOSREnter)
    113113                    m_graph.m_plan.tierUpAndOSREnterBytecodes().append(bytecodeIndex);
     
    171171
    172172        NodeOrigin origin = node->origin;
    173         if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame)
     173        if (level != FTL::CanCompileAndOSREnter || origin.semantic.inlineCallFrame())
    174174            return false;
    175175
     
    195195
    196196                if (const NaturalLoop* loop = naturalLoops.innerMostLoopOf(block)) {
    197                     unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
     197                    unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex();
    198198                    naturalLoopsToLoopHint.add(loop, bytecodeIndex);
    199199                }
  • trunk/Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp

    r242192 r243232  
    148148                        if (SpecCellCheck & SpecEmpty) {
    149149                            VirtualRegister local = node->variableAccessData()->local();
    150                             auto* inlineCallFrame = node->origin.semantic.inlineCallFrame;
     150                            auto* inlineCallFrame = node->origin.semantic.inlineCallFrame();
    151151                            if ((local - (inlineCallFrame ? inlineCallFrame->stackOffset : 0)) == virtualRegisterForArgument(0)) {
    152152                                // |this| can be the TDZ value. The call entrypoint won't have |this| as TDZ,
  • trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp

    r233630 r243232  
    148148    };
    149149
    150     if (codeOrigin.inlineCallFrame)
    151         numVariables = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame)->numCalleeLocals() + VirtualRegister(codeOrigin.inlineCallFrame->stackOffset).toLocal() + 1;
     150    auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     151    if (inlineCallFrame)
     152        numVariables = baselineCodeBlockForInlineCallFrame(inlineCallFrame)->numCalleeLocals() + VirtualRegister(inlineCallFrame->stackOffset).toLocal() + 1;
    152153    else
    153154        numVariables = baselineCodeBlock->numCalleeLocals();
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp

    r242955 r243232  
    19031903
    19041904        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
    1905         ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
    1906         const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
     1905        unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
     1906        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     1907        const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    19071908        auto repatchingFunction = operationValueAddOptimize;
    19081909        auto nonRepatchingFunction = operationValueAdd;
     
    19221923
    19231924        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
    1924         ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
    1925         const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
     1925        unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
     1926        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     1927        const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    19261928        auto repatchingFunction = operationValueSubOptimize;
    19271929        auto nonRepatchingFunction = operationValueSub;
     
    19411943
    19421944        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
    1943         ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
    1944         const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
     1945        unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
     1946        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     1947        const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    19451948        auto repatchingFunction = operationValueMulOptimize;
    19461949        auto nonRepatchingFunction = operationValueMul;
     
    22022205
    22032206            CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
    2204             ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
    2205             const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
     2207            unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
     2208            ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     2209            const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    22062210            auto repatchingFunction = operationValueSubOptimize;
    22072211            auto nonRepatchingFunction = operationValueSub;
     
    28342838        DFG_ASSERT(m_graph, m_node, m_node->child1().useKind() == UntypedUse);
    28352839        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
    2836         ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
    2837         const Instruction* instruction = baselineCodeBlock->instructions().at(m_node->origin.semantic.bytecodeIndex).ptr();
     2840        unsigned bytecodeIndex = m_node->origin.semantic.bytecodeIndex();
     2841        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(bytecodeIndex);
     2842        const Instruction* instruction = baselineCodeBlock->instructions().at(bytecodeIndex).ptr();
    28382843        auto repatchingFunction = operationArithNegateOptimize;
    28392844        auto nonRepatchingFunction = operationArithNegate;
     
    42434248    void compileGetMyArgumentByVal()
    42444249    {
    4245         InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
     4250        InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
    42464251       
    42474252        LValue originalIndex = lowInt32(m_node->child2());
     
    58175822                    if (use->op() == PhantomSpread) {
    58185823                        if (use->child1()->op() == PhantomCreateRest) {
    5819                             InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
     5824                            InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
    58205825                            unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
    58215826                            LValue spreadLength = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
     
    58585863                        } else {
    58595864                            RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest);
    5860                             InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
     5865                            InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame();
    58615866                            unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
    58625867
     
    60506055            LBasicBlock lastNext = m_out.insertNewBlocksBefore(loopHeader);
    60516056
    6052             InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
     6057            InlineCallFrame* inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
    60536058            unsigned numberOfArgumentsToSkip = m_node->child1()->numberOfArgumentsToSkip();
    60546059            LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
     
    80468051
    80478052            RELEASE_ASSERT(target->op() == PhantomCreateRest);
    8048             InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
     8053            InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
    80498054            unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
    80508055            LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
     
    82098214
    82108215                        RELEASE_ASSERT(target->op() == PhantomCreateRest);
    8211                         InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
     8216                        InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
    82128217
    82138218                        unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
     
    84898494                    InlineCallFrame* inlineCallFrame;
    84908495                    if (node->child3())
    8491                         inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame;
     8496                        inlineCallFrame = node->child3()->origin.semantic.inlineCallFrame();
    84928497                    else
    8493                         inlineCallFrame = node->origin.semantic.inlineCallFrame;
     8498                        inlineCallFrame = node->origin.semantic.inlineCallFrame();
    84948499
    84958500                    // emitSetupVarargsFrameFastCase modifies the stack pointer if it succeeds.
     
    87358740        InlineCallFrame* inlineCallFrame;
    87368741        if (m_node->child1())
    8737             inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame;
     8742            inlineCallFrame = m_node->child1()->origin.semantic.inlineCallFrame();
    87388743        else
    8739             inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
     8744            inlineCallFrame = m_node->origin.semantic.inlineCallFrame();
    87408745
    87418746        LValue length = nullptr;
     
    88848889
    88858890            ASSERT(target->op() == PhantomCreateRest);
    8886             InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
     8891            InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
    88878892            unsigned numberOfArgumentsToSkip = target->numberOfArgumentsToSkip();
    88888893            spreadLengths.append(cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
     
    89358940
    89368941            RELEASE_ASSERT(target->op() == PhantomCreateRest);
    8937             InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame;
     8942            InlineCallFrame* inlineCallFrame = target->origin.semantic.inlineCallFrame();
    89388943
    89398944            LValue sourceStart = this->getArgumentsStart(inlineCallFrame, target->numberOfArgumentsToSkip());
     
    1146811473    ArgumentsLength getArgumentsLength()
    1146911474    {
    11470         return getArgumentsLength(m_node->origin.semantic.inlineCallFrame);
     11475        return getArgumentsLength(m_node->origin.semantic.inlineCallFrame());
    1147111476    }
    1147211477   
    1147311478    LValue getCurrentCallee()
    1147411479    {
    11475         if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame) {
     11480        if (InlineCallFrame* frame = m_node->origin.semantic.inlineCallFrame()) {
    1147611481            if (frame->isClosureCall)
    1147711482                return m_out.loadPtr(addressFor(frame->calleeRecovery.virtualRegister()));
     
    1148911494    LValue getArgumentsStart()
    1149011495    {
    11491         return getArgumentsStart(m_node->origin.semantic.inlineCallFrame);
     11496        return getArgumentsStart(m_node->origin.semantic.inlineCallFrame());
    1149211497    }
    1149311498   
     
    1653816543            // and jaz is inlined in baz. We want the callframe for jaz to appear to
    1653916544            // have caller be bar.
    16540             codeOrigin = *codeOrigin.inlineCallFrame->getCallerSkippingTailCalls();
     16545            codeOrigin = *codeOrigin.inlineCallFrame()->getCallerSkippingTailCalls();
    1654116546        }
    1654216547
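For reference, the FTL lowering hunks above all make the same mechanical change: inlineCallFrame and bytecodeIndex are now read through accessors on CodeOrigin rather than as public data members. A minimal sketch of the resulting call-site shape, mirroring the child1-or-self fallback seen above (frameForNode is a hypothetical name used only for illustration, and this is not compilable outside JSC):

    // Hypothetical helper; the child1-or-self fallback follows the hunks above.
    // CodeOrigin::inlineCallFrame() replaces the former public data member.
    static InlineCallFrame* frameForNode(DFG::Node* node)
    {
        if (node->child1())
            return node->child1()->origin.semantic.inlineCallFrame();
        return node->origin.semantic.inlineCallFrame();
    }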
  • trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp

    r241927 r243232  
    287287        if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) {
    288288            CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
    289             if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
     289            if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) {
    290290                jit.load32(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureIDOffset()), GPRInfo::regT1);
    291291                jit.store32(GPRInfo::regT1, arrayProfile->addressOfLastSeenStructureID());
  • trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp

    r240740 r243232  
    279279    case PhantomDirectArguments:
    280280    case PhantomClonedArguments: {
    281         if (!materialization->origin().inlineCallFrame) {
     281        if (!materialization->origin().inlineCallFrame()) {
    282282            switch (materialization->type()) {
    283283            case PhantomDirectArguments:
     
    304304        // First figure out the argument count. If there isn't one then we represent the machine frame.
    305305        unsigned argumentCount = 0;
    306         if (materialization->origin().inlineCallFrame->isVarargs()) {
     306        if (materialization->origin().inlineCallFrame()->isVarargs()) {
    307307            for (unsigned i = materialization->properties().size(); i--;) {
    308308                const ExitPropertyValue& property = materialization->properties()[i];
     
    313313            }
    314314        } else
    315             argumentCount = materialization->origin().inlineCallFrame->argumentCountIncludingThis;
     315            argumentCount = materialization->origin().inlineCallFrame()->argumentCountIncludingThis;
    316316        RELEASE_ASSERT(argumentCount);
    317317       
    318318        JSFunction* callee = nullptr;
    319         if (materialization->origin().inlineCallFrame->isClosureCall) {
     319        if (materialization->origin().inlineCallFrame()->isClosureCall) {
    320320            for (unsigned i = materialization->properties().size(); i--;) {
    321321                const ExitPropertyValue& property = materialization->properties()[i];
     
    327327            }
    328328        } else
    329             callee = materialization->origin().inlineCallFrame->calleeConstant();
     329            callee = materialization->origin().inlineCallFrame()->calleeConstant();
    330330        RELEASE_ASSERT(callee);
    331331       
     
    475475        // and PhantomNewArrayBuffer are always bound to a specific op_new_array_buffer.
    476476        CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(materialization->origin(), exec->codeBlock()->baselineAlternative());
    477         const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex).ptr();
     477        const Instruction* currentInstruction = codeBlock->instructions().at(materialization->origin().bytecodeIndex()).ptr();
    478478        if (!currentInstruction->is<OpNewArrayBuffer>()) {
    479479            // This case can happen if Object.keys, an OpCall is first converted into a NewArrayBuffer which is then converted into a PhantomNewArrayBuffer.
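The FTLOperations hunks above repeatedly query the materialization's origin; the recovery logic is unchanged, only the accessors are new. A condensed sketch of the argument-count and callee recovery, assuming the JSC types from the hunks and eliding the varargs and closure-call property scans:

    // Sketch only: the non-varargs, non-closure-call fast path from the hunks above.
    InlineCallFrame* frame = materialization->origin().inlineCallFrame();
    unsigned argumentCount = 0;
    JSFunction* callee = nullptr;
    if (frame) {
        if (!frame->isVarargs())
            argumentCount = frame->argumentCountIncludingThis;
        if (!frame->isClosureCall)
            callee = frame->calleeConstant();
        // In the real code, the varargs and closure-call cases instead scan
        // materialization->properties() for the recovered values.
    }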
  • trunk/Source/JavaScriptCore/interpreter/CallFrame.cpp

    r241222 r243232  
    157157        ASSERT(codeBlock());
    158158        CodeOrigin codeOrigin = this->codeOrigin();
    159         for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame; inlineCallFrame;) {
     159        for (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame(); inlineCallFrame;) {
    160160            codeOrigin = inlineCallFrame->directCaller;
    161             inlineCallFrame = codeOrigin.inlineCallFrame;
     161            inlineCallFrame = codeOrigin.inlineCallFrame();
    162162        }
    163         return codeOrigin.bytecodeIndex;
     163        return codeOrigin.bytecodeIndex();
    164164    }
    165165#endif
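CallFrame's bytecode-offset query above walks out of any inlining before reading the index. A minimal sketch of that walk, restructured as a while loop and using only the accessors introduced by this change:

    // Sketch only: hop from an inlined CodeOrigin to its machine-frame origin,
    // then report that frame's bytecode index.
    CodeOrigin codeOrigin = this->codeOrigin();
    while (InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame())
        codeOrigin = inlineCallFrame->directCaller;
    return codeOrigin.bytecodeIndex();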
  • trunk/Source/JavaScriptCore/interpreter/StackVisitor.cpp

    r241222 r243232  
    9797    if (m_frame.isInlinedFrame()) {
    9898        CodeOrigin codeOrigin = m_frame.inlineCallFrame()->directCaller;
    99         while (codeOrigin.inlineCallFrame)
    100             codeOrigin = codeOrigin.inlineCallFrame->directCaller;
     99        while (codeOrigin.inlineCallFrame())
     100            codeOrigin = codeOrigin.inlineCallFrame()->directCaller;
    101101        readNonInlinedFrame(m_frame.callFrame(), &codeOrigin);
    102102    }
     
    145145
    146146    CodeOrigin codeOrigin = codeBlock->codeOrigin(index);
    147     if (!codeOrigin.inlineCallFrame) {
     147    if (!codeOrigin.inlineCallFrame()) {
    148148        readNonInlinedFrame(callFrame, &codeOrigin);
    149149        return;
     
    178178        m_frame.m_codeBlock = callFrame->codeBlock();
    179179        m_frame.m_bytecodeOffset = !m_frame.codeBlock() ? 0
    180             : codeOrigin ? codeOrigin->bytecodeIndex
     180            : codeOrigin ? codeOrigin->bytecodeIndex()
    181181            : callFrame->bytecodeOffset();
    182182
     
    191191static int inlinedFrameOffset(CodeOrigin* codeOrigin)
    192192{
    193     InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     193    InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
    194194    int frameOffset = inlineCallFrame ? inlineCallFrame->stackOffset : 0;
    195195    return frameOffset;
     
    204204    bool isInlined = !!frameOffset;
    205205    if (isInlined) {
    206         InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
     206        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
    207207
    208208        m_frame.m_callFrame = callFrame;
     
    213213            m_frame.m_argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
    214214        m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock.get();
    215         m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex;
     215        m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex();
    216216
    217217        JSFunction* callee = inlineCallFrame->calleeForCallFrame(callFrame);
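StackVisitor distinguishes inlined from machine frames by whether the origin carries an InlineCallFrame. A sketch of that test with the new accessor (the helper name is illustrative only):

    // Illustrative only: an origin with no InlineCallFrame is a machine frame,
    // so its frame offset is 0; otherwise use the inlined frame's stack offset.
    static int inlinedFrameOffsetFor(const CodeOrigin& codeOrigin)
    {
        InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame();
        return inlineCallFrame ? inlineCallFrame->stackOffset : 0;
    }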
  • trunk/Source/JavaScriptCore/interpreter/StackVisitor.h

    r241222 r243232  
    3434namespace JSC {
    3535
    36 struct CodeOrigin;
    3736struct EntryFrame;
    3837struct InlineCallFrame;
    3938
    4039class CodeBlock;
     40class CodeOrigin;
    4141class ExecState;
    4242class JSCell;
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp

    r242252 r243232  
    4545ExecutableBase* AssemblyHelpers::executableFor(const CodeOrigin& codeOrigin)
    4646{
    47     if (!codeOrigin.inlineCallFrame)
     47    auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     48    if (!inlineCallFrame)
    4849        return m_codeBlock->ownerExecutable();
    49    
    50     return codeOrigin.inlineCallFrame->baselineCodeBlock->ownerExecutable();
     50    return inlineCallFrame->baselineCodeBlock->ownerExecutable();
    5151}
    5252
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h

    r242252 r243232  
    14331433    bool isStrictModeFor(CodeOrigin codeOrigin)
    14341434    {
    1435         if (!codeOrigin.inlineCallFrame)
     1435        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     1436        if (!inlineCallFrame)
    14361437            return codeBlock()->isStrictMode();
    1437         return codeOrigin.inlineCallFrame->isStrictMode();
     1438        return inlineCallFrame->isStrictMode();
    14381439    }
    14391440   
     
    14751476    static VirtualRegister argumentsStart(const CodeOrigin& codeOrigin)
    14761477    {
    1477         return argumentsStart(codeOrigin.inlineCallFrame);
     1478        return argumentsStart(codeOrigin.inlineCallFrame());
    14781479    }
    14791480
     
    14881489    static VirtualRegister argumentCount(const CodeOrigin& codeOrigin)
    14891490    {
    1490         return argumentCount(codeOrigin.inlineCallFrame);
     1491        return argumentCount(codeOrigin.inlineCallFrame());
    14911492    }
    14921493   
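Both AssemblyHelpers hunks switch to the "read the accessor once into a local, then branch" shape, since inlineCallFrame() is now a call rather than a plain field load. For example (sketch only, same pattern as isStrictModeFor above):

    // Sketch only: query the inline call frame once, then decide whose
    // strict-mode flag applies.
    bool isStrictModeFor(CodeOrigin codeOrigin)
    {
        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
        if (!inlineCallFrame)
            return codeBlock()->isStrictMode();
        return inlineCallFrame->isStrictMode();
    }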
  • trunk/Source/JavaScriptCore/jit/PCToCodeOriginMap.cpp

    r242776 r243232  
    192192    CodeOrigin lastCodeOrigin(0, nullptr);
    193193    auto buildCodeOriginTable = [&] (const CodeOrigin& codeOrigin) {
    194         intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex);
     194        intptr_t delta = static_cast<intptr_t>(codeOrigin.bytecodeIndex()) - static_cast<intptr_t>(lastCodeOrigin.bytecodeIndex());
    195195        lastCodeOrigin = codeOrigin;
    196196        if (delta > std::numeric_limits<int8_t>::max() || delta < std::numeric_limits<int8_t>::min() || delta == sentinelBytecodeDelta) {
     
    200200            codeOriginCompressor.write<int8_t>(static_cast<int8_t>(delta));
    201201
    202         int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame ? 1 : 0;
     202        int8_t hasInlineCallFrameByte = codeOrigin.inlineCallFrame() ? 1 : 0;
    203203        codeOriginCompressor.write<int8_t>(hasInlineCallFrameByte);
    204204        if (hasInlineCallFrameByte)
    205             codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame));
     205            codeOriginCompressor.write<uintptr_t>(bitwise_cast<uintptr_t>(codeOrigin.inlineCallFrame()));
    206206    };
    207207
     
    255255
    256256    uintptr_t currentPC = 0;
    257     CodeOrigin currentCodeOrigin(0, nullptr);
     257    unsigned currentBytecodeIndex = 0;
     258    InlineCallFrame* currentInlineCallFrame = nullptr;
    258259
    259260    DeltaCompresseionReader pcReader(m_compressedPCs, m_compressedPCBufferSize);
     
    271272        }
    272273
    273         CodeOrigin previousOrigin = currentCodeOrigin;
     274        CodeOrigin previousOrigin = CodeOrigin(currentBytecodeIndex, currentInlineCallFrame);
    274275        {
    275276            int8_t value = codeOriginReader.read<int8_t>();
     
    280281                delta = static_cast<intptr_t>(value);
    281282
    282             currentCodeOrigin.bytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentCodeOrigin.bytecodeIndex) + delta);
     283            currentBytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentBytecodeIndex) + delta);
    283284
    284285            int8_t hasInlineFrame = codeOriginReader.read<int8_t>();
    285286            ASSERT(hasInlineFrame == 0 || hasInlineFrame == 1);
    286287            if (hasInlineFrame)
    287                 currentCodeOrigin.inlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
     288                currentInlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
    288289            else
    289                 currentCodeOrigin.inlineCallFrame = nullptr;
     290                currentInlineCallFrame = nullptr;
    290291        }
    291292
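Now that bytecodeIndex and inlineCallFrame are exposed as accessors, the PCToCodeOriginMap decoder above no longer patches a CodeOrigin in place; it tracks the two values as plain locals and value-constructs a CodeOrigin when one is needed. A condensed sketch of one decode step, assuming the reader type from the hunk and eliding the full-width delta escape:

    // Sketch only: state carried across decode iterations (replaces the old
    // mutable currentCodeOrigin).
    unsigned currentBytecodeIndex = 0;
    InlineCallFrame* currentInlineCallFrame = nullptr;

    // One decode step; previousOrigin is consumed later in the real decoder.
    CodeOrigin previousOrigin = CodeOrigin(currentBytecodeIndex, currentInlineCallFrame);
    intptr_t delta = static_cast<intptr_t>(codeOriginReader.read<int8_t>());
    currentBytecodeIndex = static_cast<unsigned>(static_cast<intptr_t>(currentBytecodeIndex) + delta);
    if (codeOriginReader.read<int8_t>())
        currentInlineCallFrame = bitwise_cast<InlineCallFrame*>(codeOriginReader.read<uintptr_t>());
    else
        currentInlineCallFrame = nullptr;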
  • trunk/Source/JavaScriptCore/profiler/ProfilerOriginStack.cpp

    r208968 r243232  
    4949    Vector<CodeOrigin> stack = codeOrigin.inlineStack();
    5050   
    51     append(Origin(database, codeBlock, stack[0].bytecodeIndex));
     51    append(Origin(database, codeBlock, stack[0].bytecodeIndex()));
    5252   
    5353    for (unsigned i = 1; i < stack.size(); ++i) {
    5454        append(Origin(
    55             database.ensureBytecodesFor(stack[i].inlineCallFrame->baselineCodeBlock.get()),
    56             stack[i].bytecodeIndex));
     55            database.ensureBytecodesFor(stack[i].inlineCallFrame()->baselineCodeBlock.get()),
     56            stack[i].bytecodeIndex()));
    5757    }
    5858}
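The profiler builds its origin stack from CodeOrigin::inlineStack(); only the element accessors change here. A sketch with the intent spelled out in comments, assuming (as the hunk suggests) that inlineStack() lists the outermost frame first:

    // Sketch only: the first entry is the machine (non-inlined) frame and uses
    // the given CodeBlock; deeper entries use their inline frame's baseline block.
    Vector<CodeOrigin> stack = codeOrigin.inlineStack();
    append(Origin(database, codeBlock, stack[0].bytecodeIndex()));
    for (unsigned i = 1; i < stack.size(); ++i) {
        append(Origin(
            database.ensureBytecodesFor(stack[i].inlineCallFrame()->baselineCodeBlock.get()),
            stack[i].bytecodeIndex()));
    }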
  • trunk/Source/JavaScriptCore/profiler/ProfilerOriginStack.h

    r218794 r243232  
    3434
    3535class CodeBlock;
    36 struct CodeOrigin;
     36class CodeOrigin;
    3737
    3838namespace Profiler {
  • trunk/Source/JavaScriptCore/runtime/ErrorInstance.cpp

    r241222 r243232  
    6767    CodeBlock* codeBlock;
    6868    CodeOrigin codeOrigin = callFrame->codeOrigin();
    69     if (codeOrigin && codeOrigin.inlineCallFrame)
    70         codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame);
     69    if (codeOrigin && codeOrigin.inlineCallFrame())
     70        codeBlock = baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame());
    7171    else
    7272        codeBlock = callFrame->codeBlock();
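ErrorInstance makes the same kind of choice when attributing a source location: an inlined origin goes through its inline frame's baseline CodeBlock, otherwise the machine frame's CodeBlock is used. A sketch of that selection (the helper name is illustrative only):

    // Illustrative only: map a CodeOrigin to the CodeBlock that owns its bytecode.
    static CodeBlock* codeBlockForOrigin(CallFrame* callFrame, CodeOrigin codeOrigin)
    {
        if (codeOrigin && codeOrigin.inlineCallFrame())
            return baselineCodeBlockForInlineCallFrame(codeOrigin.inlineCallFrame());
        return callFrame->codeBlock();
    }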
  • trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp

    r242812 r243232  
    557557            origin.walkUpInlineStack([&] (const CodeOrigin& codeOrigin) {
    558558                machineOrigin = codeOrigin;
    559                 appendCodeBlock(codeOrigin.inlineCallFrame ? codeOrigin.inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex);
     559                auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     560                appendCodeBlock(inlineCallFrame ? inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex());
    560561            });
    561562
    562563            if (Options::collectSamplingProfilerDataForJSCShell()) {
    563564                RELEASE_ASSERT(machineOrigin.isSet());
    564                 RELEASE_ASSERT(!machineOrigin.inlineCallFrame);
     565                RELEASE_ASSERT(!machineOrigin.inlineCallFrame());
    565566
    566567                StackFrame::CodeLocation machineLocation = stackTrace.frames.last().semanticLocation;
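SamplingProfiler walks the inline stack and records one code block per origin; the hunk above caches the accessor result before branching. A sketch of the walk, assuming appendCodeBlock, machineOrigin, and machineCodeBlock from the hunk:

    // Sketch only: attribute each origin in the inline stack to its inline
    // frame's baseline CodeBlock, or to the machine CodeBlock at the root.
    origin.walkUpInlineStack([&] (const CodeOrigin& codeOrigin) {
        machineOrigin = codeOrigin;
        auto* inlineCallFrame = codeOrigin.inlineCallFrame();
        appendCodeBlock(inlineCallFrame ? inlineCallFrame->baselineCodeBlock.get() : machineCodeBlock, codeOrigin.bytecodeIndex());
    });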