Changeset 202214 in webkit


Timestamp:
Jun 19, 2016 12:42:18 PM
Author:
sbarati@apple.com
Message:

We should be able to generate more types of ICs inline
https://bugs.webkit.org/show_bug.cgi?id=158719
<rdar://problem/26825641>

Reviewed by Filip Pizlo.

This patch changes how we emit code for *byId ICs inline.
We no longer keep data labels to patch structure checks, etc.
Instead, we just regenerate the entire IC into a designated
region of code that the Baseline/DFG/FTL JIT will emit inline.
This makes it much simpler to patch inline ICs: all that's
needed is to memcpy freshly assembled code into the inline
region using LinkBuffer. This architecture will be easy to
extend to other forms of ICs, such as an IC for add, in the
future.

To support this change, I've reworked the fields inside
StructureStubInfo. It now has one field that is the CodeLocationLabel
of the start of the inline IC. Then it has a few ints that track deltas
to other locations in the IC, such as the slow path start, the slow
path call, and the IC's 'done' location. We used to perform math on
these ints in a bunch of different places; I've consolidated that
math into methods on StructureStubInfo.

To generate inline ICs, I've implemented a new class called InlineAccess.
InlineAccess is stateless: it just has a bunch of static methods for
generating code into the inline region specified by StructureStubInfo.
Repatch will now decide when it wants to generate such an inline
IC, and it will ask InlineAccess to do so.

I've implemented three types of inline ICs to begin with (extending
this in the future should be easy):

  • Self property loads (both inline and out of line offsets).
  • Self property replace (both inline and out of line offsets).
  • Array length on specific array types.

(An easy extension would be to implement JSString length.)

To know how much inline space to reserve, I've implemented a
method that stubs out the various inline cache shapes and
dumps their size. This is used to determine how much space
to reserve inline. When InlineAccess ends up generating more
code than can fit inline, we will fall back to generating
code with PolymorphicAccess instead.

To make generating code into already allocated executable memory
efficient, I've made AssemblerData have 128 bytes of inline storage.
This saves us a malloc when splatting code into the inline region.

This patch also tidies up LinkBuffer's API for generating
into already allocated executable memory. Now, when the generated
code is smaller than the already allocated space, LinkBuffer
will fill the extra space with nops. Also, if branch compaction shrinks
the code, LinkBuffer will add a nop sled at the end of the shrunken
code to take up the entire allocated size.

This looks like it could be a 1% Octane progression.

  • CMakeLists.txt:
  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/ARM64Assembler.h:

(JSC::ARM64Assembler::nop):
(JSC::ARM64Assembler::fillNops):

  • assembler/ARMv7Assembler.h:

(JSC::ARMv7Assembler::nopw):
(JSC::ARMv7Assembler::nopPseudo16):
(JSC::ARMv7Assembler::nopPseudo32):
(JSC::ARMv7Assembler::fillNops):
(JSC::ARMv7Assembler::dmbSY):

  • assembler/AbstractMacroAssembler.h:

(JSC::AbstractMacroAssembler::addLinkTask):
(JSC::AbstractMacroAssembler::emitNops):
(JSC::AbstractMacroAssembler::AbstractMacroAssembler):

  • assembler/AssemblerBuffer.h:

(JSC::AssemblerData::AssemblerData):
(JSC::AssemblerData::operator=):
(JSC::AssemblerData::~AssemblerData):
(JSC::AssemblerData::buffer):
(JSC::AssemblerData::grow):
(JSC::AssemblerData::isInlineBuffer):
(JSC::AssemblerBuffer::AssemblerBuffer):
(JSC::AssemblerBuffer::ensureSpace):
(JSC::AssemblerBuffer::codeSize):
(JSC::AssemblerBuffer::setCodeSize):
(JSC::AssemblerBuffer::label):
(JSC::AssemblerBuffer::debugOffset):
(JSC::AssemblerBuffer::releaseAssemblerData):

  • assembler/LinkBuffer.cpp:

(JSC::LinkBuffer::copyCompactAndLinkCode):
(JSC::LinkBuffer::linkCode):
(JSC::LinkBuffer::allocate):
(JSC::LinkBuffer::performFinalization):
(JSC::LinkBuffer::shrink): Deleted.

  • assembler/LinkBuffer.h:

(JSC::LinkBuffer::LinkBuffer):
(JSC::LinkBuffer::debugAddress):
(JSC::LinkBuffer::size):
(JSC::LinkBuffer::wasAlreadyDisassembled):
(JSC::LinkBuffer::didAlreadyDisassemble):
(JSC::LinkBuffer::applyOffset):
(JSC::LinkBuffer::code):

  • assembler/MacroAssemblerARM64.h:

(JSC::MacroAssemblerARM64::patchableBranch32):
(JSC::MacroAssemblerARM64::patchableBranch64):

  • assembler/MacroAssemblerARMv7.h:

(JSC::MacroAssemblerARMv7::patchableBranch32):
(JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):

  • assembler/X86Assembler.h:

(JSC::X86Assembler::nop):
(JSC::X86Assembler::fillNops):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::printGetByIdCacheStatus):

  • bytecode/InlineAccess.cpp: Added.

(JSC::InlineAccess::dumpCacheSizesAndCrash):
(JSC::linkCodeInline):
(JSC::InlineAccess::generateSelfPropertyAccess):
(JSC::getScratchRegister):
(JSC::hasFreeRegister):
(JSC::InlineAccess::canGenerateSelfPropertyReplace):
(JSC::InlineAccess::generateSelfPropertyReplace):
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::rewireStubAsJump):

  • bytecode/InlineAccess.h: Added.

(JSC::InlineAccess::sizeForPropertyAccess):
(JSC::InlineAccess::sizeForPropertyReplace):
(JSC::InlineAccess::sizeForLengthAccess):

  • bytecode/PolymorphicAccess.cpp:

(JSC::PolymorphicAccess::regenerate):

  • bytecode/StructureStubInfo.cpp:

(JSC::StructureStubInfo::initGetByIdSelf):
(JSC::StructureStubInfo::initArrayLength):
(JSC::StructureStubInfo::initPutByIdReplace):
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::propagateTransitions):
(JSC::StructureStubInfo::containsPC):

  • bytecode/StructureStubInfo.h:

(JSC::StructureStubInfo::considerCaching):
(JSC::StructureStubInfo::slowPathCallLocation):
(JSC::StructureStubInfo::doneLocation):
(JSC::StructureStubInfo::slowPathStartLocation):
(JSC::StructureStubInfo::patchableJumpForIn):
(JSC::StructureStubInfo::valueRegs):

  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::link):

  • dfg/DFGOSRExitCompilerCommon.cpp:

(JSC::DFG::reifyInlinedCallFrames):

  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::cachedGetById):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::cachedGetById):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::compileIn):
(JSC::FTL::DFG::LowerDFGToB3::getById):

  • jit/JITInlineCacheGenerator.cpp:

(JSC::JITByIdGenerator::finalize):
(JSC::JITByIdGenerator::generateFastCommon):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
(JSC::JITGetByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::JITPutByIdGenerator):
(JSC::JITPutByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::slowPathFunction):
(JSC::JITByIdGenerator::generateFastPathChecks): Deleted.

  • jit/JITInlineCacheGenerator.h:

(JSC::JITByIdGenerator::reportSlowPathCall):
(JSC::JITByIdGenerator::slowPathBegin):
(JSC::JITByIdGenerator::slowPathJump):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):

  • jit/JITPropertyAccess.cpp:

(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):

  • jit/JITPropertyAccess32_64.cpp:

(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):

  • jit/Repatch.cpp:

(JSC::repatchCall):
(JSC::tryCacheGetByID):
(JSC::repatchGetByID):
(JSC::appropriateGenericPutByIdFunction):
(JSC::tryCachePutByID):
(JSC::repatchPutByID):
(JSC::tryRepatchIn):
(JSC::repatchIn):
(JSC::linkSlowFor):
(JSC::resetGetByID):
(JSC::resetPutByID):
(JSC::resetIn):
(JSC::repatchByIdSelfAccess): Deleted.
(JSC::resetGetByIDCheckAndLoad): Deleted.
(JSC::resetPutByIDCheckAndLoad): Deleted.
(JSC::replaceWithJump): Deleted.

Location:
trunk/Source/JavaScriptCore
Files:
2 added
26 edited

  • trunk/Source/JavaScriptCore/CMakeLists.txt

    r202157 r202214  
    201201    bytecode/GetByIdStatus.cpp
    202202    bytecode/GetByIdVariant.cpp
     203    bytecode/InlineAccess.cpp
    203204    bytecode/InlineCallFrame.cpp
    204205    bytecode/InlineCallFrameSet.cpp
  • trunk/Source/JavaScriptCore/ChangeLog

  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r202157 r202214  
    12741274                72AAF7CD1D0D31B3005E60BE /* JSCustomGetterSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */; };
    12751275                72AAF7CE1D0D31B3005E60BE /* JSCustomGetterSetterFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */; };
     1276                7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7905BB661D12050E0019FE57 /* InlineAccess.cpp */; };
     1277                7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */ = {isa = PBXBuildFile; fileRef = 7905BB671D12050E0019FE57 /* InlineAccess.h */; settings = {ATTRIBUTES = (Private, ); }; };
    12761278                79160DBD1C8E3EC8008C085A /* ProxyRevoke.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */; };
    12771279                79160DBE1C8E3EC8008C085A /* ProxyRevoke.h in Headers */ = {isa = PBXBuildFile; fileRef = 79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */; settings = {ATTRIBUTES = (Private, ); }; };
     
    34583460                72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSCustomGetterSetterFunction.cpp; sourceTree = "<group>"; };
    34593461                72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSCustomGetterSetterFunction.h; sourceTree = "<group>"; };
     3462                7905BB661D12050E0019FE57 /* InlineAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InlineAccess.cpp; sourceTree = "<group>"; };
     3463                7905BB671D12050E0019FE57 /* InlineAccess.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InlineAccess.h; sourceTree = "<group>"; };
    34603464                79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProxyRevoke.cpp; sourceTree = "<group>"; };
    34613465                79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProxyRevoke.h; sourceTree = "<group>"; };
     
    65876591                                0F0332C218B01763005F979A /* GetByIdVariant.h */,
    65886592                                0F0B83A814BCF55E00885B4F /* HandlerInfo.h */,
     6593                                7905BB661D12050E0019FE57 /* InlineAccess.cpp */,
     6594                                7905BB671D12050E0019FE57 /* InlineAccess.h */,
    65896595                                148A7BED1B82975A002D9157 /* InlineCallFrame.cpp */,
    65906596                                148A7BEE1B82975A002D9157 /* InlineCallFrame.h */,
     
    73317337                                43C392AB1C3BEB0500241F53 /* AssemblerCommon.h in Headers */,
    73327338                                86EC9DD01328DF82002B2AD7 /* DFGOperations.h in Headers */,
     7339                                7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */,
    73337340                                A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
    73347341                                0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
     
    93169323                                527773DE1AAF83AC00BDE7E8 /* RuntimeType.cpp in Sources */,
    93179324                                0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */,
     9325                                7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */,
    93189326                                0FE050271AA9095600D33B33 /* ScopedArguments.cpp in Sources */,
    93199327                                0FE0502F1AAA806900D33B33 /* ScopedArgumentsTable.cpp in Sources */,
  • trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h

    r199299 r202214  
    14851485    }
    14861486   
    1487     static void fillNops(void* base, size_t size)
     1487    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
    14881488    {
    14891489        RELEASE_ASSERT(!(size % sizeof(int32_t)));
     
    14911491        for (int32_t* ptr = static_cast<int32_t*>(base); n--;) {
    14921492            int insn = nopPseudo();
    1493             performJITMemcpy(ptr++, &insn, sizeof(int));
     1493            if (isCopyingToExecutableMemory)
     1494                performJITMemcpy(ptr++, &insn, sizeof(int));
     1495            else
     1496                memcpy(ptr++, &insn, sizeof(int));
    14941497        }
    14951498    }
  • trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h

    r200742 r202214  
    20052005    }
    20062006   
     2007    static constexpr int16_t nopPseudo16()
     2008    {
     2009        return OP_NOP_T1;
     2010    }
     2011
     2012    static constexpr int32_t nopPseudo32()
     2013    {
     2014        return OP_NOP_T2a | (OP_NOP_T2b << 16);
     2015    }
     2016
     2017    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     2018    {
     2019        RELEASE_ASSERT(!(size % sizeof(int16_t)));
     2020
     2021        char* ptr = static_cast<char*>(base);
     2022        const size_t num32s = size / sizeof(int32_t);
     2023        for (size_t i = 0; i < num32s; i++) {
     2024            const int32_t insn = nopPseudo32();
     2025            if (isCopyingToExecutableMemory)
     2026                performJITMemcpy(ptr, &insn, sizeof(int32_t));
     2027            else
     2028                memcpy(ptr, &insn, sizeof(int32_t));
     2029            ptr += sizeof(int32_t);
     2030        }
     2031
     2032        const size_t num16s = (size % sizeof(int32_t)) / sizeof(int16_t);
     2033        ASSERT(num16s == 0 || num16s == 1);
     2034        ASSERT(num16s * sizeof(int16_t) + num32s * sizeof(int32_t) == size);
     2035        if (num16s) {
     2036            const int16_t insn = nopPseudo16();
     2037            if (isCopyingToExecutableMemory)
     2038                performJITMemcpy(ptr, &insn, sizeof(int16_t));
     2039            else
     2040                memcpy(ptr, &insn, sizeof(int16_t));
     2041        }
     2042    }
     2043
    20072044    void dmbSY()
    20082045    {
  • trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h

    r199946 r202214  
    2828
    2929#include "AbortReason.h"
    30 #include "AssemblerBuffer.h"
    3130#include "CodeLocation.h"
    3231#include "MacroAssemblerCodeRef.h"
     
    10411040    }
    10421041
     1042    void emitNops(size_t memoryToFillWithNopsInBytes)
     1043    {
     1044        AssemblerBuffer& buffer = m_assembler.buffer();
     1045        size_t startCodeSize = buffer.codeSize();
     1046        size_t targetCodeSize = startCodeSize + memoryToFillWithNopsInBytes;
     1047        buffer.ensureSpace(memoryToFillWithNopsInBytes);
     1048        bool isCopyingToExecutableMemory = false;
     1049        AssemblerType::fillNops(static_cast<char*>(buffer.data()) + startCodeSize, memoryToFillWithNopsInBytes, isCopyingToExecutableMemory);
     1050        buffer.setCodeSize(targetCodeSize);
     1051    }
     1052
    10431053protected:
    10441054    AbstractMacroAssembler()
  • trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h

    r198708 r202214  
    6363
    6464    class AssemblerData {
     65        WTF_MAKE_NONCOPYABLE(AssemblerData);
     66        static const size_t InlineCapacity = 128;
    6567    public:
    6668        AssemblerData()
    67             : m_buffer(nullptr)
    68             , m_capacity(0)
    69         {
    70         }
    71 
    72         AssemblerData(unsigned initialCapacity)
    73         {
    74             m_capacity = initialCapacity;
    75             m_buffer = static_cast<char*>(fastMalloc(m_capacity));
     69            : m_buffer(m_inlineBuffer)
     70            , m_capacity(InlineCapacity)
     71        {
     72        }
     73
     74        AssemblerData(size_t initialCapacity)
     75        {
     76            if (initialCapacity <= InlineCapacity) {
     77                m_capacity = InlineCapacity;
     78                m_buffer = m_inlineBuffer;
     79            } else {
     80                m_capacity = initialCapacity;
     81                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
     82            }
    7683        }
    7784
    7885        AssemblerData(AssemblerData&& other)
    7986        {
    80             m_buffer = other.m_buffer;
     87            if (other.isInlineBuffer()) {
     88                ASSERT(other.m_capacity == InlineCapacity);
     89                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
     90                m_buffer = m_inlineBuffer;
     91            } else
     92                m_buffer = other.m_buffer;
     93            m_capacity = other.m_capacity;
     94
    8195            other.m_buffer = nullptr;
     96            other.m_capacity = 0;
     97        }
     98
     99        AssemblerData& operator=(AssemblerData&& other)
     100        {
     101            if (m_buffer && !isInlineBuffer())
     102                fastFree(m_buffer);
     103
     104            if (other.isInlineBuffer()) {
     105                ASSERT(other.m_capacity == InlineCapacity);
     106                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
     107                m_buffer = m_inlineBuffer;
     108            } else
     109                m_buffer = other.m_buffer;
    82110            m_capacity = other.m_capacity;
    83             other.m_capacity = 0;
    84         }
    85 
    86         AssemblerData& operator=(AssemblerData&& other)
    87         {
    88             m_buffer = other.m_buffer;
     111
    89112            other.m_buffer = nullptr;
    90             m_capacity = other.m_capacity;
    91113            other.m_capacity = 0;
    92114            return *this;
     
    95117        ~AssemblerData()
    96118        {
    97             fastFree(m_buffer);
     119            if (m_buffer && !isInlineBuffer())
     120                fastFree(m_buffer);
    98121        }
    99122
     
    105128        {
    106129            m_capacity = m_capacity + m_capacity / 2 + extraCapacity;
    107             m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
     130            if (isInlineBuffer()) {
     131                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
     132                memcpy(m_buffer, m_inlineBuffer, InlineCapacity);
     133            } else
     134                m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
    108135        }
    109136
    110137    private:
     138        bool isInlineBuffer() const { return m_buffer == m_inlineBuffer; }
    111139        char* m_buffer;
     140        char m_inlineBuffer[InlineCapacity];
    112141        unsigned m_capacity;
    113142    };
    114143
    115144    class AssemblerBuffer {
    116         static const int initialCapacity = 128;
    117145    public:
    118146        AssemblerBuffer()
    119             : m_storage(initialCapacity)
     147            : m_storage()
    120148            , m_index(0)
    121149        {
     
    129157        void ensureSpace(unsigned space)
    130158        {
    131             if (!isAvailable(space))
     159            while (!isAvailable(space))
    132160                outOfLineGrow();
    133161        }
     
    157185        }
    158186
     187        void setCodeSize(size_t index)
     188        {
     189            // Warning: Only use this if you know exactly what you are doing.
     190            // For example, say you want 40 bytes of nops, it's ok to grow
     191            // and then fill 40 bytes of nops using bigger instructions.
     192            m_index = index;
     193            ASSERT(m_index <= m_storage.capacity());
     194        }
     195
    159196        AssemblerLabel label() const
    160197        {
     
    164201        unsigned debugOffset() { return m_index; }
    165202
    166         AssemblerData releaseAssemblerData() { return WTFMove(m_storage); }
     203        AssemblerData&& releaseAssemblerData() { return WTFMove(m_storage); }
    167204
    168205        // LocalWriter is a trick to keep the storage buffer and the index
  • trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp

    r201380 r202214  
    9999void LinkBuffer::copyCompactAndLinkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
    100100{
    101     m_initialSize = macroAssembler.m_assembler.codeSize();
    102     allocate(m_initialSize, ownerUID, effort);
     101    allocate(macroAssembler, ownerUID, effort);
     102    const size_t initialSize = macroAssembler.m_assembler.codeSize();
    103103    if (didFailToAllocate())
    104104        return;
     105
    105106    Vector<LinkRecord, 0, UnsafeVectorOverflow>& jumpsToLink = macroAssembler.jumpsToLink();
    106107    m_assemblerStorage = macroAssembler.m_assembler.buffer().releaseAssemblerData();
     
    108109
    109110    AssemblerData outBuffer(m_size);
     111
    110112    uint8_t* outData = reinterpret_cast<uint8_t*>(outBuffer.buffer());
    111113    uint8_t* codeOutData = reinterpret_cast<uint8_t*>(m_code);
     
    114116    int writePtr = 0;
    115117    unsigned jumpCount = jumpsToLink.size();
    116     for (unsigned i = 0; i < jumpCount; ++i) {
    117         int offset = readPtr - writePtr;
    118         ASSERT(!(offset & 1));
    119            
    120         // Copy the instructions from the last jump to the current one.
    121         size_t regionSize = jumpsToLink[i].from() - readPtr;
    122         InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
    123         InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
    124         InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
    125         ASSERT(!(regionSize % 2));
    126         ASSERT(!(readPtr % 2));
    127         ASSERT(!(writePtr % 2));
    128         while (copySource != copyEnd)
    129             *copyDst++ = *copySource++;
    130         recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
    131         readPtr += regionSize;
    132         writePtr += regionSize;
    133            
    134         // Calculate absolute address of the jump target, in the case of backwards
    135         // branches we need to be precise, forward branches we are pessimistic
    136         const uint8_t* target;
    137         if (jumpsToLink[i].to() >= jumpsToLink[i].from())
    138             target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
    139         else
    140             target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
    141            
    142         JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
    143         // Compact branch if we can...
    144         if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
    145             // Step back in the write stream
    146             int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
    147             if (delta) {
    148                 writePtr -= delta;
    149                 recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
     118    if (m_shouldPerformBranchCompaction) {
     119        for (unsigned i = 0; i < jumpCount; ++i) {
     120            int offset = readPtr - writePtr;
     121            ASSERT(!(offset & 1));
     122               
     123            // Copy the instructions from the last jump to the current one.
     124            size_t regionSize = jumpsToLink[i].from() - readPtr;
     125            InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
     126            InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
     127            InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
     128            ASSERT(!(regionSize % 2));
     129            ASSERT(!(readPtr % 2));
     130            ASSERT(!(writePtr % 2));
     131            while (copySource != copyEnd)
     132                *copyDst++ = *copySource++;
     133            recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
     134            readPtr += regionSize;
     135            writePtr += regionSize;
     136               
     137            // Calculate absolute address of the jump target, in the case of backwards
     138            // branches we need to be precise, forward branches we are pessimistic
     139            const uint8_t* target;
     140            if (jumpsToLink[i].to() >= jumpsToLink[i].from())
     141                target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
     142            else
     143                target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
     144               
     145            JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
     146            // Compact branch if we can...
     147            if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
     148                // Step back in the write stream
     149                int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
     150                if (delta) {
     151                    writePtr -= delta;
     152                    recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
     153                }
    150154            }
     155            jumpsToLink[i].setFrom(writePtr);
    151156        }
    152         jumpsToLink[i].setFrom(writePtr);
     157    } else {
     158        if (!ASSERT_DISABLED) {
     159            for (unsigned i = 0; i < jumpCount; ++i)
     160                ASSERT(!MacroAssembler::canCompact(jumpsToLink[i].type()));
     161        }
    153162    }
    154163    // Copy everything after the last jump
    155     memcpy(outData + writePtr, inData + readPtr, m_initialSize - readPtr);
    156     recordLinkOffsets(m_assemblerStorage, readPtr, m_initialSize, readPtr - writePtr);
     164    memcpy(outData + writePtr, inData + readPtr, initialSize - readPtr);
     165    recordLinkOffsets(m_assemblerStorage, readPtr, initialSize, readPtr - writePtr);
    157166       
    158167    for (unsigned i = 0; i < jumpCount; ++i) {
     
    163172
    164173    jumpsToLink.clear();
    165     shrink(writePtr + m_initialSize - readPtr);
    166 
    167     performJITMemcpy(m_code, outBuffer.buffer(), m_size);
     174
     175    size_t compactSize = writePtr + initialSize - readPtr;
     176    if (m_executableMemory) {
     177        m_size = compactSize;
     178        m_executableMemory->shrink(m_size);
     179    } else {
     180        size_t nopSizeInBytes = initialSize - compactSize;
     181        bool isCopyingToExecutableMemory = false;
     182        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);
     183    }
     184
     185    performJITMemcpy(m_code, outData, m_size);
    168186
    169187#if DUMP_LINK_STATISTICS
    170     dumpLinkStatistics(m_code, m_initialSize, m_size);
     188    dumpLinkStatistics(m_code, initialSize, m_size);
    171189#endif
    172190#if DUMP_CODE
     
    183201    macroAssembler.m_assembler.buffer().flushConstantPool(false);
    184202#endif
    185     AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
    186     allocate(buffer.codeSize(), ownerUID, effort);
     203    allocate(macroAssembler, ownerUID, effort);
    187204    if (!m_didAllocate)
    188205        return;
    189206    ASSERT(m_code);
     207    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
    190208#if CPU(ARM_TRADITIONAL)
    191209    macroAssembler.m_assembler.prepareExecutableCopy(m_code);
     
    199217#elif CPU(ARM64)
    200218    copyCompactAndLinkCode<uint32_t>(macroAssembler, ownerUID, effort);
    201 #endif
     219#endif // !ENABLE(BRANCH_COMPACTION)
    202220
    203221    m_linkTasks = WTFMove(macroAssembler.m_linkTasks);
    204222}
    205223
    206 void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
    207 {
     224void LinkBuffer::allocate(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
     225{
     226    size_t initialSize = macroAssembler.m_assembler.codeSize();
    208227    if (m_code) {
    209228        if (initialSize > m_size)
    210229            return;
    211230       
     231        size_t nopsToFillInBytes = m_size - initialSize;
     232        macroAssembler.emitNops(nopsToFillInBytes);
    212233        m_didAllocate = true;
    213         m_size = initialSize;
    214234        return;
    215235    }
     
    222242    m_size = initialSize;
    223243    m_didAllocate = true;
    224 }
    225 
    226 void LinkBuffer::shrink(size_t newSize)
    227 {
    228     if (!m_executableMemory)
    229         return;
    230     m_size = newSize;
    231     m_executableMemory->shrink(m_size);
    232244}
    233245
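The tail of `copyCompactAndLinkCode()` above introduces the key size arithmetic: after compaction, `readPtr` bytes were consumed and `writePtr` bytes produced, so the final size is `writePtr + initialSize - readPtr`; when the allocation cannot shrink (no `m_executableMemory`), the leftover bytes are padded with nops instead. A hedged standalone sketch of just that bookkeeping (a hypothetical helper, not JSC's API; `0x90` stands in for a one-byte x86 nop):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch of the size bookkeeping at the end of copyCompactAndLinkCode():
// compactSize = writePtr + (initialSize - readPtr), i.e. the compacted
// prefix plus the tail that is copied verbatim. If the buffer can shrink,
// the caller trims the allocation; otherwise the gap is filled with nops
// so the region stays valid code at its original size.
size_t compactAndPad(unsigned char* out, size_t initialSize,
                     size_t readPtr, size_t writePtr, bool canShrink)
{
    size_t compactSize = writePtr + (initialSize - readPtr);
    if (canShrink)
        return compactSize; // caller shrinks the allocation to compactSize
    memset(out + compactSize, 0x90, initialSize - compactSize); // nop fill
    return initialSize;
}
```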
  • trunk/Source/JavaScriptCore/assembler/LinkBuffer.h

    r199710 r202214  
    8383    LinkBuffer(VM& vm, MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort = JITCompilationMustSucceed)
    8484        : m_size(0)
    85 #if ENABLE(BRANCH_COMPACTION)
    86         , m_initialSize(0)
    87 #endif
    8885        , m_didAllocate(false)
    8986        , m_code(0)
     
    9693    }
    9794
    98     LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed)
     95    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed, bool shouldPerformBranchCompaction = true)
    9996        : m_size(size)
    100 #if ENABLE(BRANCH_COMPACTION)
    101         , m_initialSize(0)
    102 #endif
    10397        , m_didAllocate(false)
    10498        , m_code(code)
     
    108102#endif
    109103    {
     104#if ENABLE(BRANCH_COMPACTION)
     105        m_shouldPerformBranchCompaction = shouldPerformBranchCompaction;
     106#else
     107        UNUSED_PARAM(shouldPerformBranchCompaction);
     108#endif
    110109        linkCode(macroAssembler, 0, effort);
    111110    }
     
    251250    }
    252251
    253     // FIXME: this does not account for the AssemblerData size!
    254     size_t size()
    255     {
    256         return m_size;
    257     }
     252    size_t size() const { return m_size; }
    258253   
    259254    bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
     
    279274        return src;
    280275    }
    281    
     276
    282277    // Keep this private! - the underlying code should only be obtained externally via finalizeCode().
    283278    void* code()
     
    286281    }
    287282   
    288     void allocate(size_t initialSize, void* ownerUID, JITCompilationEffort);
    289     void shrink(size_t newSize);
     283    void allocate(MacroAssembler&, void* ownerUID, JITCompilationEffort);
    290284
    291285    JS_EXPORT_PRIVATE void linkCode(MacroAssembler&, void* ownerUID, JITCompilationEffort);
     
    308302    size_t m_size;
    309303#if ENABLE(BRANCH_COMPACTION)
    310     size_t m_initialSize;
    311304    AssemblerData m_assemblerStorage;
     305    bool m_shouldPerformBranchCompaction { true };
    312306#endif
    313307    bool m_didAllocate;
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h

    r201208 r202214  
    30883088    }
    30893089
     3090    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
     3091    {
     3092        m_makeJumpPatchable = true;
     3093        Jump result = branch32(cond, left, imm);
     3094        m_makeJumpPatchable = false;
     3095        return PatchableJump(result);
     3096    }
     3097
    30903098    PatchableJump patchableBranch64(RelationalCondition cond, RegisterID reg, TrustedImm64 imm)
    30913099    {
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h

    r199894 r202214  
    18691869    }
    18701870
     1871    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
     1872    {
     1873        m_makeJumpPatchable = true;
     1874        Jump result = branch32(cond, left, imm);
     1875        m_makeJumpPatchable = false;
     1876        return PatchableJump(result);
     1877    }
     1878
    18711879    PatchableJump patchableBranchPtrWithPatch(RelationalCondition cond, Address left, DataLabelPtr& dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
    18721880    {
  • trunk/Source/JavaScriptCore/assembler/X86Assembler.h

    r201208 r202214  
    29472947    }
    29482948
    2949     static void fillNops(void* base, size_t size)
    2950     {
     2949    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     2950    {
     2951        UNUSED_PARAM(isCopyingToExecutableMemory);
    29512952#if CPU(X86_64)
    29522953        static const uint8_t nops[10][10] = {
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r202157 r202214  
    441441        case CacheType::Unset:
    442442            out.printf("unset");
     443            break;
     444        case CacheType::ArrayLength:
     445            out.printf("ArrayLength");
    443446            break;
    444447        default:
  • trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp

    r201703 r202214  
    15521552   
    15531553    state.baseGPR = static_cast<GPRReg>(stubInfo.patch.baseGPR);
    1554     state.valueRegs = JSValueRegs(
    1555 #if USE(JSVALUE32_64)
    1556         static_cast<GPRReg>(stubInfo.patch.valueTagGPR),
    1557 #endif
    1558         static_cast<GPRReg>(stubInfo.patch.valueGPR));
     1554    state.valueRegs = stubInfo.valueRegs();
    15591555
    15601556    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
     
    17541750    }
    17551751
    1756     CodeLocationLabel successLabel =
    1757         stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToDone);
     1752    CodeLocationLabel successLabel = stubInfo.doneLocation();
    17581753       
    17591754    linkBuffer.link(state.success, successLabel);
    17601755
    1761     linkBuffer.link(
    1762         failure,
    1763         stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
     1756    linkBuffer.link(failure, stubInfo.slowPathStartLocation());
    17641757   
    17651758    if (verbose)
  • trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp

    r201657 r202214  
    6464}
    6565
     66void StructureStubInfo::initArrayLength()
     67{
     68    cacheType = CacheType::ArrayLength;
     69}
     70
    6671void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
    6772{
     
    8893    case CacheType::GetByIdSelf:
    8994    case CacheType::PutByIdReplace:
     95    case CacheType::ArrayLength:
    9096        return;
    9197    }
     
    103109    case CacheType::GetByIdSelf:
    104110    case CacheType::PutByIdReplace:
     111    case CacheType::ArrayLength:
    105112        return;
    106113    }
     
    258265    switch (cacheType) {
    259266    case CacheType::Unset:
     267    case CacheType::ArrayLength:
    260268        return true;
    261269    case CacheType::GetByIdSelf:
     
    276284    return u.stub->containsPC(pc);
    277285}
    278 #endif
     286
     287#endif // ENABLE(JIT)
    279288
    280289} // namespace JSC
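The new `CacheType::ArrayLength` added above caches array length loads inline. Conceptually the fast path is a guard on the cell's indexing type followed by a load of the public length from the butterfly; a loose model under stated assumptions (`ToyButterfly`/`ToyJSArray` and the shape constant are illustrative stand-ins, not JSC's real object layout):

```cpp
#include <cassert>
#include <cstdint>

// Conceptual model of an ArrayLength inline access: guard the indexing
// type, then load publicLength from the butterfly. The real IC emits
// this as machine code via InlineAccess; here it is plain C++.
struct ToyButterfly { uint32_t publicLength; uint32_t vectorLength; };
struct ToyJSArray { uint8_t indexingType; ToyButterfly* butterfly; };

constexpr uint8_t ContiguousShape = 2; // illustrative constant

// Returns true and writes the length if the guard passes; on a guard
// failure the real IC falls through to the slow path instead.
bool tryArrayLengthIC(const ToyJSArray& array, uint32_t& result)
{
    if (array.indexingType != ContiguousShape)
        return false; // slow path
    result = array.butterfly->publicLength;
    return true;
}
```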
  • trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h

    r200405 r202214  
    5858    GetByIdSelf,
    5959    PutByIdReplace,
    60     Stub
     60    Stub,
     61    ArrayLength
    6162};
    6263
     
    6970
    7071    void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
     72    void initArrayLength();
    7173    void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
    7274    void initStub(CodeBlock*, std::unique_ptr<PolymorphicAccess>);
     
    144146    }
    145147
    146     CodeLocationCall callReturnLocation;
     148    bool containsPC(void* pc) const;
    147149
    148150    CodeOrigin codeOrigin;
    149151    CallSiteIndex callSiteIndex;
    150 
    151     bool containsPC(void* pc) const;
    152152
    153153    union {
     
    166166   
    167167    struct {
      168        CodeLocationLabel start; // This is either the start of the inline IC for *byId caches, or the location of the patchable jump for 'in' caches.
     169        RegisterSet usedRegisters;
     170        uint32_t inlineSize;
     171        int32_t deltaFromStartToSlowPathCallLocation;
     172        int32_t deltaFromStartToSlowPathStart;
     173
    168174        int8_t baseGPR;
     175        int8_t valueGPR;
    169176#if USE(JSVALUE32_64)
    170177        int8_t valueTagGPR;
    171178        int8_t baseTagGPR;
    172179#endif
    173         int8_t valueGPR;
    174         RegisterSet usedRegisters;
    175         int32_t deltaCallToDone;
    176         int32_t deltaCallToJump;
    177         int32_t deltaCallToSlowCase;
    178         int32_t deltaCheckImmToCall;
    179 #if USE(JSVALUE64)
    180         int32_t deltaCallToLoadOrStore;
    181 #else
    182         int32_t deltaCallToTagLoadOrStore;
    183         int32_t deltaCallToPayloadLoadOrStore;
     180    } patch;
     181
     182    CodeLocationCall slowPathCallLocation() { return patch.start.callAtOffset(patch.deltaFromStartToSlowPathCallLocation); }
     183    CodeLocationLabel doneLocation() { return patch.start.labelAtOffset(patch.inlineSize); }
     184    CodeLocationLabel slowPathStartLocation() { return patch.start.labelAtOffset(patch.deltaFromStartToSlowPathStart); }
     185    CodeLocationJump patchableJumpForIn()
     186    {
     187        ASSERT(accessType == AccessType::In);
     188        return patch.start.jumpAtOffset(0);
     189    }
     190
     191    JSValueRegs valueRegs() const
     192    {
     193        return JSValueRegs(
     194#if USE(JSVALUE32_64)
     195            static_cast<GPRReg>(patch.valueTagGPR),
    184196#endif
    185     } patch;
     197            static_cast<GPRReg>(patch.valueGPR));
     198    }
     199
    186200
    187201    AccessType accessType;
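The reworked `patch` struct above replaces several absolute code locations with one `start` label plus small deltas, and the accessor methods recompute each location on demand. A minimal model of that addressing scheme (`ToyStubInfo` and raw `char*` "code locations" are illustrative; the real code uses `CodeLocationLabel`/`CodeLocationCall`):

```cpp
#include <cassert>
#include <cstdint>

// Minimal model of the new StructureStubInfo layout: one base pointer
// plus compact deltas, with locations derived by pointer arithmetic
// exactly as doneLocation()/slowPathCallLocation()/slowPathStartLocation()
// do in the diff above.
struct ToyStubInfo {
    char* start { nullptr };
    uint32_t inlineSize { 0 };
    int32_t deltaFromStartToSlowPathCallLocation { 0 };
    int32_t deltaFromStartToSlowPathStart { 0 };

    char* doneLocation() const { return start + inlineSize; }
    char* slowPathCallLocation() const
    {
        return start + deltaFromStartToSlowPathCallLocation;
    }
    char* slowPathStartLocation() const
    {
        return start + deltaFromStartToSlowPathStart;
    }
};
```

Consolidating the math here is what lets every caller (Repatch, OSR exit, PolymorphicAccess) stop doing its own offset arithmetic.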
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

    r200879 r202214  
    259259    for (unsigned i = 0; i < m_ins.size(); ++i) {
    260260        StructureStubInfo& info = *m_ins[i].m_stubInfo;
    261         CodeLocationCall callReturnLocation = linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call());
    262         info.patch.deltaCallToDone = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_done));
    263         info.patch.deltaCallToJump = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_jump));
    264         info.callReturnLocation = callReturnLocation;
    265         info.patch.deltaCallToSlowCase = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
     261
     262        CodeLocationLabel start = linkBuffer.locationOf(m_ins[i].m_jump);
     263        info.patch.start = start;
     264
     265        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
     266            start, linkBuffer.locationOf(m_ins[i].m_done));
     267        RELEASE_ASSERT(inlineSize >= 0);
     268        info.patch.inlineSize = inlineSize;
     269
     270        info.patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
     271            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call()));
     272
     273        info.patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
     274            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
    266275    }
    267276   
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

    r199303 r202214  
    187187                RELEASE_ASSERT(stubInfo);
    188188
    189                 jumpTarget = stubInfo->callReturnLocation.labelAtOffset(
    190                     stubInfo->patch.deltaCallToDone).executableAddress();
     189                jumpTarget = stubInfo->doneLocation().executableAddress();
    191190                break;
    192191            }
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp

    r202125 r202214  
    198198    CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
    199199    JITGetByIdGenerator gen(
    200         m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
    201         JSValueRegs(baseTagGPROrNone, basePayloadGPR),
    202         JSValueRegs(resultTagGPR, resultPayloadGPR), type);
     200        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
     201        JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), type);
    203202   
    204203    gen.generateFastPath(m_jit);
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

    r202125 r202214  
    169169    }
    170170    JITGetByIdGenerator gen(
    171         m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, JSValueRegs(baseGPR),
    172         JSValueRegs(resultGPR), type);
     171        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
     172        JSValueRegs(baseGPR), JSValueRegs(resultGPR), type);
    173173    gen.generateFastPath(m_jit);
    174174   
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp

    r202141 r202214  
    61846184                                jit.addLinkTask(
    61856185                                    [=] (LinkBuffer& linkBuffer) {
    6186                                         CodeLocationCall callReturnLocation =
    6187                                             linkBuffer.locationOf(slowPathCall);
    6188                                         stubInfo->patch.deltaCallToDone =
    6189                                             CCallHelpers::differenceBetweenCodePtr(
    6190                                                 callReturnLocation,
    6191                                                 linkBuffer.locationOf(done));
    6192                                         stubInfo->patch.deltaCallToJump =
    6193                                             CCallHelpers::differenceBetweenCodePtr(
    6194                                                 callReturnLocation,
    6195                                                 linkBuffer.locationOf(jump));
    6196                                         stubInfo->callReturnLocation = callReturnLocation;
    6197                                         stubInfo->patch.deltaCallToSlowCase =
    6198                                             CCallHelpers::differenceBetweenCodePtr(
    6199                                                 callReturnLocation,
    6200                                                 linkBuffer.locationOf(slowPathBegin));
     6186                                        CodeLocationLabel start = linkBuffer.locationOf(jump);
     6187                                        stubInfo->patch.start = start;
     6188                                        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
     6189                                            start, linkBuffer.locationOf(done));
     6190                                        RELEASE_ASSERT(inlineSize >= 0);
     6191                                        stubInfo->patch.inlineSize = inlineSize;
     6192
     6193                                        stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
     6194                                            start, linkBuffer.locationOf(slowPathCall));
     6195
     6196                                        stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
     6197                                            start, linkBuffer.locationOf(slowPathBegin));
     6198
    62016199                                    });
    62026200                            });
     
    76177615                auto generator = Box<JITGetByIdGenerator>::create(
    76187616                    jit.codeBlock(), node->origin.semantic, callSiteIndex,
    7619                     params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
     7617                    params.unavailableRegisters(), uid, JSValueRegs(params[1].gpr()),
    76207618                    JSValueRegs(params[0].gpr()), type);
    76217619
  • trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp

    r199303 r202214  
    3030
    3131#include "CodeBlock.h"
     32#include "InlineAccess.h"
     33#include "JSCInlines.h"
    3234#include "LinkBuffer.h"
    33 #include "JSCInlines.h"
    3435#include "StructureStubInfo.h"
    3536
     
    7071void JITByIdGenerator::finalize(LinkBuffer& fastPath, LinkBuffer& slowPath)
    7172{
    72     CodeLocationCall callReturnLocation = slowPath.locationOf(m_call);
    73     m_stubInfo->callReturnLocation = callReturnLocation;
    74     m_stubInfo->patch.deltaCheckImmToCall = MacroAssembler::differenceBetweenCodePtr(
    75         fastPath.locationOf(m_structureImm), callReturnLocation);
    76     m_stubInfo->patch.deltaCallToJump = MacroAssembler::differenceBetweenCodePtr(
    77         callReturnLocation, fastPath.locationOf(m_structureCheck));
    78 #if USE(JSVALUE64)
    79     m_stubInfo->patch.deltaCallToLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
    80         callReturnLocation, fastPath.locationOf(m_loadOrStore));
    81 #else
    82     m_stubInfo->patch.deltaCallToTagLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
    83         callReturnLocation, fastPath.locationOf(m_tagLoadOrStore));
    84     m_stubInfo->patch.deltaCallToPayloadLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
    85         callReturnLocation, fastPath.locationOf(m_loadOrStore));
    86 #endif
    87     m_stubInfo->patch.deltaCallToSlowCase = MacroAssembler::differenceBetweenCodePtr(
    88         callReturnLocation, slowPath.locationOf(m_slowPathBegin));
    89     m_stubInfo->patch.deltaCallToDone = MacroAssembler::differenceBetweenCodePtr(
    90         callReturnLocation, fastPath.locationOf(m_done));
     73    ASSERT(m_start.isSet());
     74    CodeLocationLabel start = fastPath.locationOf(m_start);
     75    m_stubInfo->patch.start = start;
     76
     77    int32_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
     78        start, fastPath.locationOf(m_done));
     79    ASSERT(inlineSize > 0);
     80    m_stubInfo->patch.inlineSize = inlineSize;
     81
     82    m_stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
     83        start, slowPath.locationOf(m_slowPathCall));
     84    m_stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
     85        start, slowPath.locationOf(m_slowPathBegin));
    9186}
    9287
     
    9691}
    9792
    98 void JITByIdGenerator::generateFastPathChecks(MacroAssembler& jit)
     93void JITByIdGenerator::generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
    9994{
    100     m_structureCheck = jit.patchableBranch32WithPatch(
    101         MacroAssembler::NotEqual,
    102         MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
    103         m_structureImm, MacroAssembler::TrustedImm32(0));
     95    m_start = jit.label();
     96    size_t startSize = jit.m_assembler.buffer().codeSize();
     97    m_slowPathJump = jit.jump();
     98    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
     99    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
     100    jit.emitNops(nopsToEmitInBytes);
     101    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
     102    m_done = jit.label();
    104103}
    105104
    106105JITGetByIdGenerator::JITGetByIdGenerator(
    107106    CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet& usedRegisters,
    108     JSValueRegs base, JSValueRegs value, AccessType accessType)
    109     : JITByIdGenerator(
    110         codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
     107    UniquedStringImpl* propertyName, JSValueRegs base, JSValueRegs value, AccessType accessType)
     108    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
     109    , m_isLengthAccess(propertyName == codeBlock->vm()->propertyNames->length.impl())
    111110{
    112111    RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
     
    115114void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
    116115{
    117     generateFastPathChecks(jit);
    118    
    119 #if USE(JSVALUE64)
    120     m_loadOrStore = jit.load64WithCompactAddressOffsetPatch(
    121         MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
    122 #else
    123     m_tagLoadOrStore = jit.load32WithCompactAddressOffsetPatch(
    124         MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.tagGPR()).label();
    125     m_loadOrStore = jit.load32WithCompactAddressOffsetPatch(
    126         MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
    127 #endif
    128    
    129     m_done = jit.label();
     116    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
    130117}
    131118
     
    144131void JITPutByIdGenerator::generateFastPath(MacroAssembler& jit)
    145132{
    146     generateFastPathChecks(jit);
    147    
    148 #if USE(JSVALUE64)
    149     m_loadOrStore = jit.store64WithAddressOffsetPatch(
    150         m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
    151 #else
    152     m_tagLoadOrStore = jit.store32WithAddressOffsetPatch(
    153         m_value.tagGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
    154     m_loadOrStore = jit.store32WithAddressOffsetPatch(
    155         m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
    156 #endif
    157    
    158     m_done = jit.label();
     133    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
    159134}
    160135
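`generateFastCommon()` above now reserves the inline region as a jump to the slow path followed by nop padding up to a fixed size, so that repatching can later memcpy a real access over it. A hedged byte-level sketch of that shape (a hypothetical emitter, not the MacroAssembler; `0xE9`/`0x90` are the x86 rel32 jmp opcode and one-byte nop):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Sketch of the region generateFastCommon() reserves: an unconditional
// jump to the slow path, then nops out to inlineICSize. InlineAccess can
// later overwrite the whole region with a self load, a replace, or an
// array-length access of at most that size.
std::vector<unsigned char> reserveInlineIC(size_t inlineICSize)
{
    const size_t jumpSize = 5; // x86 rel32 jmp: opcode + 4-byte displacement
    assert(inlineICSize >= jumpSize);
    std::vector<unsigned char> region(inlineICSize, 0x90); // nop padding
    region[0] = 0xE9;
    memset(&region[1], 0, 4); // displacement left unlinked in this sketch
    return region;
}
```

This is why the patch needs `fillNops()` on each backend and the size-probing helper mentioned in the ChangeLog: the region must be exactly `inlineICSize` bytes no matter which access is currently installed.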
  • trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h

    r199303 r202214  
    6969    {
    7070        m_slowPathBegin = slowPathBegin;
    71         m_call = call;
     71        m_slowPathCall = call;
    7272    }
    7373   
    7474    MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
    75     MacroAssembler::Jump slowPathJump() const { return m_structureCheck.m_jump; }
     75    MacroAssembler::Jump slowPathJump() const
     76    {
     77        ASSERT(m_slowPathJump.isSet());
     78        return m_slowPathJump;
     79    }
    7680
    7781    void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
     
    7983   
    8084protected:
    81     void generateFastPathChecks(MacroAssembler&);
     85    void generateFastCommon(MacroAssembler&, size_t size);
    8286   
    8387    JSValueRegs m_base;
    8488    JSValueRegs m_value;
    8589   
    86     MacroAssembler::DataLabel32 m_structureImm;
    87     MacroAssembler::PatchableJump m_structureCheck;
    88     AssemblerLabel m_loadOrStore;
    89 #if USE(JSVALUE32_64)
    90     AssemblerLabel m_tagLoadOrStore;
    91 #endif
     90    MacroAssembler::Label m_start;
    9291    MacroAssembler::Label m_done;
    9392    MacroAssembler::Label m_slowPathBegin;
    94     MacroAssembler::Call m_call;
     93    MacroAssembler::Call m_slowPathCall;
     94    MacroAssembler::Jump m_slowPathJump;
    9595};
    9696
     
    100100
    101101    JITGetByIdGenerator(
    102         CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, JSValueRegs base,
    103         JSValueRegs value, AccessType);
     102        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, UniquedStringImpl* propertyName,
     103        JSValueRegs base, JSValueRegs value, AccessType);
    104104   
    105105    void generateFastPath(MacroAssembler&);
     106
     107private:
     108    bool m_isLengthAccess;
    106109};
    107110
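The header hunk above captures the changeset's core data-structure idea: instead of many patchable labels, the stub keeps one start location plus small integer deltas (slow path start, slow path call, 'done'), and the math on those deltas is consolidated into accessor methods. A minimal standalone sketch of that layout, with hypothetical names standing in for the real StructureStubInfo fields:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the StructureStubInfo layout described in this
// changeset: one absolute start address for the inline IC region, plus small
// byte deltas from which the other interesting locations are derived on
// demand. Not the actual JSC types.
struct StubInfoSketch {
    uint8_t* start = nullptr;        // start of the inline IC region
    int32_t doneDelta = 0;           // start -> end of the inline region
    int32_t slowPathStartDelta = 0;  // start -> slow path entry
    int32_t slowPathCallDelta = 0;   // start -> slow path call instruction

    uint8_t* doneLocation() const { return start + doneDelta; }
    uint8_t* slowPathStartLocation() const { return start + slowPathStartDelta; }
    uint8_t* slowPathCallLocation() const { return start + slowPathCallDelta; }
};
```

Centralizing the arithmetic in accessors is what lets Repatch.cpp drop its scattered `callReturnLocation.labelAtOffset(...)` computations in favor of calls like `stubInfo.slowPathCallLocation()`.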
  • trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp

    r202157 r202214  
    223223    JITGetByIdGenerator gen(
    224224        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
    225         JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     225        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
    226226    gen.generateFastPath(*this);
    227227
     
    572572    int resultVReg = currentInstruction[1].u.operand;
    573573    int baseVReg = currentInstruction[2].u.operand;
     574    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
    574575
    575576    emitGetVirtualRegister(baseVReg, regT0);
     
    579580    JITGetByIdGenerator gen(
    580581        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
    581         JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
     582        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
    582583    gen.generateFastPath(*this);
    583584    addSlowCase(gen.slowPathJump());
     
    620621    JITGetByIdGenerator gen(
    621622        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
    622         JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     623        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
    623624    gen.generateFastPath(*this);
    624625    addSlowCase(gen.slowPathJump());
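The hunks above thread the property name (`propertyName.impl()` / `ident->impl()`) into JITGetByIdGenerator. The reason is visible in the .cpp hunk earlier: the generator checks for a `length` access so it can reserve a differently sized inline region. A hedged sketch of that decision, with made-up sizes standing in for the measured values of `InlineAccess::sizeForLengthAccess()` and `InlineAccess::sizeForPropertyAccess()`:

```cpp
#include <cstddef>
#include <string>

// Hypothetical sizes; per the changeset description, the real values are
// computed by stubbing out each inline-cache shape and dumping its code size.
constexpr size_t kSizeForLengthAccess = 24;
constexpr size_t kSizeForPropertyAccess = 40;

// Mirrors the JITGetByIdGenerator logic: the property name determines how
// much inline space generateFastCommon() must reserve, since an array-length
// IC has a different shape than a self property load.
size_t inlineRegionSize(const std::string& propertyName)
{
    const bool isLengthAccess = (propertyName == "length");
    return isLengthAccess ? kSizeForLengthAccess : kSizeForPropertyAccess;
}
```

This is why the constructor signature change has to ripple through JITPropertyAccess.cpp and JITPropertyAccess32_64.cpp: every call site must now supply the identifier.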
  • trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp

    r202157 r202214  
    293293    JITGetByIdGenerator gen(
    294294        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
    295         JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     295        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
    296296    gen.generateFastPath(*this);
    297297
     
    588588    int dst = currentInstruction[1].u.operand;
    589589    int base = currentInstruction[2].u.operand;
     590    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
    590591
    591592    emitLoad(base, regT1, regT0);
     
    594595    JITGetByIdGenerator gen(
    595596        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
    596         JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
     597        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
    597598    gen.generateFastPath(*this);
    598599    addSlowCase(gen.slowPathJump());
     
    635636    JITGetByIdGenerator gen(
    636637        m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
    637         JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     638        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
    638639    gen.generateFastPath(*this);
    639640    addSlowCase(gen.slowPathJump());
  • trunk/Source/JavaScriptCore/jit/Repatch.cpp

    r201562 r202214  
    3939#include "GetterSetter.h"
    4040#include "ICStats.h"
     41#include "InlineAccess.h"
    4142#include "JIT.h"
    4243#include "JITInlines.h"
     
    9192}
    9293
    93 static void repatchByIdSelfAccess(
    94     CodeBlock* codeBlock, StructureStubInfo& stubInfo, Structure* structure,
    95     PropertyOffset offset, const FunctionPtr& slowPathFunction,
    96     bool compact)
    97 {
    98     // Only optimize once!
    99     repatchCall(codeBlock, stubInfo.callReturnLocation, slowPathFunction);
    100 
    101     // Patch the structure check & the offset of the load.
    102     MacroAssembler::repatchInt32(
    103         stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall),
    104         bitwise_cast<int32_t>(structure->id()));
    105 #if USE(JSVALUE64)
    106     if (compact)
    107         MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
    108     else
    109         MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
    110 #elif USE(JSVALUE32_64)
    111     if (compact) {
    112         MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
    113         MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
    114     } else {
    115         MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
    116         MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
    117     }
    118 #endif
    119 }
    120 
    121 static void resetGetByIDCheckAndLoad(StructureStubInfo& stubInfo)
    122 {
    123     CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
    124     if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
    125         MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
    126             MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
    127             MacroAssembler::Address(
    128                 static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
    129                 JSCell::structureIDOffset()),
    130             static_cast<int32_t>(unusedPointer));
    131     }
    132     MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
    133 #if USE(JSVALUE64)
    134     MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
    135 #else
    136     MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
    137     MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
    138 #endif
    139 }
    140 
    141 static void resetPutByIDCheckAndLoad(StructureStubInfo& stubInfo)
    142 {
    143     CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
    144     if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
    145         MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
    146             MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
    147             MacroAssembler::Address(
    148                 static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
    149                 JSCell::structureIDOffset()),
    150             static_cast<int32_t>(unusedPointer));
    151     }
    152     MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
    153 #if USE(JSVALUE64)
    154     MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
    155 #else
    156     MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
    157     MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
    158 #endif
    159 }
    160 
    161 static void replaceWithJump(StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
    162 {
    163     RELEASE_ASSERT(target);
    164    
    165     if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
    166         MacroAssembler::replaceWithJump(
    167             MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
    168                 stubInfo.callReturnLocation.dataLabel32AtOffset(
    169                     -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
    170             CodeLocationLabel(target));
    171         return;
    172     }
    173 
    174     resetGetByIDCheckAndLoad(stubInfo);
    175    
    176     MacroAssembler::repatchJump(
    177         stubInfo.callReturnLocation.jumpAtOffset(
    178             stubInfo.patch.deltaCallToJump),
    179         CodeLocationLabel(target));
    180 }
    181 
    18294enum InlineCacheAction {
    18395    GiveUpOnCache,
     
    242154
    243155    if (propertyName == vm.propertyNames->length) {
    244         if (isJSArray(baseValue))
     156        if (isJSArray(baseValue)) {
     157            if (stubInfo.cacheType == CacheType::Unset
     158                && slot.slotBase() == baseValue
     159                && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseValue))) {
     160
     161                bool generatedCodeInline = InlineAccess::generateArrayLength(*codeBlock->vm(), stubInfo, jsCast<JSArray*>(baseValue));
     162                if (generatedCodeInline) {
     163                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
     164                    stubInfo.initArrayLength();
     165                    return RetryCacheLater;
     166                }
     167            }
     168
    245169            newCase = AccessCase::getLength(vm, codeBlock, AccessCase::ArrayLength);
    246         else if (isJSString(baseValue))
     170        } else if (isJSString(baseValue))
    247171            newCase = AccessCase::getLength(vm, codeBlock, AccessCase::StringLength);
    248172        else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(baseValue)) {
     
    277201        if (action != AttemptToCache)
    278202            return action;
    279        
     203
    280204        // Optimize self access.
    281205        if (stubInfo.cacheType == CacheType::Unset
     
    283207            && slot.slotBase() == baseValue
    284208            && !slot.watchpointSet()
    285             && isInlineOffset(slot.cachedOffset())
    286             && MacroAssembler::isCompactPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
    287             && action == AttemptToCache
    288209            && !structure->needImpurePropertyWatchpoint()
    289210            && !loadTargetFromProxy) {
    290             LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
    291             structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
    292             repatchByIdSelfAccess(codeBlock, stubInfo, structure, slot.cachedOffset(), appropriateOptimizingGetByIdFunction(kind), true);
    293             stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
    294             return RetryCacheLater;
     211
     212            bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(*codeBlock->vm(), stubInfo, structure, slot.cachedOffset());
     213            if (generatedCodeInline) {
     214                LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
     215                structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
     216                repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
     217                stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
     218                return RetryCacheLater;
     219            }
    295220        }
    296221
     
    371296       
    372297        RELEASE_ASSERT(result.code());
    373         replaceWithJump(stubInfo, result.code());
     298        InlineAccess::rewireStubAsJump(exec->vm(), stubInfo, CodeLocationLabel(result.code()));
    374299    }
    375300   
     
    383308   
    384309    if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
    385         repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericGetByIdFunction(kind));
     310        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind));
    386311}
    387312
     
    434359       
    435360            if (stubInfo.cacheType == CacheType::Unset
    436                 && isInlineOffset(slot.cachedOffset())
    437                 && MacroAssembler::isPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
     361                && InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
    438362                && !structure->needImpurePropertyWatchpoint()
    439363                && !structure->inferredTypeFor(ident.impl())) {
    440364               
    441                 LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
    442                
    443                 repatchByIdSelfAccess(
    444                     codeBlock, stubInfo, structure, slot.cachedOffset(),
    445                     appropriateOptimizingPutByIdFunction(slot, putKind), false);
    446                 stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
    447                 return RetryCacheLater;
     365                bool generatedCodeInline = InlineAccess::generateSelfPropertyReplace(vm, stubInfo, structure, slot.cachedOffset());
     366                if (generatedCodeInline) {
     367                    LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
     368                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));
     369                    stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
     370                    return RetryCacheLater;
     371                }
    448372            }
    449373
     
    525449       
    526450        RELEASE_ASSERT(result.code());
    527         resetPutByIDCheckAndLoad(stubInfo);
    528         MacroAssembler::repatchJump(
    529             stubInfo.callReturnLocation.jumpAtOffset(
    530                 stubInfo.patch.deltaCallToJump),
    531             CodeLocationLabel(result.code()));
     451
     452        InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(result.code()));
    532453    }
    533454   
     
    541462   
    542463    if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache)
    543         repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericPutByIdFunction(slot, putKind));
     464        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind));
    544465}
    545466
     
    587508       
    588509        RELEASE_ASSERT(result.code());
     510
    589511        MacroAssembler::repatchJump(
    590             stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
     512            stubInfo.patchableJumpForIn(),
    591513            CodeLocationLabel(result.code()));
    592514    }
     
    601523    SuperSamplerScope superSamplerScope(false);
    602524    if (tryRepatchIn(exec, base, ident, wasFound, slot, stubInfo) == GiveUpOnCache)
    603         repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, operationIn);
     525        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
    604526}
    605527
     
    973895void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind)
    974896{
    975     repatchCall(codeBlock, stubInfo.callReturnLocation, appropriateOptimizingGetByIdFunction(kind));
    976     resetGetByIDCheckAndLoad(stubInfo);
    977     MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
     897    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
     898    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
    978899}
    979900
    980901void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
    981902{
    982     V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.callReturnLocation).executableAddress());
     903    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());
    983904    V_JITOperation_ESsiJJI optimizedFunction;
    984905    if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize)
     
    992913        optimizedFunction = operationPutByIdDirectNonStrictOptimize;
    993914    }
    994     repatchCall(codeBlock, stubInfo.callReturnLocation, optimizedFunction);
    995     resetPutByIDCheckAndLoad(stubInfo);
    996     MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
     915
     916    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), optimizedFunction);
     917    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
    997918}
    998919
    999920void resetIn(CodeBlock*, StructureStubInfo& stubInfo)
    1000921{
    1001     MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
     922    MacroAssembler::repatchJump(stubInfo.patchableJumpForIn(), stubInfo.slowPathStartLocation());
    1002923}
    1003924
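The Repatch.cpp hunks above replace label-by-label patching (`repatchByIdSelfAccess`, `resetGetByIDCheckAndLoad`, etc.) with whole-region regeneration: InlineAccess re-emits the IC and the result is copied over the reserved inline region, as the changeset description says, "All that's needed to patch an inline IC is to memcpy the code from a macro assembler inline using LinkBuffer." A toy model of that strategy (not the real LinkBuffer/MacroAssembler machinery; the nop byte is x86-specific):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Single-byte nop encoding on x86; a hypothetical choice for this sketch.
constexpr uint8_t kNop = 0x90;

// Regenerate-and-copy patching: the freshly generated IC is copied over the
// reserved inline region, and any leftover bytes are padded with nops so the
// region always ends at the same 'done' location.
bool patchInlineRegion(uint8_t* region, size_t regionSize,
                       const std::vector<uint8_t>& newCode)
{
    if (newCode.size() > regionSize)
        return false; // real code would instead rewire the stub as a jump
    std::memcpy(region, newCode.data(), newCode.size());
    std::memset(region + newCode.size(), kNop, regionSize - newCode.size());
    return true;
}
```

The `false` branch corresponds to `InlineAccess::rewireStubAsJump`: when a polymorphic stub is generated out of line, the inline region is rewritten as a jump to it rather than holding the access itself.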