Changeset 202214 in WebKit

Timestamp: Jun 19, 2016, 12:42:18 PM
Location: trunk/Source/JavaScriptCore
Files: 2 added, 26 edited
trunk/Source/JavaScriptCore/CMakeLists.txt
r202157 → r202214:

     bytecode/GetByIdStatus.cpp
     bytecode/GetByIdVariant.cpp
+    bytecode/InlineAccess.cpp
     bytecode/InlineCallFrame.cpp
     bytecode/InlineCallFrameSet.cpp
trunk/Source/JavaScriptCore/ChangeLog
r202205 → r202214 (new entry added at the top of the file):

2016-06-19  Saam Barati  <sbarati@apple.com>

        We should be able to generate more types of ICs inline
        https://bugs.webkit.org/show_bug.cgi?id=158719
        <rdar://problem/26825641>

        Reviewed by Filip Pizlo.

        This patch changes how we emit code for *byId ICs inline.
        We no longer keep data labels to patch structure checks, etc.
        Instead, we just regenerate the entire IC into a designated
        region of code that the Baseline/DFG/FTL JIT will emit inline.
        This makes it much simpler to patch inline ICs: all that's
        needed to patch an inline IC is to memcpy the code from
        a macro assembler into the inline region using LinkBuffer.
        This architecture will be easy to extend to other forms of
        ICs, such as one for add, in the future.

        To support this change, I've reworked the fields inside
        StructureStubInfo. It now has one field that is the CodeLocationLabel
        of the start of the inline IC, plus a few ints that track deltas
        to other locations in the IC, such as the slow path start, the
        slow path call, and the IC's 'done' location. We used to perform
        math on these ints in a bunch of different places; I've consolidated
        that math into methods inside StructureStubInfo.

        To generate inline ICs, I've implemented a new class called InlineAccess.
        InlineAccess is stateless: it just has a bunch of static methods for
        generating code into the inline region specified by StructureStubInfo.
        Repatch now decides when it wants to generate such an inline
        IC, and asks InlineAccess to do so.

        I've implemented three types of inline ICs to begin with (extending
        this in the future should be easy):
        - Self property loads (both inline and out-of-line offsets).
        - Self property replace (both inline and out-of-line offsets).
        - Array length on specific array types.
        (An easy extension would be to implement JSString length.)

        To know how much inline space to reserve, I've implemented a
        method that stubs out the various inline cache shapes and
        dumps their sizes. This is used to determine how much space
        to reserve inline. When InlineAccess ends up generating more
        code than fits inline, we fall back to generating code with
        PolymorphicAccess instead.

        To make generating code into already-allocated executable memory
        efficient, I've given AssemblerData 128 bytes of inline storage.
        This saves us a malloc when splatting code into the inline region.

        This patch also tidies up LinkBuffer's API for generating
        into already-allocated executable memory. Now, when the generated
        code is smaller than the already-allocated space, LinkBuffer
        fills the extra space with nops. Also, if branch compaction shrinks
        the code, LinkBuffer adds a nop sled at the end of the shrunken
        code so that it takes up the entire allocated size.

        This looks like it could be a 1% Octane progression.

        * CMakeLists.txt:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/ARM64Assembler.h:
        (JSC::ARM64Assembler::nop):
        (JSC::ARM64Assembler::fillNops):
        * assembler/ARMv7Assembler.h:
        (JSC::ARMv7Assembler::nopw):
        (JSC::ARMv7Assembler::nopPseudo16):
        (JSC::ARMv7Assembler::nopPseudo32):
        (JSC::ARMv7Assembler::fillNops):
        (JSC::ARMv7Assembler::dmbSY):
        * assembler/AbstractMacroAssembler.h:
        (JSC::AbstractMacroAssembler::addLinkTask):
        (JSC::AbstractMacroAssembler::emitNops):
        (JSC::AbstractMacroAssembler::AbstractMacroAssembler):
        * assembler/AssemblerBuffer.h:
        (JSC::AssemblerData::AssemblerData):
        (JSC::AssemblerData::operator=):
        (JSC::AssemblerData::~AssemblerData):
        (JSC::AssemblerData::buffer):
        (JSC::AssemblerData::grow):
        (JSC::AssemblerData::isInlineBuffer):
        (JSC::AssemblerBuffer::AssemblerBuffer):
        (JSC::AssemblerBuffer::ensureSpace):
        (JSC::AssemblerBuffer::codeSize):
        (JSC::AssemblerBuffer::setCodeSize):
        (JSC::AssemblerBuffer::label):
        (JSC::AssemblerBuffer::debugOffset):
        (JSC::AssemblerBuffer::releaseAssemblerData):
        * assembler/LinkBuffer.cpp:
        (JSC::LinkBuffer::copyCompactAndLinkCode):
        (JSC::LinkBuffer::linkCode):
        (JSC::LinkBuffer::allocate):
        (JSC::LinkBuffer::performFinalization):
        (JSC::LinkBuffer::shrink): Deleted.
        * assembler/LinkBuffer.h:
        (JSC::LinkBuffer::LinkBuffer):
        (JSC::LinkBuffer::debugAddress):
        (JSC::LinkBuffer::size):
        (JSC::LinkBuffer::wasAlreadyDisassembled):
        (JSC::LinkBuffer::didAlreadyDisassemble):
        (JSC::LinkBuffer::applyOffset):
        (JSC::LinkBuffer::code):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::patchableBranch32):
        (JSC::MacroAssemblerARM64::patchableBranch64):
        * assembler/MacroAssemblerARMv7.h:
        (JSC::MacroAssemblerARMv7::patchableBranch32):
        (JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
        * assembler/X86Assembler.h:
        (JSC::X86Assembler::nop):
        (JSC::X86Assembler::fillNops):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::printGetByIdCacheStatus):
        * bytecode/InlineAccess.cpp: Added.
        (JSC::InlineAccess::dumpCacheSizesAndCrash):
        (JSC::linkCodeInline):
        (JSC::InlineAccess::generateSelfPropertyAccess):
        (JSC::getScratchRegister):
        (JSC::hasFreeRegister):
        (JSC::InlineAccess::canGenerateSelfPropertyReplace):
        (JSC::InlineAccess::generateSelfPropertyReplace):
        (JSC::InlineAccess::isCacheableArrayLength):
        (JSC::InlineAccess::generateArrayLength):
        (JSC::InlineAccess::rewireStubAsJump):
        * bytecode/InlineAccess.h: Added.
        (JSC::InlineAccess::sizeForPropertyAccess):
        (JSC::InlineAccess::sizeForPropertyReplace):
        (JSC::InlineAccess::sizeForLengthAccess):
        * bytecode/PolymorphicAccess.cpp:
        (JSC::PolymorphicAccess::regenerate):
        * bytecode/StructureStubInfo.cpp:
        (JSC::StructureStubInfo::initGetByIdSelf):
        (JSC::StructureStubInfo::initArrayLength):
        (JSC::StructureStubInfo::initPutByIdReplace):
        (JSC::StructureStubInfo::deref):
        (JSC::StructureStubInfo::aboutToDie):
        (JSC::StructureStubInfo::propagateTransitions):
        (JSC::StructureStubInfo::containsPC):
        * bytecode/StructureStubInfo.h:
        (JSC::StructureStubInfo::considerCaching):
        (JSC::StructureStubInfo::slowPathCallLocation):
        (JSC::StructureStubInfo::doneLocation):
        (JSC::StructureStubInfo::slowPathStartLocation):
        (JSC::StructureStubInfo::patchableJumpForIn):
        (JSC::StructureStubInfo::valueRegs):
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::link):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::reifyInlinedCallFrames):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::cachedGetById):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::cachedGetById):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::compileIn):
        (JSC::FTL::DFG::LowerDFGToB3::getById):
        * jit/JITInlineCacheGenerator.cpp:
        (JSC::JITByIdGenerator::finalize):
        (JSC::JITByIdGenerator::generateFastCommon):
        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
        (JSC::JITGetByIdGenerator::generateFastPath):
        (JSC::JITPutByIdGenerator::JITPutByIdGenerator):
        (JSC::JITPutByIdGenerator::generateFastPath):
        (JSC::JITPutByIdGenerator::slowPathFunction):
        (JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
        * jit/JITInlineCacheGenerator.h:
        (JSC::JITByIdGenerator::reportSlowPathCall):
        (JSC::JITByIdGenerator::slowPathBegin):
        (JSC::JITByIdGenerator::slowPathJump):
        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
        * jit/JITPropertyAccess.cpp:
        (JSC::JIT::emitGetByValWithCachedId):
        (JSC::JIT::emit_op_try_get_by_id):
        (JSC::JIT::emit_op_get_by_id):
        * jit/JITPropertyAccess32_64.cpp:
        (JSC::JIT::emitGetByValWithCachedId):
        (JSC::JIT::emit_op_try_get_by_id):
        (JSC::JIT::emit_op_get_by_id):
        * jit/Repatch.cpp:
        (JSC::repatchCall):
        (JSC::tryCacheGetByID):
        (JSC::repatchGetByID):
        (JSC::appropriateGenericPutByIdFunction):
        (JSC::tryCachePutByID):
        (JSC::repatchPutByID):
        (JSC::tryRepatchIn):
        (JSC::repatchIn):
        (JSC::linkSlowFor):
        (JSC::resetGetByID):
        (JSC::resetPutByID):
        (JSC::resetIn):
        (JSC::repatchByIdSelfAccess): Deleted.
        (JSC::resetGetByIDCheckAndLoad): Deleted.
        (JSC::resetPutByIDCheckAndLoad): Deleted.
        (JSC::replaceWithJump): Deleted.

2016-06-19  Filip Pizlo  <fpizlo@apple.com>
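The core idea in the ChangeLog — repatching an inline IC by regenerating its code and memcpy'ing it over a fixed-size reserved region, padding the tail with nops — can be sketched as follows. This is an illustrative model only, not the actual JSC implementation: the function name, the byte-vector representation, and the use of a plain x86 single-byte nop are all invented for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Single-byte x86 nop used to pad the unused tail of the reserved region.
constexpr unsigned char kNop = 0x90;

// Hypothetical helper: copy freshly generated IC code over the inline region.
// Returns false when the new code does not fit, in which case a real engine
// would fall back to an out-of-line stub (PolymorphicAccess in JSC's case).
bool patchInlineRegion(std::vector<unsigned char>& region, const std::vector<unsigned char>& newCode)
{
    if (newCode.size() > region.size())
        return false;
    std::memcpy(region.data(), newCode.data(), newCode.size());
    // Pad the leftover space with nops so execution falls through
    // cleanly to the IC's "done" location.
    std::memset(region.data() + newCode.size(), kNop, region.size() - newCode.size());
    return true;
}
```

Because the whole region is rewritten each time, no per-site data labels need to be tracked; the only bookkeeping required is where the region starts and how big it is.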
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r202157 → r202214:

     72AAF7CE1D0D31B3005E60BE /* JSCustomGetterSetterFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */; };
+    7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7905BB661D12050E0019FE57 /* InlineAccess.cpp */; };
+    7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */ = {isa = PBXBuildFile; fileRef = 7905BB671D12050E0019FE57 /* InlineAccess.h */; settings = {ATTRIBUTES = (Private, ); }; };
     79160DBD1C8E3EC8008C085A /* ProxyRevoke.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */; };
     …
     72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSCustomGetterSetterFunction.h; sourceTree = "<group>"; };
+    7905BB661D12050E0019FE57 /* InlineAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InlineAccess.cpp; sourceTree = "<group>"; };
+    7905BB671D12050E0019FE57 /* InlineAccess.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InlineAccess.h; sourceTree = "<group>"; };
     79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProxyRevoke.cpp; sourceTree = "<group>"; };
     …
     0F0B83A814BCF55E00885B4F /* HandlerInfo.h */,
+    7905BB661D12050E0019FE57 /* InlineAccess.cpp */,
+    7905BB671D12050E0019FE57 /* InlineAccess.h */,
     148A7BED1B82975A002D9157 /* InlineCallFrame.cpp */,
     148A7BEE1B82975A002D9157 /* InlineCallFrame.h */,
     …
     86EC9DD01328DF82002B2AD7 /* DFGOperations.h in Headers */,
+    7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */,
     A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
     …
     0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */,
+    7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */,
     0FE050271AA9095600D33B33 /* ScopedArguments.cpp in Sources */,
     0FE0502F1AAA806900D33B33 /* ScopedArgumentsTable.cpp in Sources */,
trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h
r199299 → r202214:

     }
 
-    static void fillNops(void* base, size_t size)
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     {
         RELEASE_ASSERT(!(size % sizeof(int32_t)));
         …
         for (int32_t* ptr = static_cast<int32_t*>(base); n--;) {
             int insn = nopPseudo();
-            performJITMemcpy(ptr++, &insn, sizeof(int));
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr++, &insn, sizeof(int));
+            else
+                memcpy(ptr++, &insn, sizeof(int));
         }
     }
trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h
r200742 → r202214:

     }
 
+    static constexpr int16_t nopPseudo16()
+    {
+        return OP_NOP_T1;
+    }
+
+    static constexpr int32_t nopPseudo32()
+    {
+        return OP_NOP_T2a | (OP_NOP_T2b << 16);
+    }
+
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
+    {
+        RELEASE_ASSERT(!(size % sizeof(int16_t)));
+
+        char* ptr = static_cast<char*>(base);
+        const size_t num32s = size / sizeof(int32_t);
+        for (size_t i = 0; i < num32s; i++) {
+            const int32_t insn = nopPseudo32();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int32_t));
+            else
+                memcpy(ptr, &insn, sizeof(int32_t));
+            ptr += sizeof(int32_t);
+        }
+
+        const size_t num16s = (size % sizeof(int32_t)) / sizeof(int16_t);
+        ASSERT(num16s == 0 || num16s == 1);
+        ASSERT(num16s * sizeof(int16_t) + num32s * sizeof(int32_t) == size);
+        if (num16s) {
+            const int16_t insn = nopPseudo16();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int16_t));
+            else
+                memcpy(ptr, &insn, sizeof(int16_t));
+        }
+    }
+
     void dmbSY()
     {
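The size arithmetic in the ARMv7 `fillNops` above — cover the region with as many 4-byte Thumb-2 nops as fit, plus at most one trailing 2-byte nop — can be sketched as a standalone function. The function name and byte-vector output are invented for the example; the encodings themselves are the standard Thumb-2 nops (T1: `0xBF00`, T2: `0xF3AF 0x8000`, matching `OP_NOP_T1` and `OP_NOP_T2a`/`OP_NOP_T2b` in the diff).

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch: fill `size` bytes with Thumb-2 nops. Thumb-2 only has
// 2-byte and 4-byte nops, so size must be a multiple of 2, and the remainder
// after the 4-byte nops is at most one 2-byte nop.
void fillThumbNops(std::vector<uint8_t>& out, size_t size)
{
    assert(size % sizeof(uint16_t) == 0);
    const uint32_t nop32 = 0x8000F3AF; // OP_NOP_T2a | (OP_NOP_T2b << 16)
    const uint16_t nop16 = 0xBF00;     // OP_NOP_T1

    size_t num32s = size / sizeof(uint32_t);
    size_t num16s = (size % sizeof(uint32_t)) / sizeof(uint16_t);
    out.resize(size);
    uint8_t* ptr = out.data();
    for (size_t i = 0; i < num32s; i++) {
        std::memcpy(ptr, &nop32, sizeof(nop32));
        ptr += sizeof(nop32);
    }
    if (num16s)
        std::memcpy(ptr, &nop16, sizeof(nop16)); // at most one trailing halfword
}
```

For a 10-byte region this emits two 4-byte nops followed by one 2-byte nop, which is exactly the `num32s`/`num16s` split the diff asserts on.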
trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
r199946 → r202214:

 #include "AbortReason.h"
-#include "AssemblerBuffer.h"
 #include "CodeLocation.h"
 #include "MacroAssemblerCodeRef.h"
 …
     }
 
+    void emitNops(size_t memoryToFillWithNopsInBytes)
+    {
+        AssemblerBuffer& buffer = m_assembler.buffer();
+        size_t startCodeSize = buffer.codeSize();
+        size_t targetCodeSize = startCodeSize + memoryToFillWithNopsInBytes;
+        buffer.ensureSpace(memoryToFillWithNopsInBytes);
+        bool isCopyingToExecutableMemory = false;
+        AssemblerType::fillNops(static_cast<char*>(buffer.data()) + startCodeSize, memoryToFillWithNopsInBytes, isCopyingToExecutableMemory);
+        buffer.setCodeSize(targetCodeSize);
+    }
+
 protected:
     AbstractMacroAssembler()
trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h
r198708 → r202214:

 class AssemblerData {
+    WTF_MAKE_NONCOPYABLE(AssemblerData);
+    static const size_t InlineCapacity = 128;
 public:
     AssemblerData()
-        : m_buffer(nullptr)
-        , m_capacity(0)
-    {
-    }
-
-    AssemblerData(unsigned initialCapacity)
-    {
-        m_capacity = initialCapacity;
-        m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+        : m_buffer(m_inlineBuffer)
+        , m_capacity(InlineCapacity)
+    {
+    }
+
+    AssemblerData(size_t initialCapacity)
+    {
+        if (initialCapacity <= InlineCapacity) {
+            m_capacity = InlineCapacity;
+            m_buffer = m_inlineBuffer;
+        } else {
+            m_capacity = initialCapacity;
+            m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+        }
     }
 
     AssemblerData(AssemblerData&& other)
     {
-        m_buffer = other.m_buffer;
+        if (other.isInlineBuffer()) {
+            ASSERT(other.m_capacity == InlineCapacity);
+            memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+            m_buffer = m_inlineBuffer;
+        } else
+            m_buffer = other.m_buffer;
+        m_capacity = other.m_capacity;
+
         other.m_buffer = nullptr;
+        other.m_capacity = 0;
+    }
+
+    AssemblerData& operator=(AssemblerData&& other)
+    {
+        if (m_buffer && !isInlineBuffer())
+            fastFree(m_buffer);
+
+        if (other.isInlineBuffer()) {
+            ASSERT(other.m_capacity == InlineCapacity);
+            memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+            m_buffer = m_inlineBuffer;
+        } else
+            m_buffer = other.m_buffer;
         m_capacity = other.m_capacity;
-        other.m_capacity = 0;
-    }
-
-    AssemblerData& operator=(AssemblerData&& other)
-    {
-        m_buffer = other.m_buffer;
+
         other.m_buffer = nullptr;
-        m_capacity = other.m_capacity;
         other.m_capacity = 0;
         return *this;
     }
 …
     ~AssemblerData()
     {
-        fastFree(m_buffer);
+        if (m_buffer && !isInlineBuffer())
+            fastFree(m_buffer);
     }
 …
     {
         m_capacity = m_capacity + m_capacity / 2 + extraCapacity;
-        m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
+        if (isInlineBuffer()) {
+            m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+            memcpy(m_buffer, m_inlineBuffer, InlineCapacity);
+        } else
+            m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
     }
 
 private:
+    bool isInlineBuffer() const { return m_buffer == m_inlineBuffer; }
     char* m_buffer;
+    char m_inlineBuffer[InlineCapacity];
     unsigned m_capacity;
 };
 
 class AssemblerBuffer {
-    static const int initialCapacity = 128;
 public:
     AssemblerBuffer()
-        : m_storage(initialCapacity)
+        : m_storage()
         , m_index(0)
     {
 …
     void ensureSpace(unsigned space)
     {
-        if (!isAvailable(space))
+        while (!isAvailable(space))
             outOfLineGrow();
     }
 …
+    void setCodeSize(size_t index)
+    {
+        // Warning: Only use this if you know exactly what you are doing.
+        // For example, say you want 40 bytes of nops, it's ok to grow
+        // and then fill 40 bytes of nops using bigger instructions.
+        m_index = index;
+        ASSERT(m_index <= m_storage.capacity());
+    }
+
     AssemblerLabel label() const
 …
     unsigned debugOffset() { return m_index; }
 
-    AssemblerData releaseAssemblerData() { return WTFMove(m_storage); }
+    AssemblerData&& releaseAssemblerData() { return WTFMove(m_storage); }
 
     // LocalWriter is a trick to keep the storage buffer and the index
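The AssemblerData change above is the classic small-buffer optimization: start out pointing at an embedded array and only touch the heap once the buffer grows past it. A minimal standalone sketch of the idea (the class name and interface here are invented for the example; this is not JSC's AssemblerData, which additionally handles move construction and assignment):

```cpp
#include <cstdlib>
#include <cstring>

// Illustrative small-buffer-optimized byte buffer. While the data fits in the
// embedded 128-byte array, no allocation happens at all; the first grow past
// that mallocs and copies, because inline bytes cannot be realloc'ed.
class SmallBuffer {
    static const size_t InlineCapacity = 128;
public:
    SmallBuffer() : m_buffer(m_inline), m_capacity(InlineCapacity) { }
    ~SmallBuffer()
    {
        if (!isInline())
            std::free(m_buffer);
    }
    SmallBuffer(const SmallBuffer&) = delete;
    SmallBuffer& operator=(const SmallBuffer&) = delete;

    bool isInline() const { return m_buffer == m_inline; }
    size_t capacity() const { return m_capacity; }
    char* data() { return m_buffer; }

    void grow(size_t extra)
    {
        m_capacity += m_capacity / 2 + extra;
        if (isInline()) {
            // Escape from inline storage: allocate and copy the inline bytes.
            char* heap = static_cast<char*>(std::malloc(m_capacity));
            std::memcpy(heap, m_inline, InlineCapacity);
            m_buffer = heap;
        } else
            m_buffer = static_cast<char*>(std::realloc(m_buffer, m_capacity));
    }

private:
    char* m_buffer;
    char m_inline[InlineCapacity];
    size_t m_capacity;
};
```

The `m_buffer == m_inline` pointer comparison is the same trick the diff's `isInlineBuffer()` uses to distinguish the two states without an extra flag.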
trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp
r201380 → r202214:

 void LinkBuffer::copyCompactAndLinkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
 {
-    m_initialSize = macroAssembler.m_assembler.codeSize();
-    allocate(m_initialSize, ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
+    const size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (didFailToAllocate())
         return;
+
     Vector<LinkRecord, 0, UnsafeVectorOverflow>& jumpsToLink = macroAssembler.jumpsToLink();
     m_assemblerStorage = macroAssembler.m_assembler.buffer().releaseAssemblerData();
 …
     AssemblerData outBuffer(m_size);
+
     uint8_t* outData = reinterpret_cast<uint8_t*>(outBuffer.buffer());
     uint8_t* codeOutData = reinterpret_cast<uint8_t*>(m_code);
 …
     int writePtr = 0;
     unsigned jumpCount = jumpsToLink.size();
-    for (unsigned i = 0; i < jumpCount; ++i) {
-        int offset = readPtr - writePtr;
-        ASSERT(!(offset & 1));
-
-        // Copy the instructions from the last jump to the current one.
-        size_t regionSize = jumpsToLink[i].from() - readPtr;
-        InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
-        InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
-        InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
-        ASSERT(!(regionSize % 2));
-        ASSERT(!(readPtr % 2));
-        ASSERT(!(writePtr % 2));
-        while (copySource != copyEnd)
-            *copyDst++ = *copySource++;
-        recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
-        readPtr += regionSize;
-        writePtr += regionSize;
-
-        // Calculate absolute address of the jump target, in the case of backwards
-        // branches we need to be precise, forward branches we are pessimistic
-        const uint8_t* target;
-        if (jumpsToLink[i].to() >= jumpsToLink[i].from())
-            target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
-        else
-            target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
-
-        JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
-        // Compact branch if we can...
-        if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
-            // Step back in the write stream
-            int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
-            if (delta) {
-                writePtr -= delta;
-                recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+    if (m_shouldPerformBranchCompaction) {
+        for (unsigned i = 0; i < jumpCount; ++i) {
+            int offset = readPtr - writePtr;
+            ASSERT(!(offset & 1));
+
+            // Copy the instructions from the last jump to the current one.
+            size_t regionSize = jumpsToLink[i].from() - readPtr;
+            InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
+            InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
+            InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
+            ASSERT(!(regionSize % 2));
+            ASSERT(!(readPtr % 2));
+            ASSERT(!(writePtr % 2));
+            while (copySource != copyEnd)
+                *copyDst++ = *copySource++;
+            recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
+            readPtr += regionSize;
+            writePtr += regionSize;
+
+            // Calculate absolute address of the jump target, in the case of backwards
+            // branches we need to be precise, forward branches we are pessimistic
+            const uint8_t* target;
+            if (jumpsToLink[i].to() >= jumpsToLink[i].from())
+                target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
+            else
+                target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
+
+            JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
+            // Compact branch if we can...
+            if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
+                // Step back in the write stream
+                int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
+                if (delta) {
+                    writePtr -= delta;
+                    recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+                }
             }
+            jumpsToLink[i].setFrom(writePtr);
         }
-        jumpsToLink[i].setFrom(writePtr);
+    } else {
+        if (!ASSERT_DISABLED) {
+            for (unsigned i = 0; i < jumpCount; ++i)
+                ASSERT(!MacroAssembler::canCompact(jumpsToLink[i].type()));
+        }
     }
     // Copy everything after the last jump
-    memcpy(outData + writePtr, inData + readPtr, m_initialSize - readPtr);
-    recordLinkOffsets(m_assemblerStorage, readPtr, m_initialSize, readPtr - writePtr);
+    memcpy(outData + writePtr, inData + readPtr, initialSize - readPtr);
+    recordLinkOffsets(m_assemblerStorage, readPtr, initialSize, readPtr - writePtr);
 
     for (unsigned i = 0; i < jumpCount; ++i) {
 …
 
     jumpsToLink.clear();
-    shrink(writePtr + m_initialSize - readPtr);
-
-    performJITMemcpy(m_code, outBuffer.buffer(), m_size);
+
+    size_t compactSize = writePtr + initialSize - readPtr;
+    if (m_executableMemory) {
+        m_size = compactSize;
+        m_executableMemory->shrink(m_size);
+    } else {
+        size_t nopSizeInBytes = initialSize - compactSize;
+        bool isCopyingToExecutableMemory = false;
+        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);
+    }
+
+    performJITMemcpy(m_code, outData, m_size);
 
 #if DUMP_LINK_STATISTICS
-    dumpLinkStatistics(m_code, m_initialSize, m_size);
+    dumpLinkStatistics(m_code, initialSize, m_size);
 #endif
 #if DUMP_CODE
 …
     macroAssembler.m_assembler.buffer().flushConstantPool(false);
 #endif
-    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
-    allocate(buffer.codeSize(), ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
     if (!m_didAllocate)
         return;
     ASSERT(m_code);
+    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
 #if CPU(ARM_TRADITIONAL)
     macroAssembler.m_assembler.prepareExecutableCopy(m_code);
 …
 #elif CPU(ARM64)
     copyCompactAndLinkCode<uint32_t>(macroAssembler, ownerUID, effort);
-#endif
+#endif // !ENABLE(BRANCH_COMPACTION)
 
     m_linkTasks = WTFMove(macroAssembler.m_linkTasks);
 }
 
-void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
-{
+void LinkBuffer::allocate(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
+{
+    size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (m_code) {
         if (initialSize > m_size)
             return;
 
+        size_t nopsToFillInBytes = m_size - initialSize;
+        macroAssembler.emitNops(nopsToFillInBytes);
         m_didAllocate = true;
-        m_size = initialSize;
         return;
     }
 …
     m_size = initialSize;
     m_didAllocate = true;
 }
-
-void LinkBuffer::shrink(size_t newSize)
-{
-    if (!m_executableMemory)
-        return;
-    m_size = newSize;
-    m_executableMemory->shrink(m_size);
-}
trunk/Source/JavaScriptCore/assembler/LinkBuffer.h
r199710 → r202214:

     LinkBuffer(VM& vm, MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort = JITCompilationMustSucceed)
         : m_size(0)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(0)
 …
     }
 
-    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed)
+    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed, bool shouldPerformBranchCompaction = true)
         : m_size(size)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(code)
 …
     {
+#if ENABLE(BRANCH_COMPACTION)
+        m_shouldPerformBranchCompaction = shouldPerformBranchCompaction;
+#else
+        UNUSED_PARAM(shouldPerformBranchCompaction);
+#endif
         linkCode(macroAssembler, 0, effort);
     }
 …
-    // FIXME: this does not account for the AssemblerData size!
-    size_t size()
-    {
-        return m_size;
-    }
+    size_t size() const { return m_size; }
 
     bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
 …
     // Keep this private! - the underlying code should only be obtained externally via finalizeCode().
     void* code()
 …
-    void allocate(size_t initialSize, void* ownerUID, JITCompilationEffort);
-    void shrink(size_t newSize);
+    void allocate(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 
     JS_EXPORT_PRIVATE void linkCode(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 …
     size_t m_size;
 #if ENABLE(BRANCH_COMPACTION)
-    size_t m_initialSize;
     AssemblerData m_assemblerStorage;
+    bool m_shouldPerformBranchCompaction { true };
 #endif
     bool m_didAllocate;
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
r201208 → r202214:

     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranch64(RelationalCondition cond, RegisterID reg, TrustedImm64 imm)
     {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
r199894 → r202214:

     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranchPtrWithPatch(RelationalCondition cond, Address left, DataLabelPtr& dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
     {
trunk/Source/JavaScriptCore/assembler/X86Assembler.h
r201208 → r202214:

     }
 
-    static void fillNops(void* base, size_t size)
-    {
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
+    {
+        UNUSED_PARAM(isCopyingToExecutableMemory);
 #if CPU(X86_64)
         static const uint8_t nops[10][10] = {
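The `nops[10][10]` table that the x86-64 `fillNops` path relies on holds multi-byte nop encodings, so padding decodes as few instructions as possible instead of a run of single-byte `0x90`s. A sketch of the table-driven selection (the function shape and 9-entry table are illustrative; the byte sequences themselves are the documented Intel recommended long nops):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Intel's recommended multi-byte nop encodings, lengths 1 through 9.
static const uint8_t longNops[9][9] = {
    { 0x90 },
    { 0x66, 0x90 },
    { 0x0F, 0x1F, 0x00 },
    { 0x0F, 0x1F, 0x40, 0x00 },
    { 0x0F, 0x1F, 0x44, 0x00, 0x00 },
    { 0x66, 0x0F, 0x1F, 0x44, 0x00, 0x00 },
    { 0x0F, 0x1F, 0x80, 0x00, 0x00, 0x00, 0x00 },
    { 0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 },
    { 0x66, 0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 },
};

// Illustrative sketch: produce `size` bytes of padding, greedily picking the
// longest nop encoding that still fits the remaining space.
std::vector<uint8_t> fillNops(size_t size)
{
    std::vector<uint8_t> out;
    while (size) {
        size_t n = size < 9 ? size : 9;
        out.insert(out.end(), longNops[n - 1], longNops[n - 1] + n);
        size -= n;
    }
    return out;
}
```

On x86 any byte offset is addressable, so unlike the ARM variants above no `isCopyingToExecutableMemory` distinction or alignment assertion is needed here.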
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r202157 → r202214:

     case CacheType::Unset:
         out.printf("unset");
         break;
+    case CacheType::ArrayLength:
+        out.printf("ArrayLength");
+        break;
     default:
trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
r201703 → r202214:

     state.baseGPR = static_cast<GPRReg>(stubInfo.patch.baseGPR);
-    state.valueRegs = JSValueRegs(
-#if USE(JSVALUE32_64)
-        static_cast<GPRReg>(stubInfo.patch.valueTagGPR),
-#endif
-        static_cast<GPRReg>(stubInfo.patch.valueGPR));
+    state.valueRegs = stubInfo.valueRegs();
 
     ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
 …
-    CodeLocationLabel successLabel =
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToDone);
+    CodeLocationLabel successLabel = stubInfo.doneLocation();
 
     linkBuffer.link(state.success, successLabel);
 
-    linkBuffer.link(
-        failure,
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    linkBuffer.link(failure, stubInfo.slowPathStartLocation());
 
     if (verbose)
trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
r201657 → r202214:

 }
 
+void StructureStubInfo::initArrayLength()
+{
+    cacheType = CacheType::ArrayLength;
+}
+
 void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
 {
 …
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
 …
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
 …
     switch (cacheType) {
     case CacheType::Unset:
+    case CacheType::ArrayLength:
         return true;
     case CacheType::GetByIdSelf:
 …
     return u.stub->containsPC(pc);
 }
-#endif
+
+#endif // ENABLE(JIT)
 
 } // namespace JSC
trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h
r200405 r202214

…
     GetByIdSelf,
     PutByIdReplace,
-    Stub
+    Stub,
+    ArrayLength
 };
…
     void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
+    void initArrayLength();
     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
     void initStub(CodeBlock*, std::unique_ptr<PolymorphicAccess>);
…
     }

-    CodeLocationCall callReturnLocation;
+    bool containsPC(void* pc) const;

     CodeOrigin codeOrigin;
     CallSiteIndex callSiteIndex;
-
-    bool containsPC(void* pc) const;

     union {
…
     struct {
+        CodeLocationLabel start; // This is either the start of the inline IC for *byId caches, or the location of the patchable jump for 'in' caches.
+        RegisterSet usedRegisters;
+        uint32_t inlineSize;
+        int32_t deltaFromStartToSlowPathCallLocation;
+        int32_t deltaFromStartToSlowPathStart;
+
         int8_t baseGPR;
+        int8_t valueGPR;
 #if USE(JSVALUE32_64)
         int8_t valueTagGPR;
         int8_t baseTagGPR;
 #endif
-        int8_t valueGPR;
-        RegisterSet usedRegisters;
-        int32_t deltaCallToDone;
-        int32_t deltaCallToJump;
-        int32_t deltaCallToSlowCase;
-        int32_t deltaCheckImmToCall;
-#if USE(JSVALUE64)
-        int32_t deltaCallToLoadOrStore;
-#else
-        int32_t deltaCallToTagLoadOrStore;
-        int32_t deltaCallToPayloadLoadOrStore;
-#endif
     } patch;
+
+    CodeLocationCall slowPathCallLocation() { return patch.start.callAtOffset(patch.deltaFromStartToSlowPathCallLocation); }
+    CodeLocationLabel doneLocation() { return patch.start.labelAtOffset(patch.inlineSize); }
+    CodeLocationLabel slowPathStartLocation() { return patch.start.labelAtOffset(patch.deltaFromStartToSlowPathStart); }
+    CodeLocationJump patchableJumpForIn()
+    {
+        ASSERT(accessType == AccessType::In);
+        return patch.start.jumpAtOffset(0);
+    }
+
+    JSValueRegs valueRegs() const
+    {
+        return JSValueRegs(
+#if USE(JSVALUE32_64)
+            static_cast<GPRReg>(patch.valueTagGPR),
+#endif
+            static_cast<GPRReg>(patch.valueGPR));
+    }

     AccessType accessType;
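The new layout above replaces the old web of call-relative deltas with a single `start` label plus a size and two offsets, from which every other interesting location is derived. A minimal standalone model of that arithmetic (plain integers stand in for JSC's `CodeLocation` types; the names mirror the patch but are illustrative only):

```cpp
#include <cassert>
#include <cstdint>

// Simplified model of the new StructureStubInfo::patch layout: one absolute
// start address for the inline IC region, plus small deltas from which the
// done / slow-path-call / slow-path-start locations are all computed.
struct InlineICLocations {
    uintptr_t start;                              // start of the inline IC region
    uint32_t inlineSize;                          // bytes reserved inline
    int32_t deltaFromStartToSlowPathCallLocation; // start -> slow path call
    int32_t deltaFromStartToSlowPathStart;        // start -> slow path entry

    uintptr_t slowPathCallLocation() const { return start + deltaFromStartToSlowPathCallLocation; }
    uintptr_t doneLocation() const { return start + inlineSize; }
    uintptr_t slowPathStartLocation() const { return start + deltaFromStartToSlowPathStart; }
};

// Mirrors how the link tasks in this patch fill the fields in: every delta
// is measured from the same start label.
InlineICLocations link(uintptr_t start, uintptr_t done, uintptr_t slowCall, uintptr_t slowStart)
{
    InlineICLocations loc;
    loc.start = start;
    assert(done >= start); // matches the RELEASE_ASSERT(inlineSize >= 0) below
    loc.inlineSize = static_cast<uint32_t>(done - start);
    loc.deltaFromStartToSlowPathCallLocation = static_cast<int32_t>(slowCall - start);
    loc.deltaFromStartToSlowPathStart = static_cast<int32_t>(slowStart - start);
    return loc;
}
```

This is the consolidation the ChangeLog describes: the offset math that used to be repeated at each call site now lives in one place next to the fields it interprets.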
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r200879 r202214

…
     for (unsigned i = 0; i < m_ins.size(); ++i) {
         StructureStubInfo& info = *m_ins[i].m_stubInfo;
-        CodeLocationCall callReturnLocation = linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call());
-        info.patch.deltaCallToDone = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_done));
-        info.patch.deltaCallToJump = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_jump));
-        info.callReturnLocation = callReturnLocation;
-        info.patch.deltaCallToSlowCase = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
+
+        CodeLocationLabel start = linkBuffer.locationOf(m_ins[i].m_jump);
+        info.patch.start = start;
+
+        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_done));
+        RELEASE_ASSERT(inlineSize >= 0);
+        info.patch.inlineSize = inlineSize;
+
+        info.patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call()));
+
+        info.patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
     }
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r199303 r202214

…
         RELEASE_ASSERT(stubInfo);

-        jumpTarget = stubInfo->callReturnLocation.labelAtOffset(
-            stubInfo->patch.deltaCallToDone).executableAddress();
+        jumpTarget = stubInfo->doneLocation().executableAddress();
         break;
     }
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r202125 r202214

…
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
-        JSValueRegs(baseTagGPROrNone, basePayloadGPR),
-        JSValueRegs(resultTagGPR, resultPayloadGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), type);

     gen.generateFastPath(m_jit);
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r202125 r202214

…
     }
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, JSValueRegs(baseGPR),
-        JSValueRegs(resultGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseGPR), JSValueRegs(resultGPR), type);
     gen.generateFastPath(m_jit);
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r202141 r202214

…
     jit.addLinkTask(
         [=] (LinkBuffer& linkBuffer) {
-            CodeLocationCall callReturnLocation =
-                linkBuffer.locationOf(slowPathCall);
-            stubInfo->patch.deltaCallToDone =
-                CCallHelpers::differenceBetweenCodePtr(
-                    callReturnLocation,
-                    linkBuffer.locationOf(done));
-            stubInfo->patch.deltaCallToJump =
-                CCallHelpers::differenceBetweenCodePtr(
-                    callReturnLocation,
-                    linkBuffer.locationOf(jump));
-            stubInfo->callReturnLocation = callReturnLocation;
-            stubInfo->patch.deltaCallToSlowCase =
-                CCallHelpers::differenceBetweenCodePtr(
-                    callReturnLocation,
-                    linkBuffer.locationOf(slowPathBegin));
+            CodeLocationLabel start = linkBuffer.locationOf(jump);
+            stubInfo->patch.start = start;
+            ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+                start, linkBuffer.locationOf(done));
+            RELEASE_ASSERT(inlineSize >= 0);
+            stubInfo->patch.inlineSize = inlineSize;
+
+            stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+                start, linkBuffer.locationOf(slowPathCall));
+
+            stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+                start, linkBuffer.locationOf(slowPathBegin));
+
         });
 });
…
     auto generator = Box<JITGetByIdGenerator>::create(
         jit.codeBlock(), node->origin.semantic, callSiteIndex,
-        params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
+        params.unavailableRegisters(), uid, JSValueRegs(params[1].gpr()),
         JSValueRegs(params[0].gpr()), type);
trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp
r199303 r202214

…
 #include "CodeBlock.h"
+#include "InlineAccess.h"
+#include "JSCInlines.h"
 #include "LinkBuffer.h"
-#include "JSCInlines.h"
 #include "StructureStubInfo.h"
…
 void JITByIdGenerator::finalize(LinkBuffer& fastPath, LinkBuffer& slowPath)
 {
-    CodeLocationCall callReturnLocation = slowPath.locationOf(m_call);
-    m_stubInfo->callReturnLocation = callReturnLocation;
-    m_stubInfo->patch.deltaCheckImmToCall = MacroAssembler::differenceBetweenCodePtr(
-        fastPath.locationOf(m_structureImm), callReturnLocation);
-    m_stubInfo->patch.deltaCallToJump = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_structureCheck));
-#if USE(JSVALUE64)
-    m_stubInfo->patch.deltaCallToLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#else
-    m_stubInfo->patch.deltaCallToTagLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_tagLoadOrStore));
-    m_stubInfo->patch.deltaCallToPayloadLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#endif
-    m_stubInfo->patch.deltaCallToSlowCase = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, slowPath.locationOf(m_slowPathBegin));
-    m_stubInfo->patch.deltaCallToDone = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_done));
+    ASSERT(m_start.isSet());
+    CodeLocationLabel start = fastPath.locationOf(m_start);
+    m_stubInfo->patch.start = start;
+
+    int32_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+        start, fastPath.locationOf(m_done));
+    ASSERT(inlineSize > 0);
+    m_stubInfo->patch.inlineSize = inlineSize;
+
+    m_stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathCall));
+
+    m_stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathBegin));
 }
…
-void JITByIdGenerator::generateFastPathChecks(MacroAssembler& jit)
+void JITByIdGenerator::generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
 {
-    m_structureCheck = jit.patchableBranch32WithPatch(
-        MacroAssembler::NotEqual,
-        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
-        m_structureImm, MacroAssembler::TrustedImm32(0));
+    m_start = jit.label();
+    size_t startSize = jit.m_assembler.buffer().codeSize();
+    m_slowPathJump = jit.jump();
+    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
+    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
+    jit.emitNops(nopsToEmitInBytes);
+    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    m_done = jit.label();
 }

 JITGetByIdGenerator::JITGetByIdGenerator(
     CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet& usedRegisters,
-    JSValueRegs base, JSValueRegs value, AccessType accessType)
-    : JITByIdGenerator(
-        codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    UniquedStringImpl* propertyName, JSValueRegs base, JSValueRegs value, AccessType accessType)
+    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    , m_isLengthAccess(propertyName == codeBlock->vm()->propertyNames->length.impl())
 {
     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
…
 void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.load64WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#else
-    m_tagLoadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.tagGPR()).label();
-    m_loadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#endif
-
-    m_done = jit.label();
+    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
 }
…
 void JITPutByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.store64WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#else
-    m_tagLoadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.tagGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-    m_loadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#endif
-
-    m_done = jit.label();
+    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
 }
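The new `generateFastCommon` above no longer emits a patchable structure check and load; it just reserves a fixed-size inline region by emitting an unconditional jump to the slow path followed by enough nops to pad the region to `inlineICSize`. `InlineAccess` later regenerates real IC code over that region. A toy model of the reservation step, with made-up byte values standing in for real MacroAssembler encodings:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of generateFastCommon's region reservation. The byte values
// below are illustrative placeholders, not real instruction encodings.
constexpr unsigned char kJumpOpcode = 0xE9; // stand-in for a jump opcode
constexpr size_t kJumpSize = 5;             // stand-in for the jump's encoded size
constexpr unsigned char kNop = 0x90;        // stand-in for a one-byte nop

std::vector<unsigned char> reserveInlineRegion(size_t inlineICSize)
{
    assert(inlineICSize >= kJumpSize); // the slow-path jump must fit
    std::vector<unsigned char> buffer;
    buffer.push_back(kJumpOpcode);     // m_slowPathJump = jit.jump()
    buffer.resize(kJumpSize, 0x00);    // jump displacement bytes
    buffer.resize(inlineICSize, kNop); // jit.emitNops(inlineICSize - jumpSize)
    return buffer;                     // m_done label follows the region
}
```

Because the region has a known, fixed size, patching later is just overwriting these bytes with a freshly assembled stub, which is what makes the memcpy-style repatching described in the ChangeLog possible.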
trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h
r199303 r202214

…
 {
     m_slowPathBegin = slowPathBegin;
-    m_call = call;
+    m_slowPathCall = call;
 }

 MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
-MacroAssembler::Jump slowPathJump() const { return m_structureCheck.m_jump; }
+MacroAssembler::Jump slowPathJump() const
+{
+    ASSERT(m_slowPathJump.isSet());
+    return m_slowPathJump;
+}

 void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
…
 protected:
-    void generateFastPathChecks(MacroAssembler&);
+    void generateFastCommon(MacroAssembler&, size_t size);

     JSValueRegs m_base;
     JSValueRegs m_value;

-    MacroAssembler::DataLabel32 m_structureImm;
-    MacroAssembler::PatchableJump m_structureCheck;
-    AssemblerLabel m_loadOrStore;
-#if USE(JSVALUE32_64)
-    AssemblerLabel m_tagLoadOrStore;
-#endif
+    MacroAssembler::Label m_start;
     MacroAssembler::Label m_done;
     MacroAssembler::Label m_slowPathBegin;
-    MacroAssembler::Call m_call;
+    MacroAssembler::Call m_slowPathCall;
+    MacroAssembler::Jump m_slowPathJump;
 };
…
     JITGetByIdGenerator(
-        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, JSValueRegs base,
-        JSValueRegs value, AccessType);
+        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, UniquedStringImpl* propertyName,
+        JSValueRegs base, JSValueRegs value, AccessType);

     void generateFastPath(MacroAssembler&);
+
+private:
+    bool m_isLengthAccess;
 };
trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
r202157 r202214

…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
…
     int resultVReg = currentInstruction[1].u.operand;
     int baseVReg = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));

     emitGetVirtualRegister(baseVReg, regT0);
…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
r202157 r202214

…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
…
     int dst = currentInstruction[1].u.operand;
     int base = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));

     emitLoad(base, regT1, regT0);
…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
…
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r201562 r202214

…
 #include "GetterSetter.h"
 #include "ICStats.h"
+#include "InlineAccess.h"
 #include "JIT.h"
 #include "JITInlines.h"
…
 }

-static void repatchByIdSelfAccess(
-    CodeBlock* codeBlock, StructureStubInfo& stubInfo, Structure* structure,
-    PropertyOffset offset, const FunctionPtr& slowPathFunction,
-    bool compact)
-{
-    // Only optimize once!
-    repatchCall(codeBlock, stubInfo.callReturnLocation, slowPathFunction);
-
-    // Patch the structure check & the offset of the load.
-    MacroAssembler::repatchInt32(
-        stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall),
-        bitwise_cast<int32_t>(structure->id()));
-#if USE(JSVALUE64)
-    if (compact)
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-    else
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-#elif USE(JSVALUE32_64)
-    if (compact) {
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    } else {
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    }
-#endif
-}
-
-static void resetGetByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void resetPutByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void replaceWithJump(StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
-{
-    RELEASE_ASSERT(target);
-
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::replaceWithJump(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
-                stubInfo.callReturnLocation.dataLabel32AtOffset(
-                    -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
-            CodeLocationLabel(target));
-        return;
-    }
-
-    resetGetByIDCheckAndLoad(stubInfo);
-
-    MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(
-            stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(target));
-}
-
 enum InlineCacheAction {
     GiveUpOnCache,
…
     if (propertyName == vm.propertyNames->length) {
-        if (isJSArray(baseValue))
+        if (isJSArray(baseValue)) {
+            if (stubInfo.cacheType == CacheType::Unset
+                && slot.slotBase() == baseValue
+                && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseValue))) {
+
+                bool generatedCodeInline = InlineAccess::generateArrayLength(*codeBlock->vm(), stubInfo, jsCast<JSArray*>(baseValue));
+                if (generatedCodeInline) {
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                    stubInfo.initArrayLength();
+                    return RetryCacheLater;
+                }
+            }
+
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::ArrayLength);
-        else if (isJSString(baseValue))
+        } else if (isJSString(baseValue))
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::StringLength);
         else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(baseValue)) {
…
     if (action != AttemptToCache)
         return action;

     // Optimize self access.
     if (stubInfo.cacheType == CacheType::Unset
         && slot.slotBase() == baseValue
         && !slot.watchpointSet()
-        && isInlineOffset(slot.cachedOffset())
-        && MacroAssembler::isCompactPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
-        && action == AttemptToCache
         && !structure->needImpurePropertyWatchpoint()
         && !loadTargetFromProxy) {
-        LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
-        structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
-        repatchByIdSelfAccess(codeBlock, stubInfo, structure, slot.cachedOffset(), appropriateOptimizingGetByIdFunction(kind), true);
-        stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
-        return RetryCacheLater;
+
+        bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(*codeBlock->vm(), stubInfo, structure, slot.cachedOffset());
+        if (generatedCodeInline) {
+            LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
+            structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
+            repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+            stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
+            return RetryCacheLater;
+        }
     }
…
     RELEASE_ASSERT(result.code());
-    replaceWithJump(stubInfo, result.code());
+    InlineAccess::rewireStubAsJump(exec->vm(), stubInfo, CodeLocationLabel(result.code()));
 }
…
     if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericGetByIdFunction(kind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind));
 }
…
     if (stubInfo.cacheType == CacheType::Unset
-        && isInlineOffset(slot.cachedOffset())
-        && MacroAssembler::isPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
+        && InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
         && !structure->needImpurePropertyWatchpoint()
         && !structure->inferredTypeFor(ident.impl())) {

-        LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
-
-        repatchByIdSelfAccess(
-            codeBlock, stubInfo, structure, slot.cachedOffset(),
-            appropriateOptimizingPutByIdFunction(slot, putKind), false);
-        stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
-        return RetryCacheLater;
+        bool generatedCodeInline = InlineAccess::generateSelfPropertyReplace(vm, stubInfo, structure, slot.cachedOffset());
+        if (generatedCodeInline) {
+            LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
+            repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));
+            stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
+            return RetryCacheLater;
+        }
     }
…
     RELEASE_ASSERT(result.code());
-    resetPutByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(
-            stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(result.code()));
+
+    InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(result.code()));
 }
…
     if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericPutByIdFunction(slot, putKind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind));
 }
…
     RELEASE_ASSERT(result.code());
+
     MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
+        stubInfo.patchableJumpForIn(),
         CodeLocationLabel(result.code()));
 }
…
     SuperSamplerScope superSamplerScope(false);
     if (tryRepatchIn(exec, base, ident, wasFound, slot, stubInfo) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, operationIn);
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
 }
…
 void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind)
 {
-    repatchCall(codeBlock, stubInfo.callReturnLocation, appropriateOptimizingGetByIdFunction(kind));
-    resetGetByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }

 void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
 {
-    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.callReturnLocation).executableAddress());
+    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());
     V_JITOperation_ESsiJJI optimizedFunction;
     if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize)
…
         optimizedFunction = operationPutByIdDirectNonStrictOptimize;
     }
-    repatchCall(codeBlock, stubInfo.callReturnLocation, optimizedFunction);
-    resetPutByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), optimizedFunction);
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }

 void resetIn(CodeBlock*, StructureStubInfo& stubInfo)
 {
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    MacroAssembler::repatchJump(stubInfo.patchableJumpForIn(), stubInfo.slowPathStartLocation());
 }
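The Repatch.cpp changes above follow one pattern: first try to regenerate the IC directly into the reserved inline region, and fall back to rewiring the region as a jump to an out-of-line stub when inline generation is not possible. A simplified sketch of that decision (the types and the memcpy-based copy are illustrative stand-ins, not JSC's actual InlineAccess API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy model of inline repatching: a stub assembled elsewhere is copied over
// the fixed-size inline region if it fits; otherwise the caller keeps the
// region as a jump to an out-of-line stub (rewireStubAsJump in the patch).
struct InlineRegion {
    std::vector<unsigned char> bytes; // the nop-padded inline IC region
};

bool tryGenerateInline(InlineRegion& region, const std::vector<unsigned char>& stub)
{
    if (stub.size() > region.bytes.size())
        return false; // does not fit; fall back to out-of-line patching
    std::memcpy(region.bytes.data(), stub.data(), stub.size());
    return true;
}
```

This is why the new StructureStubInfo only needs the start label and sizes: patching is a bounded overwrite of a known region rather than surgery on individual data labels.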