Changeset 245906 in webkit

- Timestamp: May 30, 2019, 2:40:35 PM
- Location: trunk
- Files: 2 added, 42 edited
trunk/JSTests/ChangeLog
(r245895 → r245906)

2019-05-30  Tadeu Zagallo  <tzagallo@apple.com> and Yusuke Suzuki  <ysuzuki@apple.com>

        [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
        https://bugs.webkit.org/show_bug.cgi?id=197979

        Reviewed by Filip Pizlo.

        * stress/16bit-code.js: Added.
        (shouldBe):
        * stress/32bit-code.js: Added.
        (shouldBe):

2019-05-30  Justin Michaud  <justin_michaud@apple.com>
trunk/Source/JavaScriptCore/CMakeLists.txt
(r245492 → r245906)

 if (WIN32)
-    set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP")
+    set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP_WIN")
 else ()
     if (WTF_CPU_X86)
trunk/Source/JavaScriptCore/ChangeLog
(r245895 → r245906)

2019-05-30  Tadeu Zagallo  <tzagallo@apple.com> and Yusuke Suzuki  <ysuzuki@apple.com>

        [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
        https://bugs.webkit.org/show_bug.cgi?id=197979

        Reviewed by Filip Pizlo.

        This patch introduces a 16bit bytecode size. Previously we had two versions of each bytecode, 8bit and
        32bit. However, in Gmail we found that a lot of bytecodes become 32bit because their operands do not fit
        in 8bit: 8bit is very small, and a large function easily emits many 32bit bytecodes because of large
        VirtualRegister numbers etc. But those operands almost always fit in 16bit. With a 16bit version of each
        bytecode, we can make most of the current 32bit bytecodes 16bit and save memory.

        We rename op_wide to op_wide32 and introduce op_wide16. The mechanism is the same as the old op_wide:
        when we see op_wide16, the bytecode data that follows is 16bit, and we execute the 16bit version of the
        bytecode in LLInt.

        We also disable the op_wide16 feature in the Windows CLoop, which is used by the AppleWin port. When the
        code size of CLoop::execute increases, MSVC starts generating the CLoop::execute function with a very
        large stack allocation requirement. Even before this 16bit bytecode was introduced, CLoop::execute on
        AppleWin required almost 100KB of stack; after introducing it, that grows to 160KB. While the semantics
        of the function are compiled correctly, such a large stack allocation is not actually necessary, and it
        leads to stack overflow errors quite easily: tests fail on the AppleWin port because stack overflow
        range errors are thrown in various places. For now, this patch simply disables the op_wide16 feature for
        AppleWin so that CLoop::execute keeps its 100KB stack allocation, since this patch does not focus on
        fixing AppleWin's CLoop issue. We introduce a new LLInt backend type, "C_LOOP_WIN", which does not
        generate the wide16 version of the code, to reduce the code size of CLoop::execute. In the future, we
        should investigate whether this MSVC issue is fixed in Visual Studio 2019, or consider always enabling
        the ASM LLInt for Windows.

        This patch improves Gmail by at least 7MB.

        * CMakeLists.txt:
        * bytecode/BytecodeConventions.h:
        * bytecode/BytecodeDumper.cpp:
        (JSC::BytecodeDumper<Block>::dumpBlock):
        * bytecode/BytecodeList.rb:
        * bytecode/BytecodeRewriter.h:
        (JSC::BytecodeRewriter::Fragment::align):
        * bytecode/BytecodeUseDef.h:
        (JSC::computeUsesForBytecodeOffset):
        (JSC::computeDefsForBytecodeOffset):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::finishCreation):
        * bytecode/CodeBlock.h:
        (JSC::CodeBlock::metadataTable const):
        * bytecode/Fits.h:
        * bytecode/Instruction.h:
        (JSC::Instruction::opcodeID const):
        (JSC::Instruction::isWide16 const):
        (JSC::Instruction::isWide32 const):
        (JSC::Instruction::hasMetadata const):
        (JSC::Instruction::sizeShiftAmount const):
        (JSC::Instruction::size const):
        (JSC::Instruction::wide16 const):
        (JSC::Instruction::wide32 const):
        (JSC::Instruction::isWide const): Deleted.
        (JSC::Instruction::wide const): Deleted.
        * bytecode/InstructionStream.h:
        (JSC::InstructionStreamWriter::write):
        * bytecode/Opcode.h:
        * bytecode/OpcodeSize.h:
        * bytecompiler/BytecodeGenerator.cpp:
        (JSC::BytecodeGenerator::alignWideOpcode16):
        (JSC::BytecodeGenerator::alignWideOpcode32):
        (JSC::BytecodeGenerator::emitGetByVal): Previously, we always emitted 32bit op_get_by_val for bytecodes
        in a `for-in` context because its operand can be replaced by another VirtualRegister later. But if we
        know a priori that the replacement VirtualRegister fits in 8bit / 16bit, we do not need to emit the
        32bit version. We expose OpXXX::checkWithoutMetadataID to check whether we could potentially compact
        the bytecode for the given operands.
        (JSC::BytecodeGenerator::emitYieldPoint):
        (JSC::StructureForInContext::finalize):
        (JSC::BytecodeGenerator::alignWideOpcode): Deleted.
        * bytecompiler/BytecodeGenerator.h:
        (JSC::BytecodeGenerator::write):
        * dfg/DFGCapabilities.cpp:
        (JSC::DFG::capabilityLevel):
        * generator/Argument.rb:
        * generator/DSL.rb:
        * generator/Metadata.rb:
        * generator/Opcode.rb: A little bit weird, but checkImpl's argument must be a reference. We rely on
        BoundLabel being modified once in this check phase, and on the modified BoundLabel being used when
        emitting the code. If checkImpl copied the passed BoundLabel, that modification would be discarded
        inside checkImpl and code generation would break.
        * generator/Section.rb:
        * jit/JITExceptions.cpp:
        (JSC::genericUnwind):
        * llint/LLIntData.cpp:
        (JSC::LLInt::initialize):
        * llint/LLIntData.h:
        (JSC::LLInt::opcodeMapWide16):
        (JSC::LLInt::opcodeMapWide32):
        (JSC::LLInt::getOpcodeWide16):
        (JSC::LLInt::getOpcodeWide32):
        (JSC::LLInt::getWide16CodePtr):
        (JSC::LLInt::getWide32CodePtr):
        (JSC::LLInt::opcodeMapWide): Deleted.
        (JSC::LLInt::getOpcodeWide): Deleted.
        (JSC::LLInt::getWideCodePtr): Deleted.
        * llint/LLIntOfflineAsmConfig.h:
        * llint/LLIntSlowPaths.cpp:
        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
        * llint/LLIntSlowPaths.h:
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter.cpp:
        (JSC::CLoop::execute):
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        * offlineasm/arm.rb:
        * offlineasm/arm64.rb:
        * offlineasm/asm.rb:
        * offlineasm/backends.rb:
        * offlineasm/cloop.rb:
        * offlineasm/instructions.rb:
        * offlineasm/mips.rb:
        * offlineasm/x86.rb: A load operation with sign extension should also carry the extended size
        information. For example, loadbs should become loadbsi for 32bit sign extension (and loadbsq for 64bit
        sign extension). We use loadbsq / loadhsq for loading VirtualRegister information in
        LowLevelInterpreter64, since the values are used for pointer arithmetic at machine register width.
        * parser/ResultType.h:
        (JSC::OperandTypes::OperandTypes):
        (JSC::OperandTypes::first const):
        (JSC::OperandTypes::second const):
        (JSC::OperandTypes::bits):
        (JSC::OperandTypes::fromBits):
        (): Deleted.
        (JSC::OperandTypes::toInt): Deleted.
        (JSC::OperandTypes::fromInt): Deleted.
        We reduce sizeof(OperandTypes) from unsigned to uint16_t, which guarantees that OperandTypes always
        fits in 16bit bytecode.

2019-05-30  Justin Michaud  <justin_michaud@apple.com>
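The narrow/wide16/wide32 dispatch described in the ChangeLog can be sketched with a toy decoder. This is a simplified model, not JSC's actual implementation: the opcode values, the operand-count lookup, and the struct layout are all illustrative. The real mechanism it mirrors is that a leading op_wide16 / op_wide32 prefix byte widens every operand of the single instruction that follows it.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative opcode values (the real ones are generated from BytecodeList.rb).
enum : uint8_t { op_wide16 = 0, op_wide32 = 1, op_add = 2 };

struct Decoded {
    uint8_t opcode;
    std::vector<int32_t> operands;
    size_t size; // total bytes consumed, including any wide prefix
};

// Decode one instruction with `count` operands. Without a prefix, each
// operand is a signed byte; op_wide16/op_wide32 widen every operand of
// the following instruction to 2 or 4 bytes respectively.
inline Decoded decode(const uint8_t* p, size_t count)
{
    size_t shift = 0;  // log2 of the operand width in bytes
    size_t prefix = 0; // bytes taken by the wide prefix, if any
    if (*p == op_wide16) { shift = 1; prefix = 1; ++p; }
    else if (*p == op_wide32) { shift = 2; prefix = 1; ++p; }

    Decoded d { *p++, {}, 0 };
    for (size_t i = 0; i < count; ++i) {
        int32_t v = 0;
        if (!shift)
            v = static_cast<int8_t>(*p); // sign-extend the narrow operand
        else if (shift == 1) {
            int16_t h;
            std::memcpy(&h, p, sizeof(h));
            v = h;
        } else
            std::memcpy(&v, p, sizeof(v));
        p += size_t(1) << shift;
        d.operands.push_back(v);
    }
    d.size = prefix + 1 + (count << shift); // prefix + opcode + operands
    return d;
}
```

Note how the wide instruction pays exactly one extra byte for the prefix, which is the `padding` term in `Instruction::size()` in the diff below.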
trunk/Source/JavaScriptCore/bytecode/BytecodeConventions.h
(r206525 → r245906)

 // 0x00000000-0x3FFFFFFF  Forwards indices from the CallFrame pointer are local vars and temporaries with the function's callframe.
 // 0x40000000-0x7FFFFFFF  Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock.
-static const int FirstConstantRegisterIndex = 0x40000000;
+static constexpr int FirstConstantRegisterIndex = 0x40000000;
+
+static constexpr int FirstConstantRegisterIndex8 = 16;
+static constexpr int FirstConstantRegisterIndex16 = 64;
+static constexpr int FirstConstantRegisterIndex32 = FirstConstantRegisterIndex;
trunk/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
(r241104 → r245906)

 {
     size_t instructionCount = 0;
-    size_t wideInstructionCount = 0;
+    size_t wide16InstructionCount = 0;
+    size_t wide32InstructionCount = 0;
     size_t instructionWithMetadataCount = 0;

     for (const auto& instruction : instructions) {
-        if (instruction->isWide())
-            ++wideInstructionCount;
-        if (instruction->opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA)
+        if (instruction->isWide16())
+            ++wide16InstructionCount;
+        else if (instruction->isWide32())
+            ++wide32InstructionCount;
+        if (instruction->hasMetadata())
             ++instructionWithMetadataCount;
         ++instructionCount;
…
     out.print(*block);
     out.printf(
-        ": %lu instructions (%lu wide instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
+        ": %lu instructions (%lu 16-bit instructions, %lu 32-bit instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
         static_cast<unsigned long>(instructionCount),
-        static_cast<unsigned long>(wideInstructionCount),
+        static_cast<unsigned long>(wide16InstructionCount),
+        static_cast<unsigned long>(wide32InstructionCount),
         static_cast<unsigned long>(instructionWithMetadataCount),
         static_cast<unsigned long>(instructions.sizeInBytes() + block->metadataSizeInBytes()),
trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb
(r245658 → r245906)

 op_prefix: "op_"

-op :wide
+op :wide16
+op :wide32

 op :enter
…
 op :llint_cloop_did_return_from_js_22
 op :llint_cloop_did_return_from_js_23
+op :llint_cloop_did_return_from_js_24
+op :llint_cloop_did_return_from_js_25
+op :llint_cloop_did_return_from_js_26
+op :llint_cloop_did_return_from_js_27
+op :llint_cloop_did_return_from_js_28
+op :llint_cloop_did_return_from_js_29
+op :llint_cloop_did_return_from_js_30
+op :llint_cloop_did_return_from_js_31
+op :llint_cloop_did_return_from_js_32
+op :llint_cloop_did_return_from_js_33
+op :llint_cloop_did_return_from_js_34

 end_section :CLoopHelpers
trunk/Source/JavaScriptCore/bytecode/BytecodeRewriter.h
(r237933 → r245906)

 #if CPU(NEEDS_ALIGNED_ACCESS)
     m_bytecodeGenerator.withWriter(m_writer, [&] {
-        while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide)
+        while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide32)
             OpNop::emit<OpcodeSize::Narrow>(&m_bytecodeGenerator);
     });
trunk/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
(r244088 → r245906)

     switch (opcodeID) {
-    case op_wide:
+    case op_wide16:
+    case op_wide32:
         RELEASE_ASSERT_NOT_REACHED();
…
 {
     switch (opcodeID) {
-    case op_wide:
+    case op_wide16:
+    case op_wide32:
         RELEASE_ASSERT_NOT_REACHED();
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
(r245667 → r245906)

     HandlerInfo& handler = m_rareData->m_exceptionHandlers[i];
 #if ENABLE(JIT)
-    MacroAssemblerCodePtr<BytecodePtrTag> codePtr = instructions().at(unlinkedHandler.target)->isWide()
-        ? LLInt::getWideCodePtr<BytecodePtrTag>(op_catch)
-        : LLInt::getCodePtr<BytecodePtrTag>(op_catch);
+    auto instruction = instructions().at(unlinkedHandler.target);
+    MacroAssemblerCodePtr<BytecodePtrTag> codePtr;
+    if (instruction->isWide32())
+        codePtr = LLInt::getWide32CodePtr<BytecodePtrTag>(op_catch);
+    else if (instruction->isWide16())
+        codePtr = LLInt::getWide16CodePtr<BytecodePtrTag>(op_catch);
+    else
+        codePtr = LLInt::getCodePtr<BytecodePtrTag>(op_catch);
     handler.initialize(unlinkedHandler, CodeLocationLabel<ExceptionHandlerPtrTag>(codePtr.retagged<ExceptionHandlerPtrTag>()));
 #else
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
(r245667 → r245906)

     JS_EXPORT_PRIVATE void dump(PrintStream&) const;

+    MetadataTable* metadataTable() const { return m_metadata.get(); }
+
     int numParameters() const { return m_numParameters; }
     void setNumParameters(int newValue);
trunk/Source/JavaScriptCore/bytecode/Fits.h
(r245213 → r245906)

 template<typename T, OpcodeSize size>
 struct Fits<T, size, std::enable_if_t<sizeof(T) == size, std::true_type>> {
+    using TargetType = typename TypeBySize<size>::unsignedType;
+
     static bool check(T) { return true; }

-    static typename TypeBySize<size>::type convert(T t) { return bitwise_cast<typename TypeBySize<size>::type>(t); }
-
-    template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
-    static T1 convert(typename TypeBySize<size1>::type t) { return bitwise_cast<T1>(t); }
+    static TargetType convert(T t) { return bitwise_cast<TargetType>(t); }
+
+    template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, TargetType>::value, std::true_type>>
+    static T1 convert(TargetType t) { return bitwise_cast<T1>(t); }
 };

 template<typename T, OpcodeSize size>
-struct Fits<T, size, std::enable_if_t<sizeof(T) < size, std::true_type>> {
-    static bool check(T) { return true; }
-
-    static typename TypeBySize<size>::type convert(T t) { return static_cast<typename TypeBySize<size>::type>(t); }
-
-    template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
-    static T1 convert(typename TypeBySize<size1>::type t) { return static_cast<T1>(t); }
-};
+struct Fits<T, size, std::enable_if_t<std::is_integral<T>::value && sizeof(T) != size && !std::is_same<bool, T>::value, std::true_type>> {
+    using TargetType = std::conditional_t<std::is_unsigned<T>::value, typename TypeBySize<size>::unsignedType, typename TypeBySize<size>::signedType>;
+
+    static bool check(T t)
+    {
+        return t >= std::numeric_limits<TargetType>::min() && t <= std::numeric_limits<TargetType>::max();
+    }
+
+    static TargetType convert(T t)
+    {
+        ASSERT(check(t));
+        return static_cast<TargetType>(t);
+    }
+
+    template<class T1 = T, OpcodeSize size1 = size, typename TargetType1 = TargetType, typename = std::enable_if_t<!std::is_same<T1, TargetType1>::value, std::true_type>>
+    static T1 convert(TargetType1 t) { return static_cast<T1>(t); }
+};
+
+template<OpcodeSize size>
+struct Fits<bool, size, std::enable_if_t<size != sizeof(bool), std::true_type>> : public Fits<uint8_t, size> {
+    using Base = Fits<uint8_t, size>;
+
+    static bool check(bool e) { return Base::check(static_cast<uint8_t>(e)); }
+
+    static typename Base::TargetType convert(bool e)
+    {
+        return Base::convert(static_cast<uint8_t>(e));
+    }
+
+    static bool convert(typename Base::TargetType e)
+    {
+        return Base::convert(e);
+    }
+};
+
+template<OpcodeSize size>
+struct FirstConstant;

 template<>
-struct Fits<uint32_t, OpcodeSize::Narrow> {
-    static bool check(unsigned u) { return u <= UINT8_MAX; }
-
-    static uint8_t convert(unsigned u)
-    {
-        ASSERT(check(u));
-        return static_cast<uint8_t>(u);
-    }
-    static unsigned convert(uint8_t u)
-    {
-        return u;
-    }
+struct FirstConstant<OpcodeSize::Narrow> {
+    static constexpr int index = FirstConstantRegisterIndex8;
 };

 template<>
-struct Fits<int, OpcodeSize::Narrow> {
-    static bool check(int i)
-    {
-        return i >= INT8_MIN && i <= INT8_MAX;
-    }
-
-    static uint8_t convert(int i)
-    {
-        ASSERT(check(i));
-        return static_cast<uint8_t>(i);
-    }
-
-    static int convert(uint8_t i)
-    {
-        return static_cast<int8_t>(i);
-    }
-};
-
-template<>
-struct Fits<VirtualRegister, OpcodeSize::Narrow> {
+struct FirstConstant<OpcodeSize::Wide16> {
+    static constexpr int index = FirstConstantRegisterIndex16;
+};
+
+template<OpcodeSize size>
+struct Fits<VirtualRegister, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
+    // Narrow:
     // -128..-1  local variables
     // 0..15     arguments
     // 16..127   constants
-    static constexpr int s_firstConstantIndex = 16;
+    //
+    // Wide16:
+    // -2**15..-1   local variables
+    // 0..64        arguments
+    // 64..2**15-1  constants
+
+    using TargetType = typename TypeBySize<size>::signedType;
+
+    static constexpr int s_firstConstantIndex = FirstConstant<size>::index;
     static bool check(VirtualRegister r)
     {
         if (r.isConstant())
-            return (s_firstConstantIndex + r.toConstantIndex()) <= INT8_MAX;
-        return r.offset() >= INT8_MIN && r.offset() < s_firstConstantIndex;
-    }
-
-    static uint8_t convert(VirtualRegister r)
+            return (s_firstConstantIndex + r.toConstantIndex()) <= std::numeric_limits<TargetType>::max();
+        return r.offset() >= std::numeric_limits<TargetType>::min() && r.offset() < s_firstConstantIndex;
+    }
+
+    static TargetType convert(VirtualRegister r)
     {
         ASSERT(check(r));
         if (r.isConstant())
-            return static_cast<int8_t>(s_firstConstantIndex + r.toConstantIndex());
-        return static_cast<int8_t>(r.offset());
-    }
-
-    static VirtualRegister convert(uint8_t u)
-    {
-        int i = static_cast<int>(static_cast<int8_t>(u));
+            return static_cast<TargetType>(s_firstConstantIndex + r.toConstantIndex());
+        return static_cast<TargetType>(r.offset());
+    }
+
+    static VirtualRegister convert(TargetType u)
+    {
+        int i = static_cast<int>(static_cast<TargetType>(u));
         if (i >= s_firstConstantIndex)
             return VirtualRegister { (i - s_firstConstantIndex) + FirstConstantRegisterIndex };
…
 };

-template<>
-struct Fits<SymbolTableOrScopeDepth, OpcodeSize::Narrow> {
-    static bool check(SymbolTableOrScopeDepth u)
-    {
-        return u.raw() <= UINT8_MAX;
-    }
-
-    static uint8_t convert(SymbolTableOrScopeDepth u)
-    {
-        ASSERT(check(u));
-        return static_cast<uint8_t>(u.raw());
-    }
-
-    static SymbolTableOrScopeDepth convert(uint8_t u)
-    {
-        return SymbolTableOrScopeDepth::raw(u);
-    }
-};
-
-template<>
-struct Fits<Special::Pointer, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
-    using Base = Fits<int, OpcodeSize::Narrow>;
-    static bool check(Special::Pointer sp) { return Base::check(static_cast<int>(sp)); }
-    static uint8_t convert(Special::Pointer sp)
-    {
-        return Base::convert(static_cast<int>(sp));
-    }
-    static Special::Pointer convert(uint8_t sp)
-    {
-        return static_cast<Special::Pointer>(Base::convert(sp));
-    }
-};
-
-template<>
-struct Fits<GetPutInfo, OpcodeSize::Narrow> {
+template<OpcodeSize size>
+struct Fits<SymbolTableOrScopeDepth, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> : public Fits<unsigned, size> {
+    static_assert(sizeof(SymbolTableOrScopeDepth) == sizeof(unsigned));
+    using TargetType = typename TypeBySize<size>::unsignedType;
+    using Base = Fits<unsigned, size>;
+
+    static bool check(SymbolTableOrScopeDepth u) { return Base::check(u.raw()); }
+
+    static TargetType convert(SymbolTableOrScopeDepth u)
+    {
+        return Base::convert(u.raw());
+    }
+
+    static SymbolTableOrScopeDepth convert(TargetType u)
+    {
+        return SymbolTableOrScopeDepth::raw(Base::convert(u));
+    }
+};
+
+template<OpcodeSize size>
+struct Fits<GetPutInfo, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
+    using TargetType = typename TypeBySize<size>::unsignedType;
+
     // 13 Resolve Types
     // 3 Initialization Modes
…
     }

-    static uint8_t convert(GetPutInfo gpi)
+    static TargetType convert(GetPutInfo gpi)
     {
         ASSERT(check(gpi));
…
     }

-    static GetPutInfo convert(uint8_t gpi)
+    static GetPutInfo convert(TargetType gpi)
     {
         auto resolveType = static_cast<ResolveType>((gpi & s_resolveTypeBits) >> 3);
…
 };

-template<>
-struct Fits<DebugHookType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
-    using Base = Fits<int, OpcodeSize::Narrow>;
-    static bool check(DebugHookType dht) { return Base::check(static_cast<int>(dht)); }
-    static uint8_t convert(DebugHookType dht)
-    {
-        return Base::convert(static_cast<int>(dht));
-    }
-    static DebugHookType convert(uint8_t dht)
-    {
-        return static_cast<DebugHookType>(Base::convert(dht));
-    }
-};
-
-template<>
-struct Fits<ProfileTypeBytecodeFlag, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
-    using Base = Fits<int, OpcodeSize::Narrow>;
-    static bool check(ProfileTypeBytecodeFlag ptbf) { return Base::check(static_cast<int>(ptbf)); }
-    static uint8_t convert(ProfileTypeBytecodeFlag ptbf)
-    {
-        return Base::convert(static_cast<int>(ptbf));
-    }
-    static ProfileTypeBytecodeFlag convert(uint8_t ptbf)
-    {
-        return static_cast<ProfileTypeBytecodeFlag>(Base::convert(ptbf));
-    }
-};
-
-template<>
-struct Fits<ResolveType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
-    using Base = Fits<int, OpcodeSize::Narrow>;
-    static bool check(ResolveType rt) { return Base::check(static_cast<int>(rt)); }
-    static uint8_t convert(ResolveType rt)
-    {
-        return Base::convert(static_cast<int>(rt));
-    }
-
-    static ResolveType convert(uint8_t rt)
-    {
-        return static_cast<ResolveType>(Base::convert(rt));
-    }
-};
-
-template<>
-struct Fits<OperandTypes, OpcodeSize::Narrow> {
+template<typename E, OpcodeSize size>
+struct Fits<E, size, std::enable_if_t<sizeof(E) != size && std::is_enum<E>::value, std::true_type>> : public Fits<std::underlying_type_t<E>, size> {
+    using Base = Fits<std::underlying_type_t<E>, size>;
+
+    static bool check(E e) { return Base::check(static_cast<std::underlying_type_t<E>>(e)); }
+
+    static typename Base::TargetType convert(E e)
+    {
+        return Base::convert(static_cast<std::underlying_type_t<E>>(e));
+    }
+
+    static E convert(typename Base::TargetType e)
+    {
+        return static_cast<E>(Base::convert(e));
+    }
+};
+
+template<OpcodeSize size>
+struct Fits<OperandTypes, size, std::enable_if_t<sizeof(OperandTypes) != size, std::true_type>> {
+    static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
+    using TargetType = typename TypeBySize<size>::unsignedType;
+
     // a pair of (ResultType::Type, ResultType::Type) - try to fit each type into 4 bits
     // additionally, encode unknown types as 0 rather than the | of all types
-    static constexpr int s_maxType = 0x10;
+    static constexpr unsigned typeWidth = 4;
+    static constexpr unsigned maxType = (1 << typeWidth) - 1;

     static bool check(OperandTypes types)
     {
-        auto first = types.first().bits();
-        auto second = types.second().bits();
-        if (first == ResultType::unknownType().bits())
-            first = 0;
-        if (second == ResultType::unknownType().bits())
-            second = 0;
-        return first < s_maxType && second < s_maxType;
-    }
-
-    static uint8_t convert(OperandTypes types)
-    {
-        ASSERT(check(types));
-        auto first = types.first().bits();
-        auto second = types.second().bits();
-        if (first == ResultType::unknownType().bits())
-            first = 0;
-        if (second == ResultType::unknownType().bits())
-            second = 0;
-        return (first << 4) | second;
-    }
-
-    static OperandTypes convert(uint8_t types)
-    {
-        auto first = (types & (0xf << 4)) >> 4;
-        auto second = (types & 0xf);
-        if (!first)
-            first = ResultType::unknownType().bits();
-        if (!second)
-            second = ResultType::unknownType().bits();
-        return OperandTypes(ResultType(first), ResultType(second));
-    }
-};
-
-template<>
-struct Fits<PutByIdFlags, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
-    // only ever encoded in the bytecode stream as 0 or 1, so the trivial encoding should be good enough
-    using Base = Fits<int, OpcodeSize::Narrow>;
-    static bool check(PutByIdFlags flags) { return Base::check(static_cast<int>(flags)); }
-    static uint8_t convert(PutByIdFlags flags)
-    {
-        return Base::convert(static_cast<int>(flags));
-    }
-
-    static PutByIdFlags convert(uint8_t flags)
-    {
-        return static_cast<PutByIdFlags>(Base::convert(flags));
-    }
-};
-
-template<OpcodeSize size>
-struct Fits<BoundLabel, size> : Fits<int, size> {
+        if (size == OpcodeSize::Narrow) {
+            auto first = types.first().bits();
+            auto second = types.second().bits();
+            if (first == ResultType::unknownType().bits())
+                first = 0;
+            if (second == ResultType::unknownType().bits())
+                second = 0;
+            return first <= maxType && second <= maxType;
+        }
+        return true;
+    }
+
+    static TargetType convert(OperandTypes types)
+    {
+        if (size == OpcodeSize::Narrow) {
+            ASSERT(check(types));
+            auto first = types.first().bits();
+            auto second = types.second().bits();
+            if (first == ResultType::unknownType().bits())
+                first = 0;
+            if (second == ResultType::unknownType().bits())
+                second = 0;
+            return (first << typeWidth) | second;
+        }
+        return static_cast<TargetType>(types.bits());
+    }
+
+    static OperandTypes convert(TargetType types)
+    {
+        if (size == OpcodeSize::Narrow) {
+            auto first = types >> typeWidth;
+            auto second = types & maxType;
+            if (!first)
+                first = ResultType::unknownType().bits();
+            if (!second)
+                second = ResultType::unknownType().bits();
+            return OperandTypes(ResultType(first), ResultType(second));
+        }
+        return OperandTypes::fromBits(static_cast<uint16_t>(types));
+    }
+};
+
+template<OpcodeSize size>
+struct Fits<BoundLabel, size> : public Fits<int, size> {
     // This is a bit hacky: we need to delay computing jump targets, since we
     // might have to emit `nop`s to align the instructions stream. Additionally,
…
     }

-    static typename TypeBySize<size>::type convert(BoundLabel& label)
+    static typename Base::TargetType convert(BoundLabel& label)
     {
         return Base::convert(label.commitTarget());
     }

-    static BoundLabel convert(typename TypeBySize<size>::type target)
+    static BoundLabel convert(typename Base::TargetType target)
     {
         return BoundLabel(Base::convert(target));
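The Narrow VirtualRegister packing in Fits.h above folds locals, arguments, and constant-pool entries into one signed byte. A standalone sketch of that mapping, using plain ints in place of the real VirtualRegister class (FirstConstantRegisterIndex mirrors BytecodeConventions.h; the helper names are ours, not JSC's):

```cpp
#include <cassert>
#include <cstdint>

// Narrow encoding: locals are negative offsets (-128..-1), arguments are
// 0..15, and constant-pool entries are folded into 16..127 by adding
// s_firstConstantIndex to the constant's index.
constexpr int FirstConstantRegisterIndex = 0x40000000;
constexpr int s_firstConstantIndex = 16; // FirstConstantRegisterIndex8

inline bool fitsNarrow(int offset)
{
    if (offset >= FirstConstantRegisterIndex) // a constant-pool reference
        return s_firstConstantIndex + (offset - FirstConstantRegisterIndex) <= INT8_MAX;
    return offset >= INT8_MIN && offset < s_firstConstantIndex;
}

inline int8_t encodeNarrow(int offset)
{
    assert(fitsNarrow(offset));
    if (offset >= FirstConstantRegisterIndex)
        return static_cast<int8_t>(s_firstConstantIndex + (offset - FirstConstantRegisterIndex));
    return static_cast<int8_t>(offset);
}

inline int decodeNarrow(int8_t u)
{
    int i = u;
    if (i >= s_firstConstantIndex) // anything above the argument range is a constant
        return (i - s_firstConstantIndex) + FirstConstantRegisterIndex;
    return i;
}
```

The Wide16 case in the diff is the same scheme with `s_firstConstantIndex = 64` and int16_t bounds, which is why the patch can route both sizes through one `Fits<VirtualRegister, size>` template.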
trunk/Source/JavaScriptCore/bytecode/Instruction.h
(r243162 → r245906)

 private:
-    typename TypeBySize<Width>::type m_opcode;
+    typename TypeBySize<Width>::unsignedType m_opcode;
 };
…
 OpcodeID opcodeID() const
 {
-    if (isWide())
-        return wide()->opcodeID();
+    if (isWide32())
+        return wide32()->opcodeID();
+    if (isWide16())
+        return wide16()->opcodeID();
     return narrow()->opcodeID();
 }
…
-bool isWide() const
+bool isWide16() const
 {
-    return narrow()->opcodeID() == op_wide;
+    return narrow()->opcodeID() == op_wide16;
+}
+
+bool isWide32() const
+{
+    return narrow()->opcodeID() == op_wide32;
+}
+
+bool hasMetadata() const
+{
+    return opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA;
+}
+
+int sizeShiftAmount() const
+{
+    if (isWide32())
+        return 2;
+    if (isWide16())
+        return 1;
+    return 0;
 }

 size_t size() const
 {
-    auto wide = isWide();
-    auto padding = wide ? 1 : 0;
-    auto size = wide ? 4 : 1;
+    auto sizeShiftAmount = this->sizeShiftAmount();
+    auto padding = sizeShiftAmount ? 1 : 0;
+    auto size = 1 << sizeShiftAmount;
     return opcodeLengths[opcodeID()] * size + padding;
 }
…
-const Impl<OpcodeSize::Wide>* wide() const
+const Impl<OpcodeSize::Wide16>* wide16() const
 {
-    ASSERT(isWide());
-    return reinterpret_cast<const Impl<OpcodeSize::Wide>*>(bitwise_cast<uintptr_t>(this) + 1);
+    ASSERT(isWide16());
+    return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(bitwise_cast<uintptr_t>(this) + 1);
+}
+
+const Impl<OpcodeSize::Wide32>* wide32() const
+{
+    ASSERT(isWide32());
+    return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(bitwise_cast<uintptr_t>(this) + 1);
 }
 };
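The `Instruction::size()` arithmetic above reduces to a one-liner worth spelling out: each operand slot is scaled by `1 << sizeShiftAmount` (0 for Narrow, 1 for Wide16, 2 for Wide32), and any wide form pays one extra byte for the prefix. A minimal sketch, where `opcodeLength` stands in for `opcodeLengths[opcodeID()]` (the slot count, including the opcode itself):

```cpp
#include <cstddef>

// Mirror of Instruction::size(): slots scale with the operand width,
// plus one prefix byte whenever the instruction is wide.
inline size_t instructionSize(size_t opcodeLength, int sizeShiftAmount)
{
    size_t padding = sizeShiftAmount ? 1 : 0;       // op_wide16/op_wide32 prefix byte
    size_t scale = size_t(1) << sizeShiftAmount;    // bytes per slot
    return opcodeLength * scale + padding;
}
```

So a 4-slot instruction costs 4 bytes narrow, 9 bytes as wide16, and 17 bytes as wide32 — which is exactly the saving the patch targets when 32-bit forms collapse to 16-bit.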
trunk/Source/JavaScriptCore/bytecode/InstructionStream.h
(r240684 → r245906)

     }
 }

+void write(uint16_t h)
+{
+    ASSERT(!m_finalized);
+    uint8_t bytes[2];
+    std::memcpy(bytes, &h, sizeof(h));
+
+    // Though not always obvious, we don't have to invert the order of the
+    // bytes written here for CPU(BIG_ENDIAN). This is because the incoming
+    // value is already ordered in big endian on CPU(BIG_ENDIAN) platforms.
+    write(bytes[0]);
+    write(bytes[1]);
+}
+
 void write(uint32_t i)
 {
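The new 16-bit `write` above relies on `memcpy` producing native byte order, so the matching 16-bit load in the interpreter round-trips on either endianness with no swapping. A self-contained sketch of that writer (simplified: no `m_finalized` flag, and a plain vector in place of the real stream):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch of InstructionStreamWriter::write(uint16_t): splitting the
// halfword with memcpy keeps the bytes in native order, so reading them
// back with another memcpy recovers the value on any endianness.
struct Writer {
    std::vector<uint8_t> bytes;

    void write(uint8_t b) { bytes.push_back(b); }

    void write(uint16_t h)
    {
        uint8_t b[2];
        std::memcpy(b, &h, sizeof(h)); // native byte order, no swap needed
        write(b[0]);
        write(b[1]);
    }
};
```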
trunk/Source/JavaScriptCore/bytecode/Opcode.h
(r245658 → r245906)

 #if ENABLE(C_LOOP) && !HAVE(COMPUTED_GOTO)

-#define OPCODE_ID_ENUM(opcode, length) opcode##_wide = numOpcodeIDs + opcode,
-enum OpcodeIDWide : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide16 = numOpcodeIDs + opcode,
+enum OpcodeIDWide16 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
+#undef OPCODE_ID_ENUM
+
+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide32 = numOpcodeIDs * 2 + opcode,
+enum OpcodeIDWide32 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
 #undef OPCODE_ID_ENUM
 #endif
trunk/Source/JavaScriptCore/bytecode/OpcodeSize.h
(r237547 → r245906)

 enum OpcodeSize {
     Narrow = 1,
-    Wide = 4,
+    Wide16 = 2,
+    Wide32 = 4,
 };
…
 template<>
 struct TypeBySize<OpcodeSize::Narrow> {
-    using type = uint8_t;
+    using signedType = int8_t;
+    using unsignedType = uint8_t;
 };

 template<>
-struct TypeBySize<OpcodeSize::Wide> {
-    using type = uint32_t;
+struct TypeBySize<OpcodeSize::Wide16> {
+    using signedType = int16_t;
+    using unsignedType = uint16_t;
+};
+
+template<>
+struct TypeBySize<OpcodeSize::Wide32> {
+    using signedType = int32_t;
+    using unsignedType = uint32_t;
 };
…
 template<>
-struct PaddingBySize<OpcodeSize::Wide> {
+struct PaddingBySize<OpcodeSize::Wide16> {
+    static constexpr uint8_t value = 1;
+};
+
+template<>
+struct PaddingBySize<OpcodeSize::Wide32> {
     static constexpr uint8_t value = 1;
 };
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
(r245213 → r245906)

 }

-void BytecodeGenerator::alignWideOpcode()
+void BytecodeGenerator::alignWideOpcode16()
 {
 #if CPU(NEEDS_ALIGNED_ACCESS)
-    while ((m_writer.position() + 1) % OpcodeSize::Wide)
+    while ((m_writer.position() + 1) % OpcodeSize::Wide16)
+        OpNop::emit<OpcodeSize::Narrow>(this);
+#endif
+}
+
+void BytecodeGenerator::alignWideOpcode32()
+{
+#if CPU(NEEDS_ALIGNED_ACCESS)
+    while ((m_writer.position() + 1) % OpcodeSize::Wide32)
         OpNop::emit<OpcodeSize::Narrow>(this);
 #endif
 }
…
     if (context.isIndexedForInContext()) {
         auto& indexedContext = context.asIndexedForInContext();
-        OpGetByVal::emit<OpcodeSize::Wide>(this, kill(dst), base, indexedContext.index());
+        kill(dst);
+        if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Narrow>(this, dst, base, property))
+            OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(this, dst, base, indexedContext.index());
+        else if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Wide16>(this, dst, base, property))
+            OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Wide16>(this, dst, base, indexedContext.index());
+        else
+            OpGetByVal::emit<OpcodeSize::Wide32>(this, dst, base, indexedContext.index());
         indexedContext.addGetInst(m_lastInstruction.offset(), property->index());
         return dst;
     }

+    // We cannot do the above optimization here since OpGetDirectPname => OpGetByVal conversion involves different metadata ID allocation.
     StructureForInContext& structureContext = context.asStructureForInContext();
-    OpGetDirectPname::emit<OpcodeSize::Wide>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());
+    OpGetDirectPname::emit<OpcodeSize::Wide32>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());

     structureContext.addGetInst(m_lastInstruction.offset(), property->index());
…
     // conservatively align for the bytecode rewriter: it will delete this yield and
     // append a fragment, so we make sure that the start of the fragments is aligned
-    while (m_writer.position() % OpcodeSize::Wide)
+    while (m_writer.position() % OpcodeSize::Wide32)
         OpNop::emit<OpcodeSize::Narrow>(this);
 #endif
…
     auto instruction = generator.m_writer.ref(instIndex);
     auto end = instIndex + instruction->size();
-    ASSERT(instruction->isWide());
+    ASSERT(instruction->isWide32());

     generator.m_writer.seek(instIndex);
…
     // 2. base stays the same.
     // 3. property gets switched to the original property.
-    OpGetByVal::emit<OpcodeSize::Wide>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));
+    OpGetByVal::emit<OpcodeSize::Wide32>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));

     // 4. nop out the remaining bytes
…
     unsigned instIndex = instPair.first;
     int propertyRegIndex = instPair.second;
-    // FIXME: we should not have to force this get_by_val to be wide, just guarantee that propertyRegIndex fits
-    // https://bugs.webkit.org/show_bug.cgi?id=190929
     generator.m_writer.ref(instIndex)->cast<OpGetByVal>()->setProperty(VirtualRegister(propertyRegIndex), []() {
         ASSERT_NOT_REACHED();
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
r245213 r245906 1163 1163 1164 1164 void write(uint8_t byte) { m_writer.write(byte); } 1165 void write(uint16_t h) { m_writer.write(h); } 1165 1166 void write(uint32_t i) { m_writer.write(i); } 1166 void alignWideOpcode(); 1167 void write(int8_t byte) { m_writer.write(static_cast<uint8_t>(byte)); } 1168 void write(int16_t h) { m_writer.write(static_cast<uint16_t>(h)); } 1169 void write(int32_t i) { m_writer.write(static_cast<uint32_t>(i)); } 1170 void alignWideOpcode16(); 1171 void alignWideOpcode32(); 1167 1172 1168 1173 class PreservedTDZStack { -
trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
r244811 r245906 109 109 110 110 switch (opcodeID) { 111 case op_wide: 111 case op_wide16: 112 case op_wide32: 112 113 RELEASE_ASSERT_NOT_REACHED(); 113 114 case op_enter: -
trunk/Source/JavaScriptCore/generator/Argument.rb
r240041 r245906 43 43 end 44 44 45 def create_reference_param 46 "#{@type.to_s}& #{@name}" 47 end 48 45 49 def field_name 46 50 "m_#{@name}" … … 68 72 void set#{capitalized_name}(#{@type.to_s} value, Functor func) 69 73 { 70 if (isWide()) 71 set#{capitalized_name}<OpcodeSize::Wide>(value, func); 74 if (isWide32()) 75 set#{capitalized_name}<OpcodeSize::Wide32>(value, func); 76 else if (isWide16()) 77 set#{capitalized_name}<OpcodeSize::Wide16>(value, func); 72 78 else 73 79 set#{capitalized_name}<OpcodeSize::Narrow>(value, func); … … 79 85 if (!#{Fits::check "size", "value", @type}) 80 86 value = func(); 81 auto* stream = bitwise_cast<typename TypeBySize<size>:: type*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value);87 auto* stream = bitwise_cast<typename TypeBySize<size>::unsignedType*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value); 82 88 *stream = #{Fits::convert "size", "value", @type}; 83 89 } -
trunk/Source/JavaScriptCore/generator/DSL.rb
r240023 r245906 145 145 template.multiline_comment = nil 146 146 template.line_comment = "#" 147 template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide )) .join("\n")147 template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide16) + opcodes.map.with_index(&:set_entry_address_wide32)) .join("\n") 148 148 end 149 149 end -
trunk/Source/JavaScriptCore/generator/Metadata.rb
r240041 r245906 113 113 end 114 114 115 def emitter_local_name 116 "__metadataID" 117 end 118 115 119 def emitter_local 116 120 unless @@emitter_local 117 @@emitter_local = Argument.new( "__metadataID", :unsigned, -1)121 @@emitter_local = Argument.new(emitter_local_name, :unsigned, -1) 118 122 end 119 123 -
trunk/Source/JavaScriptCore/generator/Opcode.rb
r240041 r245906 33 33 module Size 34 34 Narrow = "OpcodeSize::Narrow" 35 Wide = "OpcodeSize::Wide" 35 Wide16 = "OpcodeSize::Wide16" 36 Wide32 = "OpcodeSize::Wide32" 36 37 end 37 38 … … 75 76 end 76 77 78 def typed_reference_args 79 return if @args.nil? 80 81 @args.map(&:create_reference_param).unshift("").join(", ") 82 end 83 77 84 def untyped_args 78 85 return if @args.nil? … … 82 89 83 90 def map_fields_with_size(prefix, size, &block) 84 args = [Argument.new("opcodeID", : unsigned, 0)]91 args = [Argument.new("opcodeID", :OpcodeID, 0)] 85 92 args += @args.dup if @args 86 93 unless @metadata.empty? … … 109 116 110 117 def emitter 111 op_wide = Argument.new("op_wide", :unsigned, 0) 118 op_wide16 = Argument.new("op_wide16", :OpcodeID, 0) 119 op_wide32 = Argument.new("op_wide32", :OpcodeID, 0) 112 120 metadata_param = @metadata.empty? ? "" : ", #{@metadata.emitter_local.create_param}" 113 121 metadata_arg = @metadata.empty? ? "" : ", #{@metadata.emitter_local.name}" … … 115 123 static void emit(BytecodeGenerator* gen#{typed_args}) 116 124 { 117 #{@metadata.create_emitter_local} 118 emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg}) 119 || emit<OpcodeSize::Wide, Assert, true>(gen#{untyped_args}#{metadata_arg}); 125 emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(gen#{untyped_args}); 120 126 } 121 127 #{%{ … … 125 131 return emit<size, shouldAssert>(gen#{untyped_args}#{metadata_arg}); 126 132 } 133 134 template<OpcodeSize size> 135 static bool checkWithoutMetadataID(BytecodeGenerator* gen#{typed_args}) 136 { 137 decltype(gen->addMetadataFor(opcodeID)) __metadataID { }; 138 return checkImpl<size>(gen#{untyped_args}#{metadata_arg}); 139 } 127 140 } unless @metadata.empty?} 128 141 template<OpcodeSize size, FitsAssertion shouldAssert = Assert, bool recordOpcode = true> … … 135 148 } 136 149 150 template<OpcodeSize size> 151 static void emitWithSmallestSizeRequirement(BytecodeGenerator* gen#{typed_args}) 152 { 153 
#{@metadata.create_emitter_local} 154 if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Narrow)) { 155 if (emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg})) 156 return; 157 } 158 if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Wide16)) { 159 if (emit<OpcodeSize::Wide16, NoAssert, true>(gen#{untyped_args}#{metadata_arg})) 160 return; 161 } 162 emit<OpcodeSize::Wide32, Assert, true>(gen#{untyped_args}#{metadata_arg}); 163 } 164 137 165 private: 166 template<OpcodeSize size> 167 static bool checkImpl(BytecodeGenerator* gen#{typed_reference_args}#{metadata_param}) 168 { 169 UNUSED_PARAM(gen); 170 #if OS(WINDOWS) && ENABLE(C_LOOP) 171 // FIXME: Disable wide16 optimization for Windows CLoop 172 // https://bugs.webkit.org/show_bug.cgi?id=198283 173 if (size == OpcodeSize::Wide16) 174 return false; 175 #endif 176 return #{map_fields_with_size("", "size", &:fits_check).join "\n && "} 177 && (size == OpcodeSize::Wide16 ? #{op_wide16.fits_check(Size::Narrow)} : true) 178 && (size == OpcodeSize::Wide32 ? #{op_wide32.fits_check(Size::Narrow)} : true); 179 } 180 138 181 template<OpcodeSize size, bool recordOpcode> 139 182 static bool emitImpl(BytecodeGenerator* gen#{typed_args}#{metadata_param}) 140 183 { 141 if (size == OpcodeSize::Wide) 142 gen->alignWideOpcode(); 143 if (#{map_fields_with_size("", "size", &:fits_check).join "\n && "} 144 && (size == OpcodeSize::Wide ? 
#{op_wide.fits_check(Size::Narrow)} : true)) { 184 if (size == OpcodeSize::Wide16) 185 gen->alignWideOpcode16(); 186 else if (size == OpcodeSize::Wide32) 187 gen->alignWideOpcode32(); 188 if (checkImpl<size>(gen#{untyped_args}#{metadata_arg})) { 145 189 if (recordOpcode) 146 190 gen->recordOpcode(opcodeID); 147 if (size == OpcodeSize::Wide) 148 #{op_wide.fits_write Size::Narrow} 191 if (size == OpcodeSize::Wide16) 192 #{op_wide16.fits_write Size::Narrow} 193 else if (size == OpcodeSize::Wide32) 194 #{op_wide32.fits_write Size::Narrow} 149 195 #{map_fields_with_size(" ", "size", &:fits_write).join "\n"} 150 196 return true; … … 160 206 <<-EOF 161 207 template<typename Block> 162 void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, bool __isWide)163 { 164 dumper->printLocationAndOp(__location, &"* #{@name}"[!__isWide]);208 void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, int __sizeShiftAmount) 209 { 210 dumper->printLocationAndOp(__location, &"**#{@name}"[2 - __sizeShiftAmount]); 165 211 #{print_args { |arg| 166 212 <<-EOF.chomp … … 183 229 } 184 230 231 #{capitalized_name}(const uint16_t* stream) 232 #{init.call("OpcodeSize::Wide16")} 233 { 234 ASSERT_UNUSED(stream, stream[0] == opcodeID); 235 } 236 237 185 238 #{capitalized_name}(const uint32_t* stream) 186 #{init.call("OpcodeSize::Wide ")}239 #{init.call("OpcodeSize::Wide32")} 187 240 { 188 241 ASSERT_UNUSED(stream, stream[0] == opcodeID); … … 191 244 static #{capitalized_name} decode(const uint8_t* stream) 192 245 { 193 if (*stream != op_wide)194 return { stream};195 196 auto wideStream = bitwise_cast<const uint32_t*>(stream + 1);197 return { wideStream };246 if (*stream == op_wide32) 247 return { bitwise_cast<const uint32_t*>(stream + 1) }; 248 if (*stream == op_wide16) 249 return { bitwise_cast<const uint16_t*>(stream + 1) }; 250 return { stream }; 198 251 } 199 252 EOF … … 220 273 end 221 274 222 def set_entry_address_wide(id) 223 
"setEntryAddressWide(#{id}, _#{full_name}_wide)" 275 def set_entry_address_wide16(id) 276 "setEntryAddressWide16(#{id}, _#{full_name}_wide16)" 277 end 278 279 def set_entry_address_wide32(id) 280 "setEntryAddressWide32(#{id}, _#{full_name}_wide32)" 224 281 end 225 282 … … 254 311 <<-EOF.chomp 255 312 case #{op.name}: 256 __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction-> isWide());313 __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->sizeShiftAmount()); 257 314 break; 258 315 EOF -
trunk/Source/JavaScriptCore/generator/Section.rb
r238761 r245906 101 101 } 102 102 opcodes.each { |opcode| 103 out.write("#define #{opcode.name}_wide_value_string \"#{num_opcodes + opcode.id}\"\n") 103 out.write("#define #{opcode.name}_wide16_value_string \"#{num_opcodes + opcode.id}\"\n") 104 } 105 opcodes.each { |opcode| 106 out.write("#define #{opcode.name}_wide32_value_string \"#{num_opcodes * 2 + opcode.id}\"\n") 104 107 } 105 108 end -
trunk/Source/JavaScriptCore/jit/JITExceptions.cpp
r240637 r245906 75 75 catchRoutine = handler->nativeCode.executableAddress(); 76 76 #else 77 catchRoutine = catchPCForInterpreter->isWide() 78 ? LLInt::getWideCodePtr(catchPCForInterpreter->opcodeID()) 79 : LLInt::getCodePtr(catchPCForInterpreter->opcodeID()); 77 if (catchPCForInterpreter->isWide32()) 78 catchRoutine = LLInt::getWide32CodePtr(catchPCForInterpreter->opcodeID()); 79 else if (catchPCForInterpreter->isWide16()) 80 catchRoutine = LLInt::getWide16CodePtr(catchPCForInterpreter->opcodeID()); 81 else 82 catchRoutine = LLInt::getCodePtr(catchPCForInterpreter->opcodeID()); 80 83 #endif 81 84 } else -
trunk/Source/JavaScriptCore/llint/LLIntData.cpp
r239255 r245906 50 50 uint8_t Data::s_exceptionInstructions[maxOpcodeLength + 1] = { }; 51 51 Opcode g_opcodeMap[numOpcodeIDs] = { }; 52 Opcode g_opcodeMapWide[numOpcodeIDs] = { }; 52 Opcode g_opcodeMapWide16[numOpcodeIDs] = { }; 53 Opcode g_opcodeMapWide32[numOpcodeIDs] = { }; 53 54 54 55 #if !ENABLE(C_LOOP) 55 extern "C" void llint_entry(void*, void* );56 extern "C" void llint_entry(void*, void*, void*); 56 57 #endif 57 58 … … 62 63 63 64 #else // !ENABLE(C_LOOP) 64 llint_entry(&g_opcodeMap, &g_opcodeMapWide );65 llint_entry(&g_opcodeMap, &g_opcodeMapWide16, &g_opcodeMapWide32); 65 66 66 67 for (int i = 0; i < numOpcodeIDs; ++i) { 67 68 g_opcodeMap[i] = tagCodePtr(g_opcodeMap[i], BytecodePtrTag); 68 g_opcodeMapWide[i] = tagCodePtr(g_opcodeMapWide[i], BytecodePtrTag); 69 g_opcodeMapWide16[i] = tagCodePtr(g_opcodeMapWide16[i], BytecodePtrTag); 70 g_opcodeMapWide32[i] = tagCodePtr(g_opcodeMapWide32[i], BytecodePtrTag); 69 71 } 70 72 -
trunk/Source/JavaScriptCore/llint/LLIntData.h
r237728 r245906 44 44 45 45 extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMap[numOpcodeIDs]; 46 extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide[numOpcodeIDs]; 46 extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide16[numOpcodeIDs]; 47 extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide32[numOpcodeIDs]; 47 48 48 49 class Data { … … 58 59 friend Instruction* exceptionInstructions(); 59 60 friend Opcode* opcodeMap(); 60 friend Opcode* opcodeMapWide(); 61 friend Opcode* opcodeMapWide16(); 62 friend Opcode* opcodeMapWide32(); 61 63 friend Opcode getOpcode(OpcodeID); 62 friend Opcode getOpcodeWide(OpcodeID); 64 friend Opcode getOpcodeWide16(OpcodeID); 65 friend Opcode getOpcodeWide32(OpcodeID); 63 66 template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID); 64 template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID); 67 template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID); 68 template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID); 65 69 template<PtrTag tag> friend MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID); 66 70 }; … … 78 82 } 79 83 80 inline Opcode* opcodeMapWide ()84 inline Opcode* opcodeMapWide16() 81 85 { 82 return g_opcodeMapWide; 86 return g_opcodeMapWide16; 87 } 88 89 inline Opcode* opcodeMapWide32() 90 { 91 return g_opcodeMapWide32; 83 92 } 84 93 … … 92 101 } 93 102 94 inline Opcode getOpcodeWide (OpcodeID id)103 inline Opcode getOpcodeWide16(OpcodeID id) 95 104 { 96 105 #if ENABLE(COMPUTED_GOTO_OPCODES) 97 return g_opcodeMapWide[id]; 106 return g_opcodeMapWide16[id]; 107 #else 108 UNUSED_PARAM(id); 109 RELEASE_ASSERT_NOT_REACHED(); 110 #endif 111 } 112 113 inline Opcode getOpcodeWide32(OpcodeID id) 114 { 115 #if ENABLE(COMPUTED_GOTO_OPCODES) 116 return g_opcodeMapWide32[id]; 98 117 #else 99 118 UNUSED_PARAM(id); … … 111 130 112 131 template<PtrTag tag> 113 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide CodePtr(OpcodeID opcodeID)132 ALWAYS_INLINE 
MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID) 114 133 { 115 void* address = reinterpret_cast<void*>(getOpcodeWide(opcodeID)); 134 void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID)); 135 address = retagCodePtr<BytecodePtrTag, tag>(address); 136 return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address); 137 } 138 139 template<PtrTag tag> 140 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID) 141 { 142 void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID)); 116 143 address = retagCodePtr<BytecodePtrTag, tag>(address); 117 144 return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address); … … 142 169 } 143 170 144 ALWAYS_INLINE void* getWide CodePtr(OpcodeID id)171 ALWAYS_INLINE void* getWide16CodePtr(OpcodeID id) 145 172 { 146 return reinterpret_cast<void*>(getOpcodeWide(id)); 173 return reinterpret_cast<void*>(getOpcodeWide16(id)); 174 } 175 176 ALWAYS_INLINE void* getWide32CodePtr(OpcodeID id) 177 { 178 return reinterpret_cast<void*>(getOpcodeWide32(id)); 147 179 } 148 180 #endif -
trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
r243254 r245906 31 31 32 32 #if ENABLE(C_LOOP) 33 #if !OS(WINDOWS) 33 34 #define OFFLINE_ASM_C_LOOP 1 35 #define OFFLINE_ASM_C_LOOP_WIN 0 36 #else 37 #define OFFLINE_ASM_C_LOOP 0 38 #define OFFLINE_ASM_C_LOOP_WIN 1 39 #endif 34 40 #define OFFLINE_ASM_X86 0 35 41 #define OFFLINE_ASM_X86_WIN 0 … … 46 52 47 53 #define OFFLINE_ASM_C_LOOP 0 54 #define OFFLINE_ASM_C_LOOP_WIN 0 48 55 49 56 #if CPU(X86) && !COMPILER(MSVC) -
trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
r245658 r245906 1723 1723 } 1724 1724 1725 LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide) 1726 { 1727 return commonCallEval(exec, pc, LLInt::getWideCodePtr<JSEntryPtrTag>(llint_generic_return_point)); 1725 LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide16) 1726 { 1727 return commonCallEval(exec, pc, LLInt::getWide16CodePtr<JSEntryPtrTag>(llint_generic_return_point)); 1728 } 1729 1730 LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide32) 1731 { 1732 return commonCallEval(exec, pc, LLInt::getWide32CodePtr<JSEntryPtrTag>(llint_generic_return_point)); 1728 1733 } 1729 1734 -
trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.h
r237547 r245906 118 118 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_construct_varargs); 119 119 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval); 120 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide); 120 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide16); 121 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide32); 121 122 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tear_off_arguments); 122 123 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_strcat); -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r245669 r245906 1 # Copyr ight (C) 2011-2019 Apple Inc. All rights reserved.1 # Copyrsght (C) 2011-2019 Apple Inc. All rights reserved. 2 2 # 3 3 # Redistribution and use in source and binary forms, with or without … … 219 219 if X86_64 or X86_64_WIN or ARM64 or ARM64E 220 220 const CalleeSaveSpaceAsVirtualRegisters = 4 221 elsif C_LOOP 221 elsif C_LOOP or C_LOOP_WIN 222 222 const CalleeSaveSpaceAsVirtualRegisters = 1 223 223 elsif ARMv7 … … 278 278 const tagTypeNumber = csr5 279 279 const tagMask = csr6 280 elsif C_LOOP 280 elsif C_LOOP or C_LOOP_WIN 281 281 const PB = csr0 282 282 const tagTypeNumber = csr1 … … 287 287 else 288 288 const PC = t4 # When changing this, make sure LLIntPC is up to date in LLIntPCRanges.h 289 if C_LOOP 289 if C_LOOP or C_LOOP_WIN 290 290 const metadataTable = csr3 291 291 elsif ARMv7 … … 312 312 end 313 313 314 macro dispatchWide() 314 macro dispatchWide16() 315 dispatch(constexpr %opcodeName%_length * 2 + 1) 316 end 317 318 macro dispatchWide32() 315 319 dispatch(constexpr %opcodeName%_length * 4 + 1) 316 320 end 317 321 318 size(dispatchNarrow, dispatchWide , macro (dispatch) dispatch() end)322 size(dispatchNarrow, dispatchWide16, dispatchWide32, macro (dispatch) dispatch() end) 319 323 end 320 324 321 325 macro getu(size, opcodeStruct, fieldName, dst) 322 size(getuOperandNarrow, getuOperandWide , macro (getu)326 size(getuOperandNarrow, getuOperandWide16, getuOperandWide32, macro (getu) 323 327 getu(opcodeStruct, fieldName, dst) 324 328 end) … … 326 330 327 331 macro get(size, opcodeStruct, fieldName, dst) 328 size(getOperandNarrow, getOperandWide , macro (get)332 size(getOperandNarrow, getOperandWide16, getOperandWide32, macro (get) 329 333 get(opcodeStruct, fieldName, dst) 330 334 end) 331 335 end 332 336 333 macro narrow(narrowFn, wide Fn, k)337 macro narrow(narrowFn, wide16Fn, wide32Fn, k) 334 338 k(narrowFn) 335 339 end 336 340 337 macro wide(narrowFn, wideFn, k) 338 k(wideFn) 341 macro wide16(narrowFn, wide16Fn, wide32Fn, k) 
342 k(wide16Fn) 343 end 344 345 macro wide32(narrowFn, wide16Fn, wide32Fn, k) 346 k(wide32Fn) 339 347 end 340 348 … … 363 371 fn(narrow) 364 372 365 _%label%_wide: 373 # FIXME: We cannot enable wide16 bytecode in Windows CLoop. With MSVC, as CLoop::execute gets larger code 374 # size, CLoop::execute gets higher stack height requirement. This makes CLoop::execute takes 160KB stack 375 # per call, causes stack overflow error easily. For now, we disable wide16 optimization for Windows CLoop. 376 # https://bugs.webkit.org/show_bug.cgi?id=198283 377 if not C_LOOP_WIN 378 _%label%_wide16: 366 379 prologue() 367 fn(wide) 380 fn(wide16) 381 end 382 383 _%label%_wide32: 384 prologue() 385 fn(wide32) 368 386 end 369 387 … … 476 494 477 495 # Bytecode operand constants. 478 const FirstConstantRegisterIndexNarrow = 16 479 const FirstConstantRegisterIndexWide = constexpr FirstConstantRegisterIndex 496 const FirstConstantRegisterIndexNarrow = constexpr FirstConstantRegisterIndex8 497 const FirstConstantRegisterIndexWide16 = constexpr FirstConstantRegisterIndex16 498 const FirstConstantRegisterIndexWide32 = constexpr FirstConstantRegisterIndex 480 499 481 500 # Code type constants. … … 523 542 # Some common utilities. 524 543 macro crash() 525 if C_LOOP 544 if C_LOOP or C_LOOP_WIN 526 545 cloopCrash 527 546 else … … 606 625 macro checkStackPointerAlignment(tempReg, location) 607 626 if ASSERT_ENABLED 608 if ARM64 or ARM64E or C_LOOP 627 if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN 609 628 # ARM64 and ARM64E will check for us! 610 # C_LOOP does not need the alignment, and can use a little perf629 # C_LOOP or C_LOOP_WIN does not need the alignment, and can use a little perf 611 630 # improvement from avoiding useless work. 
612 631 else … … 626 645 end 627 646 628 if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN647 if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN 629 648 const CalleeSaveRegisterCount = 0 630 649 elsif ARMv7 … … 643 662 644 663 macro pushCalleeSaves() 645 if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN664 if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN 646 665 elsif ARMv7 647 666 emit "push {r4-r6, r8-r11}" … … 664 683 665 684 macro popCalleeSaves() 666 if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN685 if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN 667 686 elsif ARMv7 668 687 emit "pop {r4-r6, r8-r11}" … … 683 702 684 703 macro preserveCallerPCAndCFR() 685 if C_LOOP or ARMv7 or MIPS704 if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS 686 705 push lr 687 706 push cfr … … 698 717 macro restoreCallerPCAndCFR() 699 718 move cfr, sp 700 if C_LOOP or ARMv7 or MIPS719 if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS 701 720 pop cfr 702 721 pop lr … … 710 729 macro preserveCalleeSavesUsedByLLInt() 711 730 subp CalleeSaveSpaceStackAligned, sp 712 if C_LOOP 731 if C_LOOP or C_LOOP_WIN 713 732 storep metadataTable, -PtrSize[cfr] 714 733 elsif ARMv7 or MIPS … … 733 752 734 753 macro restoreCalleeSavesUsedByLLInt() 735 if C_LOOP 754 if C_LOOP or C_LOOP_WIN 736 755 loadp -PtrSize[cfr], metadataTable 737 756 elsif ARMv7 or MIPS … … 844 863 845 864 macro preserveReturnAddressAfterCall(destinationRegister) 846 if C_LOOP or ARMv7 or ARM64 or ARM64E or MIPS847 # In C_LOOP case, we're only preserving the bytecode vPC.865 if C_LOOP or C_LOOP_WIN or ARMv7 or ARM64 or ARM64E or MIPS 866 # In C_LOOP or C_LOOP_WIN case, we're only preserving the bytecode vPC. 
848 867 move lr, destinationRegister 849 868 elsif X86 or X86_WIN or X86_64 or X86_64_WIN … … 860 879 elsif ARM64 or ARM64E 861 880 push cfr, lr 862 elsif C_LOOP or ARMv7 or MIPS881 elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS 863 882 push lr 864 883 push cfr … … 872 891 elsif ARM64 or ARM64E 873 892 pop lr, cfr 874 elsif C_LOOP or ARMv7 or MIPS893 elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS 875 894 pop cfr 876 895 pop lr … … 906 925 907 926 macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag) 908 if C_LOOP 927 if C_LOOP or C_LOOP_WIN 909 928 cloopCallJSFunction callee 910 929 else … … 944 963 andi ~StackAlignmentMask, temp2 945 964 946 if ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS965 if ARMv7 or ARM64 or ARM64E or C_LOOP or C_LOOP_WIN or MIPS 947 966 addp CallerFrameAndPCSize, sp 948 967 subi CallerFrameAndPCSize, temp2 … … 1028 1047 1029 1048 macro assertNotConstant(size, index) 1030 size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide , macro (FirstConstantRegisterIndex)1049 size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) 1031 1050 assert(macro (ok) bilt index, FirstConstantRegisterIndex, ok end) 1032 1051 end) … … 1080 1099 end 1081 1100 codeBlockGetter(t1) 1082 if not C_LOOP1101 if not (C_LOOP or C_LOOP_WIN) 1083 1102 baddis 5, CodeBlock::m_llintExecuteCounter + BaselineExecutionCounter::m_counter[t1], .continue 1084 1103 if JSVALUE64 … … 1130 1149 bpa t0, cfr, .needStackCheck 1131 1150 loadp CodeBlock::m_vm[t1], t2 1132 if C_LOOP 1151 if C_LOOP or C_LOOP_WIN 1133 1152 bpbeq VM::m_cloopStackLimit[t2], t0, .stackHeightOK 1134 1153 else … … 1233 1252 # EncodedJSValue vmEntryToNativeFunction(void* code, VM* vm, ProtoCallFrame* protoFrame) 1234 1253 1235 if C_LOOP 1254 if C_LOOP or C_LOOP_WIN 1236 1255 _llint_vm_entry_to_javascript: 1237 1256 else … … 1242 1261 1243 1262 1244 if C_LOOP 1263 if C_LOOP or C_LOOP_WIN 1245 1264 
_llint_vm_entry_to_native: 1246 1265 else … … 1251 1270 1252 1271 1253 if not C_LOOP1272 if not (C_LOOP or C_LOOP_WIN) 1254 1273 # void sanitizeStackForVMImpl(VM* vm) 1255 1274 global _sanitizeStackForVMImpl … … 1291 1310 end 1292 1311 1293 if C_LOOP 1312 if C_LOOP or C_LOOP_WIN 1294 1313 # Dummy entry point the C Loop uses to initialize. 1295 1314 _llint_entry: … … 1313 1332 end 1314 1333 1315 # The PC base is in t 2, as this is what _llint_entry leaves behind through1316 # initPCRelative(t 2)1334 # The PC base is in t3, as this is what _llint_entry leaves behind through 1335 # initPCRelative(t3) 1317 1336 macro setEntryAddress(index, label) 1318 1337 setEntryAddressCommon(index, label, a0) 1319 1338 end 1320 1339 1321 macro setEntryAddressWide (index, label)1340 macro setEntryAddressWide16(index, label) 1322 1341 setEntryAddressCommon(index, label, a1) 1342 end 1343 1344 macro setEntryAddressWide32(index, label) 1345 setEntryAddressCommon(index, label, a2) 1323 1346 end 1324 1347 1325 1348 macro setEntryAddressCommon(index, label, map) 1326 1349 if X86_64 or X86_64_WIN 1327 leap (label - _relativePCBase)[t2], t3 1350 leap (label - _relativePCBase)[t3], t4 1351 move index, t5 1352 storep t4, [map, t5, 8] 1353 elsif X86 or X86_WIN 1354 leap (label - _relativePCBase)[t3], t4 1355 move index, t5 1356 storep t4, [map, t5, 4] 1357 elsif ARM64 or ARM64E 1358 pcrtoaddr label, t3 1328 1359 move index, t4 1329 storep t3, [map, t4, 8] 1330 elsif X86 or X86_WIN 1331 leap (label - _relativePCBase)[t2], t3 1332 move index, t4 1333 storep t3, [map, t4, 4] 1334 elsif ARM64 or ARM64E 1335 pcrtoaddr label, t2 1336 move index, t4 1337 storep t2, [map, t4, PtrSize] 1360 storep t3, [map, t4, PtrSize] 1338 1361 elsif ARMv7 1339 1362 mvlbl (label - _relativePCBase), t4 1340 addp t4, t 2, t41341 move index, t 31342 storep t4, [map, t 3, 4]1363 addp t4, t3, t4 1364 move index, t5 1365 storep t4, [map, t5, 4] 1343 1366 elsif MIPS 1344 1367 la label, t4 1345 1368 la _relativePCBase, t3 
1346 1369 subp t3, t4 1347 addp t4, t 2, t41348 move index, t 31349 storep t4, [map, t 3, 4]1370 addp t4, t3, t4 1371 move index, t5 1372 storep t4, [map, t5, 4] 1350 1373 end 1351 1374 end … … 1359 1382 loadp 20[sp], a0 1360 1383 loadp 24[sp], a1 1361 end 1362 1363 initPCRelative(t2) 1384 loadp 28[sp], a2 1385 end 1386 1387 initPCRelative(t3) 1364 1388 1365 1389 # Include generated bytecode initialization file. … … 1371 1395 end 1372 1396 1373 _llint_op_wide: 1374 nextInstructionWide() 1375 1376 _llint_op_wide_wide: 1397 _llint_op_wide16: 1398 nextInstructionWide16() 1399 1400 _llint_op_wide32: 1401 nextInstructionWide32() 1402 1403 macro noWide(label) 1404 _llint_%label%_wide16: 1377 1405 crash() 1378 1406 1379 _llint_ op_enter_wide:1407 _llint_%label%_wide32: 1380 1408 crash() 1409 end 1410 1411 noWide(op_wide16) 1412 noWide(op_wide32) 1413 noWide(op_enter) 1381 1414 1382 1415 op(llint_program_prologue, macro () … … 1779 1812 prepareForRegularCall) 1780 1813 1781 _llint_op_call_eval_wide :1814 _llint_op_call_eval_wide16: 1782 1815 slowPathForCall( 1783 wide ,1816 wide16, 1784 1817 OpCallEval, 1785 macro () dispatchOp(wide , op_call_eval) end,1786 _llint_slow_path_call_eval_wide ,1818 macro () dispatchOp(wide16, op_call_eval) end, 1819 _llint_slow_path_call_eval_wide16, 1787 1820 prepareForRegularCall) 1788 1821 1789 _llint_generic_return_point: 1790 dispatchAfterCall(narrow, OpCallEval, macro () 1791 dispatchOp(narrow, op_call_eval) 1822 _llint_op_call_eval_wide32: 1823 slowPathForCall( 1824 wide32, 1825 OpCallEval, 1826 macro () dispatchOp(wide32, op_call_eval) end, 1827 _llint_slow_path_call_eval_wide32, 1828 prepareForRegularCall) 1829 1830 1831 commonOp(llint_generic_return_point, macro () end, macro (size) 1832 dispatchAfterCall(size, OpCallEval, macro () 1833 dispatchOp(size, op_call_eval) 1792 1834 end) 1793 1794 _llint_generic_return_point_wide: 1795 dispatchAfterCall(wide, OpCallEval, macro() 1796 dispatchOp(wide, op_call_eval) 1797 end) 1835 end) 1836 
1798 1837 1799 1838 llintOp(op_identity_with_profile, OpIdentityWithProfile, macro (unused, unused, dispatch) -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
r239940 r245906 250 250 if (UNLIKELY(isInitializationPass)) { 251 251 Opcode* opcodeMap = LLInt::opcodeMap(); 252 Opcode* opcodeMapWide = LLInt::opcodeMapWide(); 252 Opcode* opcodeMapWide16 = LLInt::opcodeMapWide16(); 253 Opcode* opcodeMapWide32 = LLInt::opcodeMapWide32(); 253 254 254 255 #if ENABLE(COMPUTED_GOTO_OPCODES) 255 256 #define OPCODE_ENTRY(__opcode, length) \ 256 257 opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode); \ 257 opcodeMapWide[__opcode] = bitwise_cast<void*>(&&__opcode##_wide); 258 opcodeMapWide16[__opcode] = bitwise_cast<void*>(&&__opcode##_wide16); \ 259 opcodeMapWide32[__opcode] = bitwise_cast<void*>(&&__opcode##_wide32); 258 260 259 261 #define LLINT_OPCODE_ENTRY(__opcode, length) \ … … 264 266 #define OPCODE_ENTRY(__opcode, length) \ 265 267 opcodeMap[__opcode] = __opcode; \ 266 opcodeMapWide[__opcode] = static_cast<OpcodeID>(__opcode##_wide); 268 opcodeMapWide16[__opcode] = static_cast<OpcodeID>(__opcode##_wide16); \ 269 opcodeMapWide32[__opcode] = static_cast<OpcodeID>(__opcode##_wide32); 267 270 268 271 #define LLINT_OPCODE_ENTRY(__opcode, length) \ … … 286 289 287 290 // Define the pseudo registers used by the LLINT C Loop backend: 288 ASSERT(sizeof(CLoopRegister) == sizeof(intptr_t));291 static_assert(sizeof(CLoopRegister) == sizeof(intptr_t)); 289 292 290 293 // The CLoop llint backend is initially based on the ARMv7 backend, and -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
r245658 r245906 30 30 end 31 31 32 macro nextInstructionWide() 32 macro nextInstructionWide16() 33 loadh 1[PC], t0 34 leap _g_opcodeMapWide16, t1 35 jmp [t1, t0, 4], BytecodePtrTag 36 end 37 38 macro nextInstructionWide32() 33 39 loadi 1[PC], t0 34 leap _g_opcodeMapWide , t140 leap _g_opcodeMapWide32, t1 35 41 jmp [t1, t0, 4], BytecodePtrTag 36 42 end … … 41 47 42 48 macro getOperandNarrow(opcodeStruct, fieldName, dst) 43 loadbsp constexpr %opcodeStruct%_%fieldName%_index[PC], dst 44 end 45 46 macro getuOperandWide(opcodeStruct, fieldName, dst) 49 loadbsi constexpr %opcodeStruct%_%fieldName%_index[PC], dst 50 end 51 52 macro getuOperandWide16(opcodeStruct, fieldName, dst) 53 loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst 54 end 55 56 macro getOperandWide16(opcodeStruct, fieldName, dst) 57 loadhsi constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst 58 end 59 60 macro getuOperandWide32(opcodeStruct, fieldName, dst) 47 61 loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst 48 62 end 49 63 50 macro getOperandWide (opcodeStruct, fieldName, dst)64 macro getOperandWide32(opcodeStruct, fieldName, dst) 51 65 loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst 52 66 end … … 97 111 call function 98 112 addp 16, sp 99 elsif C_LOOP 113 elsif C_LOOP or C_LOOP_WIN 100 114 cloopCallSlowPath function, a0, a1 101 115 else … … 105 119 106 120 macro cCall2Void(function) 107 if C_LOOP 121 if C_LOOP or C_LOOP_WIN 108 122 cloopCallSlowPathVoid function, a0, a1 109 123 else … … 122 136 call function 123 137 addp 16, sp 124 elsif C_LOOP 138 elsif C_LOOP or C_LOOP_WIN 125 139 error 126 140 else … … 191 205 # and the frame for the JS code we're executing. We need to do this check 192 206 # before we start copying the args from the protoCallFrame below. 
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
         move entry, t4
…
     addp CallerFrameAndPCSize, sp
     checkStackPointerAlignment(temp, 0xbad0dc02)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallJSFunction entry
     else
…
     move entry, temp1
     storep cfr, [sp]
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         move sp, a0
         storep lr, PtrSize[sp]
…
 # changed.
 macro loadConstantOrVariable(size, index, tag, payload)
-    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
         bigteq index, FirstConstantRegisterIndex, .constant
         loadi TagOffset[cfr, index, 8], tag
…
 macro loadConstantOrVariableTag(size, index, tag)
-    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
         bigteq index, FirstConstantRegisterIndex, .constant
         loadi TagOffset[cfr, index, 8], tag
…
 # Index and payload may be the same register. Index may be clobbered.
 macro loadConstantOrVariable2Reg(size, index, tag, payload)
-    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
         bigteq index, FirstConstantRegisterIndex, .constant
         loadi TagOffset[cfr, index, 8], tag
…
 macro loadConstantOrVariablePayloadTagCustom(size, index, tagCheck, payload)
-    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
+    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
         bigteq index, FirstConstantRegisterIndex, .constant
         tagCheck(TagOffset[cfr, index, 8])
…
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     addp 8, sp
-elsif ARMv7 or C_LOOP or MIPS
+elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
     if MIPS
     # calling convention says to save stack space for 4 first registers in
…
     loadp JSFunction::m_executable[t1], t1
     checkStackPointerAlignment(t3, 0xdead0001)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallNative executableOffsetToFunction[t1]
     else
…
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     addp 8, sp
-elsif ARMv7 or C_LOOP or MIPS
+elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
     subp 8, sp # align stack pointer
     # t1 already contains the Callee.
…
     loadi Callee + PayloadOffset[cfr], t1
     checkStackPointerAlignment(t3, 0xdead0001)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallNative offsetOfFunction[t1]
     else
-
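The loadConstantOrVariable family above all encode one rule: operand indices at or above a per-width FirstConstantRegisterIndex name entries in the CodeBlock's constant pool rather than call-frame registers. A hedged C++ sketch of that selection, with illustrative names and a made-up threshold (the real thresholds are the Narrow/Wide16/Wide32 constants picked by the `size` macro):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative threshold; not the real JSC value.
constexpr int32_t firstConstantRegisterIndex = 16;

struct Frame {
    std::vector<int64_t> registers;  // stack slots on the call frame
    std::vector<int64_t> constants;  // stand-in for CodeBlock::m_constantRegisters
};

inline int64_t loadConstantOrVariable(const Frame& frame, int32_t index)
{
    // Indices at or above the threshold name constants: rebase the index
    // and read the constant pool; otherwise read a call-frame register.
    if (index >= firstConstantRegisterIndex)
        return frame.constants[index - firstConstantRegisterIndex];
    return frame.registers[index];
}

// Hypothetical demo frame: two locals and one constant.
inline int64_t demoLoad(int32_t index)
{
    Frame frame{{10, 20}, {42}};
    return loadConstantOrVariable(frame, index);
}
```

A smaller operand width shrinks the representable index range, which is why each width needs its own FirstConstantRegisterIndex boundary.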
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r245658 r245906

 end

-macro nextInstructionWide()
+macro nextInstructionWide16()
+    loadh 1[PB, PC, 1], t0
+    leap _g_opcodeMapWide16, t1
+    jmp [t1, t0, PtrSize], BytecodePtrTag
+end
+
+macro nextInstructionWide32()
     loadi 1[PB, PC, 1], t0
-    leap _g_opcodeMapWide, t1
+    leap _g_opcodeMapWide32, t1
     jmp [t1, t0, PtrSize], BytecodePtrTag
 end
…
 macro getOperandNarrow(opcodeStruct, fieldName, dst)
-    loadbsp constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
+    loadbsq constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
 end

-macro getuOperandWide(opcodeStruct, fieldName, dst)
+macro getuOperandWide16(opcodeStruct, fieldName, dst)
+    loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
+end
+
+macro getOperandWide16(opcodeStruct, fieldName, dst)
+    loadhsq constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
+end
+
+macro getuOperandWide32(opcodeStruct, fieldName, dst)
     loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
 end

-macro getOperandWide(opcodeStruct, fieldName, dst)
+macro getOperandWide32(opcodeStruct, fieldName, dst)
     loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
 end
…
     move 8[r0], r1
     move [r0], r0
-elsif C_LOOP
+elsif C_LOOP or C_LOOP_WIN
     cloopCallSlowPath function, a0, a1
 else
…
 macro cCall2Void(function)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallSlowPathVoid function, a0, a1
     elsif X86_64_WIN
…
 # and the frame for the JS code we're executing. We need to do this check
 # before we start copying the args from the protoCallFrame below.
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
         move entry, t4
…
 macro makeJavaScriptCall(entry, temp, unused)
     addp 16, sp
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallJSFunction entry
     else
…
     storep cfr, [sp]
     move sp, a0
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         storep lr, 8[sp]
         cloopCallNative temp
…
 macro uncage(basePtr, mask, ptr, scratchOrLength)
-    if GIGACAGE_ENABLED and not C_LOOP
+    if GIGACAGE_ENABLED and not (C_LOOP or C_LOOP_WIN)
         loadp basePtr, scratchOrLength
         btpz scratchOrLength, .done
…
 end

-    macro loadWide()
-        bpgteq index, FirstConstantRegisterIndexWide, .constant
+    macro loadWide16()
+        bpgteq index, FirstConstantRegisterIndexWide16, .constant
         loadq [cfr, index, 8], value
         jmp .done
     .constant:
         loadp CodeBlock[cfr], value
         loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
-        subp FirstConstantRegisterIndexWide, index
+        loadq -(FirstConstantRegisterIndexWide16 * 8)[value, index, 8], value
+    .done:
+    end
+
+    macro loadWide32()
+        bpgteq index, FirstConstantRegisterIndexWide32, .constant
+        loadq [cfr, index, 8], value
+        jmp .done
+    .constant:
+        loadp CodeBlock[cfr], value
+        loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
+        subp FirstConstantRegisterIndexWide32, index
         loadq [value, index, 8], value
     .done:
     end

-    size(loadNarrow, loadWide, macro (load) load() end)
+    size(loadNarrow, loadWide16, loadWide32, macro (load) load() end)
 end
…
     # We have Int8ArrayType.
-    loadbs [t3, t1], t0
+    loadbsi [t3, t1], t0
     finishIntGetByVal(t0, t1)
…
     # We have Int16ArrayType.
-    loadhs [t3, t1, 2], t0
+    loadhsi [t3, t1, 2], t0
     finishIntGetByVal(t0, t1)
…
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
-    if ARM64 or ARM64E or C_LOOP
+    if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
         storep lr, ReturnPC[cfr]
     end
…
     loadp JSFunction::m_executable[t1], t1
     checkStackPointerAlignment(t3, 0xdead0001)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallNative executableOffsetToFunction[t1]
     else
…
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
-    if ARM64 or ARM64E or C_LOOP
+    if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
         storep lr, ReturnPC[cfr]
     end
…
     loadp Callee[cfr], t1
     checkStackPointerAlignment(t3, 0xdead0001)
-    if C_LOOP
+    if C_LOOP or C_LOOP_WIN
         cloopCallNative offsetOfFunction[t1]
     else
-
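The renamed load instructions in this changeset spell out their destination width: "b"/"h" is a byte/half source, "s" means sign-extend, and the trailing "i"/"q" picks a 32- or 64-bit destination. The helpers below mirror that semantic in plain C++ (the function names reuse the offlineasm mnemonics for readability; they are sketches, not the generated code, and the halfword reads assume a little-endian host).

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Sign-extending loads: narrow source, explicitly sized destination.
inline int32_t loadbsi(const uint8_t* p) { return int32_t(int8_t(*p)); }
inline int64_t loadbsq(const uint8_t* p) { return int64_t(int8_t(*p)); }

inline int32_t loadhsi(const uint8_t* p)
{
    int16_t v;
    std::memcpy(&v, p, sizeof(v)); // unaligned-safe halfword read
    return v;
}

inline int64_t loadhsq(const uint8_t* p)
{
    int16_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

// Demo bytes: 0xFE is -2 as an int8; 0xFE 0xFF is -2 as a little-endian int16.
inline const uint8_t* demoBytes()
{
    static const uint8_t bytes[2] = { 0xFE, 0xFF };
    return bytes;
}
```

The old `loadbsp`/pointer-width variants are gone; making the width explicit is what lets the 64-bit interpreter use `loadbsq`/`loadhsq` for operand decoding while the 32-bit one uses `loadbsi`/`loadhsi`.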
trunk/Source/JavaScriptCore/offlineasm/arm.rb
r239867 r245906

     when "loadb"
         $asm.puts "ldrb #{armFlippedOperands(operands)}"
-    when "loadbs", "loadbsp"
+    when "loadbsi"
         $asm.puts "ldrsb.w #{armFlippedOperands(operands)}"
     when "storeb"
…
     when "loadh"
         $asm.puts "ldrh #{armFlippedOperands(operands)}"
-    when "loadhs"
+    when "loadhsi"
         $asm.puts "ldrsh.w #{armFlippedOperands(operands)}"
     when "storeh"
-
trunk/Source/JavaScriptCore/offlineasm/arm64.rb
r245064 r245906

     if node.is_a? Instruction
         case node.opcode
-        when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbs", "loadh", "loadhs", "leap"
+        when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbsi", "loadbsq", "loadh", "loadhsi", "loadhsq", "leap"
             labelRef = node.operands[0]
             if labelRef.is_a? LabelReference
…
     | node, address |
         case node.opcode
-        when "loadb", "loadbs", "loadbsp", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
+        when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
             size = 1
-        when "loadh", "loadhs"
+        when "loadh", "loadhsi", "loadhsq"
             size = 2
         when "loadi", "loadis", "storei", "addi", "andi", "lshifti", "muli", "negi",
…
     when "loadb"
         emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
-    when "loadbs"
+    when "loadbsi"
         emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
-    when "loadbsp"
-        emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :ptr)
+    when "loadbsq"
+        emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :quad)
     when "storeb"
         emitARM64Unflipped("strb", operands, :word)
     when "loadh"
         emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
-    when "loadhs"
+    when "loadhsi"
         emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
+    when "loadhsq"
+        emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :quad)
     when "storeh"
         emitARM64Unflipped("strh", operands, :word)
-
trunk/Source/JavaScriptCore/offlineasm/asm.rb
r237803 r245906

 # always by itself so this check to turn off $enableDebugAnnotations won't
 # affect the generation for any other backend.
-    if backend == "C_LOOP"
+    if backend == "C_LOOP" || backend == "C_LOOP_WIN"
         $enableDebugAnnotations = false
     end
-
trunk/Source/JavaScriptCore/offlineasm/backends.rb
r238439 r245906

     "ARM64E",
     "MIPS",
-    "C_LOOP"
+    "C_LOOP",
+    "C_LOOP_WIN"
 ]
…
     "ARM64E",
     "MIPS",
-    "C_LOOP"
+    "C_LOOP",
+    "C_LOOP_WIN"
 ]
-
trunk/Source/JavaScriptCore/offlineasm/cloop.rb
r242240 r245906

     when "loadb"
         $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint8MemRef};"
-    when "loadbs"
-        $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int8MemRef});"
-    when "loadbsp"
-        $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].int8MemRef};"
+    when "loadbsi"
+        $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int8MemRef});"
+    when "loadbsq"
+        $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int8MemRef};"
     when "storeb"
         $asm.putc "#{operands[1].uint8MemRef} = #{operands[0].clValue(:int8)};"
     when "loadh"
         $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint16MemRef};"
-    when "loadhs"
-        $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int16MemRef});"
+    when "loadhsi"
+        $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int16MemRef});"
+    when "loadhsq"
+        $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int16MemRef};"
     when "storeh"
         $asm.putc "*#{operands[1].uint16MemRef} = #{operands[0].clValue(:int16)};"
…
 end

+def lowerC_LOOP_WIN
+    lowerC_LOOP
+end
+
 def recordMetaDataC_LOOP
     $asm.codeOrigin codeOriginString if $enableCodeOriginComments
-
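The double cast in the new loadbsi lowering is doing real work: the inner `(int32_t)` cast sign-extends the narrow value, and the outer `(uint32_t)` keeps the destination register's C type unsigned without changing the bit pattern. A small plain-C++ check of that behavior (a mirror of the shape of the generated statement, not the generated code itself):

```cpp
#include <cassert>
#include <cstdint>

// Shape of the CLoop lowering for loadbsi: sign-extend an int8_t to 32 bits,
// then reinterpret the result as the unsigned register type. A byte 0xFF
// therefore fills the whole 32-bit register, as ldrsb/movsbl would.
inline uint32_t loadbsiLowered(int8_t byte)
{
    return (uint32_t)((int32_t)byte);
}
```

Without the inner signed cast, a direct `(uint32_t)` conversion of the raw byte would zero-extend instead, which is exactly the bug class the explicit two-step cast avoids.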
trunk/Source/JavaScriptCore/offlineasm/instructions.rb
r245064 r245906

     "loadis",
     "loadb",
-    "loadbs",
-    "loadbsp",
+    "loadbsi",
+    "loadbsq",
     "loadh",
-    "loadhs",
+    "loadhsi",
+    "loadhsq",
     "storei",
     "storeb",
-
trunk/Source/JavaScriptCore/offlineasm/mips.rb
r240432 r245906

     when "loadb"
         $asm.puts "lbu #{mipsFlippedOperands(operands)}"
-    when "loadbs", "loadbsp"
+    when "loadbsi"
         $asm.puts "lb #{mipsFlippedOperands(operands)}"
     when "storeb"
…
     when "loadh"
         $asm.puts "lhu #{mipsFlippedOperands(operands)}"
-    when "loadhs"
+    when "loadhsi"
         $asm.puts "lh #{mipsFlippedOperands(operands)}"
     when "storeh"
-
trunk/Source/JavaScriptCore/offlineasm/x86.rb
r245064 r245906

         $asm.puts "movzx #{x86LoadOperands(:byte, :int)}"
     end
-    when "loadbs"
+    when "loadbsi"
     if !isIntelSyntax
         $asm.puts "movsbl #{x86LoadOperands(:byte, :int)}"
     else
         $asm.puts "movsx #{x86LoadOperands(:byte, :int)}"
     end
-    when "loadbsp"
+    when "loadbsq"
     if !isIntelSyntax
-        $asm.puts "movsb#{x86Suffix(:ptr)} #{x86LoadOperands(:byte, :ptr)}"
+        $asm.puts "movsbq #{x86LoadOperands(:byte, :quad)}"
     else
-        $asm.puts "movsx #{x86LoadOperands(:byte, :ptr)}"
+        $asm.puts "movsx #{x86LoadOperands(:byte, :quad)}"
     end
     when "loadh"
…
         $asm.puts "movzx #{x86LoadOperands(:half, :int)}"
     end
-    when "loadhs"
+    when "loadhsi"
     if !isIntelSyntax
         $asm.puts "movswl #{x86LoadOperands(:half, :int)}"
     else
         $asm.puts "movsx #{x86LoadOperands(:half, :int)}"
     end
+    when "loadhsq"
+    if !isIntelSyntax
+        $asm.puts "movswq #{x86LoadOperands(:half, :quad)}"
+    else
+        $asm.puts "movsx #{x86LoadOperands(:half, :quad)}"
+    end
     when "storeb"
-
trunk/Source/JavaScriptCore/parser/ResultType.h
r238778 r245906

     OperandTypes(ResultType first = ResultType::unknownType(), ResultType second = ResultType::unknownType())
     {
-        // We have to initialize one of the int to ensure that
-        // the entire struct is initialized.
-        m_u.i = 0;
-        m_u.rds.first = first.m_bits;
-        m_u.rds.second = second.m_bits;
-    }
-
-    union {
-        struct {
-            ResultType::Type first;
-            ResultType::Type second;
-        } rds;
-        int i;
-    } m_u;
+        m_first = first.m_bits;
+        m_second = second.m_bits;
+    }
+
+    ResultType::Type m_first;
+    ResultType::Type m_second;

     ResultType first() const
     {
-        return ResultType(m_u.rds.first);
+        return ResultType(m_first);
     }

     ResultType second() const
     {
-        return ResultType(m_u.rds.second);
-    }
-
-    int toInt()
-    {
-        return m_u.i;
-    }
-    static OperandTypes fromInt(int value)
-    {
-        OperandTypes types;
-        types.m_u.i = value;
-        return types;
+        return ResultType(m_second);
+    }
+
+    uint16_t bits()
+    {
+        static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
+        return bitwise_cast<uint16_t>(*this);
+    }
+
+    static OperandTypes fromBits(uint16_t bits)
+    {
+        return bitwise_cast<OperandTypes>(bits);
     }
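The OperandTypes change above replaces a union-plus-int encoding with two plain one-byte fields that round-trip through a uint16_t, so the pair now fits a 16-bit bytecode operand. A sketch of the same bit-for-bit round-trip, with a stand-in `Type` for ResultType::Type and `std::memcpy` standing in for WTF's bitwise_cast:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Stand-in for ResultType::Type: one byte summarizing an operand's type.
using Type = uint8_t;

struct OperandTypes {
    Type first;
    Type second;

    uint16_t bits() const
    {
        static_assert(sizeof(OperandTypes) == sizeof(uint16_t), "must pack exactly");
        uint16_t b;
        std::memcpy(&b, this, sizeof(b)); // bitwise_cast equivalent
        return b;
    }

    static OperandTypes fromBits(uint16_t b)
    {
        OperandTypes t;
        std::memcpy(&t, &b, sizeof(t));
        return t;
    }

    // Check that a pair survives the uint16_t round-trip unchanged.
    static bool roundTrips(Type a, Type b)
    {
        OperandTypes t = fromBits(OperandTypes{a, b}.bits());
        return t.first == a && t.second == b;
    }
};
```

The bitwise round-trip works only because the struct has no padding, which is what the static_assert in both the real patch and this sketch pins down.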