Changeset 245906 in webkit


Timestamp:
May 30, 2019 2:40:35 PM
Author:
ysuzuki@apple.com
Message:

[JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
https://bugs.webkit.org/show_bug.cgi?id=197979

Reviewed by Filip Pizlo.

JSTests:

  • stress/16bit-code.js: Added.

(shouldBe):

  • stress/32bit-code.js: Added.

(shouldBe):

Source/JavaScriptCore:

This patch introduces a 16-bit bytecode size. Previously we had two bytecode widths, 8-bit and 32-bit. However, in Gmail we found that many bytecodes end up 32-bit because their operands do not fit in 8 bits: 8 bits is very small, and a large function easily emits many 32-bit bytecodes because of large VirtualRegister numbers and the like, even though the operands almost always fit in 16 bits. With a 16-bit version of each bytecode, we can turn most of the current 32-bit bytecodes into 16-bit ones and save memory.

We rename op_wide to op_wide32 and introduce op_wide16. The mechanism is the same as the old op_wide: when we encounter op_wide16, the following bytecode's data is 16-bit, and LLInt executes the 16-bit version of that bytecode.

We also disable the op_wide16 feature in the Windows CLoop, which is used by the AppleWin port. When the code size of CLoop::execute grows, MSVC starts generating the function with a very large stack-allocation requirement. Even before this 16-bit bytecode, CLoop::execute in AppleWin required almost 100KB of stack; after introducing it, that grows to 160KB. While the semantics of the function are compiled correctly, such a large stack allocation is not essentially necessary, and it leads to stack overflows quite easily: tests fail on the AppleWin port because it starts throwing stack-overflow range errors in various places. For now, this patch simply disables the op_wide16 feature for AppleWin so that CLoop::execute stays at a 100KB stack allocation, since the patch is not focused on fixing AppleWin's CLoop issue. We introduce a new LLInt backend type, "C_LOOP_WIN", which does not generate the wide16 version of the code, reducing the code size of CLoop::execute. In the future we should investigate whether this MSVC issue is fixed in Visual Studio 2019, or consider always enabling the ASM LLInt for Windows.

This patch improves Gmail's memory usage by at least 7MB.

  • CMakeLists.txt:
  • bytecode/BytecodeConventions.h:
  • bytecode/BytecodeDumper.cpp:

(JSC::BytecodeDumper<Block>::dumpBlock):

  • bytecode/BytecodeList.rb:
  • bytecode/BytecodeRewriter.h:

(JSC::BytecodeRewriter::Fragment::align):

  • bytecode/BytecodeUseDef.h:

(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):

  • bytecode/CodeBlock.cpp:

(JSC::CodeBlock::finishCreation):

  • bytecode/CodeBlock.h:

(JSC::CodeBlock::metadataTable const):

  • bytecode/Fits.h:
  • bytecode/Instruction.h:

(JSC::Instruction::opcodeID const):
(JSC::Instruction::isWide16 const):
(JSC::Instruction::isWide32 const):
(JSC::Instruction::hasMetadata const):
(JSC::Instruction::sizeShiftAmount const):
(JSC::Instruction::size const):
(JSC::Instruction::wide16 const):
(JSC::Instruction::wide32 const):
(JSC::Instruction::isWide const): Deleted.
(JSC::Instruction::wide const): Deleted.

  • bytecode/InstructionStream.h:

(JSC::InstructionStreamWriter::write):

  • bytecode/Opcode.h:
  • bytecode/OpcodeSize.h:
  • bytecompiler/BytecodeGenerator.cpp:

(JSC::BytecodeGenerator::alignWideOpcode16):
(JSC::BytecodeGenerator::alignWideOpcode32):
(JSC::BytecodeGenerator::emitGetByVal): Previously, we always emitted 32-bit op_get_by_val for bytecodes in a for-in context because its operand can be replaced with another VirtualRegister later. But if we know a priori that the replacement VirtualRegister fits in 8 or 16 bits, we should not emit the 32-bit version. We expose OpXXX::checkWithoutMetadataID to check whether we could potentially compact
the bytecode for the given operands.

(JSC::BytecodeGenerator::emitYieldPoint):
(JSC::StructureForInContext::finalize):
(JSC::BytecodeGenerator::alignWideOpcode): Deleted.

  • bytecompiler/BytecodeGenerator.h:

(JSC::BytecodeGenerator::write):

  • dfg/DFGCapabilities.cpp:

(JSC::DFG::capabilityLevel):

  • generator/Argument.rb:
  • generator/DSL.rb:
  • generator/Metadata.rb:
  • generator/Opcode.rb: A little bit weird, but checkImpl's argument must be a reference. We rely on the fact that BoundLabel is modified once during this check phase, and the modified BoundLabel is then used when emitting the code. If checkImpl copied the passed BoundLabel, that modification would be discarded inside checkImpl and code generation would break.

  • generator/Section.rb:
  • jit/JITExceptions.cpp:

(JSC::genericUnwind):

  • llint/LLIntData.cpp:

(JSC::LLInt::initialize):

  • llint/LLIntData.h:

(JSC::LLInt::opcodeMapWide16):
(JSC::LLInt::opcodeMapWide32):
(JSC::LLInt::getOpcodeWide16):
(JSC::LLInt::getOpcodeWide32):
(JSC::LLInt::getWide16CodePtr):
(JSC::LLInt::getWide32CodePtr):
(JSC::LLInt::opcodeMapWide): Deleted.
(JSC::LLInt::getOpcodeWide): Deleted.
(JSC::LLInt::getWideCodePtr): Deleted.

  • llint/LLIntOfflineAsmConfig.h:
  • llint/LLIntSlowPaths.cpp:

(JSC::LLInt::LLINT_SLOW_PATH_DECL):

  • llint/LLIntSlowPaths.h:
  • llint/LowLevelInterpreter.asm:
  • llint/LowLevelInterpreter.cpp:

(JSC::CLoop::execute):

  • llint/LowLevelInterpreter32_64.asm:
  • llint/LowLevelInterpreter64.asm:
  • offlineasm/arm.rb:
  • offlineasm/arm64.rb:
  • offlineasm/asm.rb:
  • offlineasm/backends.rb:
  • offlineasm/cloop.rb:
  • offlineasm/instructions.rb:
  • offlineasm/mips.rb:
  • offlineasm/x86.rb: A load operation with sign extension should also carry the extended-size information. For example, loadbs should be converted to loadbsi for 32-bit sign extension (and loadbsq for 64-bit sign extension). Use loadbsq / loadhsq for loading VirtualRegister information in LowLevelInterpreter64, since the results are used for pointer arithmetic at machine-register width.

  • parser/ResultType.h:

(JSC::OperandTypes::OperandTypes):
(JSC::OperandTypes::first const):
(JSC::OperandTypes::second const):
(JSC::OperandTypes::bits):
(JSC::OperandTypes::fromBits):
(): Deleted.
(JSC::OperandTypes::toInt): Deleted.
(JSC::OperandTypes::fromInt): Deleted.
We reduce sizeof(OperandTypes) from unsigned to uint16_t, which guarantees that OperandTypes always fits in a 16-bit bytecode.

Location:
trunk
Files:
2 added
42 edited

  • trunk/JSTests/ChangeLog

    r245895 r245906  
     12019-05-30  Tadeu Zagallo  <tzagallo@apple.com> and Yusuke Suzuki  <ysuzuki@apple.com>
     2
     3        [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
     4        https://bugs.webkit.org/show_bug.cgi?id=197979
     5
     6        Reviewed by Filip Pizlo.
     7
     8        * stress/16bit-code.js: Added.
     9        (shouldBe):
     10        * stress/32bit-code.js: Added.
     11        (shouldBe):
     12
    1132019-05-30  Justin Michaud  <justin_michaud@apple.com>
    214
  • trunk/Source/JavaScriptCore/CMakeLists.txt

    r245492 r245906  
    237237
    238238if (WIN32)
    239   set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP")
     239    set(OFFLINE_ASM_BACKEND "X86_WIN, X86_64_WIN, C_LOOP_WIN")
    240240else ()
    241241    if (WTF_CPU_X86)
  • trunk/Source/JavaScriptCore/ChangeLog

    r245895 r245906  
     12019-05-30  Tadeu Zagallo  <tzagallo@apple.com> and Yusuke Suzuki  <ysuzuki@apple.com>
     2
     3        [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
     4        https://bugs.webkit.org/show_bug.cgi?id=197979
     5
     6        Reviewed by Filip Pizlo.
     7
     8        This patch introduces 16bit bytecode size. Previously, we had two versions of bytecodes, 8bit and 32bit. However,
     9        in Gmail, we found that a lot of bytecodes get 32bit because they do not fit in 8bit. 8bit is very small and large
     10        function easily emits a lot of 32bit bytecodes because of large VirtualRegister number etc. But they almost always
     11        fit in 16bit. If we can have 16bit version of bytecode, we can make most of the current 32bit bytecodes 16bit and
     12        save memory.
     13
     14        We rename rename op_wide to op_wide32 and introduce op_wide16. The mechanism is similar to old op_wide. When we
     15        get op_wide16, the following bytecode data is 16bit, and we execute 16bit version of bytecode in LLInt.
     16
     17        We also disable this op_wide16 feature in Windows CLoop, which is used in AppleWin port. When the code size of
     18        CLoop::execute increases, MSVC starts generating CLoop::execute function with very large stack allocation
     19        requirement. Even before introducing this 16bit bytecode, CLoop::execute in AppleWin takes almost 100KB stack
     20        height. After introducing this, it becomes 160KB. While the semantics of the function is correctly compiled,
     21        such a large stack allocation is not essentially necessary, and this leads to stack overflow errors quite easily,
     22        and tests fail with AppleWin port because it starts throwing stack overflow range error in various places.
     23        In this patch, for now, we just disable op_wide16 feature for AppleWin so that CLoop::execute takes 100KB
     24        stack allocation because this patch is not focusing on fixing AppleWin's CLoop issue. We introduce a new backend
     25        type for LLInt, "C_LOOP_WIN". "C_LOOP_WIN" do not generate wide16 version of code to reduce the code size of
     26        CLoop::execute. In the future, we should investigate whether this MSVC issue is fixed in Visual Studio 2019.
     27        Or we should consider always enabling ASM LLInt for Windows.
     28
     29        This patch improves Gmail by 7MB at least.
     30
     31        * CMakeLists.txt:
     32        * bytecode/BytecodeConventions.h:
     33        * bytecode/BytecodeDumper.cpp:
     34        (JSC::BytecodeDumper<Block>::dumpBlock):
     35        * bytecode/BytecodeList.rb:
     36        * bytecode/BytecodeRewriter.h:
     37        (JSC::BytecodeRewriter::Fragment::align):
     38        * bytecode/BytecodeUseDef.h:
     39        (JSC::computeUsesForBytecodeOffset):
     40        (JSC::computeDefsForBytecodeOffset):
     41        * bytecode/CodeBlock.cpp:
     42        (JSC::CodeBlock::finishCreation):
     43        * bytecode/CodeBlock.h:
     44        (JSC::CodeBlock::metadataTable const):
     45        * bytecode/Fits.h:
     46        * bytecode/Instruction.h:
     47        (JSC::Instruction::opcodeID const):
     48        (JSC::Instruction::isWide16 const):
     49        (JSC::Instruction::isWide32 const):
     50        (JSC::Instruction::hasMetadata const):
     51        (JSC::Instruction::sizeShiftAmount const):
     52        (JSC::Instruction::size const):
     53        (JSC::Instruction::wide16 const):
     54        (JSC::Instruction::wide32 const):
     55        (JSC::Instruction::isWide const): Deleted.
     56        (JSC::Instruction::wide const): Deleted.
     57        * bytecode/InstructionStream.h:
     58        (JSC::InstructionStreamWriter::write):
     59        * bytecode/Opcode.h:
     60        * bytecode/OpcodeSize.h:
     61        * bytecompiler/BytecodeGenerator.cpp:
     62        (JSC::BytecodeGenerator::alignWideOpcode16):
     63        (JSC::BytecodeGenerator::alignWideOpcode32):
     64        (JSC::BytecodeGenerator::emitGetByVal): Previously, we always emit 32bit op_get_by_val for bytecodes in `for-in` context because
     65        its operand can be replaced to the other VirtualRegister later. But if we know that replacing VirtualRegister can fit in 8bit / 16bit
     66        a-priori, we should not emit 32bit version. We expose OpXXX::checkWithoutMetadataID to check whether we could potentially compact
     67        the bytecode for the given operands.
     68
     69        (JSC::BytecodeGenerator::emitYieldPoint):
     70        (JSC::StructureForInContext::finalize):
     71        (JSC::BytecodeGenerator::alignWideOpcode): Deleted.
     72        * bytecompiler/BytecodeGenerator.h:
     73        (JSC::BytecodeGenerator::write):
     74        * dfg/DFGCapabilities.cpp:
     75        (JSC::DFG::capabilityLevel):
     76        * generator/Argument.rb:
     77        * generator/DSL.rb:
     78        * generator/Metadata.rb:
     79        * generator/Opcode.rb: A little bit weird but checkImpl's argument must be reference. We are relying on that BoundLabel is once modified in
     80        this check phase, and the modified BoundLabel will be used when emitting the code. If checkImpl copies the passed BoundLabel, this modification
     81        will be discarded in this checkImpl function and make the code generation broken.
     82
     83        * generator/Section.rb:
     84        * jit/JITExceptions.cpp:
     85        (JSC::genericUnwind):
     86        * llint/LLIntData.cpp:
     87        (JSC::LLInt::initialize):
     88        * llint/LLIntData.h:
     89        (JSC::LLInt::opcodeMapWide16):
     90        (JSC::LLInt::opcodeMapWide32):
     91        (JSC::LLInt::getOpcodeWide16):
     92        (JSC::LLInt::getOpcodeWide32):
     93        (JSC::LLInt::getWide16CodePtr):
     94        (JSC::LLInt::getWide32CodePtr):
     95        (JSC::LLInt::opcodeMapWide): Deleted.
     96        (JSC::LLInt::getOpcodeWide): Deleted.
     97        (JSC::LLInt::getWideCodePtr): Deleted.
     98        * llint/LLIntOfflineAsmConfig.h:
     99        * llint/LLIntSlowPaths.cpp:
     100        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
     101        * llint/LLIntSlowPaths.h:
     102        * llint/LowLevelInterpreter.asm:
     103        * llint/LowLevelInterpreter.cpp:
     104        (JSC::CLoop::execute):
     105        * llint/LowLevelInterpreter32_64.asm:
     106        * llint/LowLevelInterpreter64.asm:
     107        * offlineasm/arm.rb:
     108        * offlineasm/arm64.rb:
     109        * offlineasm/asm.rb:
     110        * offlineasm/backends.rb:
     111        * offlineasm/cloop.rb:
     112        * offlineasm/instructions.rb:
     113        * offlineasm/mips.rb:
     114        * offlineasm/x86.rb: Load operation with sign extension should also have the extended size information. For example, loadbs should be
     115        converted to loadbsi for 32bit sign extension (and loadbsq for 64bit sign extension). And use loadbsq / loadhsq for loading VirtualRegister
     116        information in LowLevelInterpreter64 since they will be used for pointer arithmetic and they are using machine register width.
     117
     118        * parser/ResultType.h:
     119        (JSC::OperandTypes::OperandTypes):
     120        (JSC::OperandTypes::first const):
     121        (JSC::OperandTypes::second const):
     122        (JSC::OperandTypes::bits):
     123        (JSC::OperandTypes::fromBits):
     124        (): Deleted.
     125        (JSC::OperandTypes::toInt): Deleted.
     126        (JSC::OperandTypes::fromInt): Deleted.
     127        We reduce sizeof(OperandTypes) from unsigned to uint16_t, which guarantees that OperandTypes always fit in 16bit bytecode.
     128
    11292019-05-30  Justin Michaud  <justin_michaud@apple.com>
    2130
  • trunk/Source/JavaScriptCore/bytecode/BytecodeConventions.h

    r206525 r245906  
    3030//      0x00000000-0x3FFFFFFF  Forwards indices from the CallFrame pointer are local vars and temporaries with the function's callframe.
    3131//      0x40000000-0x7FFFFFFF  Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock.
    32 static const int FirstConstantRegisterIndex = 0x40000000;
     32static constexpr int FirstConstantRegisterIndex = 0x40000000;
     33
     34static constexpr int FirstConstantRegisterIndex8 = 16;
     35static constexpr int FirstConstantRegisterIndex16 = 64;
     36static constexpr int FirstConstantRegisterIndex32 = FirstConstantRegisterIndex;
  • trunk/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp

    r241104 r245906  
    194194{
    195195    size_t instructionCount = 0;
    196     size_t wideInstructionCount = 0;
     196    size_t wide16InstructionCount = 0;
     197    size_t wide32InstructionCount = 0;
    197198    size_t instructionWithMetadataCount = 0;
    198199
    199200    for (const auto& instruction : instructions) {
    200         if (instruction->isWide())
    201             ++wideInstructionCount;
    202         if (instruction->opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA)
     201        if (instruction->isWide16())
     202            ++wide16InstructionCount;
     203        else if (instruction->isWide32())
     204            ++wide32InstructionCount;
     205        if (instruction->hasMetadata())
    203206            ++instructionWithMetadataCount;
    204207        ++instructionCount;
     
    207210    out.print(*block);
    208211    out.printf(
    209         ": %lu instructions (%lu wide instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
     212        ": %lu instructions (%lu 16-bit instructions, %lu 32-bit instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)",
    210213        static_cast<unsigned long>(instructionCount),
    211         static_cast<unsigned long>(wideInstructionCount),
     214        static_cast<unsigned long>(wide16InstructionCount),
     215        static_cast<unsigned long>(wide32InstructionCount),
    212216        static_cast<unsigned long>(instructionWithMetadataCount),
    213217        static_cast<unsigned long>(instructions.sizeInBytes() + block->metadataSizeInBytes()),
  • trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb

    r245658 r245906  
    8383    op_prefix: "op_"
    8484
    85 op :wide
     85op :wide16
     86op :wide32
    8687
    8788op :enter
     
    11411142op :llint_cloop_did_return_from_js_22
    11421143op :llint_cloop_did_return_from_js_23
     1144op :llint_cloop_did_return_from_js_24
     1145op :llint_cloop_did_return_from_js_25
     1146op :llint_cloop_did_return_from_js_26
     1147op :llint_cloop_did_return_from_js_27
     1148op :llint_cloop_did_return_from_js_28
     1149op :llint_cloop_did_return_from_js_29
     1150op :llint_cloop_did_return_from_js_30
     1151op :llint_cloop_did_return_from_js_31
     1152op :llint_cloop_did_return_from_js_32
     1153op :llint_cloop_did_return_from_js_33
     1154op :llint_cloop_did_return_from_js_34
    11431155
    11441156end_section :CLoopHelpers
  • trunk/Source/JavaScriptCore/bytecode/BytecodeRewriter.h

    r237933 r245906  
    162162#if CPU(NEEDS_ALIGNED_ACCESS)
    163163            m_bytecodeGenerator.withWriter(m_writer, [&] {
    164                 while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide)
     164                while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide32)
    165165                    OpNop::emit<OpcodeSize::Narrow>(&m_bytecodeGenerator);
    166166            });
  • trunk/Source/JavaScriptCore/bytecode/BytecodeUseDef.h

    r244088 r245906  
    6969
    7070    switch (opcodeID) {
    71     case op_wide:
     71    case op_wide16:
     72    case op_wide32:
    7273        RELEASE_ASSERT_NOT_REACHED();
    7374
     
    290291{
    291292    switch (opcodeID) {
    292     case op_wide:
     293    case op_wide16:
     294    case op_wide32:
    293295        RELEASE_ASSERT_NOT_REACHED();
    294296
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r245667 r245906  
    446446                HandlerInfo& handler = m_rareData->m_exceptionHandlers[i];
    447447#if ENABLE(JIT)
    448                 MacroAssemblerCodePtr<BytecodePtrTag> codePtr = instructions().at(unlinkedHandler.target)->isWide()
    449                     ? LLInt::getWideCodePtr<BytecodePtrTag>(op_catch)
    450                     : LLInt::getCodePtr<BytecodePtrTag>(op_catch);
     448                auto instruction = instructions().at(unlinkedHandler.target);
     449                MacroAssemblerCodePtr<BytecodePtrTag> codePtr;
     450                if (instruction->isWide32())
     451                    codePtr = LLInt::getWide32CodePtr<BytecodePtrTag>(op_catch);
     452                else if (instruction->isWide16())
     453                    codePtr = LLInt::getWide16CodePtr<BytecodePtrTag>(op_catch);
     454                else
     455                    codePtr = LLInt::getCodePtr<BytecodePtrTag>(op_catch);
    451456                handler.initialize(unlinkedHandler, CodeLocationLabel<ExceptionHandlerPtrTag>(codePtr.retagged<ExceptionHandlerPtrTag>()));
    452457#else
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.h

    r245667 r245906  
    146146    JS_EXPORT_PRIVATE void dump(PrintStream&) const;
    147147
     148    MetadataTable* metadataTable() const { return m_metadata.get(); }
     149
    148150    int numParameters() const { return m_numParameters; }
    149151    void setNumParameters(int newValue);
  • trunk/Source/JavaScriptCore/bytecode/Fits.h

    r245213 r245906  
    5252template<typename T, OpcodeSize size>
    5353struct Fits<T, size, std::enable_if_t<sizeof(T) == size, std::true_type>> {
     54    using TargetType = typename TypeBySize<size>::unsignedType;
     55
    5456    static bool check(T) { return true; }
    5557
    56     static typename TypeBySize<size>::type convert(T t) { return bitwise_cast<typename TypeBySize<size>::type>(t); }
    57 
    58     template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
    59     static T1 convert(typename TypeBySize<size1>::type t) { return bitwise_cast<T1>(t); }
     58    static TargetType convert(T t) { return bitwise_cast<TargetType>(t); }
     59
     60    template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, TargetType>::value, std::true_type>>
     61    static T1 convert(TargetType t) { return bitwise_cast<T1>(t); }
    6062};
    6163
    6264template<typename T, OpcodeSize size>
    63 struct Fits<T, size, std::enable_if_t<sizeof(T) < size, std::true_type>> {
    64     static bool check(T) { return true; }
    65 
    66     static typename TypeBySize<size>::type convert(T t) { return static_cast<typename TypeBySize<size>::type>(t); }
    67 
    68     template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
    69     static T1 convert(typename TypeBySize<size1>::type t) { return static_cast<T1>(t); }
    70 };
     65struct Fits<T, size, std::enable_if_t<std::is_integral<T>::value && sizeof(T) != size && !std::is_same<bool, T>::value, std::true_type>> {
     66    using TargetType = std::conditional_t<std::is_unsigned<T>::value, typename TypeBySize<size>::unsignedType, typename TypeBySize<size>::signedType>;
     67
     68    static bool check(T t)
     69    {
     70        return t >= std::numeric_limits<TargetType>::min() && t <= std::numeric_limits<TargetType>::max();
     71    }
     72
     73    static TargetType convert(T t)
     74    {
     75        ASSERT(check(t));
     76        return static_cast<TargetType>(t);
     77    }
     78
     79    template<class T1 = T, OpcodeSize size1 = size, typename TargetType1 = TargetType, typename = std::enable_if_t<!std::is_same<T1, TargetType1>::value, std::true_type>>
     80    static T1 convert(TargetType1 t) { return static_cast<T1>(t); }
     81};
     82
     83template<OpcodeSize size>
     84struct Fits<bool, size, std::enable_if_t<size != sizeof(bool), std::true_type>> : public Fits<uint8_t, size> {
     85    using Base = Fits<uint8_t, size>;
     86
     87    static bool check(bool e) { return Base::check(static_cast<uint8_t>(e)); }
     88
     89    static typename Base::TargetType convert(bool e)
     90    {
     91        return Base::convert(static_cast<uint8_t>(e));
     92    }
     93
     94    static bool convert(typename Base::TargetType e)
     95    {
     96        return Base::convert(e);
     97    }
     98};
     99
     100template<OpcodeSize size>
     101struct FirstConstant;
    71102
    72103template<>
    73 struct Fits<uint32_t, OpcodeSize::Narrow> {
    74     static bool check(unsigned u) { return u <= UINT8_MAX; }
    75 
    76     static uint8_t convert(unsigned u)
    77     {
    78         ASSERT(check(u));
    79         return static_cast<uint8_t>(u);
    80     }
    81     static unsigned convert(uint8_t u)
    82     {
    83         return u;
    84     }
     104struct FirstConstant<OpcodeSize::Narrow> {
     105    static constexpr int index = FirstConstantRegisterIndex8;
    85106};
    86107
    87108template<>
    88 struct Fits<int, OpcodeSize::Narrow> {
    89     static bool check(int i)
    90     {
    91         return i >= INT8_MIN && i <= INT8_MAX;
    92     }
    93 
    94     static uint8_t convert(int i)
    95     {
    96         ASSERT(check(i));
    97         return static_cast<uint8_t>(i);
    98     }
    99 
    100     static int convert(uint8_t i)
    101     {
    102         return static_cast<int8_t>(i);
    103     }
    104 };
    105 
    106 template<>
    107 struct Fits<VirtualRegister, OpcodeSize::Narrow> {
     109struct FirstConstant<OpcodeSize::Wide16> {
     110    static constexpr int index = FirstConstantRegisterIndex16;
     111};
     112
     113template<OpcodeSize size>
     114struct Fits<VirtualRegister, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
     115    // Narrow:
    108116    // -128..-1  local variables
    109117    //    0..15  arguments
    110118    //   16..127 constants
    111     static constexpr int s_firstConstantIndex = 16;
     119    //
     120    // Wide16:
     121    // -2**15..-1  local variables
     122    //      0..64  arguments
     123    //     64..2**15-1 constants
     124
     125    using TargetType = typename TypeBySize<size>::signedType;
     126
     127    static constexpr int s_firstConstantIndex = FirstConstant<size>::index;
    112128    static bool check(VirtualRegister r)
    113129    {
    114130        if (r.isConstant())
    115             return (s_firstConstantIndex + r.toConstantIndex()) <= INT8_MAX;
    116         return r.offset() >= INT8_MIN && r.offset() < s_firstConstantIndex;
    117     }
    118 
    119     static uint8_t convert(VirtualRegister r)
     131            return (s_firstConstantIndex + r.toConstantIndex()) <= std::numeric_limits<TargetType>::max();
     132        return r.offset() >= std::numeric_limits<TargetType>::min() && r.offset() < s_firstConstantIndex;
     133    }
     134
     135    static TargetType convert(VirtualRegister r)
    120136    {
    121137        ASSERT(check(r));
    122138        if (r.isConstant())
    123             return static_cast<int8_t>(s_firstConstantIndex + r.toConstantIndex());
    124         return static_cast<int8_t>(r.offset());
    125     }
    126 
    127     static VirtualRegister convert(uint8_t u)
    128     {
    129         int i = static_cast<int>(static_cast<int8_t>(u));
     139            return static_cast<TargetType>(s_firstConstantIndex + r.toConstantIndex());
     140        return static_cast<TargetType>(r.offset());
     141    }
     142
     143    static VirtualRegister convert(TargetType u)
     144    {
     145        int i = static_cast<int>(static_cast<TargetType>(u));
    130146        if (i >= s_firstConstantIndex)
    131147            return VirtualRegister { (i - s_firstConstantIndex) + FirstConstantRegisterIndex };
     
    134150};
    135151
    136 template<>
    137 struct Fits<SymbolTableOrScopeDepth, OpcodeSize::Narrow> {
    138     static bool check(SymbolTableOrScopeDepth u)
    139     {
    140         return u.raw() <= UINT8_MAX;
    141     }
    142 
    143     static uint8_t convert(SymbolTableOrScopeDepth u)
    144     {
    145         ASSERT(check(u));
    146         return static_cast<uint8_t>(u.raw());
    147     }
    148 
    149     static SymbolTableOrScopeDepth convert(uint8_t u)
    150     {
    151         return SymbolTableOrScopeDepth::raw(u);
    152     }
    153 };
    154 
    155 template<>
    156 struct Fits<Special::Pointer, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
    157     using Base = Fits<int, OpcodeSize::Narrow>;
    158     static bool check(Special::Pointer sp) { return Base::check(static_cast<int>(sp)); }
    159     static uint8_t convert(Special::Pointer sp)
    160     {
    161         return Base::convert(static_cast<int>(sp));
    162     }
    163     static Special::Pointer convert(uint8_t sp)
    164     {
    165         return static_cast<Special::Pointer>(Base::convert(sp));
    166     }
    167 };
    168 
    169 template<>
    170 struct Fits<GetPutInfo, OpcodeSize::Narrow> {
     152template<OpcodeSize size>
     153struct Fits<SymbolTableOrScopeDepth, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> : public Fits<unsigned, size> {
     154    static_assert(sizeof(SymbolTableOrScopeDepth) == sizeof(unsigned));
     155    using TargetType = typename TypeBySize<size>::unsignedType;
     156    using Base = Fits<unsigned, size>;
     157
     158    static bool check(SymbolTableOrScopeDepth u) { return Base::check(u.raw()); }
     159
     160    static TargetType convert(SymbolTableOrScopeDepth u)
     161    {
     162        return Base::convert(u.raw());
     163    }
     164
     165    static SymbolTableOrScopeDepth convert(TargetType u)
     166    {
     167        return SymbolTableOrScopeDepth::raw(Base::convert(u));
     168    }
     169};
     170
     171template<OpcodeSize size>
     172struct Fits<GetPutInfo, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> {
     173    using TargetType = typename TypeBySize<size>::unsignedType;
     174
    171175    // 13 Resolve Types
    172176    // 3 Initialization Modes
     
    198202    }
    199203
    200     static uint8_t convert(GetPutInfo gpi)
     204    static TargetType convert(GetPutInfo gpi)
    201205    {
    202206        ASSERT(check(gpi));
     
    207211    }
    208212
    209     static GetPutInfo convert(uint8_t gpi)
     213    static GetPutInfo convert(TargetType gpi)
    210214    {
    211215        auto resolveType = static_cast<ResolveType>((gpi & s_resolveTypeBits) >> 3);
     
    216220};
    217221
    218 template<>
    219 struct Fits<DebugHookType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
    220     using Base = Fits<int, OpcodeSize::Narrow>;
    221     static bool check(DebugHookType dht) { return Base::check(static_cast<int>(dht)); }
    222     static uint8_t convert(DebugHookType dht)
    223     {
    224         return Base::convert(static_cast<int>(dht));
    225     }
    226     static DebugHookType convert(uint8_t dht)
    227     {
    228         return static_cast<DebugHookType>(Base::convert(dht));
    229     }
    230 };
    231 
    232 template<>
    233 struct Fits<ProfileTypeBytecodeFlag, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
    234     using Base = Fits<int, OpcodeSize::Narrow>;
    235     static bool check(ProfileTypeBytecodeFlag ptbf) { return Base::check(static_cast<int>(ptbf)); }
    236     static uint8_t convert(ProfileTypeBytecodeFlag ptbf)
    237     {
    238         return Base::convert(static_cast<int>(ptbf));
    239     }
    240     static ProfileTypeBytecodeFlag convert(uint8_t ptbf)
    241     {
    242         return static_cast<ProfileTypeBytecodeFlag>(Base::convert(ptbf));
    243     }
    244 };
    245 
    246 template<>
    247 struct Fits<ResolveType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
    248     using Base = Fits<int, OpcodeSize::Narrow>;
    249     static bool check(ResolveType rt) { return Base::check(static_cast<int>(rt)); }
    250     static uint8_t convert(ResolveType rt)
    251     {
    252         return Base::convert(static_cast<int>(rt));
    253     }
    254 
    255     static ResolveType convert(uint8_t rt)
    256     {
    257         return static_cast<ResolveType>(Base::convert(rt));
    258     }
    259 };
    260 
    261 template<>
    262 struct Fits<OperandTypes, OpcodeSize::Narrow> {
     222template<typename E, OpcodeSize size>
     223struct Fits<E, size, std::enable_if_t<sizeof(E) != size && std::is_enum<E>::value, std::true_type>> : public Fits<std::underlying_type_t<E>, size> {
     224    using Base = Fits<std::underlying_type_t<E>, size>;
     225
     226    static bool check(E e) { return Base::check(static_cast<std::underlying_type_t<E>>(e)); }
     227
     228    static typename Base::TargetType convert(E e)
     229    {
     230        return Base::convert(static_cast<std::underlying_type_t<E>>(e));
     231    }
     232
     233    static E convert(typename Base::TargetType e)
     234    {
     235        return static_cast<E>(Base::convert(e));
     236    }
     237};
     238
     239template<OpcodeSize size>
     240struct Fits<OperandTypes, size, std::enable_if_t<sizeof(OperandTypes) != size, std::true_type>> {
     241    static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
     242    using TargetType = typename TypeBySize<size>::unsignedType;
     243
    263244    // a pair of (ResultType::Type, ResultType::Type) - try to fit each type into 4 bits
    264245    // additionally, encode unknown types as 0 rather than the | of all types
    265     static constexpr int s_maxType = 0x10;
     246    static constexpr unsigned typeWidth = 4;
     247    static constexpr unsigned maxType = (1 << typeWidth) - 1;
    266248
    267249    static bool check(OperandTypes types)
    268250    {
    269         auto first = types.first().bits();
    270         auto second = types.second().bits();
    271         if (first == ResultType::unknownType().bits())
    272             first = 0;
    273         if (second == ResultType::unknownType().bits())
    274             second = 0;
    275         return first < s_maxType && second < s_maxType;
    276     }
    277 
    278     static uint8_t convert(OperandTypes types)
    279     {
    280         ASSERT(check(types));
    281         auto first = types.first().bits();
    282         auto second = types.second().bits();
    283         if (first == ResultType::unknownType().bits())
    284             first = 0;
    285         if (second == ResultType::unknownType().bits())
    286             second = 0;
    287         return (first << 4) | second;
    288     }
    289 
    290     static OperandTypes convert(uint8_t types)
    291     {
    292         auto first = (types & (0xf << 4)) >> 4;
    293         auto second = (types & 0xf);
    294         if (!first)
    295             first = ResultType::unknownType().bits();
    296         if (!second)
    297             second = ResultType::unknownType().bits();
    298         return OperandTypes(ResultType(first), ResultType(second));
    299     }
    300 };
    301 
    302 template<>
    303 struct Fits<PutByIdFlags, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> {
    304     // only ever encoded in the bytecode stream as 0 or 1, so the trivial encoding should be good enough
    305     using Base = Fits<int, OpcodeSize::Narrow>;
    306     static bool check(PutByIdFlags flags) { return Base::check(static_cast<int>(flags)); }
    307     static uint8_t convert(PutByIdFlags flags)
    308     {
    309         return Base::convert(static_cast<int>(flags));
    310     }
    311 
    312     static PutByIdFlags convert(uint8_t flags)
    313     {
    314         return static_cast<PutByIdFlags>(Base::convert(flags));
    315     }
    316 };
    317 
    318 template<OpcodeSize size>
    319 struct Fits<BoundLabel, size> : Fits<int, size> {
     251        if (size == OpcodeSize::Narrow) {
     252            auto first = types.first().bits();
     253            auto second = types.second().bits();
     254            if (first == ResultType::unknownType().bits())
     255                first = 0;
     256            if (second == ResultType::unknownType().bits())
     257                second = 0;
     258            return first <= maxType && second <= maxType;
     259        }
     260        return true;
     261    }
     262
     263    static TargetType convert(OperandTypes types)
     264    {
     265        if (size == OpcodeSize::Narrow) {
     266            ASSERT(check(types));
     267            auto first = types.first().bits();
     268            auto second = types.second().bits();
     269            if (first == ResultType::unknownType().bits())
     270                first = 0;
     271            if (second == ResultType::unknownType().bits())
     272                second = 0;
     273            return (first << typeWidth) | second;
     274        }
     275        return static_cast<TargetType>(types.bits());
     276    }
     277
     278    static OperandTypes convert(TargetType types)
     279    {
     280        if (size == OpcodeSize::Narrow) {
     281            auto first = types >> typeWidth;
     282            auto second = types & maxType;
     283            if (!first)
     284                first = ResultType::unknownType().bits();
     285            if (!second)
     286                second = ResultType::unknownType().bits();
     287            return OperandTypes(ResultType(first), ResultType(second));
     288        }
     289        return OperandTypes::fromBits(static_cast<uint16_t>(types));
     290    }
     291};
     292
     293template<OpcodeSize size>
     294struct Fits<BoundLabel, size> : public Fits<int, size> {
    320295    // This is a bit hacky: we need to delay computing jump targets, since we
    321296    // might have to emit `nop`s to align the instructions stream. Additionally,
     
    331306    }
    332307
    333     static typename TypeBySize<size>::type convert(BoundLabel& label)
     308    static typename Base::TargetType convert(BoundLabel& label)
    334309    {
    335310        return Base::convert(label.commitTarget());
    336311    }
    337312
    338     static BoundLabel convert(typename TypeBySize<size>::type target)
     313    static BoundLabel convert(typename Base::TargetType target)
    339314    {
    340315        return BoundLabel(Base::convert(target));
  • trunk/Source/JavaScriptCore/bytecode/Instruction.h

    r243162 r245906  
    4646
    4747    private:
    48         typename TypeBySize<Width>::type m_opcode;
     48        typename TypeBySize<Width>::unsignedType m_opcode;
    4949    };
    5050
     
    5252    OpcodeID opcodeID() const
    5353    {
    54         if (isWide())
    55             return wide()->opcodeID();
     54        if (isWide32())
     55            return wide32()->opcodeID();
     56        if (isWide16())
     57            return wide16()->opcodeID();
    5658        return narrow()->opcodeID();
    5759    }
     
    6264    }
    6365
    64     bool isWide() const
     66    bool isWide16() const
    6567    {
    66         return narrow()->opcodeID() == op_wide;
     68        return narrow()->opcodeID() == op_wide16;
     69    }
     70
     71    bool isWide32() const
     72    {
     73        return narrow()->opcodeID() == op_wide32;
     74    }
     75
     76    bool hasMetadata() const
     77    {
     78        return opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA;
     79    }
     80
     81    int sizeShiftAmount() const
     82    {
     83        if (isWide32())
     84            return 2;
     85        if (isWide16())
     86            return 1;
     87        return 0;
    6788    }
    6889
    6990    size_t size() const
    7091    {
    71         auto wide = isWide();
    72         auto padding = wide ? 1 : 0;
    73         auto size = wide ? 4 : 1;
     92        auto sizeShiftAmount = this->sizeShiftAmount();
     93        auto padding = sizeShiftAmount ? 1 : 0;
     94        auto size = 1 << sizeShiftAmount;
    7495        return opcodeLengths[opcodeID()] * size + padding;
    7596    }
     
    107128    }
    108129
    109     const Impl<OpcodeSize::Wide>* wide() const
     130    const Impl<OpcodeSize::Wide16>* wide16() const
    110131    {
    111132
    112         ASSERT(isWide());
    113         return reinterpret_cast<const Impl<OpcodeSize::Wide>*>(bitwise_cast<uintptr_t>(this) + 1);
     133        ASSERT(isWide16());
     134        return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(bitwise_cast<uintptr_t>(this) + 1);
     135    }
     136
     137    const Impl<OpcodeSize::Wide32>* wide32() const
     138    {
     139
     140        ASSERT(isWide32());
     141        return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(bitwise_cast<uintptr_t>(this) + 1);
    114142    }
    115143};
  • trunk/Source/JavaScriptCore/bytecode/InstructionStream.h

    r240684 r245906  
    211211        }
    212212    }
     213
     214    void write(uint16_t h)
     215    {
     216        ASSERT(!m_finalized);
     217        uint8_t bytes[2];
     218        std::memcpy(bytes, &h, sizeof(h));
     219
     220        // Though not always obvious, we don't have to invert the order of the
     221        // bytes written here for CPU(BIG_ENDIAN). This is because the incoming
      222        // h value is already ordered in big endian on CPU(BIG_ENDIAN) platforms.
     223        write(bytes[0]);
     224        write(bytes[1]);
     225    }
     226
    213227    void write(uint32_t i)
    214228    {
  • trunk/Source/JavaScriptCore/bytecode/Opcode.h

    r245658 r245906  
    6767#if ENABLE(C_LOOP) && !HAVE(COMPUTED_GOTO)
    6868
    69 #define OPCODE_ID_ENUM(opcode, length) opcode##_wide = numOpcodeIDs + opcode,
    70     enum OpcodeIDWide : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
     69#define OPCODE_ID_ENUM(opcode, length) opcode##_wide16 = numOpcodeIDs + opcode,
     70    enum OpcodeIDWide16 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
     71#undef OPCODE_ID_ENUM
     72
     73#define OPCODE_ID_ENUM(opcode, length) opcode##_wide32 = numOpcodeIDs * 2 + opcode,
     74    enum OpcodeIDWide32 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
    7175#undef OPCODE_ID_ENUM
    7276#endif
  • trunk/Source/JavaScriptCore/bytecode/OpcodeSize.h

    r237547 r245906  
    3030enum OpcodeSize {
    3131    Narrow = 1,
    32     Wide = 4,
     32    Wide16 = 2,
     33    Wide32 = 4,
    3334};
    3435
     
    3839template<>
    3940struct TypeBySize<OpcodeSize::Narrow> {
    40     using type = uint8_t;
     41    using signedType = int8_t;
     42    using unsignedType = uint8_t;
    4143};
    4244
    4345template<>
    44 struct TypeBySize<OpcodeSize::Wide> {
    45     using type = uint32_t;
     46struct TypeBySize<OpcodeSize::Wide16> {
     47    using signedType = int16_t;
     48    using unsignedType = uint16_t;
     49};
     50
     51template<>
     52struct TypeBySize<OpcodeSize::Wide32> {
     53    using signedType = int32_t;
     54    using unsignedType = uint32_t;
    4655};
    4756
     
    5564
    5665template<>
    57 struct PaddingBySize<OpcodeSize::Wide> {
     66struct PaddingBySize<OpcodeSize::Wide16> {
     67    static constexpr uint8_t value = 1;
     68};
     69
     70template<>
     71struct PaddingBySize<OpcodeSize::Wide32> {
    5872    static constexpr uint8_t value = 1;
    5973};
  • trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp

    r245213 r245906  
    13401340}
    13411341
    1342 void BytecodeGenerator::alignWideOpcode()
     1342void BytecodeGenerator::alignWideOpcode16()
    13431343{
    13441344#if CPU(NEEDS_ALIGNED_ACCESS)
    1345     while ((m_writer.position() + 1) % OpcodeSize::Wide)
     1345    while ((m_writer.position() + 1) % OpcodeSize::Wide16)
     1346        OpNop::emit<OpcodeSize::Narrow>(this);
     1347#endif
     1348}
     1349
     1350void BytecodeGenerator::alignWideOpcode32()
     1351{
     1352#if CPU(NEEDS_ALIGNED_ACCESS)
     1353    while ((m_writer.position() + 1) % OpcodeSize::Wide32)
    13461354        OpNop::emit<OpcodeSize::Narrow>(this);
    13471355#endif
     
    27222730        if (context.isIndexedForInContext()) {
    27232731            auto& indexedContext = context.asIndexedForInContext();
    2724             OpGetByVal::emit<OpcodeSize::Wide>(this, kill(dst), base, indexedContext.index());
     2732            kill(dst);
     2733            if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Narrow>(this, dst, base, property))
     2734                OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(this, dst, base, indexedContext.index());
     2735            else if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Wide16>(this, dst, base, property))
     2736                OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Wide16>(this, dst, base, indexedContext.index());
     2737            else
     2738                OpGetByVal::emit<OpcodeSize::Wide32>(this, dst, base, indexedContext.index());
    27252739            indexedContext.addGetInst(m_lastInstruction.offset(), property->index());
    27262740            return dst;
    27272741        }
    27282742
     2743        // We cannot do the above optimization here since OpGetDirectPname => OpGetByVal conversion involves different metadata ID allocation.
    27292744        StructureForInContext& structureContext = context.asStructureForInContext();
    2730         OpGetDirectPname::emit<OpcodeSize::Wide>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());
     2745        OpGetDirectPname::emit<OpcodeSize::Wide32>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator());
    27312746
    27322747        structureContext.addGetInst(m_lastInstruction.offset(), property->index());
     
    44814496    // conservatively align for the bytecode rewriter: it will delete this yield and
    44824497    // append a fragment, so we make sure that the start of the fragments is aligned
    4483     while (m_writer.position() % OpcodeSize::Wide)
     4498    while (m_writer.position() % OpcodeSize::Wide32)
    44844499        OpNop::emit<OpcodeSize::Narrow>(this);
    44854500#endif
     
    49844999        auto instruction = generator.m_writer.ref(instIndex);
    49855000        auto end = instIndex + instruction->size();
    4986         ASSERT(instruction->isWide());
     5001        ASSERT(instruction->isWide32());
    49875002
    49885003        generator.m_writer.seek(instIndex);
     
    49975012        // 2. base stays the same.
    49985013        // 3. property gets switched to the original property.
    4999         OpGetByVal::emit<OpcodeSize::Wide>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));
     5014        OpGetByVal::emit<OpcodeSize::Wide32>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex));
    50005015
    50015016        // 4. nop out the remaining bytes
     
    50195034        unsigned instIndex = instPair.first;
    50205035        int propertyRegIndex = instPair.second;
    5021         // FIXME: we should not have to force this get_by_val to be wide, just guarantee that propertyRegIndex fits
    5022         // https://bugs.webkit.org/show_bug.cgi?id=190929
    50235036        generator.m_writer.ref(instIndex)->cast<OpGetByVal>()->setProperty(VirtualRegister(propertyRegIndex), []() {
    50245037            ASSERT_NOT_REACHED();
  • trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h

    r245213 r245906  
    11631163
    11641164        void write(uint8_t byte) { m_writer.write(byte); }
     1165        void write(uint16_t h) { m_writer.write(h); }
    11651166        void write(uint32_t i) { m_writer.write(i); }
    1166         void alignWideOpcode();
     1167        void write(int8_t byte) { m_writer.write(static_cast<uint8_t>(byte)); }
     1168        void write(int16_t h) { m_writer.write(static_cast<uint16_t>(h)); }
     1169        void write(int32_t i) { m_writer.write(static_cast<uint32_t>(i)); }
     1170        void alignWideOpcode16();
     1171        void alignWideOpcode32();
    11671172
    11681173        class PreservedTDZStack {
  • trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp

    r244811 r245906  
    109109   
    110110    switch (opcodeID) {
    111     case op_wide:
     111    case op_wide16:
     112    case op_wide32:
    112113        RELEASE_ASSERT_NOT_REACHED();
    113114    case op_enter:
  • trunk/Source/JavaScriptCore/generator/Argument.rb

    r240041 r245906  
    4343    end
    4444
     45    def create_reference_param
     46        "#{@type.to_s}& #{@name}"
     47    end
     48
    4549    def field_name
    4650        "m_#{@name}"
     
    6872    void set#{capitalized_name}(#{@type.to_s} value, Functor func)
    6973    {
    70         if (isWide())
    71             set#{capitalized_name}<OpcodeSize::Wide>(value, func);
     74        if (isWide32())
     75            set#{capitalized_name}<OpcodeSize::Wide32>(value, func);
     76        else if (isWide16())
     77            set#{capitalized_name}<OpcodeSize::Wide16>(value, func);
    7278        else
    7379            set#{capitalized_name}<OpcodeSize::Narrow>(value, func);
     
    7985        if (!#{Fits::check "size", "value", @type})
    8086            value = func();
    81         auto* stream = bitwise_cast<typename TypeBySize<size>::type*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value);
     87        auto* stream = bitwise_cast<typename TypeBySize<size>::unsignedType*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value);
    8288        *stream = #{Fits::convert "size", "value", @type};
    8389    }
  • trunk/Source/JavaScriptCore/generator/DSL.rb

    r240023 r245906  
    145145            template.multiline_comment = nil
    146146            template.line_comment = "#"
    147             template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide)) .join("\n")
     147            template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide16) + opcodes.map.with_index(&:set_entry_address_wide32)) .join("\n")
    148148        end
    149149    end
  • trunk/Source/JavaScriptCore/generator/Metadata.rb

    r240041 r245906  
    113113    end
    114114
     115    def emitter_local_name
     116        "__metadataID"
     117    end
     118
    115119    def emitter_local
    116120        unless @@emitter_local
    117             @@emitter_local = Argument.new("__metadataID", :unsigned, -1)
     121            @@emitter_local = Argument.new(emitter_local_name, :unsigned, -1)
    118122        end
    119123
  • trunk/Source/JavaScriptCore/generator/Opcode.rb

    r240041 r245906  
    3333    module Size
    3434        Narrow = "OpcodeSize::Narrow"
    35         Wide = "OpcodeSize::Wide"
     35        Wide16 = "OpcodeSize::Wide16"
     36        Wide32 = "OpcodeSize::Wide32"
    3637    end
    3738
     
    7576    end
    7677
     78    def typed_reference_args
     79        return if @args.nil?
     80
     81        @args.map(&:create_reference_param).unshift("").join(", ")
     82    end
     83
    7784    def untyped_args
    7885        return if @args.nil?
     
    8289
    8390    def map_fields_with_size(prefix, size, &block)
    84         args = [Argument.new("opcodeID", :unsigned, 0)]
     91        args = [Argument.new("opcodeID", :OpcodeID, 0)]
    8592        args += @args.dup if @args
    8693        unless @metadata.empty?
     
    109116
    110117    def emitter
    111         op_wide = Argument.new("op_wide", :unsigned, 0)
     118        op_wide16 = Argument.new("op_wide16", :OpcodeID, 0)
     119        op_wide32 = Argument.new("op_wide32", :OpcodeID, 0)
    112120        metadata_param = @metadata.empty? ? "" : ", #{@metadata.emitter_local.create_param}"
    113121        metadata_arg = @metadata.empty? ? "" : ", #{@metadata.emitter_local.name}"
     
    115123    static void emit(BytecodeGenerator* gen#{typed_args})
    116124    {
    117         #{@metadata.create_emitter_local}
    118         emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg})
    119             || emit<OpcodeSize::Wide, Assert, true>(gen#{untyped_args}#{metadata_arg});
     125        emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(gen#{untyped_args});
    120126    }
    121127#{%{
     
    125131        return emit<size, shouldAssert>(gen#{untyped_args}#{metadata_arg});
    126132    }
     133
     134    template<OpcodeSize size>
     135    static bool checkWithoutMetadataID(BytecodeGenerator* gen#{typed_args})
     136    {
     137        decltype(gen->addMetadataFor(opcodeID)) __metadataID { };
     138        return checkImpl<size>(gen#{untyped_args}#{metadata_arg});
     139    }
    127140} unless @metadata.empty?}
    128141    template<OpcodeSize size, FitsAssertion shouldAssert = Assert, bool recordOpcode = true>
     
    135148    }
    136149
     150    template<OpcodeSize size>
     151    static void emitWithSmallestSizeRequirement(BytecodeGenerator* gen#{typed_args})
     152    {
     153        #{@metadata.create_emitter_local}
     154        if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Narrow)) {
     155            if (emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg}))
     156                return;
     157        }
     158        if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Wide16)) {
     159            if (emit<OpcodeSize::Wide16, NoAssert, true>(gen#{untyped_args}#{metadata_arg}))
     160                return;
     161        }
     162        emit<OpcodeSize::Wide32, Assert, true>(gen#{untyped_args}#{metadata_arg});
     163    }
     164
    137165private:
     166    template<OpcodeSize size>
     167    static bool checkImpl(BytecodeGenerator* gen#{typed_reference_args}#{metadata_param})
     168    {
     169        UNUSED_PARAM(gen);
     170#if OS(WINDOWS) && ENABLE(C_LOOP)
     171        // FIXME: Disable wide16 optimization for Windows CLoop
     172        // https://bugs.webkit.org/show_bug.cgi?id=198283
     173        if (size == OpcodeSize::Wide16)
     174            return false;
     175#endif
     176        return #{map_fields_with_size("", "size", &:fits_check).join "\n            && "}
     177            && (size == OpcodeSize::Wide16 ? #{op_wide16.fits_check(Size::Narrow)} : true)
     178            && (size == OpcodeSize::Wide32 ? #{op_wide32.fits_check(Size::Narrow)} : true);
     179    }
     180
    138181    template<OpcodeSize size, bool recordOpcode>
    139182    static bool emitImpl(BytecodeGenerator* gen#{typed_args}#{metadata_param})
    140183    {
    141         if (size == OpcodeSize::Wide)
    142             gen->alignWideOpcode();
    143         if (#{map_fields_with_size("", "size", &:fits_check).join "\n            && "}
    144             && (size == OpcodeSize::Wide ? #{op_wide.fits_check(Size::Narrow)} : true)) {
     184        if (size == OpcodeSize::Wide16)
     185            gen->alignWideOpcode16();
     186        else if (size == OpcodeSize::Wide32)
     187            gen->alignWideOpcode32();
     188        if (checkImpl<size>(gen#{untyped_args}#{metadata_arg})) {
    145189            if (recordOpcode)
    146190                gen->recordOpcode(opcodeID);
    147             if (size == OpcodeSize::Wide)
    148                 #{op_wide.fits_write Size::Narrow}
     191            if (size == OpcodeSize::Wide16)
     192                #{op_wide16.fits_write Size::Narrow}
     193            else if (size == OpcodeSize::Wide32)
     194                #{op_wide32.fits_write Size::Narrow}
    149195#{map_fields_with_size("            ", "size", &:fits_write).join "\n"}
    150196            return true;
     
    160206        <<-EOF
    161207    template<typename Block>
    162     void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, bool __isWide)
    163     {
    164         dumper->printLocationAndOp(__location, &"*#{@name}"[!__isWide]);
     208    void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, int __sizeShiftAmount)
     209    {
     210        dumper->printLocationAndOp(__location, &"**#{@name}"[2 - __sizeShiftAmount]);
    165211#{print_args { |arg|
    166212<<-EOF.chomp
     
    183229    }
    184230
     231    #{capitalized_name}(const uint16_t* stream)
     232        #{init.call("OpcodeSize::Wide16")}
     233    {
     234        ASSERT_UNUSED(stream, stream[0] == opcodeID);
     235    }
     236
     237
    185238    #{capitalized_name}(const uint32_t* stream)
    186         #{init.call("OpcodeSize::Wide")}
     239        #{init.call("OpcodeSize::Wide32")}
    187240    {
    188241        ASSERT_UNUSED(stream, stream[0] == opcodeID);
     
    191244    static #{capitalized_name} decode(const uint8_t* stream)
    192245    {
    193         if (*stream != op_wide)
    194             return { stream };
    195 
    196         auto wideStream = bitwise_cast<const uint32_t*>(stream + 1);
    197         return { wideStream };
     246        if (*stream == op_wide32)
     247            return { bitwise_cast<const uint32_t*>(stream + 1) };
     248        if (*stream == op_wide16)
     249            return { bitwise_cast<const uint16_t*>(stream + 1) };
     250        return { stream };
    198251    }
    199252EOF
     
    220273    end
    221274
    222     def set_entry_address_wide(id)
    223         "setEntryAddressWide(#{id}, _#{full_name}_wide)"
     275    def set_entry_address_wide16(id)
     276        "setEntryAddressWide16(#{id}, _#{full_name}_wide16)"
     277    end
     278
     279    def set_entry_address_wide32(id)
     280        "setEntryAddressWide32(#{id}, _#{full_name}_wide32)"
    224281    end
    225282
     
    254311        <<-EOF.chomp
    255312    case #{op.name}:
    256         __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->isWide());
     313        __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->sizeShiftAmount());
    257314        break;
    258315EOF
  • trunk/Source/JavaScriptCore/generator/Section.rb

    r238761 r245906  
    101101          }
    102102          opcodes.each { |opcode|
    103               out.write("#define #{opcode.name}_wide_value_string \"#{num_opcodes + opcode.id}\"\n")
     103              out.write("#define #{opcode.name}_wide16_value_string \"#{num_opcodes + opcode.id}\"\n")
     104          }
     105          opcodes.each { |opcode|
     106              out.write("#define #{opcode.name}_wide32_value_string \"#{num_opcodes * 2 + opcode.id}\"\n")
    104107          }
    105108      end
  • trunk/Source/JavaScriptCore/jit/JITExceptions.cpp

    r240637 r245906  
    7575        catchRoutine = handler->nativeCode.executableAddress();
    7676#else
    77         catchRoutine = catchPCForInterpreter->isWide()
    78             ? LLInt::getWideCodePtr(catchPCForInterpreter->opcodeID())
    79             : LLInt::getCodePtr(catchPCForInterpreter->opcodeID());
     77        if (catchPCForInterpreter->isWide32())
     78            catchRoutine = LLInt::getWide32CodePtr(catchPCForInterpreter->opcodeID());
     79        else if (catchPCForInterpreter->isWide16())
     80            catchRoutine = LLInt::getWide16CodePtr(catchPCForInterpreter->opcodeID());
     81        else
     82            catchRoutine = LLInt::getCodePtr(catchPCForInterpreter->opcodeID());
    8083#endif
    8184    } else
  • trunk/Source/JavaScriptCore/llint/LLIntData.cpp

    r239255 r245906  
    5050uint8_t Data::s_exceptionInstructions[maxOpcodeLength + 1] = { };
    5151Opcode g_opcodeMap[numOpcodeIDs] = { };
    52 Opcode g_opcodeMapWide[numOpcodeIDs] = { };
     52Opcode g_opcodeMapWide16[numOpcodeIDs] = { };
     53Opcode g_opcodeMapWide32[numOpcodeIDs] = { };
    5354
    5455#if !ENABLE(C_LOOP)
    55 extern "C" void llint_entry(void*, void*);
     56extern "C" void llint_entry(void*, void*, void*);
    5657#endif
    5758
     
    6263
    6364#else // !ENABLE(C_LOOP)
    64     llint_entry(&g_opcodeMap, &g_opcodeMapWide);
     65    llint_entry(&g_opcodeMap, &g_opcodeMapWide16, &g_opcodeMapWide32);
    6566
    6667    for (int i = 0; i < numOpcodeIDs; ++i) {
    6768        g_opcodeMap[i] = tagCodePtr(g_opcodeMap[i], BytecodePtrTag);
    68         g_opcodeMapWide[i] = tagCodePtr(g_opcodeMapWide[i], BytecodePtrTag);
     69        g_opcodeMapWide16[i] = tagCodePtr(g_opcodeMapWide16[i], BytecodePtrTag);
     70        g_opcodeMapWide32[i] = tagCodePtr(g_opcodeMapWide32[i], BytecodePtrTag);
    6971    }
    7072
  • trunk/Source/JavaScriptCore/llint/LLIntData.h

    r237728 r245906  
    4444
    4545extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMap[numOpcodeIDs];
    46 extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide[numOpcodeIDs];
     46extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide16[numOpcodeIDs];
     47extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide32[numOpcodeIDs];
    4748
    4849class Data {
     
    5859    friend Instruction* exceptionInstructions();
    5960    friend Opcode* opcodeMap();
    60     friend Opcode* opcodeMapWide();
     61    friend Opcode* opcodeMapWide16();
     62    friend Opcode* opcodeMapWide32();
    6163    friend Opcode getOpcode(OpcodeID);
    62     friend Opcode getOpcodeWide(OpcodeID);
     64    friend Opcode getOpcodeWide16(OpcodeID);
     65    friend Opcode getOpcodeWide32(OpcodeID);
    6366    template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID);
    64     template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID);
     67    template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID);
     68    template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID);
    6569    template<PtrTag tag> friend MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID);
    6670};
     
    7882}
    7983
    80 inline Opcode* opcodeMapWide()
     84inline Opcode* opcodeMapWide16()
    8185{
    82     return g_opcodeMapWide;
     86    return g_opcodeMapWide16;
     87}
     88
     89inline Opcode* opcodeMapWide32()
     90{
     91    return g_opcodeMapWide32;
    8392}
    8493
     
    92101}
    93102
    94 inline Opcode getOpcodeWide(OpcodeID id)
     103inline Opcode getOpcodeWide16(OpcodeID id)
    95104{
    96105#if ENABLE(COMPUTED_GOTO_OPCODES)
    97     return g_opcodeMapWide[id];
     106    return g_opcodeMapWide16[id];
     107#else
     108    UNUSED_PARAM(id);
     109    RELEASE_ASSERT_NOT_REACHED();
     110#endif
     111}
     112
     113inline Opcode getOpcodeWide32(OpcodeID id)
     114{
     115#if ENABLE(COMPUTED_GOTO_OPCODES)
     116    return g_opcodeMapWide32[id];
    98117#else
    99118    UNUSED_PARAM(id);
     
    111130
    112131template<PtrTag tag>
    113 ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID opcodeID)
     132ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID)
    114133{
    115     void* address = reinterpret_cast<void*>(getOpcodeWide(opcodeID));
     134    void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID));
     135    address = retagCodePtr<BytecodePtrTag, tag>(address);
     136    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
     137}
     138
     139template<PtrTag tag>
     140ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID)
     141{
     142    void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID));
    116143    address = retagCodePtr<BytecodePtrTag, tag>(address);
    117144    return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address);
     
    142169}
    143170
    144 ALWAYS_INLINE void* getWideCodePtr(OpcodeID id)
     171ALWAYS_INLINE void* getWide16CodePtr(OpcodeID id)
    145172{
    146     return reinterpret_cast<void*>(getOpcodeWide(id));
     173    return reinterpret_cast<void*>(getOpcodeWide16(id));
     174}
     175
     176ALWAYS_INLINE void* getWide32CodePtr(OpcodeID id)
     177{
     178    return reinterpret_cast<void*>(getOpcodeWide32(id));
    147179}
    148180#endif
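For context on the three-width scheme these accessors serve: the commit message notes that most operands overflow 8 bits but almost always fit in 16 bits. A minimal C++ sketch (hypothetical helper names, not WebKit code) of choosing the narrowest encoding that fits an instruction's largest operand:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch, not actual JSC code: pick the narrowest bytecode
// width whose operand slots can hold `maxOperand`. The point of adding
// wide16 is that most operands which overflow 8 bits still land here
// instead of falling all the way to wide32.
enum class OpcodeSize { Narrow = 1, Wide16 = 2, Wide32 = 4 };

OpcodeSize fittingSize(uint32_t maxOperand)
{
    if (maxOperand <= 0xFF)
        return OpcodeSize::Narrow;
    if (maxOperand <= 0xFFFF)
        return OpcodeSize::Wide16;
    return OpcodeSize::Wide32;
}
```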
  • trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h

    r243254 r245906  
    3131
    3232#if ENABLE(C_LOOP)
     33#if !OS(WINDOWS)
    3334#define OFFLINE_ASM_C_LOOP 1
     35#define OFFLINE_ASM_C_LOOP_WIN 0
     36#else
     37#define OFFLINE_ASM_C_LOOP 0
     38#define OFFLINE_ASM_C_LOOP_WIN 1
     39#endif
    3440#define OFFLINE_ASM_X86 0
    3541#define OFFLINE_ASM_X86_WIN 0
     
    4652
    4753#define OFFLINE_ASM_C_LOOP 0
     54#define OFFLINE_ASM_C_LOOP_WIN 0
    4855
    4956#if CPU(X86) && !COMPILER(MSVC)
  • trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp

    r245658 r245906  
    17231723}
    17241724
    1725 LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide)
    1726 {
    1727     return commonCallEval(exec, pc, LLInt::getWideCodePtr<JSEntryPtrTag>(llint_generic_return_point));
     1725LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide16)
     1726{
     1727    return commonCallEval(exec, pc, LLInt::getWide16CodePtr<JSEntryPtrTag>(llint_generic_return_point));
     1728}
     1729
     1730LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide32)
     1731{
     1732    return commonCallEval(exec, pc, LLInt::getWide32CodePtr<JSEntryPtrTag>(llint_generic_return_point));
    17281733}
    17291734
  • trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.h

    r237547 r245906  
    118118LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_construct_varargs);
    119119LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval);
    120 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide);
     120LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide16);
     121LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide32);
    121122LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tear_off_arguments);
    122123LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_strcat);
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm

    r245669 r245906  
    1 # Copyright (C) 2011-2019 Apple Inc. All rights reserved.
     1# Copyright (C) 2011-2019 Apple Inc. All rights reserved.
    22#
    33# Redistribution and use in source and binary forms, with or without
     
    219219if X86_64 or X86_64_WIN or ARM64 or ARM64E
    220220    const CalleeSaveSpaceAsVirtualRegisters = 4
    221 elsif C_LOOP
     221elsif C_LOOP or C_LOOP_WIN
    222222    const CalleeSaveSpaceAsVirtualRegisters = 1
    223223elsif ARMv7
     
    278278        const tagTypeNumber = csr5
    279279        const tagMask = csr6
    280     elsif C_LOOP
     280    elsif C_LOOP or C_LOOP_WIN
    281281        const PB = csr0
    282282        const tagTypeNumber = csr1
     
    287287else
    288288    const PC = t4 # When changing this, make sure LLIntPC is up to date in LLIntPCRanges.h
    289     if C_LOOP
     289    if C_LOOP or C_LOOP_WIN
    290290        const metadataTable = csr3
    291291    elsif ARMv7
     
    312312    end
    313313
    314     macro dispatchWide()
     314    macro dispatchWide16()
     315        dispatch(constexpr %opcodeName%_length * 2 + 1)
     316    end
     317
     318    macro dispatchWide32()
    315319        dispatch(constexpr %opcodeName%_length * 4 + 1)
    316320    end
    317321
    318     size(dispatchNarrow, dispatchWide, macro (dispatch) dispatch() end)
     322    size(dispatchNarrow, dispatchWide16, dispatchWide32, macro (dispatch) dispatch() end)
    319323end
    320324
    321325macro getu(size, opcodeStruct, fieldName, dst)
    322     size(getuOperandNarrow, getuOperandWide, macro (getu)
     326    size(getuOperandNarrow, getuOperandWide16, getuOperandWide32, macro (getu)
    323327        getu(opcodeStruct, fieldName, dst)
    324328    end)
     
    326330
    327331macro get(size, opcodeStruct, fieldName, dst)
    328     size(getOperandNarrow, getOperandWide, macro (get)
     332    size(getOperandNarrow, getOperandWide16, getOperandWide32, macro (get)
    329333        get(opcodeStruct, fieldName, dst)
    330334    end)
    331335end
    332336
    333 macro narrow(narrowFn, wideFn, k)
     337macro narrow(narrowFn, wide16Fn, wide32Fn, k)
    334338    k(narrowFn)
    335339end
    336340
    337 macro wide(narrowFn, wideFn, k)
    338     k(wideFn)
     341macro wide16(narrowFn, wide16Fn, wide32Fn, k)
     342    k(wide16Fn)
     343end
     344
     345macro wide32(narrowFn, wide16Fn, wide32Fn, k)
     346    k(wide32Fn)
    339347end
    340348
     
    363371    fn(narrow)
    364372
    365 _%label%_wide:
      373# FIXME: We cannot enable wide16 bytecode in Windows CLoop. With MSVC, as CLoop::execute gets larger code
      374# size, CLoop::execute gets a higher stack height requirement. This makes CLoop::execute take 160KB of stack
      375# per call, causing stack overflow errors easily. For now, we disable the wide16 optimization for Windows CLoop.
     376# https://bugs.webkit.org/show_bug.cgi?id=198283
     377if not C_LOOP_WIN
     378_%label%_wide16:
    366379    prologue()
    367     fn(wide)
     380    fn(wide16)
     381end
     382
     383_%label%_wide32:
     384    prologue()
     385    fn(wide32)
    368386end
    369387
     
    476494
    477495# Bytecode operand constants.
    478 const FirstConstantRegisterIndexNarrow = 16
    479 const FirstConstantRegisterIndexWide = constexpr FirstConstantRegisterIndex
     496const FirstConstantRegisterIndexNarrow = constexpr FirstConstantRegisterIndex8
     497const FirstConstantRegisterIndexWide16 = constexpr FirstConstantRegisterIndex16
     498const FirstConstantRegisterIndexWide32 = constexpr FirstConstantRegisterIndex
    480499
    481500# Code type constants.
     
    523542# Some common utilities.
    524543macro crash()
    525     if C_LOOP
     544    if C_LOOP or C_LOOP_WIN
    526545        cloopCrash
    527546    else
     
    606625macro checkStackPointerAlignment(tempReg, location)
    607626    if ASSERT_ENABLED
    608         if ARM64 or ARM64E or C_LOOP
     627        if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
    609628            # ARM64 and ARM64E will check for us!
    610             # C_LOOP does not need the alignment, and can use a little perf
     629            # C_LOOP or C_LOOP_WIN does not need the alignment, and can use a little perf
    611630            # improvement from avoiding useless work.
    612631        else
     
    626645end
    627646
    628 if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
     647if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
    629648    const CalleeSaveRegisterCount = 0
    630649elsif ARMv7
     
    643662
    644663macro pushCalleeSaves()
    645     if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
     664    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
    646665    elsif ARMv7
    647666        emit "push {r4-r6, r8-r11}"
     
    664683
    665684macro popCalleeSaves()
    666     if C_LOOP or ARM64 or ARM64E or X86_64 or X86_64_WIN
     685    if C_LOOP or C_LOOP_WIN or ARM64 or ARM64E or X86_64 or X86_64_WIN
    667686    elsif ARMv7
    668687        emit "pop {r4-r6, r8-r11}"
     
    683702
    684703macro preserveCallerPCAndCFR()
    685     if C_LOOP or ARMv7 or MIPS
     704    if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
    686705        push lr
    687706        push cfr
     
    698717macro restoreCallerPCAndCFR()
    699718    move cfr, sp
    700     if C_LOOP or ARMv7 or MIPS
     719    if C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
    701720        pop cfr
    702721        pop lr
     
    710729macro preserveCalleeSavesUsedByLLInt()
    711730    subp CalleeSaveSpaceStackAligned, sp
    712     if C_LOOP
     731    if C_LOOP or C_LOOP_WIN
    713732        storep metadataTable, -PtrSize[cfr]
    714733    elsif ARMv7 or MIPS
     
    733752
    734753macro restoreCalleeSavesUsedByLLInt()
    735     if C_LOOP
     754    if C_LOOP or C_LOOP_WIN
    736755        loadp -PtrSize[cfr], metadataTable
    737756    elsif ARMv7 or MIPS
     
    844863
    845864macro preserveReturnAddressAfterCall(destinationRegister)
    846     if C_LOOP or ARMv7 or ARM64 or ARM64E or MIPS
    847         # In C_LOOP case, we're only preserving the bytecode vPC.
     865    if C_LOOP or C_LOOP_WIN or ARMv7 or ARM64 or ARM64E or MIPS
     866        # In C_LOOP or C_LOOP_WIN case, we're only preserving the bytecode vPC.
    848867        move lr, destinationRegister
    849868    elsif X86 or X86_WIN or X86_64 or X86_64_WIN
     
    860879    elsif ARM64 or ARM64E
    861880        push cfr, lr
    862     elsif C_LOOP or ARMv7 or MIPS
     881    elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
    863882        push lr
    864883        push cfr
     
    872891    elsif ARM64 or ARM64E
    873892        pop lr, cfr
    874     elsif C_LOOP or ARMv7 or MIPS
     893    elsif C_LOOP or C_LOOP_WIN or ARMv7 or MIPS
    875894        pop cfr
    876895        pop lr
     
    906925
    907926macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag)
    908     if C_LOOP
     927    if C_LOOP or C_LOOP_WIN
    909928        cloopCallJSFunction callee
    910929    else
     
    944963    andi ~StackAlignmentMask, temp2
    945964
    946     if ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS
     965    if ARMv7 or ARM64 or ARM64E or C_LOOP or C_LOOP_WIN or MIPS
    947966        addp CallerFrameAndPCSize, sp
    948967        subi CallerFrameAndPCSize, temp2
     
    10281047
    10291048macro assertNotConstant(size, index)
    1030     size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
     1049    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
    10311050        assert(macro (ok) bilt index, FirstConstantRegisterIndex, ok end)
    10321051    end)
     
    10801099    end
    10811100    codeBlockGetter(t1)
    1082     if not C_LOOP
     1101    if not (C_LOOP or C_LOOP_WIN)
    10831102        baddis 5, CodeBlock::m_llintExecuteCounter + BaselineExecutionCounter::m_counter[t1], .continue
    10841103        if JSVALUE64
     
    11301149    bpa t0, cfr, .needStackCheck
    11311150    loadp CodeBlock::m_vm[t1], t2
    1132     if C_LOOP
     1151    if C_LOOP or C_LOOP_WIN
    11331152        bpbeq VM::m_cloopStackLimit[t2], t0, .stackHeightOK
    11341153    else
     
    12331252# EncodedJSValue vmEntryToNativeFunction(void* code, VM* vm, ProtoCallFrame* protoFrame)
    12341253
    1235 if C_LOOP
     1254if C_LOOP or C_LOOP_WIN
    12361255    _llint_vm_entry_to_javascript:
    12371256else
     
    12421261
    12431262
    1244 if C_LOOP
     1263if C_LOOP or C_LOOP_WIN
    12451264    _llint_vm_entry_to_native:
    12461265else
     
    12511270
    12521271
    1253 if not C_LOOP
     1272if not (C_LOOP or C_LOOP_WIN)
    12541273    # void sanitizeStackForVMImpl(VM* vm)
    12551274    global _sanitizeStackForVMImpl
     
    12911310end
    12921311
    1293 if C_LOOP
     1312if C_LOOP or C_LOOP_WIN
    12941313    # Dummy entry point the C Loop uses to initialize.
    12951314    _llint_entry:
     
    13131332end
    13141333
    1315 # The PC base is in t2, as this is what _llint_entry leaves behind through
    1316 # initPCRelative(t2)
     1334# The PC base is in t3, as this is what _llint_entry leaves behind through
     1335# initPCRelative(t3)
    13171336macro setEntryAddress(index, label)
    13181337    setEntryAddressCommon(index, label, a0)
    13191338end
    13201339
    1321 macro setEntryAddressWide(index, label)
     1340macro setEntryAddressWide16(index, label)
    13221341     setEntryAddressCommon(index, label, a1)
     1342end
     1343
     1344macro setEntryAddressWide32(index, label)
     1345     setEntryAddressCommon(index, label, a2)
    13231346end
    13241347
    13251348macro setEntryAddressCommon(index, label, map)
    13261349    if X86_64 or X86_64_WIN
    1327         leap (label - _relativePCBase)[t2], t3
     1350        leap (label - _relativePCBase)[t3], t4
     1351        move index, t5
     1352        storep t4, [map, t5, 8]
     1353    elsif X86 or X86_WIN
     1354        leap (label - _relativePCBase)[t3], t4
     1355        move index, t5
     1356        storep t4, [map, t5, 4]
     1357    elsif ARM64 or ARM64E
     1358        pcrtoaddr label, t3
    13281359        move index, t4
    1329         storep t3, [map, t4, 8]
    1330     elsif X86 or X86_WIN
    1331         leap (label - _relativePCBase)[t2], t3
    1332         move index, t4
    1333         storep t3, [map, t4, 4]
    1334     elsif ARM64 or ARM64E
    1335         pcrtoaddr label, t2
    1336         move index, t4
    1337         storep t2, [map, t4, PtrSize]
     1360        storep t3, [map, t4, PtrSize]
    13381361    elsif ARMv7
    13391362        mvlbl (label - _relativePCBase), t4
    1340         addp t4, t2, t4
    1341         move index, t3
    1342         storep t4, [map, t3, 4]
     1363        addp t4, t3, t4
     1364        move index, t5
     1365        storep t4, [map, t5, 4]
    13431366    elsif MIPS
    13441367        la label, t4
    13451368        la _relativePCBase, t3
    13461369        subp t3, t4
    1347         addp t4, t2, t4
    1348         move index, t3
    1349         storep t4, [map, t3, 4]
     1370        addp t4, t3, t4
     1371        move index, t5
     1372        storep t4, [map, t5, 4]
    13501373    end
    13511374end
     
    13591382        loadp 20[sp], a0
    13601383        loadp 24[sp], a1
    1361     end
    1362 
    1363     initPCRelative(t2)
     1384        loadp 28[sp], a2
     1385    end
     1386
     1387    initPCRelative(t3)
    13641388
    13651389    # Include generated bytecode initialization file.
     
    13711395end
    13721396
    1373 _llint_op_wide:
    1374     nextInstructionWide()
    1375 
    1376 _llint_op_wide_wide:
     1397_llint_op_wide16:
     1398    nextInstructionWide16()
     1399
     1400_llint_op_wide32:
     1401    nextInstructionWide32()
     1402
     1403macro noWide(label)
     1404_llint_%label%_wide16:
    13771405    crash()
    13781406
    1379 _llint_op_enter_wide:
     1407_llint_%label%_wide32:
    13801408    crash()
     1409end
     1410
     1411noWide(op_wide16)
     1412noWide(op_wide32)
     1413noWide(op_enter)
    13811414
    13821415op(llint_program_prologue, macro ()
     
    17791812        prepareForRegularCall)
    17801813
    1781 _llint_op_call_eval_wide:
     1814_llint_op_call_eval_wide16:
    17821815    slowPathForCall(
    1783         wide,
     1816        wide16,
    17841817        OpCallEval,
    1785         macro () dispatchOp(wide, op_call_eval) end,
    1786         _llint_slow_path_call_eval_wide,
     1818        macro () dispatchOp(wide16, op_call_eval) end,
     1819        _llint_slow_path_call_eval_wide16,
    17871820        prepareForRegularCall)
    17881821
    1789 _llint_generic_return_point:
    1790     dispatchAfterCall(narrow, OpCallEval, macro ()
    1791         dispatchOp(narrow, op_call_eval)
     1822_llint_op_call_eval_wide32:
     1823    slowPathForCall(
     1824        wide32,
     1825        OpCallEval,
     1826        macro () dispatchOp(wide32, op_call_eval) end,
     1827        _llint_slow_path_call_eval_wide32,
     1828        prepareForRegularCall)
     1829
     1830
     1831commonOp(llint_generic_return_point, macro () end, macro (size)
     1832    dispatchAfterCall(size, OpCallEval, macro ()
     1833        dispatchOp(size, op_call_eval)
    17921834    end)
    1793 
    1794 _llint_generic_return_point_wide:
    1795     dispatchAfterCall(wide, OpCallEval, macro()
    1796         dispatchOp(wide, op_call_eval)
    1797     end)
     1835end)
     1836
    17981837
    17991838llintOp(op_identity_with_profile, OpIdentityWithProfile, macro (unused, unused, dispatch)
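The dispatchWide16/dispatchWide32 macros in this file advance the PC by `length` slots for narrow code and by `length * 2 + 1` / `length * 4 + 1` bytes for wide code, the extra byte being the op_wide16/op_wide32 prefix. A hedged C++ sketch of that arithmetic (hypothetical function, not JSC code):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the PC advance computed by dispatchNarrow / dispatchWide16 /
// dispatchWide32 above. An instruction occupies `opcodeLength` slots
// (the opcode itself is one slot); a wide instruction additionally
// carries a one-byte op_wide16 or op_wide32 prefix, hence the "+ 1".
enum class OpcodeSize { Narrow, Wide16, Wide32 };

size_t dispatchAdvance(OpcodeSize size, size_t opcodeLength)
{
    switch (size) {
    case OpcodeSize::Narrow:
        return opcodeLength;         // 1 byte per slot, no prefix
    case OpcodeSize::Wide16:
        return opcodeLength * 2 + 1; // 2 bytes per slot + prefix byte
    case OpcodeSize::Wide32:
        return opcodeLength * 4 + 1; // 4 bytes per slot + prefix byte
    }
    return 0; // unreachable
}
```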
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp

    r239940 r245906  
    250250    if (UNLIKELY(isInitializationPass)) {
    251251        Opcode* opcodeMap = LLInt::opcodeMap();
    252         Opcode* opcodeMapWide = LLInt::opcodeMapWide();
     252        Opcode* opcodeMapWide16 = LLInt::opcodeMapWide16();
     253        Opcode* opcodeMapWide32 = LLInt::opcodeMapWide32();
    253254
    254255#if ENABLE(COMPUTED_GOTO_OPCODES)
    255256        #define OPCODE_ENTRY(__opcode, length) \
    256257            opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode); \
    257             opcodeMapWide[__opcode] = bitwise_cast<void*>(&&__opcode##_wide);
     258            opcodeMapWide16[__opcode] = bitwise_cast<void*>(&&__opcode##_wide16); \
     259            opcodeMapWide32[__opcode] = bitwise_cast<void*>(&&__opcode##_wide32);
    258260
    259261        #define LLINT_OPCODE_ENTRY(__opcode, length) \
     
    264266        #define OPCODE_ENTRY(__opcode, length) \
    265267            opcodeMap[__opcode] = __opcode; \
    266             opcodeMapWide[__opcode] = static_cast<OpcodeID>(__opcode##_wide);
     268            opcodeMapWide16[__opcode] = static_cast<OpcodeID>(__opcode##_wide16); \
     269            opcodeMapWide32[__opcode] = static_cast<OpcodeID>(__opcode##_wide32);
    267270
    268271        #define LLINT_OPCODE_ENTRY(__opcode, length) \
     
    286289
    287290    // Define the pseudo registers used by the LLINT C Loop backend:
    288     ASSERT(sizeof(CLoopRegister) == sizeof(intptr_t));
     291    static_assert(sizeof(CLoopRegister) == sizeof(intptr_t));
    289292
    290293    // The CLoop llint backend is initially based on the ARMv7 backend, and
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm

    r245658 r245906  
    3030end
    3131
    32 macro nextInstructionWide()
     32macro nextInstructionWide16()
     33    loadh 1[PC], t0
     34    leap _g_opcodeMapWide16, t1
     35    jmp [t1, t0, 4], BytecodePtrTag
     36end
     37
     38macro nextInstructionWide32()
    3339    loadi 1[PC], t0
    34     leap _g_opcodeMapWide, t1
     40    leap _g_opcodeMapWide32, t1
    3541    jmp [t1, t0, 4], BytecodePtrTag
    3642end
     
    4147
    4248macro getOperandNarrow(opcodeStruct, fieldName, dst)
    43     loadbsp constexpr %opcodeStruct%_%fieldName%_index[PC], dst
    44 end
    45 
    46 macro getuOperandWide(opcodeStruct, fieldName, dst)
     49    loadbsi constexpr %opcodeStruct%_%fieldName%_index[PC], dst
     50end
     51
     52macro getuOperandWide16(opcodeStruct, fieldName, dst)
     53    loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst
     54end
     55
     56macro getOperandWide16(opcodeStruct, fieldName, dst)
     57    loadhsi constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst
     58end
     59
     60macro getuOperandWide32(opcodeStruct, fieldName, dst)
    4761    loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst
    4862end
    4963
    50 macro getOperandWide(opcodeStruct, fieldName, dst)
     64macro getOperandWide32(opcodeStruct, fieldName, dst)
    5165    loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst
    5266end
     
    97111        call function
    98112        addp 16, sp
    99     elsif C_LOOP
     113    elsif C_LOOP or C_LOOP_WIN
    100114        cloopCallSlowPath function, a0, a1
    101115    else
     
    105119
    106120macro cCall2Void(function)
    107     if C_LOOP
     121    if C_LOOP or C_LOOP_WIN
    108122        cloopCallSlowPathVoid function, a0, a1
    109123    else
     
    122136        call function
    123137        addp 16, sp
    124     elsif C_LOOP
     138    elsif C_LOOP or C_LOOP_WIN
    125139        error
    126140    else
     
    191205    # and the frame for the JS code we're executing. We need to do this check
    192206    # before we start copying the args from the protoCallFrame below.
    193     if C_LOOP
     207    if C_LOOP or C_LOOP_WIN
    194208        bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
    195209        move entry, t4
     
    309323    addp CallerFrameAndPCSize, sp
    310324    checkStackPointerAlignment(temp, 0xbad0dc02)
    311     if C_LOOP
     325    if C_LOOP or C_LOOP_WIN
    312326        cloopCallJSFunction entry
    313327    else
     
    321335    move entry, temp1
    322336    storep cfr, [sp]
    323     if C_LOOP
     337    if C_LOOP or C_LOOP_WIN
    324338        move sp, a0
    325339        storep lr, PtrSize[sp]
     
    448462# changed.
    449463macro loadConstantOrVariable(size, index, tag, payload)
    450     size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
     464    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
    451465        bigteq index, FirstConstantRegisterIndex, .constant
    452466        loadi TagOffset[cfr, index, 8], tag
     
    464478
    465479macro loadConstantOrVariableTag(size, index, tag)
    466     size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
     480    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
    467481        bigteq index, FirstConstantRegisterIndex, .constant
    468482        loadi TagOffset[cfr, index, 8], tag
     
    479493# Index and payload may be the same register. Index may be clobbered.
    480494macro loadConstantOrVariable2Reg(size, index, tag, payload)
    481     size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
     495    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
    482496        bigteq index, FirstConstantRegisterIndex, .constant
    483497        loadi TagOffset[cfr, index, 8], tag
     
    497511
    498512macro loadConstantOrVariablePayloadTagCustom(size, index, tagCheck, payload)
    499     size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex)
     513    size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex)
    500514        bigteq index, FirstConstantRegisterIndex, .constant
    501515        tagCheck(TagOffset[cfr, index, 8])
     
    19831997        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    19841998        addp 8, sp
    1985     elsif ARMv7 or C_LOOP or MIPS
     1999    elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
    19862000        if MIPS
    19872001        # calling convention says to save stack space for 4 first registers in
     
    20002014        loadp JSFunction::m_executable[t1], t1
    20012015        checkStackPointerAlignment(t3, 0xdead0001)
    2002         if C_LOOP
     2016        if C_LOOP or C_LOOP_WIN
    20032017            cloopCallNative executableOffsetToFunction[t1]
    20042018        else
     
    20502064        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    20512065        addp 8, sp
    2052     elsif ARMv7 or C_LOOP or MIPS
     2066    elsif ARMv7 or C_LOOP or C_LOOP_WIN or MIPS
    20532067        subp 8, sp # align stack pointer
    20542068        # t1 already contains the Callee.
     
    20592073        loadi Callee + PayloadOffset[cfr], t1
    20602074        checkStackPointerAlignment(t3, 0xdead0001)
    2061         if C_LOOP
     2075        if C_LOOP or C_LOOP_WIN
    20622076            cloopCallNative offsetOfFunction[t1]
    20632077        else
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

    r245658 r245906  
    3131end
    3232
    33 macro nextInstructionWide()
     33macro nextInstructionWide16()
     34    loadh 1[PB, PC, 1], t0
     35    leap _g_opcodeMapWide16, t1
     36    jmp [t1, t0, PtrSize], BytecodePtrTag
     37end
     38
     39macro nextInstructionWide32()
    3440    loadi 1[PB, PC, 1], t0
    35     leap _g_opcodeMapWide, t1
     41    leap _g_opcodeMapWide32, t1
    3642    jmp [t1, t0, PtrSize], BytecodePtrTag
    3743end
     
    4248
    4349macro getOperandNarrow(opcodeStruct, fieldName, dst)
    44     loadbsp constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
    45 end
    46 
    47 macro getuOperandWide(opcodeStruct, fieldName, dst)
     50    loadbsq constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst
     51end
     52
     53macro getuOperandWide16(opcodeStruct, fieldName, dst)
     54    loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
     55end
     56
     57macro getOperandWide16(opcodeStruct, fieldName, dst)
     58    loadhsq constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst
     59end
     60
     61macro getuOperandWide32(opcodeStruct, fieldName, dst)
    4862    loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
    4963end
    5064
    51 macro getOperandWide(opcodeStruct, fieldName, dst)
     65macro getOperandWide32(opcodeStruct, fieldName, dst)
    5266    loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst
    5367end
     
    110124        move 8[r0], r1
    111125        move [r0], r0
    112     elsif C_LOOP
     126    elsif C_LOOP or C_LOOP_WIN
    113127        cloopCallSlowPath function, a0, a1
    114128    else
     
    118132
    119133macro cCall2Void(function)
    120     if C_LOOP
     134    if C_LOOP or C_LOOP_WIN
    121135        cloopCallSlowPathVoid function, a0, a1
    122136    elsif X86_64_WIN
     
    180194    # and the frame for the JS code we're executing. We need to do this check
    181195    # before we start copying the args from the protoCallFrame below.
    182     if C_LOOP
     196    if C_LOOP or C_LOOP_WIN
    183197        bpaeq t3, VM::m_cloopStackLimit[vm], .stackHeightOK
    184198        move entry, t4
     
    286300macro makeJavaScriptCall(entry, temp, unused)
    287301    addp 16, sp
    288     if C_LOOP
     302    if C_LOOP or C_LOOP_WIN
    289303        cloopCallJSFunction entry
    290304    else
     
    298312    storep cfr, [sp]
    299313    move sp, a0
    300     if C_LOOP
     314    if C_LOOP or C_LOOP_WIN
    301315        storep lr, 8[sp]
    302316        cloopCallNative temp
     
    410424
    411425macro uncage(basePtr, mask, ptr, scratchOrLength)
    412     if GIGACAGE_ENABLED and not C_LOOP
     426    if GIGACAGE_ENABLED and not (C_LOOP or C_LOOP_WIN)
    413427        loadp basePtr, scratchOrLength
    414428        btpz scratchOrLength, .done
     
    451465    end
    452466
    453     macro loadWide()
    454         bpgteq index, FirstConstantRegisterIndexWide, .constant
     467    macro loadWide16()
     468        bpgteq index, FirstConstantRegisterIndexWide16, .constant
    455469        loadq [cfr, index, 8], value
    456470        jmp .done
     
    458472        loadp CodeBlock[cfr], value
    459473        loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
    460         subp FirstConstantRegisterIndexWide, index
     474        loadq -(FirstConstantRegisterIndexWide16 * 8)[value, index, 8], value
     475    .done:
     476    end
     477
     478    macro loadWide32()
     479        bpgteq index, FirstConstantRegisterIndexWide32, .constant
     480        loadq [cfr, index, 8], value
     481        jmp .done
     482    .constant:
     483        loadp CodeBlock[cfr], value
     484        loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value
     485        subp FirstConstantRegisterIndexWide32, index
    461486        loadq [value, index, 8], value
    462487    .done:
    463488    end
    464489
    465     size(loadNarrow, loadWide, macro (load) load() end)
     490    size(loadNarrow, loadWide16, loadWide32, macro (load) load() end)
    466491end
    467492
     
    15191544
    15201545    # We have Int8ArrayType.
    1521     loadbs [t3, t1], t0
     1546    loadbsi [t3, t1], t0
    15221547    finishIntGetByVal(t0, t1)
    15231548
     
    15391564
    15401565    # We have Int16ArrayType.
    1541     loadhs [t3, t1, 2], t0
     1566    loadhsi [t3, t1, 2], t0
    15421567    finishIntGetByVal(t0, t1)
    15431568
     
    20612086    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
    20622087    storep cfr, VM::topCallFrame[t1]
    2063     if ARM64 or ARM64E or C_LOOP
     2088    if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
    20642089        storep lr, ReturnPC[cfr]
    20652090    end
     
    20682093    loadp JSFunction::m_executable[t1], t1
    20692094    checkStackPointerAlignment(t3, 0xdead0001)
    2070     if C_LOOP
     2095    if C_LOOP or C_LOOP_WIN
    20712096        cloopCallNative executableOffsetToFunction[t1]
    20722097    else
     
    21012126    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
    21022127    storep cfr, VM::topCallFrame[t1]
    2103     if ARM64 or ARM64E or C_LOOP
     2128    if ARM64 or ARM64E or C_LOOP or C_LOOP_WIN
    21042129        storep lr, ReturnPC[cfr]
    21052130    end
     
    21072132    loadp Callee[cfr], t1
    21082133    checkStackPointerAlignment(t3, 0xdead0001)
    2109     if C_LOOP
     2134    if C_LOOP or C_LOOP_WIN
    21102135        cloopCallNative offsetOfFunction[t1]
    21112136    else
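The getOperandWide16/getuOperandWide16 macros above fetch a 16-bit slot at byte offset `index * 2 + 1`, sign-extending (loadhsq) or zero-extending (loadh) it. A rough C++ equivalent, under the assumption that `pc` points at the wide prefix byte and that slot 0 is the (widened) opcode — hypothetical helpers, not JSC code:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Assumed layout for a wide16 instruction: one prefix byte (op_wide16),
// then `length` slots of 2 bytes each, with slot 0 holding the opcode.
// Slot `index` therefore lives at byte offset index * 2 + 1.
int64_t getOperandWide16(const uint8_t* pc, size_t index)
{
    int16_t v;
    std::memcpy(&v, pc + index * 2 + 1, sizeof(v));
    return static_cast<int64_t>(v); // sign-extend, like loadhsq
}

uint64_t getuOperandWide16(const uint8_t* pc, size_t index)
{
    uint16_t v;
    std::memcpy(&v, pc + index * 2 + 1, sizeof(v));
    return v; // zero-extend, like loadh
}
```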
  • trunk/Source/JavaScriptCore/offlineasm/arm.rb

    r239867 r245906  
    445445        when "loadb"
    446446            $asm.puts "ldrb #{armFlippedOperands(operands)}"
    447         when "loadbs", "loadbsp"
     447        when "loadbsi"
    448448            $asm.puts "ldrsb.w #{armFlippedOperands(operands)}"
    449449        when "storeb"
     
    451451        when "loadh"
    452452            $asm.puts "ldrh #{armFlippedOperands(operands)}"
    453         when "loadhs"
     453        when "loadhsi"
    454454            $asm.puts "ldrsh.w #{armFlippedOperands(operands)}"
    455455        when "storeh"
  • trunk/Source/JavaScriptCore/offlineasm/arm64.rb

    r245064 r245906  
    279279        if node.is_a? Instruction
    280280            case node.opcode
    281             when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbs", "loadh", "loadhs", "leap"
     281            when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbsi", "loadbsq", "loadh", "loadhsi", "loadhsq", "leap"
    282282                labelRef = node.operands[0]
    283283                if labelRef.is_a? LabelReference
     
    375375            | node, address |
    376376            case node.opcode
    377             when "loadb", "loadbs", "loadbsp", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
     377            when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
    378378                size = 1
    379             when "loadh", "loadhs"
     379            when "loadh", "loadhsi", "loadhsq"
    380380                size = 2
    381381            when "loadi", "loadis", "storei", "addi", "andi", "lshifti", "muli", "negi",
     
    710710        when "loadb"
    711711            emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
    712         when "loadbs"
     712        when "loadbsi"
    713713            emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
    714         when "loadbsp"
    715             emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :ptr)
     714        when "loadbsq"
     715            emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :quad)
    716716        when "storeb"
    717717            emitARM64Unflipped("strb", operands, :word)
    718718        when "loadh"
    719719            emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
    720         when "loadhs"
     720        when "loadhsi"
    721721            emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
     722        when "loadhsq"
     723            emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :quad)
    722724        when "storeh"
    723725            emitARM64Unflipped("strh", operands, :word)
  • trunk/Source/JavaScriptCore/offlineasm/asm.rb

    r237803 r245906  
    394394            # always by itself so this check to turn off $enableDebugAnnotations won't
    395395            # affect the generation for any other backend.
    396             if backend == "C_LOOP"
     396            if backend == "C_LOOP" || backend == "C_LOOP_WIN"
    397397                $enableDebugAnnotations = false
    398398            end
  • trunk/Source/JavaScriptCore/offlineasm/backends.rb

    r238439 r245906  
    4545     "ARM64E",
    4646     "MIPS",
    47      "C_LOOP"
     47     "C_LOOP",
     48     "C_LOOP_WIN"
    4849    ]
    4950
     
    6364     "ARM64E",
    6465     "MIPS",
    65      "C_LOOP"
     66     "C_LOOP",
     67     "C_LOOP_WIN"
    6668    ]
    6769
  • trunk/Source/JavaScriptCore/offlineasm/cloop.rb

    r242240 r245906  
    657657        when "loadb"
    658658            $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint8MemRef};"
    659         when "loadbs"
    660             $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int8MemRef});"
    661         when "loadbsp"
    662             $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].int8MemRef};"
     659        when "loadbsi"
     660            $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int8MemRef});"
     661        when "loadbsq"
     662            $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int8MemRef};"
    663663        when "storeb"
    664664            $asm.putc "#{operands[1].uint8MemRef} = #{operands[0].clValue(:int8)};"
    665665        when "loadh"
    666666            $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint16MemRef};"
    667         when "loadhs"
    668             $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int16MemRef});"
     667        when "loadhsi"
     668            $asm.putc "#{operands[1].clLValue(:uint32)} = (uint32_t)((int32_t)#{operands[0].int16MemRef});"
     669        when "loadhsq"
     670            $asm.putc "#{operands[1].clLValue(:uint64)} = (int64_t)#{operands[0].int16MemRef};"
    669671        when "storeh"
    670672            $asm.putc "*#{operands[1].uint16MemRef} = #{operands[0].clValue(:int16)};"
     
    11571159    end
    11581160
     1161    def lowerC_LOOP_WIN
     1162        lowerC_LOOP
     1163    end
     1164
    11591165    def recordMetaDataC_LOOP
    11601166        $asm.codeOrigin codeOriginString if $enableCodeOriginComments
  • trunk/Source/JavaScriptCore/offlineasm/instructions.rb

    r245064 r245906  
    5454     "loadis",
    5555     "loadb",
    56      "loadbs",
    57      "loadbsp",
     56     "loadbsi",
     57     "loadbsq",
    5858     "loadh",
    59      "loadhs",
     59     "loadhsi",
     60     "loadhsq",
    6061     "storei",
    6162     "storeb",
  • trunk/Source/JavaScriptCore/offlineasm/mips.rb

    r240432 r245906  
    881881        when "loadb"
    882882            $asm.puts "lbu #{mipsFlippedOperands(operands)}"
    883         when "loadbs", "loadbsp"
     883        when "loadbsi"
    884884            $asm.puts "lb #{mipsFlippedOperands(operands)}"
    885885        when "storeb"
     
    887887        when "loadh"
    888888            $asm.puts "lhu #{mipsFlippedOperands(operands)}"
    889         when "loadhs"
     889        when "loadhsi"
    890890            $asm.puts "lh #{mipsFlippedOperands(operands)}"
    891891        when "storeh"
  • trunk/Source/JavaScriptCore/offlineasm/x86.rb

    r245064 r245906  
    940940                $asm.puts "movzx #{x86LoadOperands(:byte, :int)}"
    941941            end
    942         when "loadbs"
     942        when "loadbsi"
    943943            if !isIntelSyntax
    944944                $asm.puts "movsbl #{x86LoadOperands(:byte, :int)}"
     
    946946                $asm.puts "movsx #{x86LoadOperands(:byte, :int)}"
    947947            end
    948         when "loadbsp"
     948        when "loadbsq"
    949949            if !isIntelSyntax
    950                 $asm.puts "movsb#{x86Suffix(:ptr)} #{x86LoadOperands(:byte, :ptr)}"
    951             else
    952                 $asm.puts "movsx #{x86LoadOperands(:byte, :ptr)}"
     950                $asm.puts "movsbq #{x86LoadOperands(:byte, :quad)}"
     951            else
     952                $asm.puts "movsx #{x86LoadOperands(:byte, :quad)}"
    953953            end
    954954        when "loadh"
     
    958958                $asm.puts "movzx #{x86LoadOperands(:half, :int)}"
    959959            end
    960         when "loadhs"
     960        when "loadhsi"
    961961            if !isIntelSyntax
    962962                $asm.puts "movswl #{x86LoadOperands(:half, :int)}"
    963963            else
    964964                $asm.puts "movsx #{x86LoadOperands(:half, :int)}"
     965            end
     966        when "loadhsq"
     967            if !isIntelSyntax
     968                $asm.puts "movswq #{x86LoadOperands(:half, :quad)}"
     969            else
     970                $asm.puts "movsx #{x86LoadOperands(:half, :quad)}"
    965971            end
    966972        when "storeb"
  • trunk/Source/JavaScriptCore/parser/ResultType.h

    r238778 r245906  
    195195        OperandTypes(ResultType first = ResultType::unknownType(), ResultType second = ResultType::unknownType())
    196196        {
    197             // We have to initialize one of the int to ensure that
    198             // the entire struct is initialized.
    199             m_u.i = 0;
    200             m_u.rds.first = first.m_bits;
    201             m_u.rds.second = second.m_bits;
    202         }
    203        
    204         union {
    205             struct {
    206                 ResultType::Type first;
    207                 ResultType::Type second;
    208             } rds;
    209             int i;
    210         } m_u;
     197            m_first = first.m_bits;
     198            m_second = second.m_bits;
     199        }
     200       
     201        ResultType::Type m_first;
     202        ResultType::Type m_second;
    211203
    212204        ResultType first() const
    213205        {
    214             return ResultType(m_u.rds.first);
     206            return ResultType(m_first);
    215207        }
    216208
    217209        ResultType second() const
    218210        {
    219             return ResultType(m_u.rds.second);
    220         }
    221 
    222         int toInt()
    223         {
    224             return m_u.i;
    225         }
    226         static OperandTypes fromInt(int value)
    227         {
    228             OperandTypes types;
    229             types.m_u.i = value;
    230             return types;
     211            return ResultType(m_second);
     212        }
     213
     214        uint16_t bits()
     215        {
     216            static_assert(sizeof(OperandTypes) == sizeof(uint16_t));
     217            return bitwise_cast<uint16_t>(*this);
     218        }
     219
     220        static OperandTypes fromBits(uint16_t bits)
     221        {
     222            return bitwise_cast<OperandTypes>(bits);
    231223        }
    232224