Changeset 237173 in WebKit


Timestamp: Oct 16, 2018, 12:19:13 AM
Author: keith_miller@apple.com
Message:

Support arm64 CPUs with a 32-bit address space
https://bugs.webkit.org/show_bug.cgi?id=190273

Reviewed by Michael Saboff.

Source/JavaScriptCore:

This patch adds support for arm64_32 in the LLInt. In order to
make this work, we needed to add a new type that reflects the size
of a CPU register. This type is called CPURegister, or UCPURegister
for the unsigned version. Most places that used void* or intptr_t
to refer to a register have been changed to use this new type.
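
For reference, the new aliases as they appear in the assembler/CPU.h hunk below; arm64_32 keeps 64-bit general-purpose registers while pointers are 32-bit, so a register can be wider than intptr_t:

    // Width of a general-purpose register, which can differ from the
    // pointer width (e.g. on arm64_32).
    #if USE(JSVALUE64)
    using CPURegister = int64_t;
    using UCPURegister = uint64_t;
    #else
    using CPURegister = int32_t;
    using UCPURegister = uint32_t;
    #endif

    // Address width is queried independently of the value representation.
    constexpr bool isAddress64Bit()
    {
        return sizeof(void*) == 8;
    }

    constexpr bool isAddress32Bit()
    {
        return !isAddress64Bit();
    }

Register-sized buffers and locals that previously used intptr_t/uintptr_t (for example the callee-save copying in DFGOSRExit.cpp and the calleeSaveRegistersBuffer in VMEntryRecord.h) now use CPURegister/UCPURegister.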

  • JavaScriptCore.xcodeproj/project.pbxproj:
  • assembler/ARM64Assembler.h:

(JSC::isInt):
(JSC::is4ByteAligned):
(JSC::PairPostIndex::PairPostIndex):
(JSC::PairPreIndex::PairPreIndex):
(JSC::ARM64Assembler::readPointer):
(JSC::ARM64Assembler::readCallTarget):
(JSC::ARM64Assembler::computeJumpType):
(JSC::ARM64Assembler::linkCompareAndBranch):
(JSC::ARM64Assembler::linkConditionalBranch):
(JSC::ARM64Assembler::linkTestAndBranch):
(JSC::ARM64Assembler::loadRegisterLiteral):
(JSC::ARM64Assembler::loadStoreRegisterPairPostIndex):
(JSC::ARM64Assembler::loadStoreRegisterPairPreIndex):
(JSC::ARM64Assembler::loadStoreRegisterPairOffset):
(JSC::ARM64Assembler::loadStoreRegisterPairNonTemporal):
(JSC::isInt7): Deleted.
(JSC::isInt11): Deleted.

  • assembler/CPU.h:

(JSC::isAddress64Bit):
(JSC::isAddress32Bit):

  • assembler/MacroAssembler.h:

(JSC::MacroAssembler::shouldBlind):

  • assembler/MacroAssemblerARM64.cpp:

(JSC::MacroAssemblerARM64::collectCPUFeatures):

  • assembler/MacroAssemblerARM64.h:

(JSC::MacroAssemblerARM64::load):
(JSC::MacroAssemblerARM64::store):
(JSC::MacroAssemblerARM64::isInIntRange): Deleted.

  • assembler/Printer.h:
  • assembler/ProbeContext.h:

(JSC::Probe::CPUState::gpr):
(JSC::Probe::CPUState::spr):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):

  • b3/B3ConstPtrValue.h:
  • b3/B3StackmapSpecial.cpp:

(JSC::B3::StackmapSpecial::isArgValidForRep):

  • b3/air/AirArg.h:

(JSC::B3::Air::Arg::stackSlot const):
(JSC::B3::Air::Arg::special const):

  • b3/air/testair.cpp:
  • b3/testb3.cpp:

(JSC::B3::testStoreConstantPtr):
(JSC::B3::testInterpreter):
(JSC::B3::testAddShl32):
(JSC::B3::testLoadBaseIndexShift32):

  • bindings/ScriptFunctionCall.cpp:

(Deprecated::ScriptCallArgumentHandler::appendArgument):

  • bindings/ScriptFunctionCall.h:
  • bytecode/CodeBlock.cpp:

(JSC::roundCalleeSaveSpaceAsVirtualRegisters):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::restoreCalleeSavesFor):
(JSC::DFG::saveCalleeSavesFor):
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):

  • dfg/DFGOSRExitCompilerCommon.cpp:

(JSC::DFG::reifyInlinedCallFrames):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::compile):

  • disassembler/UDis86Disassembler.cpp:

(JSC::tryToDisassembleWithUDis86):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::compileWeakMapGet):

  • heap/MachineStackMarker.cpp:

(JSC::copyMemory):

  • interpreter/CallFrame.h:

(JSC::ExecState::returnPC const):
(JSC::ExecState::hasReturnPC const):
(JSC::ExecState::clearReturnPC):
(JSC::ExecState::returnPCOffset):
(JSC::ExecState::isGlobalExec const):
(JSC::ExecState::setReturnPC):

  • interpreter/CalleeBits.h:

(JSC::CalleeBits::boxWasm):
(JSC::CalleeBits::isWasm const):
(JSC::CalleeBits::asWasmCallee const):

  • interpreter/Interpreter.cpp:

(JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):

  • interpreter/VMEntryRecord.h:
  • jit/AssemblyHelpers.h:

(JSC::AssemblyHelpers::clearStackFrame):

  • jit/RegisterAtOffset.h:

(JSC::RegisterAtOffset::offsetAsIndex const):

  • jit/RegisterAtOffsetList.cpp:

(JSC::RegisterAtOffsetList::RegisterAtOffsetList):

  • llint/LLIntData.cpp:

(JSC::LLInt::Data::performAssertions):

  • llint/LLIntOfflineAsmConfig.h:
  • llint/LowLevelInterpreter.asm:
  • llint/LowLevelInterpreter64.asm:
  • offlineasm/arm64.rb:
  • offlineasm/asm.rb:
  • offlineasm/ast.rb:
  • offlineasm/backends.rb:
  • offlineasm/parser.rb:
  • offlineasm/x86.rb:
  • runtime/BasicBlockLocation.cpp:

(JSC::BasicBlockLocation::dumpData const):
(JSC::BasicBlockLocation::emitExecuteCode const):

  • runtime/BasicBlockLocation.h:
  • runtime/HasOwnPropertyCache.h:
  • runtime/JSBigInt.cpp:

(JSC::JSBigInt::inplaceMultiplyAdd):
(JSC::JSBigInt::digitDiv):

  • runtime/JSBigInt.h:
  • runtime/JSObject.h:
  • runtime/Options.cpp:

(JSC::jitEnabledByDefault):

  • runtime/Options.h:
  • runtime/RegExp.cpp:

(JSC::RegExp::printTraceData):

  • runtime/SamplingProfiler.cpp:

(JSC::CFrameWalker::walk):

  • runtime/SlowPathReturnType.h:

(JSC::encodeResult):
(JSC::decodeResult):

  • tools/SigillCrashAnalyzer.cpp:

(JSC::SigillCrashAnalyzer::dumpCodeBlock):

Source/WebCore:

Fix missing namespace annotation.

  • cssjit/SelectorCompiler.cpp:

(WebCore::SelectorCompiler::SelectorCodeGenerator::generateAddStyleRelation):

Source/WTF:

Use WTF_CPU_ADDRESS64/32 to decide if the system is running on arm64_32.
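
As an example of how the address-width macros are consumed, the llint/LLIntOfflineAsmConfig.h hunk further down exposes the check to offlineasm (CPU(ADDRESS64) being the usual Platform.h test of WTF_CPU_ADDRESS64):

    #if CPU(ADDRESS64)
    #define OFFLINE_ASM_ADDRESS64 1
    #else
    // arm64_32: 64-bit LLInt backend, but 32-bit address handling.
    #define OFFLINE_ASM_ADDRESS64 0
    #endif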

  • wtf/MathExtras.h:

(getLSBSet):

  • wtf/Platform.h:
Location: trunk/Source
Files: 57 edited

  • trunk/Source/JavaScriptCore/ChangeLog

    r237170 r237173  
     12018-10-15  Keith Miller  <keith_miller@apple.com>
     2
     3        Support arm64 CPUs with a 32-bit address space
     4        https://bugs.webkit.org/show_bug.cgi?id=190273
     5
     6        Reviewed by Michael Saboff.
     7
     8        This patch adds support for arm64_32 in the LLInt. In order to
     9        make this work we needed to add a new type that reflects the size
     10        of a cpu register. This type is called CPURegister or UCPURegister
     11        for the unsigned version. Most places that used void* or intptr_t
     12        to refer to a register have been changed to use this new type.
     13
     14        * JavaScriptCore.xcodeproj/project.pbxproj:
     15        * assembler/ARM64Assembler.h:
     16        (JSC::isInt):
     17        (JSC::is4ByteAligned):
     18        (JSC::PairPostIndex::PairPostIndex):
     19        (JSC::PairPreIndex::PairPreIndex):
     20        (JSC::ARM64Assembler::readPointer):
     21        (JSC::ARM64Assembler::readCallTarget):
     22        (JSC::ARM64Assembler::computeJumpType):
     23        (JSC::ARM64Assembler::linkCompareAndBranch):
     24        (JSC::ARM64Assembler::linkConditionalBranch):
     25        (JSC::ARM64Assembler::linkTestAndBranch):
     26        (JSC::ARM64Assembler::loadRegisterLiteral):
     27        (JSC::ARM64Assembler::loadStoreRegisterPairPostIndex):
     28        (JSC::ARM64Assembler::loadStoreRegisterPairPreIndex):
     29        (JSC::ARM64Assembler::loadStoreRegisterPairOffset):
     30        (JSC::ARM64Assembler::loadStoreRegisterPairNonTemporal):
     31        (JSC::isInt7): Deleted.
     32        (JSC::isInt11): Deleted.
     33        * assembler/CPU.h:
     34        (JSC::isAddress64Bit):
     35        (JSC::isAddress32Bit):
     36        * assembler/MacroAssembler.h:
     37        (JSC::MacroAssembler::shouldBlind):
     38        * assembler/MacroAssemblerARM64.cpp:
     39        (JSC::MacroAssemblerARM64::collectCPUFeatures):
     40        * assembler/MacroAssemblerARM64.h:
     41        (JSC::MacroAssemblerARM64::load):
     42        (JSC::MacroAssemblerARM64::store):
     43        (JSC::MacroAssemblerARM64::isInIntRange): Deleted.
     44        * assembler/Printer.h:
     45        * assembler/ProbeContext.h:
     46        (JSC::Probe::CPUState::gpr):
     47        (JSC::Probe::CPUState::spr):
     48        (JSC::Probe::Context::gpr):
     49        (JSC::Probe::Context::spr):
     50        * b3/B3ConstPtrValue.h:
     51        * b3/B3StackmapSpecial.cpp:
     52        (JSC::B3::StackmapSpecial::isArgValidForRep):
     53        * b3/air/AirArg.h:
     54        (JSC::B3::Air::Arg::stackSlot const):
     55        (JSC::B3::Air::Arg::special const):
     56        * b3/air/testair.cpp:
     57        * b3/testb3.cpp:
     58        (JSC::B3::testStoreConstantPtr):
     59        (JSC::B3::testInterpreter):
     60        (JSC::B3::testAddShl32):
     61        (JSC::B3::testLoadBaseIndexShift32):
     62        * bindings/ScriptFunctionCall.cpp:
     63        (Deprecated::ScriptCallArgumentHandler::appendArgument):
     64        * bindings/ScriptFunctionCall.h:
     65        * bytecode/CodeBlock.cpp:
     66        (JSC::roundCalleeSaveSpaceAsVirtualRegisters):
     67        * dfg/DFGOSRExit.cpp:
     68        (JSC::DFG::restoreCalleeSavesFor):
     69        (JSC::DFG::saveCalleeSavesFor):
     70        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
     71        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
     72        * dfg/DFGOSRExitCompilerCommon.cpp:
     73        (JSC::DFG::reifyInlinedCallFrames):
     74        * dfg/DFGSpeculativeJIT64.cpp:
     75        (JSC::DFG::SpeculativeJIT::compile):
     76        * disassembler/UDis86Disassembler.cpp:
     77        (JSC::tryToDisassembleWithUDis86):
     78        * ftl/FTLLowerDFGToB3.cpp:
     79        (JSC::FTL::DFG::LowerDFGToB3::compileWeakMapGet):
     80        * heap/MachineStackMarker.cpp:
     81        (JSC::copyMemory):
     82        * interpreter/CallFrame.h:
     83        (JSC::ExecState::returnPC const):
     84        (JSC::ExecState::hasReturnPC const):
     85        (JSC::ExecState::clearReturnPC):
     86        (JSC::ExecState::returnPCOffset):
     87        (JSC::ExecState::isGlobalExec const):
     88        (JSC::ExecState::setReturnPC):
     89        * interpreter/CalleeBits.h:
     90        (JSC::CalleeBits::boxWasm):
     91        (JSC::CalleeBits::isWasm const):
     92        (JSC::CalleeBits::asWasmCallee const):
     93        * interpreter/Interpreter.cpp:
     94        (JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):
     95        * interpreter/VMEntryRecord.h:
     96        * jit/AssemblyHelpers.h:
     97        (JSC::AssemblyHelpers::clearStackFrame):
     98        * jit/RegisterAtOffset.h:
     99        (JSC::RegisterAtOffset::offsetAsIndex const):
     100        * jit/RegisterAtOffsetList.cpp:
     101        (JSC::RegisterAtOffsetList::RegisterAtOffsetList):
     102        * llint/LLIntData.cpp:
     103        (JSC::LLInt::Data::performAssertions):
     104        * llint/LLIntOfflineAsmConfig.h:
     105        * llint/LowLevelInterpreter.asm:
     106        * llint/LowLevelInterpreter64.asm:
     107        * offlineasm/arm64.rb:
     108        * offlineasm/asm.rb:
     109        * offlineasm/ast.rb:
     110        * offlineasm/backends.rb:
     111        * offlineasm/parser.rb:
     112        * offlineasm/x86.rb:
     113        * runtime/BasicBlockLocation.cpp:
     114        (JSC::BasicBlockLocation::dumpData const):
     115        (JSC::BasicBlockLocation::emitExecuteCode const):
     116        * runtime/BasicBlockLocation.h:
     117        * runtime/HasOwnPropertyCache.h:
     118        * runtime/JSBigInt.cpp:
     119        (JSC::JSBigInt::inplaceMultiplyAdd):
     120        (JSC::JSBigInt::digitDiv):
     121        * runtime/JSBigInt.h:
     122        * runtime/JSObject.h:
     123        * runtime/Options.cpp:
     124        (JSC::jitEnabledByDefault):
     125        * runtime/Options.h:
     126        * runtime/RegExp.cpp:
     127        (JSC::RegExp::printTraceData):
     128        * runtime/SamplingProfiler.cpp:
     129        (JSC::CFrameWalker::walk):
     130        * runtime/SlowPathReturnType.h:
     131        (JSC::encodeResult):
     132        (JSC::decodeResult):
     133        * tools/SigillCrashAnalyzer.cpp:
     134        (JSC::SigillCrashAnalyzer::dumpCodeBlock):
     135
    11362018-10-15  Justin Fan  <justin_fan@apple.com>
    2137
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r237147 r237173  
    1014310143                        runOnlyForDeploymentPostprocessing = 0;
    1014410144                        shellPath = /bin/sh;
    10145                         shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n    exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1";
     10145                        shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n    exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1\n";
    1014610146                };
    1014710147                65FB3F6509D11E9100F49DEB /* Generate Derived Sources */ = {
  • trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h

    r237136 r237173  
    3030#include "AssemblerBuffer.h"
    3131#include "AssemblerCommon.h"
     32#include "CPU.h"
    3233#include "JSCPtrTag.h"
    3334#include <limits.h>
     
    5556namespace JSC {
    5657
    57 ALWAYS_INLINE bool isInt7(int32_t value)
     58template<size_t bits, typename Type>
     59ALWAYS_INLINE constexpr bool isInt(Type t)
    5860{
    59     return value == ((value << 25) >> 25);
     61    constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
     62    static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
     63    return ((t << shift) >> shift) == t;
    6064}
    6165
    62 ALWAYS_INLINE bool isInt11(int32_t value)
     66static ALWAYS_INLINE bool is4ByteAligned(const void* ptr)
    6367{
    64     return value == ((value << 21) >> 21);
     68    return !(reinterpret_cast<intptr_t>(ptr) & 0x3);
    6569}
    6670
     
    131135        : m_value(value)
    132136    {
    133         ASSERT(isInt11(value));
     137        ASSERT(isInt<11>(value));
    134138    }
    135139
     
    145149        : m_value(value)
    146150    {
    147         ASSERT(isInt11(value));
     151        ASSERT(isInt<11>(value));
    148152    }
    149153
     
    461465        union {
    462466            struct RealTypes {
    463                 intptr_t m_from : 48;
    464                 intptr_t m_to : 48;
     467                int64_t m_from;
     468                int64_t m_to;
    465469                JumpType m_type : 8;
    466470                JumpLinkType m_linkType : 8;
     
    28062810        result |= static_cast<uintptr_t>(imm16) << 16;
    28072811
     2812#if CPU(ADDRESS64)
    28082813        expected = disassembleMoveWideImediate(address + 2, sf, opc, hw, imm16, rd);
    28092814        ASSERT_UNUSED(expected, expected && sf && opc == MoveWideOp_K && hw == 2 && rd == rdFirst);
    28102815        result |= static_cast<uintptr_t>(imm16) << 32;
     2816#endif
    28112817
    28122818        return reinterpret_cast<void*>(result);
     
    28152821    static void* readCallTarget(void* from)
    28162822    {
    2817         return readPointer(reinterpret_cast<int*>(from) - 4);
     2823        return readPointer(reinterpret_cast<int*>(from) - (isAddress64Bit() ? 4 : 3));
    28182824    }
    28192825
     
    29322938            return LinkJumpNoCondition;
    29332939        case JumpCondition: {
    2934             ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
    2935             ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
     2940            ASSERT(is4ByteAligned(from));
     2941            ASSERT(is4ByteAligned(to));
    29362942            intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
    29372943
    2938             if (((relative << 43) >> 43) == relative)
     2944            if (isInt<21>(relative))
    29392945                return LinkJumpConditionDirect;
    29402946
     
    29422948            }
    29432949        case JumpCompareAndBranch:  {
    2944             ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
    2945             ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
     2950            ASSERT(is4ByteAligned(from));
     2951            ASSERT(is4ByteAligned(to));
    29462952            intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
    29472953
    2948             if (((relative << 43) >> 43) == relative)
     2954            if (isInt<21>(relative))
    29492955                return LinkJumpCompareAndBranchDirect;
    29502956
     
    29522958        }
    29532959        case JumpTestBit:   {
    2954             ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
    2955             ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
     2960            ASSERT(is4ByteAligned(from));
     2961            ASSERT(is4ByteAligned(to));
    29562962            intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
    29572963
    2958             if (((relative << 50) >> 50) == relative)
     2964            if (isInt<14>(relative))
    29592965                return LinkJumpTestBitDirect;
    29602966
     
    30743080        ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
    30753081        intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
    3076         ASSERT(((offset << 38) >> 38) == offset);
    3077 
    3078         bool useDirect = ((offset << 45) >> 45) == offset; // Fits in 19 bits
     3082        ASSERT(isInt<26>(offset));
     3083
     3084        bool useDirect = isInt<19>(offset);
    30793085        ASSERT(!isDirect || useDirect);
    30803086
     
    31023108        ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
    31033109        intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
    3104         ASSERT(((offset << 38) >> 38) == offset);
    3105 
    3106         bool useDirect = ((offset << 45) >> 45) == offset; // Fits in 19 bits
     3110        ASSERT(isInt<26>(offset));
     3111
     3112        bool useDirect = isInt<19>(offset);
    31073113        ASSERT(!isDirect || useDirect);
    31083114
     
    31313137        intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
    31323138        ASSERT(static_cast<int>(offset) == offset);
    3133         ASSERT(((offset << 38) >> 38) == offset);
    3134 
    3135         bool useDirect = ((offset << 50) >> 50) == offset; // Fits in 14 bits
     3139        ASSERT(isInt<26>(offset));
     3140
     3141        bool useDirect = isInt<14>(offset);
    31363142        ASSERT(!isDirect || useDirect);
    31373143
     
    35123518    ALWAYS_INLINE static int loadRegisterLiteral(LdrLiteralOp opc, bool V, int imm19, FPRegisterID rt)
    35133519    {
    3514         ASSERT(((imm19 << 13) >> 13) == imm19);
     3520        ASSERT(isInt<19>(imm19));
    35153521        return (0x18000000 | opc << 30 | V << 26 | (imm19 & 0x7ffff) << 5 | rt);
    35163522    }
     
    35433549        unsigned immedShiftAmount = memPairOffsetShift(V, size);
    35443550        int imm7 = immediate >> immedShiftAmount;
    3545         ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
     3551        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
    35463552        return (0x28800000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
    35473553    }
     
    35743580        unsigned immedShiftAmount = memPairOffsetShift(V, size);
    35753581        int imm7 = immediate >> immedShiftAmount;
    3576         ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
     3582        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
    35773583        return (0x29800000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
    35783584    }
     
    35913597        unsigned immedShiftAmount = memPairOffsetShift(V, size);
    35923598        int imm7 = immediate >> immedShiftAmount;
    3593         ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
     3599        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
    35943600        return (0x29000000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
    35953601    }
     
    36083614        unsigned immedShiftAmount = memPairOffsetShift(V, size);
    36093615        int imm7 = immediate >> immedShiftAmount;
    3610         ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
     3616        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
    36113617        return (0x28000000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
    36123618    }
  • trunk/Source/JavaScriptCore/assembler/CPU.h

    r218137 r237173  
    2929
    3030namespace JSC {
     31
     32#if USE(JSVALUE64)
     33using CPURegister = int64_t;
     34using UCPURegister = uint64_t;
     35#else
     36using CPURegister = int32_t;
     37using UCPURegister = uint32_t;
     38#endif
    3139
    3240constexpr bool isARMv7IDIVSupported()
     
    8088}
    8189
     90constexpr bool isAddress64Bit()
     91{
     92    return sizeof(void*) == 8;
     93}
     94
     95constexpr bool isAddress32Bit()
     96{
     97    return !isAddress64Bit();
     98}
     99
    82100constexpr bool isMIPS()
    83101{
  • trunk/Source/JavaScriptCore/assembler/MacroAssembler.h

    r235517 r237173  
    12731273        // First off we'll special case common, "safe" values to avoid hurting
    12741274        // performance too much
    1275         uintptr_t value = imm.asTrustedImmPtr().asIntptr();
     1275        uint64_t value = imm.asTrustedImmPtr().asIntptr();
    12761276        switch (value) {
    12771277        case 0xffff:
     
    12941294            return false;
    12951295
    1296         return shouldBlindPointerForSpecificArch(value);
     1296        return shouldBlindPointerForSpecificArch(static_cast<uintptr_t>(value));
    12971297    }
    12981298
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.cpp

    r237136 r237173  
    4949// The following are offsets for Probe::State fields accessed
    5050// by the ctiMasmProbeTrampoline stub.
     51#if CPU(ADDRESS64)
    5152#define PTR_SIZE 8
     53#else
     54#define PTR_SIZE 4
     55#endif
     56
    5257#define PROBE_PROBE_FUNCTION_OFFSET (0 * PTR_SIZE)
    5358#define PROBE_ARG_OFFSET (1 * PTR_SIZE)
     
    132137#define PROBE_SIZE (PROBE_FIRST_FPREG_OFFSET + (32 * FPREG_SIZE))
    133138
    134 #define SAVED_PROBE_RETURN_PC_OFFSET        (PROBE_SIZE + (0 * PTR_SIZE))
    135 #define PROBE_SIZE_PLUS_EXTRAS              (PROBE_SIZE + (3 * PTR_SIZE))
     139#define SAVED_PROBE_RETURN_PC_OFFSET        (PROBE_SIZE + (0 * GPREG_SIZE))
     140#define PROBE_SIZE_PLUS_EXTRAS              (PROBE_SIZE + (3 * GPREG_SIZE))
    136141
    137142// These ASSERTs remind you that if you change the layout of Probe::State,
     
    222227
    223228// Conditions for using ldp and stp.
    224 static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + PTR_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
     229static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + GPREG_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
    225230static_assert(!(PROBE_SIZE_PLUS_EXTRAS & 0xf), "PROBE_SIZE_PLUS_EXTRAS should be 16 byte aligned"); // the Probe::State copying code relies on this.
    226231
     
    230235
    231236struct IncomingProbeRecord {
    232     uintptr_t x24;
    233     uintptr_t x25;
    234     uintptr_t x26;
    235     uintptr_t x27;
    236     uintptr_t x28;
    237     uintptr_t x30; // lr
     237    UCPURegister x24;
     238    UCPURegister x25;
     239    UCPURegister x26;
     240    UCPURegister x27;
     241    UCPURegister x28;
     242    UCPURegister x30; // lr
    238243};
    239244
    240 #define IN_X24_OFFSET (0 * PTR_SIZE)
    241 #define IN_X25_OFFSET (1 * PTR_SIZE)
    242 #define IN_X26_OFFSET (2 * PTR_SIZE)
    243 #define IN_X27_OFFSET (3 * PTR_SIZE)
    244 #define IN_X28_OFFSET (4 * PTR_SIZE)
    245 #define IN_X30_OFFSET (5 * PTR_SIZE)
    246 #define IN_SIZE       (6 * PTR_SIZE)
     245#define IN_X24_OFFSET (0 * GPREG_SIZE)
     246#define IN_X25_OFFSET (1 * GPREG_SIZE)
     247#define IN_X26_OFFSET (2 * GPREG_SIZE)
     248#define IN_X27_OFFSET (3 * GPREG_SIZE)
     249#define IN_X28_OFFSET (4 * GPREG_SIZE)
     250#define IN_X30_OFFSET (5 * GPREG_SIZE)
     251#define IN_SIZE       (6 * GPREG_SIZE)
    247252
    248253static_assert(IN_X24_OFFSET == offsetof(IncomingProbeRecord, x24), "IN_X24_OFFSET is incorrect");
     
    256261
    257262struct OutgoingProbeRecord {
    258     uintptr_t nzcv;
    259     uintptr_t fpsr;
    260     uintptr_t x27;
    261     uintptr_t x28;
    262     uintptr_t fp;
    263     uintptr_t lr;
     263    UCPURegister nzcv;
     264    UCPURegister fpsr;
     265    UCPURegister x27;
     266    UCPURegister x28;
     267    UCPURegister fp;
     268    UCPURegister lr;
    264269};
    265270
    266 #define OUT_NZCV_OFFSET (0 * PTR_SIZE)
    267 #define OUT_FPSR_OFFSET (1 * PTR_SIZE)
    268 #define OUT_X27_OFFSET  (2 * PTR_SIZE)
    269 #define OUT_X28_OFFSET  (3 * PTR_SIZE)
    270 #define OUT_FP_OFFSET   (4 * PTR_SIZE)
    271 #define OUT_LR_OFFSET   (5 * PTR_SIZE)
    272 #define OUT_SIZE        (6 * PTR_SIZE)
     271#define OUT_NZCV_OFFSET (0 * GPREG_SIZE)
     272#define OUT_FPSR_OFFSET (1 * GPREG_SIZE)
     273#define OUT_X27_OFFSET  (2 * GPREG_SIZE)
     274#define OUT_X28_OFFSET  (3 * GPREG_SIZE)
     275#define OUT_FP_OFFSET   (4 * GPREG_SIZE)
     276#define OUT_LR_OFFSET   (5 * GPREG_SIZE)
     277#define OUT_SIZE        (6 * GPREG_SIZE)
    273278
    274279static_assert(OUT_NZCV_OFFSET == offsetof(OutgoingProbeRecord, nzcv), "OUT_NZCV_OFFSET is incorrect");
     
    282287
    283288struct LRRestorationRecord {
    284     uintptr_t lr;
    285     uintptr_t unusedDummyToEnsureSizeIs16ByteAligned;
     289    UCPURegister lr;
     290    UCPURegister unusedDummyToEnsureSizeIs16ByteAligned;
    286291};
    287292
    288 #define LR_RESTORATION_LR_OFFSET (0 * PTR_SIZE)
    289 #define LR_RESTORATION_SIZE      (2 * PTR_SIZE)
     293#define LR_RESTORATION_LR_OFFSET (0 * GPREG_SIZE)
     294#define LR_RESTORATION_SIZE      (2 * GPREG_SIZE)
    290295
    291296static_assert(LR_RESTORATION_LR_OFFSET == offsetof(LRRestorationRecord, lr), "LR_RESTORATION_LR_OFFSET is incorrect");
     
    349354    "str       x30, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n" // Save a duplicate copy of return pc (in lr).
    350355
    351     "add       x30, x30, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
     356    "add       x30, x30, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
    352357    "str       x30, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
    353358
     
    473478    "ldr       x27, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n"
    474479    "ldr       x28, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
    475     "add       x27, x27, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
     480    "add       x27, x27, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
    476481    "cmp       x27, x28" "\n"
    477482    "bne     " LOCAL_LABEL_STRING(ctiMasmProbeTrampolineEnd) "\n"
     
    503508
    504509    // Restore the remaining registers and pop the OutgoingProbeRecord.
    505     "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
     510    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
    506511    "msr       nzcv, x27" "\n"
    507512    "msr       fpsr, x28" "\n"
    508     "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
    509     "ldp       x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
     513    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     514    "ldp       x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
    510515    "ret" "\n"
    511516);
     
    545550        // that feature, the kernel does not tell it to users.), it is a stable approach.
    546551        // https://www.kernel.org/doc/Documentation/arm64/elf_hwcaps.txt
    547         unsigned long hwcaps = getauxval(AT_HWCAP);
     552        uint64_t hwcaps = getauxval(AT_HWCAP);
    548553
    549554#if !defined(HWCAP_JSCVT)
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h

    r237136 r237173  
    5454    static const ARM64Registers::FPRegisterID fpTempRegister = ARM64Registers::q31;
    5555    static const Assembler::SetFlags S = Assembler::S;
    56     static const intptr_t maskHalfWord0 = 0xffffl;
    57     static const intptr_t maskHalfWord1 = 0xffff0000l;
    58     static const intptr_t maskUpperWord = 0xffffffff00000000l;
     56    static const int64_t maskHalfWord0 = 0xffffl;
     57    static const int64_t maskHalfWord1 = 0xffff0000l;
     58    static const int64_t maskUpperWord = 0xffffffff00000000l;
    5959
    6060    static constexpr size_t INSTRUCTION_SIZE = 4;
     
    40084008        RELEASE_ASSERT(m_allowScratchRegister);
    40094009        return m_cachedMemoryTempRegister;
    4010     }
    4011 
    4012     ALWAYS_INLINE bool isInIntRange(intptr_t value)
    4013     {
    4014         return value == ((value << 32) >> 32);
    40154010    }
    40164011
     
    41494144                cachedMemoryTempRegister().invalidate();
    41504145
    4151             if (isInIntRange(addressDelta)) {
     4146            if (isInt<32>(addressDelta)) {
    41524147                if (Assembler::canEncodeSImmOffset(addressDelta)) {
    41534148                    m_assembler.ldur<datasize>(dest,  memoryTempRegister, addressDelta);
     
    41864181            intptr_t addressDelta = addressAsInt - currentRegisterContents;
    41874182
    4188             if (isInIntRange(addressDelta)) {
     4183            if (isInt<32>(addressDelta)) {
    41894184                if (Assembler::canEncodeSImmOffset(addressDelta)) {
    41904185                    m_assembler.stur<datasize>(src, memoryTempRegister, addressDelta);
  • trunk/Source/JavaScriptCore/assembler/Printer.h

    r220958 r237173  
    8080    const void* pointer;
    8181#if USE(JSVALUE64)
    82     uintptr_t buffer[4];
     82    UCPURegister buffer[4];
    8383#elif USE(JSVALUE32_64)
    84     uintptr_t buffer[6];
     84    UCPURegister buffer[6];
    8585#endif
    8686};
  • trunk/Source/JavaScriptCore/assembler/ProbeContext.h

    r231565 r237173  
    4242    static inline const char* sprName(SPRegisterID id) { return MacroAssembler::sprName(id); }
    4343    static inline const char* fprName(FPRegisterID id) { return MacroAssembler::fprName(id); }
    44     inline uintptr_t& gpr(RegisterID);
    45     inline uintptr_t& spr(SPRegisterID);
     44    inline UCPURegister& gpr(RegisterID);
     45    inline UCPURegister& spr(SPRegisterID);
    4646    inline double& fpr(FPRegisterID);
    4747
     
    5757    template<typename T> T sp() const;
    5858
    59     uintptr_t gprs[MacroAssembler::numberOfRegisters()];
    60     uintptr_t sprs[MacroAssembler::numberOfSPRegisters()];
     59    UCPURegister gprs[MacroAssembler::numberOfRegisters()];
     60    UCPURegister sprs[MacroAssembler::numberOfSPRegisters()];
    6161    double fprs[MacroAssembler::numberOfFPRegisters()];
    6262};
    6363
    64 inline uintptr_t& CPUState::gpr(RegisterID id)
     64inline UCPURegister& CPUState::gpr(RegisterID id)
    6565{
    6666    ASSERT(id >= MacroAssembler::firstRegister() && id <= MacroAssembler::lastRegister());
     
    6868}
    6969
    70 inline uintptr_t& CPUState::spr(SPRegisterID id)
     70inline UCPURegister& CPUState::spr(SPRegisterID id)
    7171{
    7272    ASSERT(id >= MacroAssembler::firstSPRegister() && id <= MacroAssembler::lastSPRegister());
     
    199199    T arg() { return reinterpret_cast<T>(m_state->arg); }
    200200
    201     uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
    202     uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
     201    UCPURegister& gpr(RegisterID id) { return cpu.gpr(id); }
     202    UCPURegister& spr(SPRegisterID id) { return cpu.spr(id); }
    203203    double& fpr(FPRegisterID id) { return cpu.fpr(id); }
    204204    const char* gprName(RegisterID id) { return cpu.gprName(id); }
  • trunk/Source/JavaScriptCore/b3/B3ConstPtrValue.h

    r206525 r237173  
    3737// Const64Value depending on platform.
    3838
    39 #if USE(JSVALUE64)
     39#if CPU(ADDRESS64)
    4040typedef Const64Value ConstPtrValueBase;
    4141#else
  • trunk/Source/JavaScriptCore/b3/B3StackmapSpecial.cpp

    r227617 r237173  
    264264        if ((arg.isAddr() || arg.isExtendedOffsetAddr()) && code.frameSize()) {
    265265            if (arg.base() == Tmp(GPRInfo::callFrameRegister)
    266                 && arg.offset() == rep.offsetFromSP() - code.frameSize())
     266                && arg.offset() == static_cast<int64_t>(rep.offsetFromSP()) - code.frameSize())
    267267                return true;
    268268            if (arg.base() == Tmp(MacroAssembler::stackPointerRegister)
  • trunk/Source/JavaScriptCore/b3/air/AirArg.h

    r235935 r237173  
    974974    {
    975975        ASSERT(kind() == Stack);
    976         return bitwise_cast<StackSlot*>(m_offset);
     976        return bitwise_cast<StackSlot*>(static_cast<uintptr_t>(m_offset));
    977977    }
    978978
     
    997997    {
    998998        ASSERT(kind() == Special);
    999         return bitwise_cast<Air::Special*>(m_offset);
     999        return bitwise_cast<Air::Special*>(static_cast<uintptr_t>(m_offset));
    10001000    }
    10011001
  • trunk/Source/JavaScriptCore/b3/air/testair.cpp

    r231317 r237173  
    139139}
    140140
    141 void loadConstant(BasicBlock* block, intptr_t value, Tmp tmp)
    142 {
    143     loadConstantImpl<intptr_t>(block, value, Move, tmp, tmp);
     141template<typename T>
     142void loadConstant(BasicBlock* block, T value, Tmp tmp)
     143{
     144    loadConstantImpl(block, value, Move, tmp, tmp);
    144145}
    145146
  • trunk/Source/JavaScriptCore/b3/testb3.cpp

    r232741 r237173  
    54745474    BasicBlock* root = proc.addBlock();
    54755475    intptr_t slot;
    5476     if (is64Bit())
    5477         slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
    5478     else
    5479         slot = 0xbaadbeef;
     5476#if CPU(ADDRESS64)
     5477    slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
     5478#else
     5479    slot = 0xbaadbeef;
     5480#endif
    54805481    root->appendNew<MemoryValue>(
    54815482        proc, Store, Origin(),
     
    1319513196    auto interpreter = compileProc(proc);
    1319613197   
    13197     Vector<intptr_t> data;
    13198     Vector<intptr_t> code;
    13199     Vector<intptr_t> stream;
     13198    Vector<uintptr_t> data;
     13199    Vector<uintptr_t> code;
     13200    Vector<uintptr_t> stream;
    1320013201   
    1320113202    data.append(1);
     
    1449814499   
    1449914500    auto code = compileProc(proc);
    14500     CHECK_EQ(invoke<intptr_t>(*code, 1, 2), 1 + (static_cast<intptr_t>(2) << static_cast<intptr_t>(32)));
     14501    CHECK_EQ(invoke<int64_t>(*code, 1, 2), 1 + (static_cast<int64_t>(2) << static_cast<int64_t>(32)));
    1450114502}
    1450214503
     
    1460814609void testLoadBaseIndexShift32()
    1460914610{
     14611#if CPU(ADDRESS64)
    1461014612    Procedure proc;
    1461114613    BasicBlock* root = proc.addBlock();
     
    1462614628    for (unsigned i = 0; i < 10; ++i)
    1462714629        CHECK_EQ(invoke<int32_t>(*code, ptr - (static_cast<intptr_t>(1) << static_cast<intptr_t>(32)) * i, i), 12341234);
     14630#endif
    1462814631}
    1462914632
  • trunk/Source/JavaScriptCore/bindings/ScriptFunctionCall.cpp

    r232337 r237173  
    7676}
    7777
    78 void ScriptCallArgumentHandler::appendArgument(unsigned long argument)
     78void ScriptCallArgumentHandler::appendArgument(uint64_t argument)
    7979{
    8080    JSLockHolder lock(m_exec);
  • trunk/Source/JavaScriptCore/bindings/ScriptFunctionCall.h

    r206525 r237173  
    5252    void appendArgument(long long);
    5353    void appendArgument(unsigned int);
    54     void appendArgument(unsigned long);
     54    void appendArgument(uint64_t);
    5555    void appendArgument(int);
    5656    void appendArgument(bool);
  • trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp

    r236901 r237173  
    21612161static size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaveRegisters)
    21622162{
    2163     static const unsigned cpuRegisterSize = sizeof(void*);
    2164     return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * cpuRegisterSize) / sizeof(Register));
     2163
     2164    return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * sizeof(CPURegister)) / sizeof(Register));
    21652165
    21662166}
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp

    r236585 r237173  
    8787    unsigned registerCount = calleeSaves->size();
    8888
    89     uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
     89    UCPURegister* physicalStackFrame = context.fp<UCPURegister*>();
    9090    for (unsigned i = 0; i < registerCount; i++) {
    9191        RegisterAtOffset entry = calleeSaves->at(i);
     
    9595        // Hence, we read the values directly from the physical stack memory instead of
    9696        // going through context.stack().
    97         ASSERT(!(entry.offset() % sizeof(uintptr_t)));
    98         context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
     97        ASSERT(!(entry.offset() % sizeof(UCPURegister)));
     98        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(UCPURegister)];
    9999    }
    100100}
     
    114114        if (dontSaveRegisters.get(entry.reg()))
    115115            continue;
    116         stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
     116        stack.set(context.fp(), entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
    117117    }
    118118}
     
    128128
    129129    VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
    130     uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
     130    UCPURegister* calleeSaveBuffer = reinterpret_cast<UCPURegister*>(entryRecord->calleeSaveRegistersBuffer);
    131131
    132132    // Restore all callee saves.
     
    135135        if (dontRestoreRegisters.get(entry.reg()))
    136136            continue;
    137         size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
     137        size_t uintptrOffset = entry.offset() / sizeof(UCPURegister);
    138138        if (entry.reg().isGPR())
    139139            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
     
    161161            continue;
    162162        if (entry.reg().isGPR())
    163             stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
     163            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
    164164        else
    165             stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
     165            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<UCPURegister>(entry.reg().fpr()));
    166166    }
    167167}
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

    r231607 r237173  
    231231            jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
    232232#if USE(JSVALUE64)
    233         jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
     233        jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
    234234        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
    235235        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

    r236901 r237173  
    44084408        m_jit.add32(structureIDGPR, hashGPR);
    44094409        m_jit.and32(TrustedImm32(HasOwnPropertyCache::mask), hashGPR);
    4410         static_assert(sizeof(HasOwnPropertyCache::Entry) == 16, "Strong assumption of that here.");
    4411         m_jit.lshift32(TrustedImm32(4), hashGPR);
     4410        if (hasOneBitSet(sizeof(HasOwnPropertyCache::Entry))) // is a power of 2
     4411            m_jit.lshift32(TrustedImm32(getLSBSet(sizeof(HasOwnPropertyCache::Entry))), hashGPR);
     4412        else
     4413            m_jit.mul32(TrustedImm32(sizeof(HasOwnPropertyCache::Entry)), hashGPR, hashGPR);
    44124414        ASSERT(m_jit.vm()->hasOwnPropertyCache());
    44134415        m_jit.move(TrustedImmPtr(m_jit.vm()->hasOwnPropertyCache()), tempGPR);
  • trunk/Source/JavaScriptCore/disassembler/UDis86Disassembler.cpp

    r230748 r237173  
    5050    while (ud_disassemble(&disassembler)) {
    5151        char pcString[20];
    52         snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<unsigned long>(currentPC));
     52        snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<uintptr_t>(currentPC));
    5353        out.printf("%s%16s: %s\n", prefix, pcString, ud_insn_asm(&disassembler));
    5454        currentPC = disassembler.pc;
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp

    r237136 r237173  
    96649664
    96659665        LValue bucket;
     9666
    96669667        if (m_node->child1().useKind() == WeakMapObjectUse) {
    9667             static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>) == 16, "");
    9668             bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(4)));
     9668            static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)), "Should be a power of 2");
     9669            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)))));
    96699670        } else {
    9670             static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKey>) == 8, "");
    9671             bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(3)));
     9671            static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)), "Should be a power of 2");
     9672            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)))));
    96729673        }
    96739674
  • trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp

    r230303 r237173  
    8888    size_t dstAsSize = reinterpret_cast<size_t>(dst);
    8989    size_t srcAsSize = reinterpret_cast<size_t>(src);
    90     RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(dstAsSize));
    91     RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(srcAsSize));
    92     RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(size));
    93 
    94     intptr_t* dstPtr = reinterpret_cast<intptr_t*>(dst);
    95     const intptr_t* srcPtr = reinterpret_cast<const intptr_t*>(src);
    96     size /= sizeof(intptr_t);
     90    RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(dstAsSize));
     91    RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(srcAsSize));
     92    RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(size));
     93
     94    CPURegister* dstPtr = reinterpret_cast<CPURegister*>(dst);
     95    const CPURegister* srcPtr = reinterpret_cast<const CPURegister*>(src);
     96    size /= sizeof(CPURegister);
    9797    while (size--)
    9898        *dstPtr++ = *srcPtr++;
  • trunk/Source/JavaScriptCore/interpreter/CallFrame.h

    r235603 r237173  
    6868    };
    6969
     70    // arm64_32 expects caller frame and return pc to use 8 bytes
    7071    struct CallerFrameAndPC {
    71         CallFrame* callerFrame;
    72         Instruction* pc;
    73         static const int sizeInRegisters = 2 * sizeof(void*) / sizeof(Register);
     72        alignas(CPURegister) CallFrame* callerFrame;
     73        alignas(CPURegister) Instruction* returnPC;
     74        static const int sizeInRegisters = 2 * sizeof(CPURegister) / sizeof(Register);
    7475    };
    7576    static_assert(CallerFrameAndPC::sizeInRegisters == sizeof(CallerFrameAndPC) / sizeof(Register), "CallerFrameAndPC::sizeInRegisters is incorrect.");
     
    148149        static ptrdiff_t callerFrameOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, callerFrame); }
    149150
    150         ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().pc); }
    151         bool hasReturnPC() const { return !!callerFrameAndPC().pc; }
    152         void clearReturnPC() { callerFrameAndPC().pc = 0; }
    153         static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, pc); }
     151        ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().returnPC); }
     152        bool hasReturnPC() const { return !!callerFrameAndPC().returnPC; }
     153        void clearReturnPC() { callerFrameAndPC().returnPC = 0; }
     154        static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, returnPC); }
    154155        AbstractPC abstractReturnPC(VM& vm) { return AbstractPC(vm, this); }
    155156
     
    254255        bool isGlobalExec() const
    255256        {
    256             return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().pc == nullptr;
     257            return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().returnPC == nullptr;
    257258        }
    258259
     
    264265        void setCallee(JSObject* callee) { static_cast<Register*>(this)[CallFrameSlot::callee] = callee; }
    265266        void setCodeBlock(CodeBlock* codeBlock) { static_cast<Register*>(this)[CallFrameSlot::codeBlock] = codeBlock; }
    266         void setReturnPC(void* value) { callerFrameAndPC().pc = reinterpret_cast<Instruction*>(value); }
     267        void setReturnPC(void* value) { callerFrameAndPC().returnPC = reinterpret_cast<Instruction*>(value); }
    267268
    268269        String friendlyFunctionName();
  • trunk/Source/JavaScriptCore/interpreter/CalleeBits.h

    r214979 r237173  
    5252    static void* boxWasm(Wasm::Callee* callee)
    5353    {
    54         CalleeBits result(bitwise_cast<void*>(bitwise_cast<uintptr_t>(callee) | TagBitsWasm));
     54        CalleeBits result(reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(callee) | TagBitsWasm));
    5555        ASSERT(result.isWasm());
    5656        return result.rawPtr();
     
    6161    {
    6262#if ENABLE(WEBASSEMBLY)
    63         return (bitwise_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
     63        return (reinterpret_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
    6464#else
    6565        return false;
     
    7878    {
    7979        ASSERT(isWasm());
    80         return bitwise_cast<Wasm::Callee*>(bitwise_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
     80        return reinterpret_cast<Wasm::Callee*>(reinterpret_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
    8181    }
    8282#endif
  • trunk/Source/JavaScriptCore/interpreter/Interpreter.cpp

    r237080 r237173  
    570570        RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets();
    571571        RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
    572         intptr_t* frame = reinterpret_cast<intptr_t*>(m_callFrame->registers());
     572        CPURegister* frame = reinterpret_cast<CPURegister*>(m_callFrame->registers());
    573573
    574574        unsigned registerCount = currentCalleeSaves->size();
  • trunk/Source/JavaScriptCore/interpreter/VMEntryRecord.h

    r236381 r237173  
    4848
    4949#if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
    50     intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
     50    CPURegister calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
    5151#endif
    5252
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h

    r236734 r237173  
    462462        ASSERT(frameSize % stackAlignmentBytes() == 0);
    463463        if (frameSize <= 128) {
    464             for (unsigned offset = 0; offset < frameSize; offset += sizeof(intptr_t))
     464            for (unsigned offset = 0; offset < frameSize; offset += sizeof(CPURegister))
    465465                storePtr(TrustedImm32(0), Address(currentTop, -8 - offset));
    466466        } else {
    467467            constexpr unsigned storeBytesPerIteration = stackAlignmentBytes();
    468             constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(intptr_t);
     468            constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(CPURegister);
    469469
    470470            move(currentTop, temp);
     
    476476#else
    477477            for (unsigned i = storesPerIteration; i-- != 0;)
    478                 storePtr(TrustedImm32(0), Address(temp, sizeof(intptr_t) * i));
     478                storePtr(TrustedImm32(0), Address(temp, sizeof(CPURegister) * i));
    479479#endif
    480480            branchPtr(NotEqual, temp, newTop).linkTo(zeroLoop, this);
  • trunk/Source/JavaScriptCore/jit/RegisterAtOffset.h

    r236381 r237173  
    5050    Reg reg() const { return m_reg; }
    5151    ptrdiff_t offset() const { return m_offset; }
    52     int offsetAsIndex() const { return offset() / sizeof(void*); }
     52    int offsetAsIndex() const { ASSERT(!(offset() % sizeof(CPURegister))); return offset() / static_cast<int>(sizeof(CPURegister)); }
    5353   
    5454    bool operator==(const RegisterAtOffset& other) const
     
    7070private:
    7171    Reg m_reg;
    72     ptrdiff_t m_offset : sizeof(ptrdiff_t) * 8 - sizeof(Reg) * 8;
     72    ptrdiff_t m_offset : (sizeof(ptrdiff_t) - sizeof(Reg)) * CHAR_BIT;
    7373};
    7474
  • trunk/Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp

    r236381 r237173  
    4141   
    4242    if (offsetBaseType == FramePointerBased)
    43         offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(void*));
     43        offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(CPURegister));
    4444
    4545    m_registers.reserveInitialCapacity(numberOfRegisters);
    4646    registerSet.forEach([&] (Reg reg) {
    4747        m_registers.append(RegisterAtOffset(reg, offset));
    48         offset += sizeof(void*);
     48        offset += sizeof(CPURegister);
    4949    });
    5050}
  • trunk/Source/JavaScriptCore/llint/LLIntData.cpp

    r236381 r237173  
    7777
    7878#if USE(JSVALUE64)
    79     const ptrdiff_t PtrSize = 8;
    8079    const ptrdiff_t CallFrameHeaderSlots = 5;
    8180#else // USE(JSVALUE64) // i.e. 32-bit version
    82     const ptrdiff_t PtrSize = 4;
    8381    const ptrdiff_t CallFrameHeaderSlots = 4;
    8482#endif
     83    const ptrdiff_t MachineRegisterSize = sizeof(CPURegister);
    8584    const ptrdiff_t SlotSize = 8;
    8685
    87     STATIC_ASSERT(sizeof(void*) == PtrSize);
    8886    STATIC_ASSERT(sizeof(Register) == SlotSize);
    8987    STATIC_ASSERT(CallFrame::headerSizeInRegisters == CallFrameHeaderSlots);
    9088
    9189    ASSERT(!CallFrame::callerFrameOffset());
    92     STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (PtrSize * 2) / SlotSize);
    93     ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + PtrSize);
    94     ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + PtrSize);
     90    STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (MachineRegisterSize * 2) / SlotSize);
     91    ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + MachineRegisterSize);
     92    ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + MachineRegisterSize);
    9593    STATIC_ASSERT(CallFrameSlot::callee * sizeof(Register) == CallFrameSlot::codeBlock * sizeof(Register) + SlotSize);
    9694    STATIC_ASSERT(CallFrameSlot::argumentCount * sizeof(Register) == CallFrameSlot::callee * sizeof(Register) + SlotSize);
  • trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h

    r236381 r237173  
    146146#endif
    147147
     148#if CPU(ADDRESS64)
     149#define OFFLINE_ASM_ADDRESS64 1
     150#else
     151#define OFFLINE_ASM_ADDRESS64 0
     152#endif
     153
    148154#if ENABLE(POISON)
    149155#define OFFLINE_ASM_POISON 1
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm

    r235419 r237173  
    158158if JSVALUE64
    159159    const CallFrameHeaderSlots = 5
     160    const MachineRegisterSize = 8
    160161else
    161162    const CallFrameHeaderSlots = 4
    162163    const CallFrameAlignSlots = 1
     164    const MachineRegisterSize = 4
    163165end
    164166const SlotSize = 8
     
    171173const StackAlignmentMask = StackAlignment - 1
    172174
    173 const CallerFrameAndPCSize = 2 * PtrSize
     175const CallerFrameAndPCSize = constexpr (sizeof(CallerFrameAndPC))
    174176
    175177const CallerFrame = 0
    176 const ReturnPC = CallerFrame + PtrSize
    177 const CodeBlock = ReturnPC + PtrSize
     178const ReturnPC = CallerFrame + MachineRegisterSize
     179const CodeBlock = ReturnPC + MachineRegisterSize
    178180const Callee = CodeBlock + SlotSize
    179181const ArgumentCount = Callee + SlotSize
     
    295297
    296298    macro loadisFromInstruction(offset, dest)
    297         loadis offset * 8[PB, PC, 8], dest
     299        loadis offset * PtrSize[PB, PC, PtrSize], dest
    298300    end
    299301   
    300302    macro loadpFromInstruction(offset, dest)
    301         loadp offset * 8[PB, PC, 8], dest
     303        loadp offset * PtrSize[PB, PC, PtrSize], dest
    302304    end
    303305
    304306    macro loadisFromStruct(offset, dest)
    305         loadis offset[PB, PC, 8], dest
     307        loadis offset[PB, PC, PtrSize], dest
    306308    end
    307309
    308310    macro loadpFromStruct(offset, dest)
    309         loadp offset[PB, PC, 8], dest
     311        loadp offset[PB, PC, PtrSize], dest
    310312    end
    311313
    312314    macro storeisToInstruction(value, offset)
    313         storei value, offset * 8[PB, PC, 8]
     315        storei value, offset * PtrSize[PB, PC, PtrSize]
    314316    end
    315317
    316318    macro storepToInstruction(value, offset)
    317         storep value, offset * 8[PB, PC, 8]
     319        storep value, offset * PtrSize[PB, PC, PtrSize]
    318320    end
    319321
    320322    macro storeisFromStruct(value, offset)
    321         storei value, offset[PB, PC, 8]
     323        storei value, offset[PB, PC, PtrSize]
    322324    end
    323325
    324326    macro storepFromStruct(value, offset)
    325         storep value, offset[PB, PC, 8]
     327        storep value, offset[PB, PC, PtrSize]
    326328    end
    327329
     
    575577end
    576578
    577 const CalleeRegisterSaveSize = CalleeSaveRegisterCount * PtrSize
     579const CalleeRegisterSaveSize = CalleeSaveRegisterCount * MachineRegisterSize
    578580
    579581# VMEntryTotalFrameSize includes the space for struct VMEntryRecord and the
     
    698700        leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
    699701        if ARM64 or ARM64E
    700             storep csr0, [temp]
    701             storep csr1, 8[temp]
    702             storep csr2, 16[temp]
    703             storep csr3, 24[temp]
    704             storep csr4, 32[temp]
    705             storep csr5, 40[temp]
    706             storep csr6, 48[temp]
    707             storep csr7, 56[temp]
    708             storep csr8, 64[temp]
    709             storep csr9, 72[temp]
     702            storeq csr0, [temp]
     703            storeq csr1, 8[temp]
     704            storeq csr2, 16[temp]
     705            storeq csr3, 24[temp]
     706            storeq csr4, 32[temp]
     707            storeq csr5, 40[temp]
     708            storeq csr6, 48[temp]
     709            storeq csr7, 56[temp]
     710            storeq csr8, 64[temp]
     711            storeq csr9, 72[temp]
    710712            stored csfr0, 80[temp]
    711713            stored csfr1, 88[temp]
     
    717719            stored csfr7, 136[temp]
    718720        elsif X86_64
    719             storep csr0, [temp]
    720             storep csr1, 8[temp]
    721             storep csr2, 16[temp]
    722             storep csr3, 24[temp]
    723             storep csr4, 32[temp]
     721            storeq csr0, [temp]
     722            storeq csr1, 8[temp]
     723            storeq csr2, 16[temp]
     724            storeq csr3, 24[temp]
     725            storeq csr4, 32[temp]
    724726        elsif X86_64_WIN
    725             storep csr0, [temp]
    726             storep csr1, 8[temp]
    727             storep csr2, 16[temp]
    728             storep csr3, 24[temp]
    729             storep csr4, 32[temp]
    730             storep csr5, 40[temp]
    731             storep csr6, 48[temp]
     727            storeq csr0, [temp]
     728            storeq csr1, 8[temp]
     729            storeq csr2, 16[temp]
     730            storeq csr3, 24[temp]
     731            storeq csr4, 32[temp]
     732            storeq csr5, 40[temp]
     733            storeq csr6, 48[temp]
    732734        end
    733735    end
     
    740742        leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
    741743        if ARM64 or ARM64E
    742             loadp [temp], csr0
    743             loadp 8[temp], csr1
    744             loadp 16[temp], csr2
    745             loadp 24[temp], csr3
    746             loadp 32[temp], csr4
    747             loadp 40[temp], csr5
    748             loadp 48[temp], csr6
    749             loadp 56[temp], csr7
    750             loadp 64[temp], csr8
    751             loadp 72[temp], csr9
     744            loadq [temp], csr0
     745            loadq 8[temp], csr1
     746            loadq 16[temp], csr2
     747            loadq 24[temp], csr3
     748            loadq 32[temp], csr4
     749            loadq 40[temp], csr5
     750            loadq 48[temp], csr6
     751            loadq 56[temp], csr7
     752            loadq 64[temp], csr8
     753            loadq 72[temp], csr9
    752754            loadd 80[temp], csfr0
    753755            loadd 88[temp], csfr1
     
    759761            loadd 136[temp], csfr7
    760762        elsif X86_64
    761             loadp [temp], csr0
    762             loadp 8[temp], csr1
    763             loadp 16[temp], csr2
    764             loadp 24[temp], csr3
    765             loadp 32[temp], csr4
     763            loadq [temp], csr0
     764            loadq 8[temp], csr1
     765            loadq 16[temp], csr2
     766            loadq 24[temp], csr3
     767            loadq 32[temp], csr4
    766768        elsif X86_64_WIN
    767             loadp [temp], csr0
    768             loadp 8[temp], csr1
    769             loadp 16[temp], csr2
    770             loadp 24[temp], csr3
    771             loadp 32[temp], csr4
    772             loadp 40[temp], csr5
    773             loadp 48[temp], csr6
     769            loadq [temp], csr0
     770            loadq 8[temp], csr1
     771            loadq 16[temp], csr2
     772            loadq 24[temp], csr3
     773            loadq 32[temp], csr4
     774            loadq 40[temp], csr5
     775            loadq 48[temp], csr6
    774776        end
    775777    end
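For example, csr9 sits at offset 72 = 9 × MachineRegisterSize in the buffer above, which is only correct if each general-purpose slot really is stored 8 bytes wide; hence the switch from the pointer-sized storep/loadp to the explicit storeq/loadq forms, keeping the fixed offsets valid even when pointers are only 4 bytes.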
     
    885887
    886888    if ARM or ARMv7_TRADITIONAL or ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS
    887         addp 2 * PtrSize, sp
    888         subi 2 * PtrSize, temp2
    889         loadp PtrSize[cfr], lr
     889        addp CallerFrameAndPCSize, sp
     890        subi CallerFrameAndPCSize, temp2
     891        loadp CallerFrameAndPC::returnPC[cfr], lr
    890892    else
    891893        addp PtrSize, sp
     
    904906
    905907.copyLoop:
    906     subi PtrSize, temp2
    907     loadp [sp, temp2, 1], temp3
    908     storep temp3, [temp1, temp2, 1]
    909     btinz temp2, .copyLoop
     908    if ARM64 and not ADDRESS64
     909        subi MachineRegisterSize, temp2
     910        loadq [sp, temp2, 1], temp3
     911        storeq temp3, [temp1, temp2, 1]
     912        btinz temp2, .copyLoop
     913    else
     914        subi PtrSize, temp2
     915        loadp [sp, temp2, 1], temp3
     916        storep temp3, [temp1, temp2, 1]
     917        btinz temp2, .copyLoop
     918    end
    910919
    911920    move temp1, sp
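The values being relocated here are whole machine registers, so on ARM64 without ADDRESS64 the loop steps by MachineRegisterSize (8 bytes) with loadq/storeq, while every other configuration keeps the original PtrSize-stride loadp/storep walk.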
     
    11101119    if JSVALUE64
    11111120        move TagTypeNumber, tagTypeNumber
    1112         addp TagBitTypeOther, tagTypeNumber, tagMask
     1121        addq TagBitTypeOther, tagTypeNumber, tagMask
    11131122    end
    11141123end
     
    12641273        pcrtoaddr label, t1
    12651274        move index, t4
    1266         storep t1, [a0, t4, 8]
     1275        storep t1, [a0, t4, PtrSize]
    12671276    elsif ARM or ARMv7 or ARMv7_TRADITIONAL
    12681277        mvlbl (label - _relativePCBase), t4
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

    r236901 r237173  
    2525# Utilities.
    2626macro jumpToInstruction()
    27     jmp [PB, PC, 8], BytecodePtrTag
     27    jmp [PB, PC, PtrSize], BytecodePtrTag
    2828end
    2929
     
    3939
    4040macro dispatchIntIndirect(offset)
    41     dispatchInt(offset * 8[PB, PC, 8])
     41    dispatchInt(offset * PtrSize[PB, PC, PtrSize])
    4242end
    4343
     
    301301
    302302macro prepareStateForCCall()
    303     leap [PB, PC, 8], PC
     303    leap [PB, PC, PtrSize], PC
    304304end
    305305
     
    307307    move r0, PC
    308308    subp PB, PC
    309     rshiftp 3, PC
     309    rshiftp constexpr (getLSBSet(sizeof(void*))), PC
    310310end
    311311
     
    488488    unpoison(_g_CodeBlockPoison, scratch, scratch2)
    489489    loadp VM::heap + Heap::m_structureIDTable + StructureIDTable::m_table[scratch], scratch
    490     loadp [scratch, structureIDThenStructure, 8], structureIDThenStructure
     490    loadp [scratch, structureIDThenStructure, PtrSize], structureIDThenStructure
    491491end
    492492
     
    550550    addi CalleeSaveSpaceAsVirtualRegisters, t2
    551551    move t1, t0
    552     lshiftp 3, t0
     552    # Adds to sp are always 64-bit on arm64 so we need to maintain t0's high bits.
     553    lshiftq 3, t0
    553554    addp t0, cfr
    554555    addp t0, sp
     
    589590    andp MarkedBlockMask, t3
    590591    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    591     btqz VM::m_exception[t3], .noException
     592    btpz VM::m_exception[t3], .noException
    592593    jmp label
    593594.noException:
     
    15481549    # Now, t1 has the Structure* and t2 has the StructureID that we want that Structure* to have.
    15491550    bineq t2, Structure::m_blob + StructureIDBlob::u.fields.structureID[t1], .opPutByIdSlow
    1550     addp 8, t3
     1551    addp PtrSize, t3
    15511552    loadq Structure::m_prototype[t1], t2
    15521553    bqneq t2, ValueNull, .opPutByIdTransitionChainLoop
     
    17471748.outOfBounds:
    17481749    biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.vectorLength[t0], .opPutByValOutOfBounds
    1749     loadp 32[PB, PC, 8], t2
     1750    loadpFromInstruction(4, t2)
    17501751    storeb 1, ArrayProfile::m_mayStoreToHole[t2]
    17511752    addi 1, t3, t2
     
    17711772        macro (operand, scratch, address)
    17721773            loadConstantOrVariable(operand, scratch)
    1773             bpb scratch, tagTypeNumber, .opPutByValSlow
    1774             storep scratch, address
     1774            bqb scratch, tagTypeNumber, .opPutByValSlow
     1775            storeq scratch, address
    17751776            writeBarrierOnOperands(1, 3)
    17761777        end)
     
    17851786            jmp .ready
    17861787        .notInt:
    1787             addp tagTypeNumber, scratch
     1788            addq tagTypeNumber, scratch
    17881789            fq2d scratch, ft0
    17891790            bdnequn ft0, ft0, .opPutByValSlow
     
    17981799        macro (operand, scratch, address)
    17991800            loadConstantOrVariable(operand, scratch)
    1800             storep scratch, address
     1801            storeq scratch, address
    18011802            writeBarrierOnOperands(1, 3)
    18021803        end)
     
    19071908    loadp CodeBlock[cfr], t2
    19081909    loadp CodeBlock::m_globalObject[t2], t2
    1909     loadp JSGlobalObject::m_specialPointers[t2, t1, 8], t1
     1910    loadp JSGlobalObject::m_specialPointers[t2, t1, PtrSize], t1
    19101911    bpneq t1, [cfr, t0, 8], .opJneqPtrTarget
    19111912    dispatch(5)
    19121913
    19131914.opJneqPtrTarget:
    1914     storei 1, 32[PB, PC, 8]
     1915    storeisToInstruction(1, 4)
    19151916    dispatchIntIndirect(3)
    19161917
     
    21352136    loadp VM::targetInterpreterPCForThrow[t3], PC
    21362137    subp PB, PC
    2137     rshiftp 3, PC
     2138    rshiftp constexpr (getLSBSet(sizeof(void*))), PC
    21382139
    21392140    callSlowPath(_llint_slow_path_check_if_exception_is_uncatchable_and_notify_profiler)
     
    21462147    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    21472148
    2148     loadq VM::m_exception[t3], t0
    2149     storeq 0, VM::m_exception[t3]
     2149    loadp VM::m_exception[t3], t0
     2150    storep 0, VM::m_exception[t3]
    21502151    loadisFromInstruction(1, t2)
    21512152    storeq t0, [cfr, t2, 8]
     
    22292230    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    22302231
    2231     btqnz VM::m_exception[t3], .handleException
     2232    btpnz VM::m_exception[t3], .handleException
    22322233
    22332234    functionEpilogue()
     
    22722273    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
    22732274
    2274     btqnz VM::m_exception[t3], .handleException
     2275    btpnz VM::m_exception[t3], .handleException
    22752276
    22762277    functionEpilogue()
     
    22982299    loadisFromInstruction(5, t2)
    22992300    loadisFromInstruction(2, t0)
    2300     loadp [cfr, t0, 8], t0
     2301    loadq [cfr, t0, 8], t0
    23012302    btiz t2, .resolveScopeLoopEnd
    23022303
     
    25942595    traceExecution()
    25952596    loadVariable(2, t0)
    2596     loadi 24[PB, PC, 8], t1
     2597    loadi 3 * PtrSize[PB, PC, PtrSize], t1
    25972598    loadq DirectArguments_storage[t0, t1, 8], t0
    25982599    valueProfile(t0, 4, t1)
     
    26052606    traceExecution()
    26062607    loadVariable(1, t0)
    2607     loadi 16[PB, PC, 8], t1
     2608    loadi 2 * PtrSize[PB, PC, PtrSize], t1
    26082609    loadisFromInstruction(3, t3)
    26092610    loadConstantOrVariable(t3, t2)
  • trunk/Source/JavaScriptCore/offlineasm/arm64.rb

    r230273 r237173  
    7676    number = name[1..-1]
    7777    case kind
    78     when :int
     78    when :word
    7979        "w" + number
    8080    when :ptr
     81        prefix = $currentSettings["ADDRESS64"] ? "x" : "w"
     82        prefix + number
     83    when :quad
    8184        "x" + number
    8285    else
     
    200203    def arm64Operand(kind)
    201204        raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value < -255 or offset.value > 4095
    202         "[#{base.arm64Operand(:ptr)}, \##{offset.value}]"
     205        "[#{base.arm64Operand(:quad)}, \##{offset.value}]"
    203206    end
    204207   
     
    211214    def arm64Operand(kind)
    212215        raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value != 0
    213         "[#{base.arm64Operand(:ptr)}, #{index.arm64Operand(:ptr)}, lsl \##{scaleShift}]"
     216        "[#{base.arm64Operand(:quad)}, #{index.arm64Operand(:quad)}, lsl \##{scaleShift}]"
    214217    end
    215218
     
    234237    newList = []
    235238
    236     def isAddressMalformed(operand)
    237         operand.is_a? Address and not (-255..4095).include? operand.offset.value
     239    def isAddressMalformed(opcode, operand)
     240        malformed = false
     241        if operand.is_a? Address
     242            malformed ||= (not (-255..4095).include? operand.offset.value)
     243            if opcode =~ /q$/ and $currentSettings["ADDRESS64"]
     244                malformed ||= operand.offset.value % 8
     245            end
     246        end
     247        malformed
    238248    end
    239249
     
    241251        | node |
    242252        if node.is_a? Instruction
    243             if node.opcode =~ /^store/ and isAddressMalformed(node.operands[1])
     253            if node.opcode =~ /^store/ and isAddressMalformed(node.opcode, node.operands[1])
    244254                address = node.operands[1]
    245255                tmp = Tmp.new(codeOrigin, :gpr)
    246256                newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
    247                 newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0))], node.annotation)
    248             elsif node.opcode =~ /^load/ and isAddressMalformed(node.operands[0])
     257                newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0))], node.annotation)
     258            elsif node.opcode =~ /^load/ and isAddressMalformed(node.opcode, node.operands[0])
    249259                address = node.operands[0]
    250260                tmp = Tmp.new(codeOrigin, :gpr)
    251261                newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
    252                 newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation)
     262                newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation)
    253263            else
    254264                newList << node
     
    286296end
    287297
     298def arm64FixSpecialRegisterArithmeticMode(list)
     299    newList = []
     300    def usesSpecialRegister(node)
     301        node.children.any? {
     302            |operand|
     303            if operand.is_a? RegisterID and operand.name =~ /sp/
     304                true
     305            elsif operand.is_a? Address or operand.is_a? BaseIndex
     306                usesSpecialRegister(operand)
     307            else
     308                false
     309            end
     310        }
     311    end
     312
     313
     314    list.each {
     315        | node |
     316        if node.is_a? Instruction
     317            case node.opcode
     318            when "addp", "subp", "mulp", "divp", "leap"
     319                if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node)
     320                    newOpcode = node.opcode.sub(/(.*)p/, '\1q')
     321                    node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation)
     322                end
     323            when /^bp/
     324                if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node)
     325                    newOpcode = node.opcode.sub(/^bp(.*)/, 'bq\1')
     326                    node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation)
     327                end
     328            end
     329        end
     330        newList << node
     331    }
     332    newList
     333end
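Worked reading of the pass above: on arm64 the stack pointer is always a 64-bit register, so when ADDRESS64 is off, any pointer-flavoured arithmetic or branch (addp, subp, mulp, divp, leap, bp*) whose operands touch sp is rewritten to the corresponding q form before lowering instead of being emitted as a 32-bit w-register operation.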
     334
    288335# Workaround for Cortex-A53 erratum (835769)
    289336def arm64CortexA53Fix835769(list)
     
    319366        result = riscLowerNot(result)
    320367        result = riscLowerSimpleBranchOps(result)
    321         result = riscLowerHardBranchOps64(result)
     368
     369        result = $currentSettings["ADDRESS64"] ? riscLowerHardBranchOps64(result) : riscLowerHardBranchOps(result)
    322370        result = riscLowerShiftOps(result)
    323371        result = arm64LowerMalformedLoadStoreAddresses(result)
     
    338386                "divd", "subd", "muld", "sqrtd", /^bp/, /^bq/, /^btp/, /^btq/, /^cp/, /^cq/, /^tp/, /^tq/, /^bd/,
    339387                "jmp", "call", "leap", "leaq"
    340                 size = 8
     388                size = $currentSettings["ADDRESS64"] ? 8 : 4
    341389            else
    342390                raise "Bad instruction #{node.opcode} for heap access at #{node.codeOriginString}"
     
    369417        }
    370418        result = riscLowerTest(result)
     419        result = arm64FixSpecialRegisterArithmeticMode(result)
    371420        result = assignRegistersToTemporaries(result, :gpr, ARM64_EXTRA_GPRS)
    372421        result = assignRegistersToTemporaries(result, :fpr, ARM64_EXTRA_FPRS)
     
    450499def emitARM64Access(opcode, opcodeNegativeOffset, register, memory, kind)
    451500    if memory.is_a? Address and memory.offset.value < 0
     501        raise unless -256 <= memory.offset.value
    452502        $asm.puts "#{opcodeNegativeOffset} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}"
    453503        return
    454504    end
    455    
     505
    456506    $asm.puts "#{opcode} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}"
    457507end
     
    480530def emitARM64Compare(operands, kind, compareCode)
    481531    emitARM64Unflipped("subs #{arm64GPRName('xzr', kind)}, ", operands[0..-2], kind)
    482     $asm.puts "csinc #{operands[-1].arm64Operand(:int)}, wzr, wzr, #{compareCode}"
     532    $asm.puts "csinc #{operands[-1].arm64Operand(:word)}, wzr, wzr, #{compareCode}"
    483533end
    484534
     
    492542        if first
    493543            if isNegative
    494                 $asm.puts "movn #{target.arm64Operand(:ptr)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}"
    495             else
    496                 $asm.puts "movz #{target.arm64Operand(:ptr)}, \##{currentValue}, lsl \##{shift}"
     544                $asm.puts "movn #{target.arm64Operand(:quad)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}"
     545            else
     546                $asm.puts "movz #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}"
    497547            end
    498548            first = false
    499549        else
    500             $asm.puts "movk #{target.arm64Operand(:ptr)}, \##{currentValue}, lsl \##{shift}"
     550            $asm.puts "movk #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}"
    501551        end
    502552    }
     
    507557        case opcode
    508558        when 'addi'
    509             emitARM64Add("add", operands, :int)
     559            emitARM64Add("add", operands, :word)
    510560        when 'addis'
    511             emitARM64Add("adds", operands, :int)
     561            emitARM64Add("adds", operands, :word)
    512562        when 'addp'
    513563            emitARM64Add("add", operands, :ptr)
     
    515565            emitARM64Add("adds", operands, :ptr)
    516566        when 'addq'
    517             emitARM64Add("add", operands, :ptr)
     567            emitARM64Add("add", operands, :quad)
    518568        when "andi"
    519             emitARM64TAC("and", operands, :int)
     569            emitARM64TAC("and", operands, :word)
    520570        when "andp"
    521571            emitARM64TAC("and", operands, :ptr)
    522572        when "andq"
    523             emitARM64TAC("and", operands, :ptr)
     573            emitARM64TAC("and", operands, :quad)
    524574        when "ori"
    525             emitARM64TAC("orr", operands, :int)
     575            emitARM64TAC("orr", operands, :word)
    526576        when "orp"
    527577            emitARM64TAC("orr", operands, :ptr)
    528578        when "orq"
    529             emitARM64TAC("orr", operands, :ptr)
     579            emitARM64TAC("orr", operands, :quad)
    530580        when "xori"
    531             emitARM64TAC("eor", operands, :int)
     581            emitARM64TAC("eor", operands, :word)
    532582        when "xorp"
    533583            emitARM64TAC("eor", operands, :ptr)
    534584        when "xorq"
    535             emitARM64TAC("eor", operands, :ptr)
     585            emitARM64TAC("eor", operands, :quad)
    536586        when "lshifti"
    537             emitARM64Shift("lslv", "ubfm", operands, :int) {
     587            emitARM64Shift("lslv", "ubfm", operands, :word) {
    538588                | value |
    539589                [32 - value, 31 - value]
     
    542592            emitARM64Shift("lslv", "ubfm", operands, :ptr) {
    543593                | value |
    544                 [64 - value, 63 - value]
     594                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
     595                [bitSize - value, bitSize - 1 - value]
    545596            }
    546597        when "lshiftq"
    547             emitARM64Shift("lslv", "ubfm", operands, :ptr) {
     598            emitARM64Shift("lslv", "ubfm", operands, :quad) {
    548599                | value |
    549600                [64 - value, 63 - value]
    550601            }
    551602        when "rshifti"
    552             emitARM64Shift("asrv", "sbfm", operands, :int) {
     603            emitARM64Shift("asrv", "sbfm", operands, :word) {
    553604                | value |
    554605                [value, 31]
     
    557608            emitARM64Shift("asrv", "sbfm", operands, :ptr) {
    558609                | value |
    559                 [value, 63]
     610                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
     611                [value, bitSize - 1]
    560612            }
    561613        when "rshiftq"
    562             emitARM64Shift("asrv", "sbfm", operands, :ptr) {
     614            emitARM64Shift("asrv", "sbfm", operands, :quad) {
    563615                | value |
    564616                [value, 63]
    565617            }
    566618        when "urshifti"
    567             emitARM64Shift("lsrv", "ubfm", operands, :int) {
     619            emitARM64Shift("lsrv", "ubfm", operands, :word) {
    568620                | value |
    569621                [value, 31]
     
    572624            emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
    573625                | value |
    574                 [value, 63]
     626                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
     627                [value, bitSize - 1]
    575628            }
    576629        when "urshiftq"
    577             emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
     630            emitARM64Shift("lsrv", "ubfm", operands, :quad) {
    578631                | value |
    579632                [value, 63]
    580633            }
    581634        when "muli"
    582             $asm.puts "madd #{arm64TACOperands(operands, :int)}, wzr"
     635            $asm.puts "madd #{arm64TACOperands(operands, :word)}, wzr"
    583636        when "mulp"
    584             $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
     637            $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, #{arm64GPRName('xzr', :ptr)}"
    585638        when "mulq"
    586             $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
     639            $asm.puts "madd #{arm64TACOperands(operands, :quad)}, xzr"
    587640        when "subi"
    588             emitARM64TAC("sub", operands, :int)
     641            emitARM64TAC("sub", operands, :word)
    589642        when "subp"
    590643            emitARM64TAC("sub", operands, :ptr)
    591644        when "subq"
    592             emitARM64TAC("sub", operands, :ptr)
     645            emitARM64TAC("sub", operands, :quad)
    593646        when "subis"
    594             emitARM64TAC("subs", operands, :int)
     647            emitARM64TAC("subs", operands, :word)
    595648        when "negi"
    596             $asm.puts "sub #{operands[0].arm64Operand(:int)}, wzr, #{operands[0].arm64Operand(:int)}"
     649            $asm.puts "sub #{operands[0].arm64Operand(:word)}, wzr, #{operands[0].arm64Operand(:word)}"
    597650        when "negp"
    598             $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
     651            $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, #{arm64GPRName('xzr', :ptr)}, #{operands[0].arm64Operand(:ptr)}"
    599652        when "negq"
    600             $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
     653            $asm.puts "sub #{operands[0].arm64Operand(:quad)}, xzr, #{operands[0].arm64Operand(:quad)}"
    601654        when "loadi"
    602             emitARM64Access("ldr", "ldur", operands[1], operands[0], :int)
     655            emitARM64Access("ldr", "ldur", operands[1], operands[0], :word)
    603656        when "loadis"
    604             emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :ptr)
     657            emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :quad)
    605658        when "loadp"
    606659            emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
    607660        when "loadq"
    608             emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
     661            emitARM64Access("ldr", "ldur", operands[1], operands[0], :quad)
    609662        when "storei"
    610             emitARM64Unflipped("str", operands, :int)
     663            emitARM64Unflipped("str", operands, :word)
    611664        when "storep"
    612665            emitARM64Unflipped("str", operands, :ptr)
    613666        when "storeq"
    614             emitARM64Unflipped("str", operands, :ptr)
     667            emitARM64Unflipped("str", operands, :quad)
    615668        when "loadb"
    616             emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :int)
     669            emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
    617670        when "loadbs"
    618             emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :int)
     671            emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
    619672        when "storeb"
    620             emitARM64Unflipped("strb", operands, :int)
     673            emitARM64Unflipped("strb", operands, :word)
    621674        when "loadh"
    622             emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :int)
     675            emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
    623676        when "loadhs"
    624             emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :int)
     677            emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
    625678        when "storeh"
    626             emitARM64Unflipped("strh", operands, :int)
     679            emitARM64Unflipped("strh", operands, :word)
    627680        when "loadd"
    628681            emitARM64Access("ldr", "ldur", operands[1], operands[0], :double)
     
    640693            emitARM64("fsqrt", operands, :double)
    641694        when "ci2d"
    642             emitARM64("scvtf", operands, [:int, :double])
     695            emitARM64("scvtf", operands, [:word, :double])
    643696        when "bdeq"
    644697            emitARM64Branch("fcmp", operands, :double, "b.eq")
     
    676729            raise "ARM64 does not support this opcode yet, #{codeOriginString}"
    677730        when "td2i"
    678             emitARM64("fcvtzs", operands, [:double, :int])
     731            emitARM64("fcvtzs", operands, [:double, :word])
    679732        when "bcd2i"
    680733            # FIXME: Remove this instruction, or use it and implement it. Currently it's not
     
    696749                # But since the ordering of arguments doesn't change on arm64 between the stp and ldp
    697750                # instructions we need to flip flop the argument positions that were passed to us.
    698                 $asm.puts "ldp #{ops[1].arm64Operand(:ptr)}, #{ops[0].arm64Operand(:ptr)}, [sp], #16"
     751                $asm.puts "ldp #{ops[1].arm64Operand(:quad)}, #{ops[0].arm64Operand(:quad)}, [sp], #16"
    699752            }
    700753        when "push"
    701754            operands.each_slice(2) {
    702755                | ops |
    703                 $asm.puts "stp #{ops[0].arm64Operand(:ptr)}, #{ops[1].arm64Operand(:ptr)}, [sp, #-16]!"
     756                $asm.puts "stp #{ops[0].arm64Operand(:quad)}, #{ops[1].arm64Operand(:quad)}, [sp, #-16]!"
    704757            }
    705758        when "move"
     
    707760                emitARM64MoveImmediate(operands[0].value, operands[1])
    708761            else
    709                 emitARM64("mov", operands, :ptr)
     762                emitARM64("mov", operands, :quad)
    710763            end
    711764        when "sxi2p"
    712             emitARM64("sxtw", operands, [:int, :ptr])
     765            emitARM64("sxtw", operands, [:word, :ptr])
    713766        when "sxi2q"
    714             emitARM64("sxtw", operands, [:int, :ptr])
     767            emitARM64("sxtw", operands, [:word, :quad])
    715768        when "zxi2p"
    716             emitARM64("uxtw", operands, [:int, :ptr])
     769            emitARM64("uxtw", operands, [:word, :ptr])
    717770        when "zxi2q"
    718             emitARM64("uxtw", operands, [:int, :ptr])
     771            emitARM64("uxtw", operands, [:word, :quad])
    719772        when "nop"
    720773            $asm.puts "nop"
    721774        when "bieq", "bbeq"
    722775            if operands[0].immediate? and operands[0].value == 0
    723                 $asm.puts "cbz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
     776                $asm.puts "cbz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
    724777            elsif operands[1].immediate? and operands[1].value == 0
    725                 $asm.puts "cbz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
    726             else
    727                 emitARM64Branch("subs wzr, ", operands, :int, "b.eq")
     778                $asm.puts "cbz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
     779            else
     780                emitARM64Branch("subs wzr, ", operands, :word, "b.eq")
    728781            end
    729782        when "bpeq"
     
    733786                $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
    734787            else
    735                 emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
     788                emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.eq")
    736789            end
    737790        when "bqeq"
    738791            if operands[0].immediate? and operands[0].value == 0
    739                 $asm.puts "cbz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
     792                $asm.puts "cbz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
    740793            elsif operands[1].immediate? and operands[1].value == 0
    741                 $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
    742             else
    743                 emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
     794                $asm.puts "cbz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
     795            else
     796                emitARM64Branch("subs xzr, ", operands, :quad, "b.eq")
    744797            end
    745798        when "bineq", "bbneq"
    746799            if operands[0].immediate? and operands[0].value == 0
    747                 $asm.puts "cbnz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
     800                $asm.puts "cbnz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
    748801            elsif operands[1].immediate? and operands[1].value == 0
    749                 $asm.puts "cbnz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
    750             else
    751                 emitARM64Branch("subs wzr, ", operands, :int, "b.ne")
     802                $asm.puts "cbnz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
     803            else
     804                emitARM64Branch("subs wzr, ", operands, :word, "b.ne")
    752805            end
    753806        when "bpneq"
     
    757810                $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
    758811            else
    759                 emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
     812                emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ne")
    760813            end
    761814        when "bqneq"
    762815            if operands[0].immediate? and operands[0].value == 0
    763                 $asm.puts "cbnz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
     816                $asm.puts "cbnz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
    764817            elsif operands[1].immediate? and operands[1].value == 0
    765                 $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
    766             else
    767                 emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
     818                $asm.puts "cbnz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
     819            else
     820                emitARM64Branch("subs xzr, ", operands, :quad, "b.ne")
    768821            end
    769822        when "bia", "bba"
    770             emitARM64Branch("subs wzr, ", operands, :int, "b.hi")
     823            emitARM64Branch("subs wzr, ", operands, :word, "b.hi")
    771824        when "bpa"
    772             emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
     825            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hi")
    773826        when "bqa"
    774             emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
     827            emitARM64Branch("subs xzr, ", operands, :quad, "b.hi")
    775828        when "biaeq", "bbaeq"
    776             emitARM64Branch("subs wzr, ", operands, :int, "b.hs")
     829            emitARM64Branch("subs wzr, ", operands, :word, "b.hs")
    777830        when "bpaeq"
    778             emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
     831            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hs")
    779832        when "bqaeq"
    780             emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
     833            emitARM64Branch("subs xzr, ", operands, :quad, "b.hs")
    781834        when "bib", "bbb"
    782             emitARM64Branch("subs wzr, ", operands, :int, "b.lo")
     835            emitARM64Branch("subs wzr, ", operands, :word, "b.lo")
    783836        when "bpb"
    784             emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
     837            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lo")
    785838        when "bqb"
    786             emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
     839            emitARM64Branch("subs xzr, ", operands, :quad, "b.lo")
    787840        when "bibeq", "bbbeq"
    788             emitARM64Branch("subs wzr, ", operands, :int, "b.ls")
     841            emitARM64Branch("subs wzr, ", operands, :word, "b.ls")
    789842        when "bpbeq"
    790             emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
     843            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ls")
    791844        when "bqbeq"
    792             emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
     845            emitARM64Branch("subs xzr, ", operands, :quad, "b.ls")
    793846        when "bigt", "bbgt"
    794             emitARM64Branch("subs wzr, ", operands, :int, "b.gt")
     847            emitARM64Branch("subs wzr, ", operands, :word, "b.gt")
    795848        when "bpgt"
    796             emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
     849            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.gt")
    797850        when "bqgt"
    798             emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
     851            emitARM64Branch("subs xzr, ", operands, :quad, "b.gt")
    799852        when "bigteq", "bbgteq"
    800             emitARM64Branch("subs wzr, ", operands, :int, "b.ge")
     853            emitARM64Branch("subs wzr, ", operands, :word, "b.ge")
    801854        when "bpgteq"
    802             emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
     855            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ge")
    803856        when "bqgteq"
    804             emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
     857            emitARM64Branch("subs xzr, ", operands, :quad, "b.ge")
    805858        when "bilt", "bblt"
    806             emitARM64Branch("subs wzr, ", operands, :int, "b.lt")
     859            emitARM64Branch("subs wzr, ", operands, :word, "b.lt")
    807860        when "bplt"
    808             emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
     861            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lt")
    809862        when "bqlt"
    810             emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
     863            emitARM64Branch("subs xzr, ", operands, :quad, "b.lt")
    811864        when "bilteq", "bblteq"
    812             emitARM64Branch("subs wzr, ", operands, :int, "b.le")
     865            emitARM64Branch("subs wzr, ", operands, :word, "b.le")
    813866        when "bplteq"
    814             emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
     867            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.le")
    815868        when "bqlteq"
    816             emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
     869            emitARM64Branch("subs xzr, ", operands, :quad, "b.le")
    817870        when "jmp"
    818871            if operands[0].label?
    819872                $asm.puts "b #{operands[0].asmLabel}"
    820873            else
    821                 emitARM64Unflipped("br", operands, :ptr)
     874                emitARM64Unflipped("br", operands, :quad)
    822875            end
    823876        when "call"
     
    825878                $asm.puts "bl #{operands[0].asmLabel}"
    826879            else
    827                 emitARM64Unflipped("blr", operands, :ptr)
     880                emitARM64Unflipped("blr", operands, :quad)
    828881            end
    829882        when "break"
     
    832885            $asm.puts "ret"
    833886        when "cieq", "cbeq"
    834             emitARM64Compare(operands, :int, "ne")
     887            emitARM64Compare(operands, :word, "ne")
    835888        when "cpeq"
    836889            emitARM64Compare(operands, :ptr, "ne")
    837890        when "cqeq"
    838             emitARM64Compare(operands, :ptr, "ne")
     891            emitARM64Compare(operands, :quad, "ne")
    839892        when "cineq", "cbneq"
    840             emitARM64Compare(operands, :int, "eq")
     893            emitARM64Compare(operands, :word, "eq")
    841894        when "cpneq"
    842895            emitARM64Compare(operands, :ptr, "eq")
    843896        when "cqneq"
    844             emitARM64Compare(operands, :ptr, "eq")
     897            emitARM64Compare(operands, :quad, "eq")
    845898        when "cia", "cba"
    846             emitARM64Compare(operands, :int, "ls")
     899            emitARM64Compare(operands, :word, "ls")
    847900        when "cpa"
    848901            emitARM64Compare(operands, :ptr, "ls")
    849902        when "cqa"
    850             emitARM64Compare(operands, :ptr, "ls")
     903            emitARM64Compare(operands, :quad, "ls")
    851904        when "ciaeq", "cbaeq"
    852             emitARM64Compare(operands, :int, "lo")
     905            emitARM64Compare(operands, :word, "lo")
    853906        when "cpaeq"
    854907            emitARM64Compare(operands, :ptr, "lo")
    855908        when "cqaeq"
    856             emitARM64Compare(operands, :ptr, "lo")
     909            emitARM64Compare(operands, :quad, "lo")
    857910        when "cib", "cbb"
    858             emitARM64Compare(operands, :int, "hs")
     911            emitARM64Compare(operands, :word, "hs")
    859912        when "cpb"
    860913            emitARM64Compare(operands, :ptr, "hs")
    861914        when "cqb"
    862             emitARM64Compare(operands, :ptr, "hs")
     915            emitARM64Compare(operands, :quad, "hs")
    863916        when "cibeq", "cbbeq"
    864             emitARM64Compare(operands, :int, "hi")
     917            emitARM64Compare(operands, :word, "hi")
    865918        when "cpbeq"
    866919            emitARM64Compare(operands, :ptr, "hi")
    867920        when "cqbeq"
    868             emitARM64Compare(operands, :ptr, "hi")
     921            emitARM64Compare(operands, :quad, "hi")
    869922        when "cilt", "cblt"
    870             emitARM64Compare(operands, :int, "ge")
     923            emitARM64Compare(operands, :word, "ge")
    871924        when "cplt"
    872925            emitARM64Compare(operands, :ptr, "ge")
    873926        when "cqlt"
    874             emitARM64Compare(operands, :ptr, "ge")
     927            emitARM64Compare(operands, :quad, "ge")
    875928        when "cilteq", "cblteq"
    876             emitARM64Compare(operands, :int, "gt")
     929            emitARM64Compare(operands, :word, "gt")
    877930        when "cplteq"
    878931            emitARM64Compare(operands, :ptr, "gt")
    879932        when "cqlteq"
    880             emitARM64Compare(operands, :ptr, "gt")
     933            emitARM64Compare(operands, :quad, "gt")
    881934        when "cigt", "cbgt"
    882             emitARM64Compare(operands, :int, "le")
     935            emitARM64Compare(operands, :word, "le")
    883936        when "cpgt"
    884937            emitARM64Compare(operands, :ptr, "le")
    885938        when "cqgt"
    886             emitARM64Compare(operands, :ptr, "le")
     939            emitARM64Compare(operands, :quad, "le")
    887940        when "cigteq", "cbgteq"
    888             emitARM64Compare(operands, :int, "lt")
     941            emitARM64Compare(operands, :word, "lt")
    889942        when "cpgteq"
    890943            emitARM64Compare(operands, :ptr, "lt")
    891944        when "cqgteq"
    892             emitARM64Compare(operands, :ptr, "lt")
     945            emitARM64Compare(operands, :quad, "lt")
    893946        when "peek"
    894             $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
     947            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
    895948        when "poke"
    896             $asm.puts "str #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
     949            $asm.puts "str #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
    897950        when "fp2d"
    898951            emitARM64("fmov", operands, [:ptr, :double])
    899952        when "fq2d"
    900             emitARM64("fmov", operands, [:ptr, :double])
     953            emitARM64("fmov", operands, [:quad, :double])
    901954        when "fd2p"
    902955            emitARM64("fmov", operands, [:double, :ptr])
    903956        when "fd2q"
    904             emitARM64("fmov", operands, [:double, :ptr])
     957            emitARM64("fmov", operands, [:double, :quad])
    905958        when "bo"
    906959            $asm.puts "b.vs #{operands[0].asmLabel}"
     
    912965            $asm.puts "b.ne #{operands[0].asmLabel}"
    913966        when "leai"
    914             operands[0].arm64EmitLea(operands[1], :int)
     967            operands[0].arm64EmitLea(operands[1], :word)
    915968        when "leap"
    916969            operands[0].arm64EmitLea(operands[1], :ptr)
    917970        when "leaq"
    918             operands[0].arm64EmitLea(operands[1], :ptr)
     971            operands[0].arm64EmitLea(operands[1], :quad)
    919972        when "smulli"
    920             $asm.puts "smaddl #{operands[2].arm64Operand(:ptr)}, #{operands[0].arm64Operand(:int)}, #{operands[1].arm64Operand(:int)}, xzr"
     973            $asm.puts "smaddl #{operands[2].arm64Operand(:quad)}, #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, xzr"
    921974        when "memfence"
    922975            $asm.puts "dmb sy"
    923976        when "pcrtoaddr"
    924             $asm.puts "adr #{operands[1].arm64Operand(:ptr)}, #{operands[0].value}"
     977            $asm.puts "adr #{operands[1].arm64Operand(:quad)}, #{operands[0].value}"
    925978        when "nopCortexA53Fix835769"
    926979            $asm.putStr("#if CPU(ARM64_CORTEXA53)")
     
    934987            $asm.putStr("#if OS(DARWIN)")
    935988            $asm.puts "L_offlineasm_loh_adrp_#{uid}:"
    936             $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGE"
     989            $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGE"
    937990            $asm.puts "L_offlineasm_loh_ldr_#{uid}:"
    938             $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGEOFF]"
     991            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGEOFF]"
    939992
    940993            # On Linux, use ELF GOT relocation specifiers.
    941994            $asm.putStr("#elif OS(LINUX)")
    942             $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, :got:#{operands[0].asmLabel}"
    943             $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, :got_lo12:#{operands[0].asmLabel}]"
     995            $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, :got:#{operands[0].asmLabel}"
     996            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, :got_lo12:#{operands[0].asmLabel}]"
    944997
    945998            # Throw a compiler error everywhere else.
  • trunk/Source/JavaScriptCore/offlineasm/asm.rb

    r229506 r237173  
    390390            lowLevelAST.validate
    391391            emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
     392                 $currentSettings = concreteSettings
    392393                $asm.inAsm {
    393394                    lowLevelAST.lower(backend)
  • trunk/Source/JavaScriptCore/offlineasm/ast.rb

    r236434 r237173  
    812812        @index = index
    813813        @scale = scale
    814         raise unless [1, 2, 4, 8].member? @scale
    815814        @offset = offset
    816815    end
    817    
     816
     817    def scaleValue
     818        raise unless [1, 2, 4, 8].member? scale.value
     819        scale.value
     820    end
     821
    818822    def scaleShift
    819         case scale
     823        case scaleValue
    820824        when 1
    821825            0
     
    827831            3
    828832        else
    829             raise "Bad scale at #{codeOriginString}"
     833            raise "Bad scale: #{scale.value} at #{codeOriginString}"
    830834        end
    831835    end
     
    840844   
    841845    def mapChildren
    842         BaseIndex.new(codeOrigin, (yield @base), (yield @index), @scale, (yield @offset))
    843     end
    844    
    845     def dump
    846         "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale}]"
     846        BaseIndex.new(codeOrigin, (yield @base), (yield @index), (yield @scale), (yield @offset))
     847    end
     848   
     849    def dump
     850        "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale.value}]"
    847851    end
    848852   
  • trunk/Source/JavaScriptCore/offlineasm/backends.rb

    r229482 r237173  
    8787        if backendName =~ /ARM.*/
    8888            backendName.sub!(/ARMV7(S?)(.*)/) { | _ | 'ARMv7' + $1.downcase + $2 }
     89            backendName = "ARM64" if backendName == "ARM64_32"
    8990        end
    9091        backendName = "X86" if backendName == "I386"
  • trunk/Source/JavaScriptCore/offlineasm/parser.rb

    r236499 r237173  
    364364        result
    365365    end
     366
     367    def parseConstExpr
     368        if @tokens[@idx] == "constexpr"
     369            @idx += 1
     370            skipNewLine
     371            if @tokens[@idx] == "("
     372                codeOrigin, text = parseTextInParens
     373                text = text.join
     374            else
     375                codeOrigin, text = parseColonColon
     376                text = text.join("::")
     377            end
     378            ConstExpr.forName(codeOrigin, text)
     379        else
     380            parseError
     381        end
     382    end
    366383   
    367384    def parseAddress(offset)
     
    388405            b = parseVariable
    389406            if @tokens[@idx] == "]"
    390                 result = BaseIndex.new(codeOrigin, a, b, 1, offset)
     407                result = BaseIndex.new(codeOrigin, a, b, Immediate.new(codeOrigin, 1), offset)
    391408            else
    392409                parseError unless @tokens[@idx] == ","
    393410                @idx += 1
    394                 parseError unless ["1", "2", "4", "8"].member? @tokens[@idx].string
    395                 c = @tokens[@idx].string.to_i
    396                 @idx += 1
     411                if ["1", "2", "4", "8"].member? @tokens[@idx].string
     412                    c = Immediate.new(codeOrigin, @tokens[@idx].string.to_i)
     413                    @idx += 1
     414                elsif @tokens[@idx] == "constexpr"
     415                    c = parseConstExpr
     416                else
     417                    c = parseVariable
     418                end
    397419                parseError unless @tokens[@idx] == "]"
    398420                result = BaseIndex.new(codeOrigin, a, b, c, offset)
     
    479501            Sizeof.forName(codeOrigin, names.join('::'))
    480502        elsif @tokens[@idx] == "constexpr"
    481             @idx += 1
    482             skipNewLine
    483             if @tokens[@idx] == "("
    484                 codeOrigin, text = parseTextInParens
    485                 text = text.join
    486             else
    487                 codeOrigin, text = parseColonColon
    488                 text = text.join("::")
    489             end
    490             ConstExpr.forName(codeOrigin, text)
     503            parseConstExpr
    491504        elsif isLabel @tokens[@idx]
    492505            result = LabelReference.new(@tokens[@idx].codeOrigin, Label.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string))
  • trunk/Source/JavaScriptCore/offlineasm/x86.rb

    r230273 r237173  
    422422    def x86AddressOperand(addressKind)
    423423        if !isIntelSyntax
    424             "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scale})"
    425         else
    426             "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scale}]"
     424            "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scaleValue})"
     425        else
     426            "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scaleValue}]"
    427427        end
    428428    end
     
    432432            x86AddressOperand(:ptr)
    433433        else
    434             "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scale}]"
     434            "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scaleValue}]"
    435435        end
    436436    end
  • trunk/Source/JavaScriptCore/runtime/BasicBlockLocation.cpp

    r192125 r237173  
    7575{
    7676    Vector<Gap> executedRanges = getExecutedRanges();
    77     for (Gap gap : executedRanges)
    78         dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:%zu\n", gap.first, gap.second, hasExecuted() ? "true" : "false", m_executionCount);
     77    for (Gap gap : executedRanges) {
     78        dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:", gap.first, gap.second, hasExecuted() ? "true" : "false");
     79        dataLogLn(m_executionCount);
     80    }
    7981}
    8082
     
    8385void BasicBlockLocation::emitExecuteCode(CCallHelpers& jit) const
    8486{
    85     static_assert(sizeof(size_t) == 8, "Assuming size_t is 64 bits on 64 bit platforms.");
     87    static_assert(sizeof(UCPURegister) == 8, "Assuming UCPURegister is 64 bits on 64 bit platforms.");
    8688    jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(&m_executionCount));
    8789}
  • trunk/Source/JavaScriptCore/runtime/BasicBlockLocation.h

    r218794 r237173  
    6363    int m_startOffset;
    6464    int m_endOffset;
    65     size_t m_executionCount;
    6665    Vector<Gap> m_gaps;
     66    UCPURegister m_executionCount;
    6767};
    6868
  • trunk/Source/JavaScriptCore/runtime/HasOwnPropertyCache.h

    r230856 r237173  
    3434class HasOwnPropertyCache {
    3535    static const uint32_t size = 2 * 1024;
    36     static_assert(!(size & (size - 1)), "size should be a power of two.");
     36    static_assert(hasOneBitSet(size), "size should be a power of two.");
    3737public:
    3838    static const uint32_t mask = size - 1;
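For illustration, a power-of-two predicate in the spirit of the hasOneBitSet check above; this is a hypothetical standalone sketch, not WTF's implementation:

    #include <cstdint>

    // True when exactly one bit is set, i.e. value is a non-zero power of two.
    // The earlier !(size & (size - 1)) form differs only in also accepting zero.
    constexpr bool exampleHasOneBitSet(uint32_t value)
    {
        return value && !(value & (value - 1));
    }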
  • trunk/Source/JavaScriptCore/runtime/JSBigInt.cpp

    r237080 r237173  
    227227
    228228// Multiplies {this} with {factor} and adds {summand} to the result.
    229 inline void JSBigInt::inplaceMultiplyAdd(uintptr_t factor, uintptr_t summand)
    230 {
    231     STATIC_ASSERT(sizeof(factor) == sizeof(Digit));
    232     STATIC_ASSERT(sizeof(summand) == sizeof(Digit));
    233 
     229void JSBigInt::inplaceMultiplyAdd(Digit factor, Digit summand)
     230{
    234231    internalMultiplyAdd(this, factor, summand, length(), this);
    235232}
     
    564561    // This shifting produces a value which covers the 0 < {s} <= (digitBits - 1) cases. {s} == digitBits never happens, as we asserted.
    565562    // Since {sZeroMask} clears the value in the case of {s} == 0, the {s} == 0 case is also covered.
    566     STATIC_ASSERT(sizeof(intptr_t) == sizeof(Digit));
    567     Digit sZeroMask = static_cast<Digit>((-static_cast<intptr_t>(s)) >> (digitBits - 1));
     563    STATIC_ASSERT(sizeof(CPURegister) == sizeof(Digit));
     564    Digit sZeroMask = static_cast<Digit>((-static_cast<CPURegister>(s)) >> (digitBits - 1));
    568565    static constexpr unsigned shiftMask = digitBits - 1;
    569566    Digit un32 = (high << s) | ((low >> ((digitBits - s) & shiftMask)) & sZeroMask);
  • trunk/Source/JavaScriptCore/runtime/JSBigInt.h

    r236901 r237173  
    2626#pragma once
    2727
     28#include "CPU.h"
    2829#include "ExceptionHelpers.h"
    2930#include "JSObject.h"
     
    119120private:
    120121
    121     using Digit = uintptr_t;
     122    using Digit = UCPURegister;
    122123    static constexpr unsigned bitsPerByte = 8;
    123124    static constexpr unsigned digitBits = sizeof(Digit) * bitsPerByte;
  • trunk/Source/JavaScriptCore/runtime/JSObject.h

r236697 → r237173

     AuxiliaryBarrier<Butterfly*> m_butterfly;
-#if USE(JSVALUE32_64)
+#if CPU(ADDRESS32)
     unsigned m_32BitPadding;
 #endif
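The padding is now keyed on pointer width rather than on the value representation, since arm64_32 uses the 64-bit value representation but 32-bit pointers. An illustrative layout sketch with hypothetical types (not the real JSObject), presumably showing why the explicit 4 bytes keep the 64-bit fields that follow the butterfly pointer on an 8-byte boundary:

    #include <cstdint>

    struct ButterflyLike;

    struct ObjectLayoutSketch {
        uint64_t cellHeader;            // stands in for the JSCell header
        ButterflyLike* butterfly;       // 4 bytes when sizeof(void*) == 4
    #if UINTPTR_MAX == UINT32_MAX       // roughly what CPU(ADDRESS32) selects
        unsigned padding32;             // makes the padding explicit
    #endif
        uint64_t inlineStorage[2];      // JSValue-sized slots follow
    };

    static_assert(sizeof(ObjectLayoutSketch) % 8 == 0, "layout is a multiple of 8 bytes");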
  • trunk/Source/JavaScriptCore/runtime/Options.cpp

r236883 → r237173

     return multiCorePriorityDelta;
+}
+
+static bool jitEnabledByDefault()
+{
+    return is32Bit() || isAddress64Bit();
 }
  • trunk/Source/JavaScriptCore/runtime/Options.h

r237108 → r237173

     \
     v(bool, useLLInt,  true, Normal, "allows the LLINT to be used if true") \
-    v(bool, useJIT,    true, Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
+    v(bool, useJIT, jitEnabledByDefault(), Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
     v(bool, useBaselineJIT, true, Normal, "allows the baseline JIT to be used if true") \
     v(bool, useDFGJIT, true, Normal, "allows the DFG JIT to be used if true") \
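The jitEnabledByDefault() predicate defined in Options.cpp above now drives the useJIT default: it stays true on 32-bit CPUs and on 64-bit-address platforms, and only a 64-bit CPU with a 32-bit address space (arm64_32) falls through to false, leaving that configuration LLInt-only by default. A small truth-table sketch, with hypothetical parameters standing in for JSC's is32Bit() and isAddress64Bit():

    #include <cstdio>

    // registersAre32Bit mirrors is32Bit() (register width); addressesAre64Bit
    // mirrors isAddress64Bit() (pointer width).
    static bool jitEnabledByDefaultSketch(bool registersAre32Bit, bool addressesAre64Bit)
    {
        return registersAre32Bit || addressesAre64Bit;
    }

    int main()
    {
        std::printf("x86_64/arm64: %d\n", jitEnabledByDefaultSketch(false, true));  // JIT on
        std::printf("x86/armv7:    %d\n", jitEnabledByDefaultSketch(true, false));  // JIT on
        std::printf("arm64_32:     %d\n", jitEnabledByDefaultSketch(false, false)); // JIT off by default
        return 0;
    }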
  • trunk/Source/JavaScriptCore/runtime/RegExp.cpp

r235636 → r237173

             snprintf(jit16BitMatchAddr, jitAddrSize, "----      ");
         } else {
-            snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchOnlyAddr()));
-            snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchOnlyAddr()));
-            snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchAddr()));
-            snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchAddr()));
+            snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchOnlyAddr()));
+            snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchOnlyAddr()));
+            snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchAddr()));
+            snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchAddr()));
         }
 #else
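uintptr_t is the integer type meant to round-trip a pointer; unsigned long only happens to be wide enough on LP64 targets. The SigillCrashAnalyzer.cpp hunk further down makes the same substitution. A hypothetical, self-contained example of the fully portable spelling, which also pairs the cast with the matching PRIxPTR length modifier instead of assuming "%lx":

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int object = 0;
        char buffer[24];
        // uintptr_t round-trips the pointer value; PRIxPTR expands to the
        // printf length modifier that matches it on the current target.
        std::snprintf(buffer, sizeof(buffer), "0x%014" PRIxPTR,
                      reinterpret_cast<uintptr_t>(&object));
        std::puts(buffer);
        return 0;
    }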
  • trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp

r236382 → r237173

             if (isCFrame()) {
                 RELEASE_ASSERT(!LLInt::isLLIntPC(frame()->callerFrame));
-                stackTrace[m_depth] = UnprocessedStackFrame(frame()->pc);
+                stackTrace[m_depth] = UnprocessedStackFrame(frame()->returnPC);
                 m_depth++;
             } else
  • trunk/Source/JavaScriptCore/runtime/SlowPathReturnType.h

r206525 → r237173

 #pragma once

+#include "CPU.h"
 #include <wtf/StdLibExtras.h>

…

 // warnings, or worse, a change in the ABI used to return these types.
 struct SlowPathReturnType {
-    void* a;
-    void* b;
+    CPURegister a;
+    CPURegister b;
 };
+static_assert(sizeof(SlowPathReturnType) >= sizeof(void*) * 2, "SlowPathReturnType should fit in two machine registers");

 inline SlowPathReturnType encodeResult(void* a, void* b)
 {
     SlowPathReturnType result;
-    result.a = a;
-    result.b = b;
+    result.a = reinterpret_cast<CPURegister>(a);
+    result.b = reinterpret_cast<CPURegister>(b);
     return result;
 }

…

 inline void decodeResult(SlowPathReturnType result, void*& a, void*& b)
 {
-    a = result.a;
-    b = result.b;
+    a = reinterpret_cast<void*>(result.a);
+    b = reinterpret_cast<void*>(result.b);
 }
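Slow-path calls return two values in two machine registers, so the pair needs to stay register-sized rather than pointer-sized once pointers can be narrower than registers. A self-contained sketch of the encode/decode round trip, using a stand-in CPURegisterLike alias (the real CPURegister comes from JSC's CPU.h):

    #include <cassert>
    #include <cstdint>

    using CPURegisterLike = int64_t; // register width, even when pointers are 32-bit

    struct SlowPathReturnTypeSketch {
        CPURegisterLike a;
        CPURegisterLike b;
    };

    inline SlowPathReturnTypeSketch encode(void* a, void* b)
    {
        // Widening a 32-bit pointer into a 64-bit register slot is lossless.
        return { static_cast<CPURegisterLike>(reinterpret_cast<uintptr_t>(a)),
                 static_cast<CPURegisterLike>(reinterpret_cast<uintptr_t>(b)) };
    }

    inline void decode(SlowPathReturnTypeSketch result, void*& a, void*& b)
    {
        a = reinterpret_cast<void*>(static_cast<uintptr_t>(result.a));
        b = reinterpret_cast<void*>(static_cast<uintptr_t>(result.b));
    }

    int main()
    {
        int x = 1, y = 2;
        void* a = nullptr;
        void* b = nullptr;
        decode(encode(&x, &y), a, b);
        assert(a == &x && b == &y);
        return 0;
    }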
  • trunk/Source/JavaScriptCore/tools/SigillCrashAnalyzer.cpp

r234649 → r237173

         char pcString[24];
         if (currentPC == machinePC) {
-            snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<unsigned long>(currentPC));
+            snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<uintptr_t>(currentPC));
             log("%20s: %s    <=========================", pcString, m_arm64Opcode.disassemble(currentPC));
         } else {
-            snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<unsigned long>(currentPC));
+            snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<uintptr_t>(currentPC));
             log("%20s: %s", pcString, m_arm64Opcode.disassemble(currentPC));
         }
  • trunk/Source/WTF/ChangeLog

r237156 → r237173

+2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+        Support arm64 CPUs with a 32-bit address space
+        https://bugs.webkit.org/show_bug.cgi?id=190273
+
+        Reviewed by Michael Saboff.
+
+        Use WTF_CPU_ADDRESS64/32 to decide if the system is running on arm64_32.
+
+        * wtf/MathExtras.h:
+        (getLSBSet):
+        * wtf/Platform.h:
+
 2018-10-15  Timothy Hatcher  <timothy@apple.com>
  • trunk/Source/WTF/wtf/MathExtras.h

r237099 → r237173

 }

-template <typename T> inline unsigned getLSBSet(T value)
+template <typename T> constexpr unsigned getLSBSet(T value)
 {
     typedef typename std::make_unsigned<T>::type UnsignedT;
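Making getLSBSet constexpr lets compile-time code (for example, static_asserts and other constexpr helpers) ask for the index of the lowest set bit. The hunk only shows the first line of the body, so the following is an equivalent loop-based sketch under that assumption, not WTF's actual implementation:

    #include <type_traits>

    template<typename T>
    constexpr unsigned getLSBSetSketch(T value)
    {
        // Assumes value is nonzero; otherwise the loop would not terminate.
        using UnsignedT = typename std::make_unsigned<T>::type;
        UnsignedT unsignedValue = static_cast<UnsignedT>(value);
        unsigned index = 0;
        while (!(unsignedValue & 1)) {
            unsignedValue >>= 1;
            ++index;
        }
        return index;
    }

    static_assert(getLSBSetSketch(8) == 3, "8 == 1 << 3");
    static_assert(getLSBSetSketch(0x40u) == 6, "0x40 == 1 << 6");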
  • trunk/Source/WTF/wtf/Platform.h

r237136 → r237173

 #if !defined(USE_JSVALUE64) && !defined(USE_JSVALUE32_64)
-#if CPU(ADDRESS64)
+#if CPU(ADDRESS64) || CPU(ARM64)
 #define USE_JSVALUE64 1
 #else

…

 /* The JIT is enabled by default on all x86, x86-64, ARM & MIPS platforms except ARMv7k. */
 #if !defined(ENABLE_JIT) \
-    && (CPU(X86) || CPU(X86_64) || CPU(ARM) || (CPU(ARM64) && !defined(__ILP32__)) || CPU(MIPS)) \
+    && (CPU(X86) || CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)) \
     && !CPU(APPLE_ARMV7K)
 #define ENABLE_JIT 1

…

 #if !defined(ENABLE_WEBASSEMBLY)
-#if ENABLE(B3_JIT) && PLATFORM(COCOA)
+#if ENABLE(B3_JIT) && PLATFORM(COCOA) && CPU(ADDRESS64)
 #define ENABLE_WEBASSEMBLY 1
 #else
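Taken together, on arm64_32 these hunks select the 64-bit JSValue representation (registers are 64-bit even though addresses are not), stop excluding ILP32 ARM64 from the JIT at the Platform.h level (the new useJIT default in Options.h handles that at runtime), and keep WebAssembly restricted to 64-bit address spaces. A compressed preprocessor sketch of how those knobs land on arm64_32, with SKETCH_-prefixed stand-ins for WTF's CPU()/ENABLE() macros:

    #include <cstdio>

    #define SKETCH_CPU_ARM64 1
    #define SKETCH_CPU_ADDRESS64 0      /* arm64_32: 32-bit address space */

    #if SKETCH_CPU_ADDRESS64 || SKETCH_CPU_ARM64
    #define SKETCH_USE_JSVALUE64 1      /* 64-bit value representation */
    #else
    #define SKETCH_USE_JSVALUE64 0
    #endif

    #if SKETCH_CPU_ADDRESS64
    #define SKETCH_ENABLE_WEBASSEMBLY 1
    #else
    #define SKETCH_ENABLE_WEBASSEMBLY 0 /* WebAssembly still wants 64-bit addresses */
    #endif

    int main()
    {
        std::printf("JSVALUE64: %d, WebAssembly: %d\n",
                    SKETCH_USE_JSVALUE64, SKETCH_ENABLE_WEBASSEMBLY);
        return 0;
    }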
  • trunk/Source/WebCore/ChangeLog

r237170 → r237173

+2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+        Support arm64 CPUs with a 32-bit address space
+        https://bugs.webkit.org/show_bug.cgi?id=190273
+
+        Reviewed by Michael Saboff.
+
+        Fix missing namespace annotation.
+
+        * cssjit/SelectorCompiler.cpp:
+        (WebCore::SelectorCompiler::SelectorCodeGenerator::generateAddStyleRelation):
+
 2018-10-15  Justin Fan  <justin_fan@apple.com>
  • trunk/Source/WebCore/cssjit/SelectorCompiler.cpp

r236228 → r237173

         m_assembler.lshiftPtr(Assembler::TrustedImm32(4), sizeAndTarget);
 #else
-        m_assembler.mul32(TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
+        m_assembler.mul32(Assembler::TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
 #endif
         m_assembler.addPtr(dataAddress, sizeAndTarget);