Changeset 237173 in webkit
- Timestamp: Oct 16, 2018, 12:19:13 AM
- Location: trunk/Source
- Files: 57 edited
trunk/Source/JavaScriptCore/ChangeLog
r237170 → r237173

2018-10-15  Keith Miller  <keith_miller@apple.com>

        Support arm64 CPUs with a 32-bit address space
        https://bugs.webkit.org/show_bug.cgi?id=190273

        Reviewed by Michael Saboff.

        This patch adds support for arm64_32 in the LLInt. To make this
        work, we needed to add a new type that reflects the size of a
        CPU register: CPURegister, or UCPURegister for the unsigned
        version. Most places that used void* or intptr_t to refer to a
        register have been changed to use this new type.

        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/ARM64Assembler.h:
        (JSC::isInt):
        (JSC::is4ByteAligned):
        (JSC::PairPostIndex::PairPostIndex):
        (JSC::PairPreIndex::PairPreIndex):
        (JSC::ARM64Assembler::readPointer):
        (JSC::ARM64Assembler::readCallTarget):
        (JSC::ARM64Assembler::computeJumpType):
        (JSC::ARM64Assembler::linkCompareAndBranch):
        (JSC::ARM64Assembler::linkConditionalBranch):
        (JSC::ARM64Assembler::linkTestAndBranch):
        (JSC::ARM64Assembler::loadRegisterLiteral):
        (JSC::ARM64Assembler::loadStoreRegisterPairPostIndex):
        (JSC::ARM64Assembler::loadStoreRegisterPairPreIndex):
        (JSC::ARM64Assembler::loadStoreRegisterPairOffset):
        (JSC::ARM64Assembler::loadStoreRegisterPairNonTemporal):
        (JSC::isInt7): Deleted.
        (JSC::isInt11): Deleted.
        * assembler/CPU.h:
        (JSC::isAddress64Bit):
        (JSC::isAddress32Bit):
        * assembler/MacroAssembler.h:
        (JSC::MacroAssembler::shouldBlind):
        * assembler/MacroAssemblerARM64.cpp:
        (JSC::MacroAssemblerARM64::collectCPUFeatures):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::load):
        (JSC::MacroAssemblerARM64::store):
        (JSC::MacroAssemblerARM64::isInIntRange): Deleted.
        * assembler/Printer.h:
        * assembler/ProbeContext.h:
        (JSC::Probe::CPUState::gpr):
        (JSC::Probe::CPUState::spr):
        (JSC::Probe::Context::gpr):
        (JSC::Probe::Context::spr):
        * b3/B3ConstPtrValue.h:
        * b3/B3StackmapSpecial.cpp:
        (JSC::B3::StackmapSpecial::isArgValidForRep):
        * b3/air/AirArg.h:
        (JSC::B3::Air::Arg::stackSlot const):
        (JSC::B3::Air::Arg::special const):
        * b3/air/testair.cpp:
        * b3/testb3.cpp:
        (JSC::B3::testStoreConstantPtr):
        (JSC::B3::testInterpreter):
        (JSC::B3::testAddShl32):
        (JSC::B3::testLoadBaseIndexShift32):
        * bindings/ScriptFunctionCall.cpp:
        (Deprecated::ScriptCallArgumentHandler::appendArgument):
        * bindings/ScriptFunctionCall.h:
        * bytecode/CodeBlock.cpp:
        (JSC::roundCalleeSaveSpaceAsVirtualRegisters):
        * dfg/DFGOSRExit.cpp:
        (JSC::DFG::restoreCalleeSavesFor):
        (JSC::DFG::saveCalleeSavesFor):
        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::reifyInlinedCallFrames):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * disassembler/UDis86Disassembler.cpp:
        (JSC::tryToDisassembleWithUDis86):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::compileWeakMapGet):
        * heap/MachineStackMarker.cpp:
        (JSC::copyMemory):
        * interpreter/CallFrame.h:
        (JSC::ExecState::returnPC const):
        (JSC::ExecState::hasReturnPC const):
        (JSC::ExecState::clearReturnPC):
        (JSC::ExecState::returnPCOffset):
        (JSC::ExecState::isGlobalExec const):
        (JSC::ExecState::setReturnPC):
        * interpreter/CalleeBits.h:
        (JSC::CalleeBits::boxWasm):
        (JSC::CalleeBits::isWasm const):
        (JSC::CalleeBits::asWasmCallee const):
        * interpreter/Interpreter.cpp:
        (JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):
        * interpreter/VMEntryRecord.h:
        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::clearStackFrame):
        * jit/RegisterAtOffset.h:
        (JSC::RegisterAtOffset::offsetAsIndex const):
        * jit/RegisterAtOffsetList.cpp:
        (JSC::RegisterAtOffsetList::RegisterAtOffsetList):
        * llint/LLIntData.cpp:
        (JSC::LLInt::Data::performAssertions):
        * llint/LLIntOfflineAsmConfig.h:
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter64.asm:
        * offlineasm/arm64.rb:
        * offlineasm/asm.rb:
        * offlineasm/ast.rb:
        * offlineasm/backends.rb:
        * offlineasm/parser.rb:
        * offlineasm/x86.rb:
        * runtime/BasicBlockLocation.cpp:
        (JSC::BasicBlockLocation::dumpData const):
        (JSC::BasicBlockLocation::emitExecuteCode const):
        * runtime/BasicBlockLocation.h:
        * runtime/HasOwnPropertyCache.h:
        * runtime/JSBigInt.cpp:
        (JSC::JSBigInt::inplaceMultiplyAdd):
        (JSC::JSBigInt::digitDiv):
        * runtime/JSBigInt.h:
        * runtime/JSObject.h:
        * runtime/Options.cpp:
        (JSC::jitEnabledByDefault):
        * runtime/Options.h:
        * runtime/RegExp.cpp:
        (JSC::RegExp::printTraceData):
        * runtime/SamplingProfiler.cpp:
        (JSC::CFrameWalker::walk):
        * runtime/SlowPathReturnType.h:
        (JSC::encodeResult):
        (JSC::decodeResult):
        * tools/SigillCrashAnalyzer.cpp:
        (JSC::SigillCrashAnalyzer::dumpCodeBlock):

2018-10-15  Justin Fan  <justin_fan@apple.com>
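Taken together, the two new building blocks the entry describes look roughly like this. A minimal standalone sketch: the type aliases and the isInt template follow the diffs below, while the SKETCH_JSVALUE64 macro and the main() harness are mine, added only so the snippet compiles on its own.

    #include <cassert>
    #include <climits>
    #include <cstdint>

    // Stand-in for JSC's USE(JSVALUE64): true on arm64_32 as well, because
    // arm64_32 keeps 64-bit registers even though its pointers are 32-bit.
    #define SKETCH_JSVALUE64 1

    #if SKETCH_JSVALUE64
    using CPURegister = int64_t;
    using UCPURegister = uint64_t;
    #else
    using CPURegister = int32_t;
    using UCPURegister = uint32_t;
    #endif

    // Generalization of the old isInt7/isInt11 helpers: does t survive a
    // round trip through a signed field that is `bits` wide?
    template<size_t bits, typename Type>
    constexpr bool isInt(Type t)
    {
        constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
        static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
        return ((t << shift) >> shift) == t;
    }

    int main()
    {
        static_assert(sizeof(CPURegister) == 8, "register width, not pointer width");
        assert(isInt<7>(int32_t(63)) && !isInt<7>(int32_t(64)));       // int7 holds [-64, 63]
        assert(isInt<11>(int32_t(1023)) && !isInt<11>(int32_t(1024))); // int11 holds [-1024, 1023]
    }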
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r237147 → r237173

  …
 		runOnlyForDeploymentPostprocessing = 0;
 		shellPath = /bin/sh;
-		shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1 ";
+		shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1\n";
 	};
 	65FB3F6509D11E9100F49DEB /* Generate Derived Sources */ = {
trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h
r237136 → r237173

  …
 #include "AssemblerBuffer.h"
 #include "AssemblerCommon.h"
+#include "CPU.h"
 #include "JSCPtrTag.h"
 #include <limits.h>
  …
 namespace JSC {

-ALWAYS_INLINE bool isInt7(int32_t value)
-{
-    return value == ((value << 25) >> 25);
-}
-
-ALWAYS_INLINE bool isInt11(int32_t value)
-{
-    return value == ((value << 21) >> 21);
-}
+template<size_t bits, typename Type>
+ALWAYS_INLINE constexpr bool isInt(Type t)
+{
+    constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
+    static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
+    return ((t << shift) >> shift) == t;
+}
+
+static ALWAYS_INLINE bool is4ByteAligned(const void* ptr)
+{
+    return !(reinterpret_cast<intptr_t>(ptr) & 0x3);
+}
  …
     : m_value(value)
 {
-    ASSERT(isInt11(value));
+    ASSERT(isInt<11>(value));
 }
  …
     : m_value(value)
 {
-    ASSERT(isInt11(value));
+    ASSERT(isInt<11>(value));
 }
  …
     union {
         struct RealTypes {
-            intptr_t m_from : 48;
-            intptr_t m_to : 48;
+            int64_t m_from;
+            int64_t m_to;
             JumpType m_type : 8;
             JumpLinkType m_linkType : 8;
  …
         result |= static_cast<uintptr_t>(imm16) << 16;

+#if CPU(ADDRESS64)
         expected = disassembleMoveWideImediate(address + 2, sf, opc, hw, imm16, rd);
         ASSERT_UNUSED(expected, expected && sf && opc == MoveWideOp_K && hw == 2 && rd == rdFirst);
         result |= static_cast<uintptr_t>(imm16) << 32;
+#endif

         return reinterpret_cast<void*>(result);
  …
     static void* readCallTarget(void* from)
     {
-        return readPointer(reinterpret_cast<int*>(from) - 4);
+        return readPointer(reinterpret_cast<int*>(from) - (isAddress64Bit() ? 4 : 3));
     }
  …
             return LinkJumpNoCondition;
         case JumpCondition: {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));

-            if (((relative << 43) >> 43) == relative)
+            if (isInt<21>(relative))
                 return LinkJumpConditionDirect;
  …
         case JumpCompareAndBranch: {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));

-            if (((relative << 43) >> 43) == relative)
+            if (isInt<21>(relative))
                 return LinkJumpCompareAndBranchDirect;
  …
         case JumpTestBit: {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));

-            if (((relative << 50) >> 50) == relative)
+            if (isInt<14>(relative))
                 return LinkJumpTestBitDirect;
  …
         ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
-        ASSERT(((offset << 38) >> 38) == offset);
-
-        bool useDirect = ((offset << 45) >> 45) == offset; // Fits in 19 bits
+        ASSERT(isInt<26>(offset));
+
+        bool useDirect = isInt<19>(offset);
         ASSERT(!isDirect || useDirect);
  …
 (the identical ((offset << 38) >> 38) → isInt<26> and 19-bit → isInt<19> change appears in linkConditionalBranch)
  …
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
         ASSERT(static_cast<int>(offset) == offset);
-        ASSERT(((offset << 38) >> 38) == offset);
-
-        bool useDirect = ((offset << 50) >> 50) == offset; // Fits in 14 bits
+        ASSERT(isInt<26>(offset));
+
+        bool useDirect = isInt<14>(offset);
         ASSERT(!isDirect || useDirect);
  …
     ALWAYS_INLINE static int loadRegisterLiteral(LdrLiteralOp opc, bool V, int imm19, FPRegisterID rt)
     {
-        ASSERT(((imm19 << 13) >> 13) == imm19);
+        ASSERT(isInt<19>(imm19));
         return (0x18000000 | opc << 30 | V << 26 | (imm19 & 0x7ffff) << 5 | rt);
     }
  …
         unsigned immedShiftAmount = memPairOffsetShift(V, size);
         int imm7 = immediate >> immedShiftAmount;
-        ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
+        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
         return (0x28800000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
     }
  …
 (the same isInt7(imm7) → isInt<7>(imm7) change repeats in loadStoreRegisterPairPreIndex, which returns 0x29800000 | …, in loadStoreRegisterPairOffset, which returns 0x29000000 | …, and in loadStoreRegisterPairNonTemporal, which returns 0x28000000 | …)
trunk/Source/JavaScriptCore/assembler/CPU.h
r218137 → r237173

  …
 namespace JSC {
+
+#if USE(JSVALUE64)
+using CPURegister = int64_t;
+using UCPURegister = uint64_t;
+#else
+using CPURegister = int32_t;
+using UCPURegister = uint32_t;
+#endif

 constexpr bool isARMv7IDIVSupported()
  …
 }

+constexpr bool isAddress64Bit()
+{
+    return sizeof(void*) == 8;
+}
+
+constexpr bool isAddress32Bit()
+{
+    return !isAddress64Bit();
+}
+
 constexpr bool isMIPS()
 {
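A hedged illustration of the distinction these predicates introduce; the aliases and functions mirror the hunk above, the slot-size constants and asserts are mine, with arm64_32 assumed to have sizeof(void*) == 4:

    #include <cstddef>
    #include <cstdint>

    using CPURegister = int64_t; // register width on USE(JSVALUE64) targets, arm64_32 included

    constexpr bool isAddress64Bit() { return sizeof(void*) == 8; }
    constexpr bool isAddress32Bit() { return !isAddress64Bit(); }

    // The distinction the patch leans on everywhere: a stack slot that holds a
    // saved register needs sizeof(CPURegister) bytes, while a slot that holds a
    // pointer needs sizeof(void*) bytes -- and only on arm64_32 do they differ.
    constexpr size_t savedRegisterSlot = sizeof(CPURegister); // always 8 here
    constexpr size_t pointerSlot = sizeof(void*);             // 4 on arm64_32, 8 on arm64

    static_assert(savedRegisterSlot == 8, "register slots do not shrink with the address space");
    static_assert(isAddress64Bit() || pointerSlot == 4, "32-bit addresses imply 4-byte pointers");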
trunk/Source/JavaScriptCore/assembler/MacroAssembler.h
r235517 → r237173

  …
     // First off we'll special case common, "safe" values to avoid hurting
     // performance too much
-    uintptr_t value = imm.asTrustedImmPtr().asIntptr();
+    uint64_t value = imm.asTrustedImmPtr().asIntptr();
     switch (value) {
     case 0xffff:
  …
         return false;

-    return shouldBlindPointerForSpecificArch(value);
+    return shouldBlindPointerForSpecificArch(static_cast<uintptr_t>(value));
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.cpp
r237136 → r237173

  …
 // The following are offsets for Probe::State fields accessed
 // by the ctiMasmProbeTrampoline stub.
+#if CPU(ADDRESS64)
 #define PTR_SIZE 8
+#else
+#define PTR_SIZE 4
+#endif
+
 #define PROBE_PROBE_FUNCTION_OFFSET (0 * PTR_SIZE)
 #define PROBE_ARG_OFFSET (1 * PTR_SIZE)
  …
 #define PROBE_SIZE (PROBE_FIRST_FPREG_OFFSET + (32 * FPREG_SIZE))

-#define SAVED_PROBE_RETURN_PC_OFFSET (PROBE_SIZE + (0 * PTR_SIZE))
-#define PROBE_SIZE_PLUS_EXTRAS (PROBE_SIZE + (3 * PTR_SIZE))
+#define SAVED_PROBE_RETURN_PC_OFFSET (PROBE_SIZE + (0 * GPREG_SIZE))
+#define PROBE_SIZE_PLUS_EXTRAS (PROBE_SIZE + (3 * GPREG_SIZE))

 // These ASSERTs remind you that if you change the layout of Probe::State,
  …
 // Conditions for using ldp and stp.
-static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + PTR_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
+static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + GPREG_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
 static_assert(!(PROBE_SIZE_PLUS_EXTRAS & 0xf), "PROBE_SIZE_PLUS_EXTRAS should be 16 byte aligned"); // the Probe::State copying code relies on this.
  …
 struct IncomingProbeRecord {
-    uintptr_t x24;
-    uintptr_t x25;
-    uintptr_t x26;
-    uintptr_t x27;
-    uintptr_t x28;
-    uintptr_t x30; // lr
+    UCPURegister x24;
+    UCPURegister x25;
+    UCPURegister x26;
+    UCPURegister x27;
+    UCPURegister x28;
+    UCPURegister x30; // lr
 };

-#define IN_X24_OFFSET (0 * PTR_SIZE)
-#define IN_X25_OFFSET (1 * PTR_SIZE)
-#define IN_X26_OFFSET (2 * PTR_SIZE)
-#define IN_X27_OFFSET (3 * PTR_SIZE)
-#define IN_X28_OFFSET (4 * PTR_SIZE)
-#define IN_X30_OFFSET (5 * PTR_SIZE)
-#define IN_SIZE (6 * PTR_SIZE)
+#define IN_X24_OFFSET (0 * GPREG_SIZE)
+#define IN_X25_OFFSET (1 * GPREG_SIZE)
+#define IN_X26_OFFSET (2 * GPREG_SIZE)
+#define IN_X27_OFFSET (3 * GPREG_SIZE)
+#define IN_X28_OFFSET (4 * GPREG_SIZE)
+#define IN_X30_OFFSET (5 * GPREG_SIZE)
+#define IN_SIZE (6 * GPREG_SIZE)

 static_assert(IN_X24_OFFSET == offsetof(IncomingProbeRecord, x24), "IN_X24_OFFSET is incorrect");
  …
 struct OutgoingProbeRecord {
-    uintptr_t nzcv;
-    uintptr_t fpsr;
-    uintptr_t x27;
-    uintptr_t x28;
-    uintptr_t fp;
-    uintptr_t lr;
+    UCPURegister nzcv;
+    UCPURegister fpsr;
+    UCPURegister x27;
+    UCPURegister x28;
+    UCPURegister fp;
+    UCPURegister lr;
 };

-#define OUT_NZCV_OFFSET (0 * PTR_SIZE)
-#define OUT_FPSR_OFFSET (1 * PTR_SIZE)
-#define OUT_X27_OFFSET (2 * PTR_SIZE)
-#define OUT_X28_OFFSET (3 * PTR_SIZE)
-#define OUT_FP_OFFSET (4 * PTR_SIZE)
-#define OUT_LR_OFFSET (5 * PTR_SIZE)
-#define OUT_SIZE (6 * PTR_SIZE)
+#define OUT_NZCV_OFFSET (0 * GPREG_SIZE)
+#define OUT_FPSR_OFFSET (1 * GPREG_SIZE)
+#define OUT_X27_OFFSET (2 * GPREG_SIZE)
+#define OUT_X28_OFFSET (3 * GPREG_SIZE)
+#define OUT_FP_OFFSET (4 * GPREG_SIZE)
+#define OUT_LR_OFFSET (5 * GPREG_SIZE)
+#define OUT_SIZE (6 * GPREG_SIZE)

 static_assert(OUT_NZCV_OFFSET == offsetof(OutgoingProbeRecord, nzcv), "OUT_NZCV_OFFSET is incorrect");
  …
 struct LRRestorationRecord {
-    uintptr_t lr;
-    uintptr_t unusedDummyToEnsureSizeIs16ByteAligned;
+    UCPURegister lr;
+    UCPURegister unusedDummyToEnsureSizeIs16ByteAligned;
 };

-#define LR_RESTORATION_LR_OFFSET (0 * PTR_SIZE)
-#define LR_RESTORATION_SIZE (2 * PTR_SIZE)
+#define LR_RESTORATION_LR_OFFSET (0 * GPREG_SIZE)
+#define LR_RESTORATION_SIZE (2 * GPREG_SIZE)

 static_assert(LR_RESTORATION_LR_OFFSET == offsetof(LRRestorationRecord, lr), "LR_RESTORATION_LR_OFFSET is incorrect");
  …
     "str x30, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n" // Save a duplicate copy of return pc (in lr).

-    "add x30, x30, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
+    "add x30, x30, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
     "str x30, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
  …
     "ldr x27, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n"
     "ldr x28, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
-    "add x27, x27, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "add x27, x27, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "cmp x27, x28" "\n"
     "bne " LOCAL_LABEL_STRING(ctiMasmProbeTrampolineEnd) "\n"
  …
     // Restore the remaining registers and pop the OutgoingProbeRecord.
-    "ldp x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "ldp x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "msr nzcv, x27" "\n"
     "msr fpsr, x28" "\n"
-    "ldp x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
-    "ldp x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "ldp x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
+    "ldp x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "ret" "\n"
 );
  …
     // that feature, the kernel does not tell it to users.), it is a stable approach.
     // https://www.kernel.org/doc/Documentation/arm64/elf_hwcaps.txt
-    unsigned long hwcaps = getauxval(AT_HWCAP);
+    uint64_t hwcaps = getauxval(AT_HWCAP);

 #if !defined(HWCAP_JSCVT)
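The reason PTR_SIZE had to give way to GPREG_SIZE in these records: the trampoline saves whole registers with stp/ldp, so offsets must scale with register width even when pointers are 4 bytes. A reduced sketch; the struct mirrors the diff, the GPREG_SIZE definition and asserts are mine:

    #include <cstddef>
    #include <cstdint>

    using UCPURegister = uint64_t;
    constexpr size_t GPREG_SIZE = sizeof(UCPURegister); // 8 on arm64 and arm64_32 alike

    struct IncomingProbeRecord {
        UCPURegister x24, x25, x26, x27, x28;
        UCPURegister x30; // lr
    };

    // Offsets consumed by the hand-written trampoline must match the struct;
    // had they stayed multiples of PTR_SIZE (4 on arm64_32), every field past
    // the first would be misaligned with the stp/ldp pairs.
    static_assert(offsetof(IncomingProbeRecord, x25) == 1 * GPREG_SIZE, "");
    static_assert(offsetof(IncomingProbeRecord, x30) == 5 * GPREG_SIZE, "");
    static_assert(sizeof(IncomingProbeRecord) == 6 * GPREG_SIZE, "");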
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
r237136 → r237173

  …
 static const ARM64Registers::FPRegisterID fpTempRegister = ARM64Registers::q31;
 static const Assembler::SetFlags S = Assembler::S;
-static const intptr_t maskHalfWord0 = 0xffffl;
-static const intptr_t maskHalfWord1 = 0xffff0000l;
-static const intptr_t maskUpperWord = 0xffffffff00000000l;
+static const int64_t maskHalfWord0 = 0xffffl;
+static const int64_t maskHalfWord1 = 0xffff0000l;
+static const int64_t maskUpperWord = 0xffffffff00000000l;

 static constexpr size_t INSTRUCTION_SIZE = 4;
  …
         RELEASE_ASSERT(m_allowScratchRegister);
         return m_cachedMemoryTempRegister;
     }
-
-    ALWAYS_INLINE bool isInIntRange(intptr_t value)
-    {
-        return value == ((value << 32) >> 32);
-    }
  …
         cachedMemoryTempRegister().invalidate();

-        if (isInIntRange(addressDelta)) {
+        if (isInt<32>(addressDelta)) {
             if (Assembler::canEncodeSImmOffset(addressDelta)) {
                 m_assembler.ldur<datasize>(dest, memoryTempRegister, addressDelta);
  …
         intptr_t addressDelta = addressAsInt - currentRegisterContents;

-        if (isInIntRange(addressDelta)) {
+        if (isInt<32>(addressDelta)) {
             if (Assembler::canEncodeSImmOffset(addressDelta)) {
                 m_assembler.stur<datasize>(src, memoryTempRegister, addressDelta);
trunk/Source/JavaScriptCore/assembler/Printer.h
r220958 → r237173

  …
     const void* pointer;
 #if USE(JSVALUE64)
-    uintptr_t buffer[4];
+    UCPURegister buffer[4];
 #elif USE(JSVALUE32_64)
-    uintptr_t buffer[6];
+    UCPURegister buffer[6];
 #endif
 };
trunk/Source/JavaScriptCore/assembler/ProbeContext.h
r231565 → r237173

  …
     static inline const char* sprName(SPRegisterID id) { return MacroAssembler::sprName(id); }
     static inline const char* fprName(FPRegisterID id) { return MacroAssembler::fprName(id); }
-    inline uintptr_t& gpr(RegisterID);
-    inline uintptr_t& spr(SPRegisterID);
+    inline UCPURegister& gpr(RegisterID);
+    inline UCPURegister& spr(SPRegisterID);
     inline double& fpr(FPRegisterID);
  …
     template<typename T> T sp() const;

-    uintptr_t gprs[MacroAssembler::numberOfRegisters()];
-    uintptr_t sprs[MacroAssembler::numberOfSPRegisters()];
+    UCPURegister gprs[MacroAssembler::numberOfRegisters()];
+    UCPURegister sprs[MacroAssembler::numberOfSPRegisters()];
     double fprs[MacroAssembler::numberOfFPRegisters()];
 };

-inline uintptr_t& CPUState::gpr(RegisterID id)
+inline UCPURegister& CPUState::gpr(RegisterID id)
 {
     ASSERT(id >= MacroAssembler::firstRegister() && id <= MacroAssembler::lastRegister());
  …
 }

-inline uintptr_t& CPUState::spr(SPRegisterID id)
+inline UCPURegister& CPUState::spr(SPRegisterID id)
 {
     ASSERT(id >= MacroAssembler::firstSPRegister() && id <= MacroAssembler::lastSPRegister());
  …
     T arg() { return reinterpret_cast<T>(m_state->arg); }

-    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+    UCPURegister& gpr(RegisterID id) { return cpu.gpr(id); }
+    UCPURegister& spr(SPRegisterID id) { return cpu.spr(id); }
     double& fpr(FPRegisterID id) { return cpu.fpr(id); }
     const char* gprName(RegisterID id) { return cpu.gprName(id); }
trunk/Source/JavaScriptCore/b3/B3ConstPtrValue.h
r206525 → r237173

  …
 // Const64Value depending on platform.

-#if USE(JSVALUE64)
+#if CPU(ADDRESS64)
 typedef Const64Value ConstPtrValueBase;
 #else
trunk/Source/JavaScriptCore/b3/B3StackmapSpecial.cpp
r227617 → r237173

  …
     if ((arg.isAddr() || arg.isExtendedOffsetAddr()) && code.frameSize()) {
         if (arg.base() == Tmp(GPRInfo::callFrameRegister)
-            && arg.offset() == rep.offsetFromSP() - code.frameSize())
+            && arg.offset() == static_cast<int64_t>(rep.offsetFromSP()) - code.frameSize())
             return true;
         if (arg.base() == Tmp(MacroAssembler::stackPointerRegister)
trunk/Source/JavaScriptCore/b3/air/AirArg.h
r235935 → r237173

  …
     {
         ASSERT(kind() == Stack);
-        return bitwise_cast<StackSlot*>(m_offset);
+        return bitwise_cast<StackSlot*>(static_cast<uintptr_t>(m_offset));
     }
  …
     {
         ASSERT(kind() == Special);
-        return bitwise_cast<Air::Special*>(m_offset);
+        return bitwise_cast<Air::Special*>(static_cast<uintptr_t>(m_offset));
     }
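Why the extra static_cast<uintptr_t> matters here: WTF's bitwise_cast static_asserts that source and destination are the same size, and Arg::m_offset is a 64-bit field even where pointers are 4 bytes. A sketch with a hypothetical stand-in for bitwise_cast (the real one lives in WTF):

    #include <cstdint>
    #include <cstring>

    // Minimal stand-in for WTF::bitwise_cast: same-size bit reinterpretation.
    template<typename To, typename From>
    To bitwiseCast(From from)
    {
        static_assert(sizeof(To) == sizeof(From), "sizes must match");
        To to;
        std::memcpy(&to, &from, sizeof(to));
        return to;
    }

    struct StackSlot;

    StackSlot* stackSlotFromOffset(int64_t offset)
    {
        // Without the static_cast, sizeof(int64_t) != sizeof(StackSlot*) on a
        // 32-bit-address target and the static_assert above would fire.
        return bitwiseCast<StackSlot*>(static_cast<uintptr_t>(offset));
    }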
trunk/Source/JavaScriptCore/b3/air/testair.cpp
r231317 → r237173

  …
     }

-    void loadConstant(BasicBlock* block, intptr_t value, Tmp tmp)
-    {
-        loadConstantImpl<intptr_t>(block, value, Move, tmp, tmp);
+    template<typename T>
+    void loadConstant(BasicBlock* block, T value, Tmp tmp)
+    {
+        loadConstantImpl(block, value, Move, tmp, tmp);
     }
trunk/Source/JavaScriptCore/b3/testb3.cpp
r232741 → r237173

  …
     BasicBlock* root = proc.addBlock();
     intptr_t slot;
-    if (is64Bit())
-        slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
-    else
-        slot = 0xbaadbeef;
+#if CPU(ADDRESS64)
+    slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
+#else
+    slot = 0xbaadbeef;
+#endif
     root->appendNew<MemoryValue>(
         proc, Store, Origin(),
  …
     auto interpreter = compileProc(proc);

-    Vector<intptr_t> data;
-    Vector<intptr_t> code;
-    Vector<intptr_t> stream;
+    Vector<uintptr_t> data;
+    Vector<uintptr_t> code;
+    Vector<uintptr_t> stream;

     data.append(1);
  …
     auto code = compileProc(proc);
-    CHECK_EQ(invoke<intptr_t>(*code, 1, 2), 1 + (static_cast<intptr_t>(2) << static_cast<intptr_t>(32)));
+    CHECK_EQ(invoke<int64_t>(*code, 1, 2), 1 + (static_cast<int64_t>(2) << static_cast<int64_t>(32)));
 }
  …
 void testLoadBaseIndexShift32()
 {
+#if CPU(ADDRESS64)
     Procedure proc;
     BasicBlock* root = proc.addBlock();
  …
     for (unsigned i = 0; i < 10; ++i)
         CHECK_EQ(invoke<int32_t>(*code, ptr - (static_cast<intptr_t>(1) << static_cast<intptr_t>(32)) * i, i), 12341234);
+#endif
 }
trunk/Source/JavaScriptCore/bindings/ScriptFunctionCall.cpp
r232337 → r237173

  …
 }

-void ScriptCallArgumentHandler::appendArgument(unsigned long argument)
+void ScriptCallArgumentHandler::appendArgument(uint64_t argument)
 {
     JSLockHolder lock(m_exec);
trunk/Source/JavaScriptCore/bindings/ScriptFunctionCall.h
r206525 → r237173

  …
     void appendArgument(long long);
     void appendArgument(unsigned int);
-    void appendArgument(unsigned long);
+    void appendArgument(uint64_t);
     void appendArgument(int);
     void appendArgument(bool);
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
r236901 → r237173

  …
 static size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaveRegisters)
 {
-    static const unsigned cpuRegisterSize = sizeof(void*);
-    return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * cpuRegisterSize) / sizeof(Register));
+    return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * sizeof(CPURegister)) / sizeof(Register));
 }
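A worked instance of this rounding, under the JSVALUE64 sizes the surrounding code assumes (sizeof(Register) == 8 and sizeof(CPURegister) == 8; the helper here is a simplified stand-in for WTF::roundUpToMultipleOf):

    #include <cstddef>

    // Simplified stand-in: round x up to a multiple of divisor.
    constexpr size_t roundUpToMultipleOf(size_t divisor, size_t x)
    {
        return (x + divisor - 1) / divisor * divisor;
    }

    constexpr size_t registerSize = 8;    // sizeof(Register): one JS value slot
    constexpr size_t cpuRegisterSize = 8; // sizeof(CPURegister), on arm64_32 too

    constexpr size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaves)
    {
        return roundUpToMultipleOf(registerSize, calleeSaves * cpuRegisterSize) / registerSize;
    }

    // 3 callee saves * 8 bytes = 24 bytes = 3 virtual registers; the point of
    // the change is that this stays correct even where sizeof(void*) == 4.
    static_assert(roundCalleeSaveSpaceAsVirtualRegisters(3) == 3, "");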
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
r236585 → r237173

  …
     unsigned registerCount = calleeSaves->size();

-    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
+    UCPURegister* physicalStackFrame = context.fp<UCPURegister*>();
     for (unsigned i = 0; i < registerCount; i++) {
         RegisterAtOffset entry = calleeSaves->at(i);
  …
         // Hence, we read the values directly from the physical stack memory instead of
         // going through context.stack().
-        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
-        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
+        ASSERT(!(entry.offset() % sizeof(UCPURegister)));
+        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(UCPURegister)];
     }
 }
  …
         if (dontSaveRegisters.get(entry.reg()))
             continue;
-        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+        stack.set(context.fp(), entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
     }
 }
  …
     VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
-    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
+    UCPURegister* calleeSaveBuffer = reinterpret_cast<UCPURegister*>(entryRecord->calleeSaveRegistersBuffer);

     // Restore all callee saves.
  …
         if (dontRestoreRegisters.get(entry.reg()))
             continue;
-        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
+        size_t uintptrOffset = entry.offset() / sizeof(UCPURegister);
         if (entry.reg().isGPR())
             context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
  …
             continue;
         if (entry.reg().isGPR())
-            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
         else
-            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
+            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<UCPURegister>(entry.reg().fpr()));
     }
 }
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
r231607 → r237173

  …
     jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
 #if USE(JSVALUE64)
-    jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
+    jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
     uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
     jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r236901 → r237173

  …
     m_jit.add32(structureIDGPR, hashGPR);
     m_jit.and32(TrustedImm32(HasOwnPropertyCache::mask), hashGPR);
-    static_assert(sizeof(HasOwnPropertyCache::Entry) == 16, "Strong assumption of that here.");
-    m_jit.lshift32(TrustedImm32(4), hashGPR);
+    if (hasOneBitSet(sizeof(HasOwnPropertyCache::Entry))) // is a power of 2
+        m_jit.lshift32(TrustedImm32(getLSBSet(sizeof(HasOwnPropertyCache::Entry))), hashGPR);
+    else
+        m_jit.mul32(TrustedImm32(sizeof(HasOwnPropertyCache::Entry)), hashGPR, hashGPR);
     ASSERT(m_jit.vm()->hasOwnPropertyCache());
     m_jit.move(TrustedImmPtr(m_jit.vm()->hasOwnPropertyCache()), tempGPR);
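The pattern this hunk (and the FTL one below) adopts: derive the shift amount from sizeof at compile time instead of hard-coding it, falling back to a multiply when the entry size is not a power of two. A standalone sketch with simplified stand-ins for WTF's hasOneBitSet/getLSBSet:

    #include <cassert>
    #include <cstddef>

    constexpr bool hasOneBitSet(size_t v) { return v && !(v & (v - 1)); }

    constexpr unsigned getLSBSet(size_t v) // index of the lowest set bit
    {
        unsigned index = 0;
        while (!(v & 1)) {
            v >>= 1;
            ++index;
        }
        return index;
    }

    // Scale an index by an entry size, as the JIT does for cache lookups:
    // a shift when possible, a multiply otherwise.
    size_t scaleIndex(size_t index, size_t entrySize)
    {
        if (hasOneBitSet(entrySize))
            return index << getLSBSet(entrySize);
        return index * entrySize;
    }

    int main()
    {
        assert(scaleIndex(3, 16) == 48); // 16-byte entries: index << 4
        assert(scaleIndex(3, 24) == 72); // non-power-of-two: plain multiply
    }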
trunk/Source/JavaScriptCore/disassembler/UDis86Disassembler.cpp
r230748 → r237173

  …
     while (ud_disassemble(&disassembler)) {
         char pcString[20];
-        snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<unsigned long>(currentPC));
+        snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<uintptr_t>(currentPC));
         out.printf("%s%16s: %s\n", prefix, pcString, ud_insn_asm(&disassembler));
         currentPC = disassembler.pc;
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r237136 → r237173

  …
     LValue bucket;

     if (m_node->child1().useKind() == WeakMapObjectUse) {
-        static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>) == 16, "");
-        bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(4)));
+        static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)), "Should be a power of 2");
+        bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)))));
     } else {
-        static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKey>) == 8, "");
-        bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(3)));
+        static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)), "Should be a power of 2");
+        bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)))));
     }
trunk/Source/JavaScriptCore/heap/MachineStackMarker.cpp
r230303 → r237173

  …
     size_t dstAsSize = reinterpret_cast<size_t>(dst);
     size_t srcAsSize = reinterpret_cast<size_t>(src);
-    RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(dstAsSize));
-    RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(srcAsSize));
-    RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(size));
-
-    intptr_t* dstPtr = reinterpret_cast<intptr_t*>(dst);
-    const intptr_t* srcPtr = reinterpret_cast<const intptr_t*>(src);
-    size /= sizeof(intptr_t);
+    RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(dstAsSize));
+    RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(srcAsSize));
+    RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(size));
+
+    CPURegister* dstPtr = reinterpret_cast<CPURegister*>(dst);
+    const CPURegister* srcPtr = reinterpret_cast<const CPURegister*>(src);
+    size /= sizeof(CPURegister);
     while (size--)
         *dstPtr++ = *srcPtr++;
trunk/Source/JavaScriptCore/interpreter/CallFrame.h
r235603 → r237173

  …
 };

+// arm64_32 expects caller frame and return pc to use 8 bytes
 struct CallerFrameAndPC {
-    CallFrame* callerFrame;
-    Instruction* pc;
-    static const int sizeInRegisters = 2 * sizeof(void*) / sizeof(Register);
+    alignas(CPURegister) CallFrame* callerFrame;
+    alignas(CPURegister) Instruction* returnPC;
+    static const int sizeInRegisters = 2 * sizeof(CPURegister) / sizeof(Register);
 };
 static_assert(CallerFrameAndPC::sizeInRegisters == sizeof(CallerFrameAndPC) / sizeof(Register), "CallerFrameAndPC::sizeInRegisters is incorrect.");
  …
     static ptrdiff_t callerFrameOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, callerFrame); }

-    ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().pc); }
-    bool hasReturnPC() const { return !!callerFrameAndPC().pc; }
-    void clearReturnPC() { callerFrameAndPC().pc = 0; }
-    static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, pc); }
+    ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().returnPC); }
+    bool hasReturnPC() const { return !!callerFrameAndPC().returnPC; }
+    void clearReturnPC() { callerFrameAndPC().returnPC = 0; }
+    static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, returnPC); }
     AbstractPC abstractReturnPC(VM& vm) { return AbstractPC(vm, this); }
  …
     bool isGlobalExec() const
     {
-        return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().pc == nullptr;
+        return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().returnPC == nullptr;
     }
  …
     void setCallee(JSObject* callee) { static_cast<Register*>(this)[CallFrameSlot::callee] = callee; }
     void setCodeBlock(CodeBlock* codeBlock) { static_cast<Register*>(this)[CallFrameSlot::codeBlock] = codeBlock; }
-    void setReturnPC(void* value) { callerFrameAndPC().pc = reinterpret_cast<Instruction*>(value); }
+    void setReturnPC(void* value) { callerFrameAndPC().returnPC = reinterpret_cast<Instruction*>(value); }

     String friendlyFunctionName();
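What the alignas buys on arm64_32, sketched with explicit sizes (pointers are assumed to be 4 bytes to model that target; on LP64 targets the alignas is a no-op):

    #include <cstdint>

    using CPURegister = int64_t;

    struct CallFrame;
    struct Instruction;

    // Each pointer is padded out to a full register-width slot, so the
    // two-slot frame header keeps the same 16-byte shape on arm64 and
    // arm64_32, matching the LLInt's CallerFrameAndPCSize constant.
    struct CallerFrameAndPC {
        alignas(CPURegister) CallFrame* callerFrame;
        alignas(CPURegister) Instruction* returnPC;
    };

    static_assert(sizeof(CallerFrameAndPC) == 2 * sizeof(CPURegister),
        "two register-width slots regardless of pointer size");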
trunk/Source/JavaScriptCore/interpreter/CalleeBits.h
r214979 → r237173

  …
     static void* boxWasm(Wasm::Callee* callee)
     {
-        CalleeBits result(bitwise_cast<void*>(bitwise_cast<uintptr_t>(callee) | TagBitsWasm));
+        CalleeBits result(reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(callee) | TagBitsWasm));
         ASSERT(result.isWasm());
         return result.rawPtr();
  …
     {
 #if ENABLE(WEBASSEMBLY)
-        return (bitwise_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
+        return (reinterpret_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
 #else
         return false;
  …
     {
         ASSERT(isWasm());
-        return bitwise_cast<Wasm::Callee*>(bitwise_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
+        return reinterpret_cast<Wasm::Callee*>(reinterpret_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
     }
 #endif
trunk/Source/JavaScriptCore/interpreter/Interpreter.cpp
r237080 → r237173

  …
     RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets();
     RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
-    intptr_t* frame = reinterpret_cast<intptr_t*>(m_callFrame->registers());
+    CPURegister* frame = reinterpret_cast<CPURegister*>(m_callFrame->registers());

     unsigned registerCount = currentCalleeSaves->size();
trunk/Source/JavaScriptCore/interpreter/VMEntryRecord.h
r236381 → r237173

  …
 #if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-    intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
+    CPURegister calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
 #endif
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r236734 → r237173

  …
     ASSERT(frameSize % stackAlignmentBytes() == 0);
     if (frameSize <= 128) {
-        for (unsigned offset = 0; offset < frameSize; offset += sizeof(intptr_t))
+        for (unsigned offset = 0; offset < frameSize; offset += sizeof(CPURegister))
             storePtr(TrustedImm32(0), Address(currentTop, -8 - offset));
     } else {
         constexpr unsigned storeBytesPerIteration = stackAlignmentBytes();
-        constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(intptr_t);
+        constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(CPURegister);

         move(currentTop, temp);
  …
 #else
         for (unsigned i = storesPerIteration; i-- != 0;)
-            storePtr(TrustedImm32(0), Address(temp, sizeof(intptr_t) * i));
+            storePtr(TrustedImm32(0), Address(temp, sizeof(CPURegister) * i));
 #endif
         branchPtr(NotEqual, temp, newTop).linkTo(zeroLoop, this);
trunk/Source/JavaScriptCore/jit/RegisterAtOffset.h
r236381 → r237173

  …
     Reg reg() const { return m_reg; }
     ptrdiff_t offset() const { return m_offset; }
-    int offsetAsIndex() const { return offset() / sizeof(void*); }
+    int offsetAsIndex() const { ASSERT(!(offset() % sizeof(CPURegister))); return offset() / static_cast<int>(sizeof(CPURegister)); }

     bool operator==(const RegisterAtOffset& other) const
  …
 private:
     Reg m_reg;
-    ptrdiff_t m_offset : sizeof(ptrdiff_t) * 8 - sizeof(Reg) * 8;
+    ptrdiff_t m_offset : (sizeof(ptrdiff_t) - sizeof(Reg)) * CHAR_BIT;
 };
trunk/Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp
r236381 → r237173

  …
     if (offsetBaseType == FramePointerBased)
-        offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(void*));
+        offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(CPURegister));

     m_registers.reserveInitialCapacity(numberOfRegisters);
     registerSet.forEach([&] (Reg reg) {
         m_registers.append(RegisterAtOffset(reg, offset));
-        offset += sizeof(void*);
+        offset += sizeof(CPURegister);
     });
 }
trunk/Source/JavaScriptCore/llint/LLIntData.cpp
r236381 → r237173

  …
 #if USE(JSVALUE64)
-    const ptrdiff_t PtrSize = 8;
     const ptrdiff_t CallFrameHeaderSlots = 5;
 #else // USE(JSVALUE64) // i.e. 32-bit version
-    const ptrdiff_t PtrSize = 4;
     const ptrdiff_t CallFrameHeaderSlots = 4;
 #endif
+    const ptrdiff_t MachineRegisterSize = sizeof(CPURegister);
     const ptrdiff_t SlotSize = 8;

-    STATIC_ASSERT(sizeof(void*) == PtrSize);
     STATIC_ASSERT(sizeof(Register) == SlotSize);
     STATIC_ASSERT(CallFrame::headerSizeInRegisters == CallFrameHeaderSlots);

     ASSERT(!CallFrame::callerFrameOffset());
-    STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (PtrSize * 2) / SlotSize);
-    ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + PtrSize);
-    ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + PtrSize);
+    STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (MachineRegisterSize * 2) / SlotSize);
+    ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + MachineRegisterSize);
+    ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + MachineRegisterSize);
     STATIC_ASSERT(CallFrameSlot::callee * sizeof(Register) == CallFrameSlot::codeBlock * sizeof(Register) + SlotSize);
     STATIC_ASSERT(CallFrameSlot::argumentCount * sizeof(Register) == CallFrameSlot::callee * sizeof(Register) + SlotSize);
trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
r236381 → r237173

  …
 #endif

+#if CPU(ADDRESS64)
+#define OFFLINE_ASM_ADDRESS64 1
+#else
+#define OFFLINE_ASM_ADDRESS64 0
+#endif
+
 #if ENABLE(POISON)
 #define OFFLINE_ASM_POISON 1
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r235419 → r237173

  …
 if JSVALUE64
     const CallFrameHeaderSlots = 5
+    const MachineRegisterSize = 8
 else
     const CallFrameHeaderSlots = 4
     const CallFrameAlignSlots = 1
+    const MachineRegisterSize = 4
 end
 const SlotSize = 8
  …
 const StackAlignmentMask = StackAlignment - 1

-const CallerFrameAndPCSize = 2 * PtrSize
+const CallerFrameAndPCSize = constexpr (sizeof(CallerFrameAndPC))

 const CallerFrame = 0
-const ReturnPC = CallerFrame + PtrSize
-const CodeBlock = ReturnPC + PtrSize
+const ReturnPC = CallerFrame + MachineRegisterSize
+const CodeBlock = ReturnPC + MachineRegisterSize
 const Callee = CodeBlock + SlotSize
 const ArgumentCount = Callee + SlotSize
  …
 macro loadisFromInstruction(offset, dest)
-    loadis offset * 8[PB, PC, 8], dest
+    loadis offset * PtrSize[PB, PC, PtrSize], dest
 end

 macro loadpFromInstruction(offset, dest)
-    loadp offset * 8[PB, PC, 8], dest
+    loadp offset * PtrSize[PB, PC, PtrSize], dest
 end

 macro loadisFromStruct(offset, dest)
-    loadis offset[PB, PC, 8], dest
+    loadis offset[PB, PC, PtrSize], dest
 end

 macro loadpFromStruct(offset, dest)
-    loadp offset[PB, PC, 8], dest
+    loadp offset[PB, PC, PtrSize], dest
 end

 macro storeisToInstruction(value, offset)
-    storei value, offset * 8[PB, PC, 8]
+    storei value, offset * PtrSize[PB, PC, PtrSize]
 end

 macro storepToInstruction(value, offset)
-    storep value, offset * 8[PB, PC, 8]
+    storep value, offset * PtrSize[PB, PC, PtrSize]
 end

 macro storeisFromStruct(value, offset)
-    storei value, offset[PB, PC, 8]
+    storei value, offset[PB, PC, PtrSize]
 end

 macro storepFromStruct(value, offset)
-    storep value, offset[PB, PC, 8]
+    storep value, offset[PB, PC, PtrSize]
 end
  …
-const CalleeRegisterSaveSize = CalleeSaveRegisterCount * PtrSize
+const CalleeRegisterSaveSize = CalleeSaveRegisterCount * MachineRegisterSize

 # VMEntryTotalFrameSize includes the space for struct VMEntryRecord and the
  …
     leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
     if ARM64 or ARM64E
-        storep csr0, [temp]
-        storep csr1, 8[temp]
-        storep csr2, 16[temp]
-        storep csr3, 24[temp]
-        storep csr4, 32[temp]
-        storep csr5, 40[temp]
-        storep csr6, 48[temp]
-        storep csr7, 56[temp]
-        storep csr8, 64[temp]
-        storep csr9, 72[temp]
+        storeq csr0, [temp]
+        storeq csr1, 8[temp]
+        storeq csr2, 16[temp]
+        storeq csr3, 24[temp]
+        storeq csr4, 32[temp]
+        storeq csr5, 40[temp]
+        storeq csr6, 48[temp]
+        storeq csr7, 56[temp]
+        storeq csr8, 64[temp]
+        storeq csr9, 72[temp]
         stored csfr0, 80[temp]
         stored csfr1, 88[temp]
  …
         stored csfr7, 136[temp]
     elsif X86_64
-        storep csr0, [temp]
-        storep csr1, 8[temp]
-        storep csr2, 16[temp]
-        storep csr3, 24[temp]
-        storep csr4, 32[temp]
+        storeq csr0, [temp]
+        storeq csr1, 8[temp]
+        storeq csr2, 16[temp]
+        storeq csr3, 24[temp]
+        storeq csr4, 32[temp]
     elsif X86_64_WIN
-        storep csr0, [temp]
-        storep csr1, 8[temp]
-        storep csr2, 16[temp]
-        storep csr3, 24[temp]
-        storep csr4, 32[temp]
-        storep csr5, 40[temp]
-        storep csr6, 48[temp]
+        storeq csr0, [temp]
+        storeq csr1, 8[temp]
+        storeq csr2, 16[temp]
+        storeq csr3, 24[temp]
+        storeq csr4, 32[temp]
+        storeq csr5, 40[temp]
+        storeq csr6, 48[temp]
     end
 end
  …
     leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
     if ARM64 or ARM64E
-        loadp [temp], csr0
-        loadp 8[temp], csr1
-        loadp 16[temp], csr2
-        loadp 24[temp], csr3
-        loadp 32[temp], csr4
-        loadp 40[temp], csr5
-        loadp 48[temp], csr6
-        loadp 56[temp], csr7
-        loadp 64[temp], csr8
-        loadp 72[temp], csr9
+        loadq [temp], csr0
+        loadq 8[temp], csr1
+        loadq 16[temp], csr2
+        loadq 24[temp], csr3
+        loadq 32[temp], csr4
+        loadq 40[temp], csr5
+        loadq 48[temp], csr6
+        loadq 56[temp], csr7
+        loadq 64[temp], csr8
+        loadq 72[temp], csr9
         loadd 80[temp], csfr0
         loadd 88[temp], csfr1
  …
         loadd 136[temp], csfr7
     elsif X86_64
-        loadp [temp], csr0
-        loadp 8[temp], csr1
-        loadp 16[temp], csr2
-        loadp 24[temp], csr3
-        loadp 32[temp], csr4
+        loadq [temp], csr0
+        loadq 8[temp], csr1
+        loadq 16[temp], csr2
+        loadq 24[temp], csr3
+        loadq 32[temp], csr4
     elsif X86_64_WIN
-        loadp [temp], csr0
-        loadp 8[temp], csr1
-        loadp 16[temp], csr2
-        loadp 24[temp], csr3
-        loadp 32[temp], csr4
-        loadp 40[temp], csr5
-        loadp 48[temp], csr6
+        loadq [temp], csr0
+        loadq 8[temp], csr1
+        loadq 16[temp], csr2
+        loadq 24[temp], csr3
+        loadq 32[temp], csr4
+        loadq 40[temp], csr5
+        loadq 48[temp], csr6
     end
 end
  …
 if ARM or ARMv7_TRADITIONAL or ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS
-    addp 2 * PtrSize, sp
-    subi 2 * PtrSize, temp2
-    loadp PtrSize[cfr], lr
+    addp CallerFrameAndPCSize, sp
+    subi CallerFrameAndPCSize, temp2
+    loadp CallerFrameAndPC::returnPC[cfr], lr
 else
     addp PtrSize, sp
  …
 .copyLoop:
-    subi PtrSize, temp2
-    loadp [sp, temp2, 1], temp3
-    storep temp3, [temp1, temp2, 1]
-    btinz temp2, .copyLoop
+    if ARM64 and not ADDRESS64
+        subi MachineRegisterSize, temp2
+        loadq [sp, temp2, 1], temp3
+        storeq temp3, [temp1, temp2, 1]
+        btinz temp2, .copyLoop
+    else
+        subi PtrSize, temp2
+        loadp [sp, temp2, 1], temp3
+        storep temp3, [temp1, temp2, 1]
+        btinz temp2, .copyLoop
+    end

     move temp1, sp
  …
     if JSVALUE64
         move TagTypeNumber, tagTypeNumber
-        addp TagBitTypeOther, tagTypeNumber, tagMask
+        addq TagBitTypeOther, tagTypeNumber, tagMask
     end
 end
  …
         pcrtoaddr label, t1
         move index, t4
-        storep t1, [a0, t4, 8]
+        storep t1, [a0, t4, PtrSize]
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL
         mvlbl (label - _relativePCBase), t4
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r236901 → r237173

  …
 # Utilities.
 macro jumpToInstruction()
-    jmp [PB, PC, 8], BytecodePtrTag
+    jmp [PB, PC, PtrSize], BytecodePtrTag
 end
  …
 macro dispatchIntIndirect(offset)
-    dispatchInt(offset * 8[PB, PC, 8])
+    dispatchInt(offset * PtrSize[PB, PC, PtrSize])
 end
  …
 macro prepareStateForCCall()
-    leap [PB, PC, 8], PC
+    leap [PB, PC, PtrSize], PC
 end

     move r0, PC
     subp PB, PC
-    rshiftp 3, PC
+    rshiftp constexpr (getLSBSet(sizeof(void*))), PC
  …
     unpoison(_g_CodeBlockPoison, scratch, scratch2)
     loadp VM::heap + Heap::m_structureIDTable + StructureIDTable::m_table[scratch], scratch
-    loadp [scratch, structureIDThenStructure, 8], structureIDThenStructure
+    loadp [scratch, structureIDThenStructure, PtrSize], structureIDThenStructure
 end
  …
     addi CalleeSaveSpaceAsVirtualRegisters, t2
     move t1, t0
-    lshiftp 3, t0
+    # Adds to sp are always 64-bit on arm64 so we need maintain t0's high bits.
+    lshiftq 3, t0
     addp t0, cfr
     addp t0, sp
  …
     andp MarkedBlockMask, t3
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
-    btqz VM::m_exception[t3], .noException
+    btpz VM::m_exception[t3], .noException
     jmp label
 .noException:
  …
     # Now, t1 has the Structure* and t2 has the StructureID that we want that Structure* to have.
     bineq t2, Structure::m_blob + StructureIDBlob::u.fields.structureID[t1], .opPutByIdSlow
-    addp 8, t3
+    addp PtrSize, t3
     loadq Structure::m_prototype[t1], t2
     bqneq t2, ValueNull, .opPutByIdTransitionChainLoop
  …
 .outOfBounds:
     biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.vectorLength[t0], .opPutByValOutOfBounds
-    loadp 32[PB, PC, 8], t2
+    loadpFromInstruction(4, t2)
     storeb 1, ArrayProfile::m_mayStoreToHole[t2]
     addi 1, t3, t2
  …
     macro (operand, scratch, address)
         loadConstantOrVariable(operand, scratch)
-        bpb scratch, tagTypeNumber, .opPutByValSlow
-        storep scratch, address
+        bqb scratch, tagTypeNumber, .opPutByValSlow
+        storeq scratch, address
         writeBarrierOnOperands(1, 3)
     end)
  …
         jmp .ready
     .notInt:
-        addp tagTypeNumber, scratch
+        addq tagTypeNumber, scratch
         fq2d scratch, ft0
         bdnequn ft0, ft0, .opPutByValSlow
  …
     macro (operand, scratch, address)
         loadConstantOrVariable(operand, scratch)
-        storep scratch, address
+        storeq scratch, address
         writeBarrierOnOperands(1, 3)
     end)
  …
     loadp CodeBlock[cfr], t2
     loadp CodeBlock::m_globalObject[t2], t2
-    loadp JSGlobalObject::m_specialPointers[t2, t1, 8], t1
+    loadp JSGlobalObject::m_specialPointers[t2, t1, PtrSize], t1
     bpneq t1, [cfr, t0, 8], .opJneqPtrTarget
     dispatch(5)

 .opJneqPtrTarget:
-    storei 1, 32[PB, PC, 8]
+    storeisToInstruction(1, 4)
     dispatchIntIndirect(3)
  …
     loadp VM::targetInterpreterPCForThrow[t3], PC
     subp PB, PC
-    rshiftp 3, PC
+    rshiftp constexpr (getLSBSet(sizeof(void*))), PC

     callSlowPath(_llint_slow_path_check_if_exception_is_uncatchable_and_notify_profiler)
  …
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3

-    loadq VM::m_exception[t3], t0
-    storeq 0, VM::m_exception[t3]
+    loadp VM::m_exception[t3], t0
+    storep 0, VM::m_exception[t3]
     loadisFromInstruction(1, t2)
     storeq t0, [cfr, t2, 8]
  …
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3

-    btqnz VM::m_exception[t3], .handleException
+    btpnz VM::m_exception[t3], .handleException

     functionEpilogue()
  …
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3

-    btqnz VM::m_exception[t3], .handleException
+    btpnz VM::m_exception[t3], .handleException

     functionEpilogue()
  …
     loadisFromInstruction(5, t2)
     loadisFromInstruction(2, t0)
-    loadp [cfr, t0, 8], t0
+    loadq [cfr, t0, 8], t0
     btiz t2, .resolveScopeLoopEnd
  …
     traceExecution()
     loadVariable(2, t0)
-    loadi 24[PB, PC, 8], t1
+    loadi 3 * PtrSize[PB, PC, PtrSize], t1
     loadq DirectArguments_storage[t0, t1, 8], t0
     valueProfile(t0, 4, t1)
  …
     traceExecution()
     loadVariable(1, t0)
-    loadi 16[PB, PC, 8], t1
+    loadi 2 * PtrSize[PB, PC, PtrSize], t1
     loadisFromInstruction(3, t3)
     loadConstantOrVariable(t3, t2)
trunk/Source/JavaScriptCore/offlineasm/arm64.rb
r230273 r237173 76 76 number = name[1..-1] 77 77 case kind 78 when : int78 when :word 79 79 "w" + number 80 80 when :ptr 81 prefix = $currentSettings["ADDRESS64"] ? "x" : "w" 82 prefix + number 83 when :quad 81 84 "x" + number 82 85 else … … 200 203 def arm64Operand(kind) 201 204 raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value < -255 or offset.value > 4095 202 "[#{base.arm64Operand(: ptr)}, \##{offset.value}]"205 "[#{base.arm64Operand(:quad)}, \##{offset.value}]" 203 206 end 204 207 … … 211 214 def arm64Operand(kind) 212 215 raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value != 0 213 "[#{base.arm64Operand(: ptr)}, #{index.arm64Operand(:ptr)}, lsl \##{scaleShift}]"216 "[#{base.arm64Operand(:quad)}, #{index.arm64Operand(:quad)}, lsl \##{scaleShift}]" 214 217 end 215 218 … … 234 237 newList = [] 235 238 236 def isAddressMalformed(operand) 237 operand.is_a? Address and not (-255..4095).include? operand.offset.value 239 def isAddressMalformed(opcode, operand) 240 malformed = false 241 if operand.is_a? Address 242 malformed ||= (not (-255..4095).include? operand.offset.value) 243 if opcode =~ /q$/ and $currentSettings["ADDRESS64"] 244 malformed ||= operand.offset.value % 8 245 end 246 end 247 malformed 238 248 end 239 249 … … 241 251 | node | 242 252 if node.is_a? Instruction 243 if node.opcode =~ /^store/ and isAddressMalformed(node.op erands[1])253 if node.opcode =~ /^store/ and isAddressMalformed(node.opcode, node.operands[1]) 244 254 address = node.operands[1] 245 255 tmp = Tmp.new(codeOrigin, :gpr) 246 256 newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp]) 247 newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0))], node.annotation)248 elsif node.opcode =~ /^load/ and isAddressMalformed(node.op erands[0])257 newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0))], node.annotation) 258 elsif node.opcode =~ /^load/ and isAddressMalformed(node.opcode, node.operands[0]) 249 259 address = node.operands[0] 250 260 tmp = Tmp.new(codeOrigin, :gpr) 251 261 newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp]) 252 newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation)262 newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation) 253 263 else 254 264 newList << node … … 286 296 end 287 297 298 def arm64FixSpecialRegisterArithmeticMode(list) 299 newList = [] 300 def usesSpecialRegister(node) 301 node.children.any? { 302 |operand| 303 if operand.is_a? RegisterID and operand.name =~ /sp/ 304 true 305 elsif operand.is_a? Address or operand.is_a? BaseIndex 306 usesSpecialRegister(operand) 307 else 308 false 309 end 310 } 311 end 312 313 314 list.each { 315 | node | 316 if node.is_a? 
Instruction 317 case node.opcode 318 when "addp", "subp", "mulp", "divp", "leap" 319 if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node) 320 newOpcode = node.opcode.sub(/(.*)p/, '\1q') 321 node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation) 322 end 323 when /^bp/ 324 if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node) 325 newOpcode = node.opcode.sub(/^bp(.*)/, 'bq\1') 326 node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation) 327 end 328 end 329 end 330 newList << node 331 } 332 newList 333 end 334 288 335 # Workaround for Cortex-A53 erratum (835769) 289 336 def arm64CortexA53Fix835769(list) … … 319 366 result = riscLowerNot(result) 320 367 result = riscLowerSimpleBranchOps(result) 321 result = riscLowerHardBranchOps64(result) 368 369 result = $currentSettings["ADDRESS64"] ? riscLowerHardBranchOps64(result) : riscLowerHardBranchOps(result) 322 370 result = riscLowerShiftOps(result) 323 371 result = arm64LowerMalformedLoadStoreAddresses(result) … … 338 386 "divd", "subd", "muld", "sqrtd", /^bp/, /^bq/, /^btp/, /^btq/, /^cp/, /^cq/, /^tp/, /^tq/, /^bd/, 339 387 "jmp", "call", "leap", "leaq" 340 size = 8388 size = $currentSettings["ADDRESS64"] ? 8 : 4 341 389 else 342 390 raise "Bad instruction #{node.opcode} for heap access at #{node.codeOriginString}" … … 369 417 } 370 418 result = riscLowerTest(result) 419 result = arm64FixSpecialRegisterArithmeticMode(result) 371 420 result = assignRegistersToTemporaries(result, :gpr, ARM64_EXTRA_GPRS) 372 421 result = assignRegistersToTemporaries(result, :fpr, ARM64_EXTRA_FPRS) … … 450 499 def emitARM64Access(opcode, opcodeNegativeOffset, register, memory, kind) 451 500 if memory.is_a? Address and memory.offset.value < 0 501 raise unless -256 <= memory.offset.value 452 502 $asm.puts "#{opcodeNegativeOffset} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}" 453 503 return 454 504 end 455 505 456 506 $asm.puts "#{opcode} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}" 457 507 end … … 480 530 def emitARM64Compare(operands, kind, compareCode) 481 531 emitARM64Unflipped("subs #{arm64GPRName('xzr', kind)}, ", operands[0..-2], kind) 482 $asm.puts "csinc #{operands[-1].arm64Operand(: int)}, wzr, wzr, #{compareCode}"532 $asm.puts "csinc #{operands[-1].arm64Operand(:word)}, wzr, wzr, #{compareCode}" 483 533 end 484 534 … … 492 542 if first 493 543 if isNegative 494 $asm.puts "movn #{target.arm64Operand(: ptr)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}"495 else 496 $asm.puts "movz #{target.arm64Operand(: ptr)}, \##{currentValue}, lsl \##{shift}"544 $asm.puts "movn #{target.arm64Operand(:quad)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}" 545 else 546 $asm.puts "movz #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}" 497 547 end 498 548 first = false 499 549 else 500 $asm.puts "movk #{target.arm64Operand(: ptr)}, \##{currentValue}, lsl \##{shift}"550 $asm.puts "movk #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}" 501 551 end 502 552 } … … 507 557 case opcode 508 558 when 'addi' 509 emitARM64Add("add", operands, : int)559 emitARM64Add("add", operands, :word) 510 560 when 'addis' 511 emitARM64Add("adds", operands, : int)561 emitARM64Add("adds", operands, :word) 512 562 when 'addp' 513 563 emitARM64Add("add", operands, :ptr) … … 515 565 emitARM64Add("adds", operands, :ptr) 516 566 when 'addq' 517 emitARM64Add("add", operands, : ptr)567 emitARM64Add("add", operands, :quad) 518 568 when "andi" 519 emitARM64TAC("and", 
…
      case opcode
      when 'addi'
-         emitARM64Add("add", operands, :int)
+         emitARM64Add("add", operands, :word)
      when 'addis'
-         emitARM64Add("adds", operands, :int)
+         emitARM64Add("adds", operands, :word)
      when 'addp'
          emitARM64Add("add", operands, :ptr)
      when 'addps'
          emitARM64Add("adds", operands, :ptr)
      when 'addq'
-         emitARM64Add("add", operands, :ptr)
+         emitARM64Add("add", operands, :quad)
      when "andi"
-         emitARM64TAC("and", operands, :int)
+         emitARM64TAC("and", operands, :word)
      when "andp"
          emitARM64TAC("and", operands, :ptr)
      when "andq"
-         emitARM64TAC("and", operands, :ptr)
+         emitARM64TAC("and", operands, :quad)
      when "ori"
-         emitARM64TAC("orr", operands, :int)
+         emitARM64TAC("orr", operands, :word)
      when "orp"
          emitARM64TAC("orr", operands, :ptr)
      when "orq"
-         emitARM64TAC("orr", operands, :ptr)
+         emitARM64TAC("orr", operands, :quad)
      when "xori"
-         emitARM64TAC("eor", operands, :int)
+         emitARM64TAC("eor", operands, :word)
      when "xorp"
          emitARM64TAC("eor", operands, :ptr)
      when "xorq"
-         emitARM64TAC("eor", operands, :ptr)
+         emitARM64TAC("eor", operands, :quad)
      when "lshifti"
-         emitARM64Shift("lslv", "ubfm", operands, :int) {
+         emitARM64Shift("lslv", "ubfm", operands, :word) {
              | value |
              [32 - value, 31 - value]
          }
      when "lshiftp"
          emitARM64Shift("lslv", "ubfm", operands, :ptr) {
              | value |
-             [64 - value, 63 - value]
+             bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+             [bitSize - value, bitSize - 1 - value]
          }
      when "lshiftq"
-         emitARM64Shift("lslv", "ubfm", operands, :ptr) {
+         emitARM64Shift("lslv", "ubfm", operands, :quad) {
              | value |
              [64 - value, 63 - value]
          }
      when "rshifti"
-         emitARM64Shift("asrv", "sbfm", operands, :int) {
+         emitARM64Shift("asrv", "sbfm", operands, :word) {
              | value |
              [value, 31]
          }
      when "rshiftp"
          emitARM64Shift("asrv", "sbfm", operands, :ptr) {
              | value |
-             [value, 63]
+             bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+             [value, bitSize - 1]
          }
      when "rshiftq"
-         emitARM64Shift("asrv", "sbfm", operands, :ptr) {
+         emitARM64Shift("asrv", "sbfm", operands, :quad) {
              | value |
              [value, 63]
          }
      when "urshifti"
-         emitARM64Shift("lsrv", "ubfm", operands, :int) {
+         emitARM64Shift("lsrv", "ubfm", operands, :word) {
              | value |
              [value, 31]
          }
      when "urshiftp"
          emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
              | value |
-             [value, 63]
+             bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+             [value, bitSize - 1]
          }
      when "urshiftq"
-         emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
+         emitARM64Shift("lsrv", "ubfm", operands, :quad) {
              | value |
              [value, 63]
          }
      when "muli"
-         $asm.puts "madd #{arm64TACOperands(operands, :int)}, wzr"
+         $asm.puts "madd #{arm64TACOperands(operands, :word)}, wzr"
      when "mulp"
-         $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
+         $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, #{arm64GPRName('xzr', :ptr)}"
      when "mulq"
-         $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
+         $asm.puts "madd #{arm64TACOperands(operands, :quad)}, xzr"
      when "subi"
-         emitARM64TAC("sub", operands, :int)
+         emitARM64TAC("sub", operands, :word)
      when "subp"
          emitARM64TAC("sub", operands, :ptr)
      when "subq"
-         emitARM64TAC("sub", operands, :ptr)
+         emitARM64TAC("sub", operands, :quad)
      when "subis"
-         emitARM64TAC("subs", operands, :int)
+         emitARM64TAC("subs", operands, :word)
      when "negi"
-         $asm.puts "sub #{operands[0].arm64Operand(:int)}, wzr, #{operands[0].arm64Operand(:int)}"
+         $asm.puts "sub #{operands[0].arm64Operand(:word)}, wzr, #{operands[0].arm64Operand(:word)}"
      when "negp"
-         $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
+         $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, #{arm64GPRName('xzr', :ptr)}, #{operands[0].arm64Operand(:ptr)}"
      when "negq"
-         $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
+         $asm.puts "sub #{operands[0].arm64Operand(:quad)}, xzr, #{operands[0].arm64Operand(:quad)}"
      when "loadi"
-         emitARM64Access("ldr", "ldur", operands[1], operands[0], :int)
+         emitARM64Access("ldr", "ldur", operands[1], operands[0], :word)
      when "loadis"
-         emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :ptr)
+         emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :quad)
      when "loadp"
          emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
      when "loadq"
-         emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
+         emitARM64Access("ldr", "ldur", operands[1], operands[0], :quad)
      when "storei"
-         emitARM64Unflipped("str", operands, :int)
+         emitARM64Unflipped("str", operands, :word)
      when "storep"
          emitARM64Unflipped("str", operands, :ptr)
      when "storeq"
-         emitARM64Unflipped("str", operands, :ptr)
+         emitARM64Unflipped("str", operands, :quad)
      when "loadb"
-         emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :int)
+         emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
      when "loadbs"
-         emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :int)
+         emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
      when "storeb"
-         emitARM64Unflipped("strb", operands, :int)
+         emitARM64Unflipped("strb", operands, :word)
      when "loadh"
-         emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :int)
+         emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
      when "loadhs"
-         emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :int)
+         emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
      when "storeh"
-         emitARM64Unflipped("strh", operands, :int)
+         emitARM64Unflipped("strh", operands, :word)
      when "loadd"
          emitARM64Access("ldr", "ldur", operands[1], operands[0], :double)
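One detail in the load/store block above: loadis now uses :quad for its destination because ldrsw sign-extends a 32-bit load into a full 64-bit register. The semantics, as a minimal C++ sketch (not a JSC API):

    #include <cstdint>

    // What "loadis" computes: a 32-bit load whose sign bit fills the
    // upper half of the destination register, i.e. ldrsw x0, [x1].
    int64_t loadInt32SignExtended(const int32_t* address)
    {
        return static_cast<int64_t>(*address);
    }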
…
          emitARM64("fsqrt", operands, :double)
      when "ci2d"
-         emitARM64("scvtf", operands, [:int, :double])
+         emitARM64("scvtf", operands, [:word, :double])
      when "bdeq"
          emitARM64Branch("fcmp", operands, :double, "b.eq")
…
          raise "ARM64 does not support this opcode yet, #{codeOriginString}"
      when "td2i"
-         emitARM64("fcvtzs", operands, [:double, :int])
+         emitARM64("fcvtzs", operands, [:double, :word])
      when "bcd2i"
          # FIXME: Remove this instruction, or use it and implement it. Currently it's not
…
          # But since the ordering of arguments doesn't change on arm64 between the stp and ldp
          # instructions we need to flip flop the argument positions that were passed to us.
-         $asm.puts "ldp #{ops[1].arm64Operand(:ptr)}, #{ops[0].arm64Operand(:ptr)}, [sp], #16"
+         $asm.puts "ldp #{ops[1].arm64Operand(:quad)}, #{ops[0].arm64Operand(:quad)}, [sp], #16"
      }
  when "push"
      operands.each_slice(2) {
          | ops |
-         $asm.puts "stp #{ops[0].arm64Operand(:ptr)}, #{ops[1].arm64Operand(:ptr)}, [sp, #-16]!"
+         $asm.puts "stp #{ops[0].arm64Operand(:quad)}, #{ops[1].arm64Operand(:quad)}, [sp, #-16]!"
      }
  when "move"
      if operands[0].immediate?
          emitARM64MoveImmediate(operands[0].value, operands[1])
      else
-         emitARM64("mov", operands, :ptr)
+         emitARM64("mov", operands, :quad)
      end
  when "sxi2p"
-     emitARM64("sxtw", operands, [:int, :ptr])
+     emitARM64("sxtw", operands, [:word, :ptr])
  when "sxi2q"
-     emitARM64("sxtw", operands, [:int, :ptr])
+     emitARM64("sxtw", operands, [:word, :quad])
  when "zxi2p"
-     emitARM64("uxtw", operands, [:int, :ptr])
+     emitARM64("uxtw", operands, [:word, :ptr])
  when "zxi2q"
-     emitARM64("uxtw", operands, [:int, :ptr])
+     emitARM64("uxtw", operands, [:word, :quad])
  when "nop"
      $asm.puts "nop"
  when "bieq", "bbeq"
      if operands[0].immediate? and operands[0].value == 0
-         $asm.puts "cbz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
+         $asm.puts "cbz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
-         $asm.puts "cbz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
+         $asm.puts "cbz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs wzr, ", operands, :int, "b.eq")
+         emitARM64Branch("subs wzr, ", operands, :word, "b.eq")
      end
  when "bpeq"
      if operands[0].immediate? and operands[0].value == 0
          $asm.puts "cbz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
          $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
+         emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.eq")
      end
  when "bqeq"
      if operands[0].immediate? and operands[0].value == 0
-         $asm.puts "cbz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+         $asm.puts "cbz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
-         $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+         $asm.puts "cbz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
+         emitARM64Branch("subs xzr, ", operands, :quad, "b.eq")
      end
  when "bineq", "bbneq"
      if operands[0].immediate? and operands[0].value == 0
-         $asm.puts "cbnz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
+         $asm.puts "cbnz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
-         $asm.puts "cbnz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
+         $asm.puts "cbnz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs wzr, ", operands, :int, "b.ne")
+         emitARM64Branch("subs wzr, ", operands, :word, "b.ne")
      end
  when "bpneq"
      if operands[0].immediate? and operands[0].value == 0
          $asm.puts "cbnz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
          $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
+         emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ne")
      end
  when "bqneq"
      if operands[0].immediate? and operands[0].value == 0
-         $asm.puts "cbnz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+         $asm.puts "cbnz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
      elsif operands[1].immediate? and operands[1].value == 0
-         $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+         $asm.puts "cbnz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
      else
-         emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
+         emitARM64Branch("subs xzr, ", operands, :quad, "b.ne")
      end
  when "bia", "bba"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.hi")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.hi")
  when "bpa"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hi")
  when "bqa"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.hi")
  when "biaeq", "bbaeq"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.hs")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.hs")
  when "bpaeq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hs")
  when "bqaeq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.hs")
  when "bib", "bbb"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.lo")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.lo")
  when "bpb"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lo")
  when "bqb"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.lo")
  when "bibeq", "bbbeq"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.ls")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.ls")
  when "bpbeq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ls")
  when "bqbeq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.ls")
  when "bigt", "bbgt"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.gt")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.gt")
  when "bpgt"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.gt")
  when "bqgt"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.gt")
  when "bigteq", "bbgteq"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.ge")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.ge")
  when "bpgteq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ge")
  when "bqgteq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.ge")
  when "bilt", "bblt"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.lt")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.lt")
  when "bplt"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lt")
  when "bqlt"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.lt")
  when "bilteq", "bblteq"
-     emitARM64Branch("subs wzr, ", operands, :int, "b.le")
+     emitARM64Branch("subs wzr, ", operands, :word, "b.le")
  when "bplteq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
+     emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.le")
  when "bqlteq"
-     emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
+     emitARM64Branch("subs xzr, ", operands, :quad, "b.le")
  when "jmp"
      if operands[0].label?
          $asm.puts "b #{operands[0].asmLabel}"
      else
-         emitARM64Unflipped("br", operands, :ptr)
+         emitARM64Unflipped("br", operands, :quad)
      end
  when "call"
      if operands[0].label?
          $asm.puts "bl #{operands[0].asmLabel}"
      else
-         emitARM64Unflipped("blr", operands, :ptr)
+         emitARM64Unflipped("blr", operands, :quad)
      end
  when "break"
…
      $asm.puts "ret"
  when "cieq", "cbeq"
-     emitARM64Compare(operands, :int, "ne")
+     emitARM64Compare(operands, :word, "ne")
  when "cpeq"
      emitARM64Compare(operands, :ptr, "ne")
  when "cqeq"
-     emitARM64Compare(operands, :ptr, "ne")
+     emitARM64Compare(operands, :quad, "ne")
  when "cineq", "cbneq"
-     emitARM64Compare(operands, :int, "eq")
+     emitARM64Compare(operands, :word, "eq")
  when "cpneq"
      emitARM64Compare(operands, :ptr, "eq")
  when "cqneq"
-     emitARM64Compare(operands, :ptr, "eq")
+     emitARM64Compare(operands, :quad, "eq")
  when "cia", "cba"
-     emitARM64Compare(operands, :int, "ls")
+     emitARM64Compare(operands, :word, "ls")
  when "cpa"
      emitARM64Compare(operands, :ptr, "ls")
  when "cqa"
-     emitARM64Compare(operands, :ptr, "ls")
+     emitARM64Compare(operands, :quad, "ls")
  when "ciaeq", "cbaeq"
-     emitARM64Compare(operands, :int, "lo")
+     emitARM64Compare(operands, :word, "lo")
  when "cpaeq"
      emitARM64Compare(operands, :ptr, "lo")
  when "cqaeq"
-     emitARM64Compare(operands, :ptr, "lo")
+     emitARM64Compare(operands, :quad, "lo")
  when "cib", "cbb"
-     emitARM64Compare(operands, :int, "hs")
+     emitARM64Compare(operands, :word, "hs")
  when "cpb"
      emitARM64Compare(operands, :ptr, "hs")
  when "cqb"
-     emitARM64Compare(operands, :ptr, "hs")
+     emitARM64Compare(operands, :quad, "hs")
  when "cibeq", "cbbeq"
-     emitARM64Compare(operands, :int, "hi")
+     emitARM64Compare(operands, :word, "hi")
  when "cpbeq"
      emitARM64Compare(operands, :ptr, "hi")
  when "cqbeq"
-     emitARM64Compare(operands, :ptr, "hi")
+     emitARM64Compare(operands, :quad, "hi")
  when "cilt", "cblt"
-     emitARM64Compare(operands, :int, "ge")
+     emitARM64Compare(operands, :word, "ge")
  when "cplt"
      emitARM64Compare(operands, :ptr, "ge")
  when "cqlt"
-     emitARM64Compare(operands, :ptr, "ge")
+     emitARM64Compare(operands, :quad, "ge")
  when "cilteq", "cblteq"
-     emitARM64Compare(operands, :int, "gt")
+     emitARM64Compare(operands, :word, "gt")
  when "cplteq"
      emitARM64Compare(operands, :ptr, "gt")
  when "cqlteq"
-     emitARM64Compare(operands, :ptr, "gt")
+     emitARM64Compare(operands, :quad, "gt")
  when "cigt", "cbgt"
-     emitARM64Compare(operands, :int, "le")
+     emitARM64Compare(operands, :word, "le")
  when "cpgt"
      emitARM64Compare(operands, :ptr, "le")
  when "cqgt"
-     emitARM64Compare(operands, :ptr, "le")
+     emitARM64Compare(operands, :quad, "le")
  when "cigteq", "cbgteq"
-     emitARM64Compare(operands, :int, "lt")
+     emitARM64Compare(operands, :word, "lt")
  when "cpgteq"
      emitARM64Compare(operands, :ptr, "lt")
  when "cqgteq"
-     emitARM64Compare(operands, :ptr, "lt")
+     emitARM64Compare(operands, :quad, "lt")
  when "peek"
-     $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
+     $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
  when "poke"
-     $asm.puts "str #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
+     $asm.puts "str #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
  when "fp2d"
      emitARM64("fmov", operands, [:ptr, :double])
  when "fq2d"
-     emitARM64("fmov", operands, [:ptr, :double])
+     emitARM64("fmov", operands, [:quad, :double])
  when "fd2p"
      emitARM64("fmov", operands, [:double, :ptr])
  when "fd2q"
-     emitARM64("fmov", operands, [:double, :ptr])
+     emitARM64("fmov", operands, [:double, :quad])
  when "bo"
      $asm.puts "b.vs #{operands[0].asmLabel}"
…
      $asm.puts "b.ne #{operands[0].asmLabel}"
  when "leai"
-     operands[0].arm64EmitLea(operands[1], :int)
+     operands[0].arm64EmitLea(operands[1], :word)
  when "leap"
      operands[0].arm64EmitLea(operands[1], :ptr)
  when "leaq"
-     operands[0].arm64EmitLea(operands[1], :ptr)
+     operands[0].arm64EmitLea(operands[1], :quad)
  when "smulli"
-     $asm.puts "smaddl #{operands[2].arm64Operand(:ptr)}, #{operands[0].arm64Operand(:int)}, #{operands[1].arm64Operand(:int)}, xzr"
+     $asm.puts "smaddl #{operands[2].arm64Operand(:quad)}, #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, xzr"
  when "memfence"
      $asm.puts "dmb sy"
  when "pcrtoaddr"
-     $asm.puts "adr #{operands[1].arm64Operand(:ptr)}, #{operands[0].value}"
+     $asm.puts "adr #{operands[1].arm64Operand(:quad)}, #{operands[0].value}"
  when "nopCortexA53Fix835769"
      $asm.putStr("#if CPU(ARM64_CORTEXA53)")
…
      $asm.putStr("#if OS(DARWIN)")
      $asm.puts "L_offlineasm_loh_adrp_#{uid}:"
-     $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGE"
+     $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGE"
      $asm.puts "L_offlineasm_loh_ldr_#{uid}:"
-     $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGEOFF]"
+     $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGEOFF]"

      # On Linux, use ELF GOT relocation specifiers.
      $asm.putStr("#elif OS(LINUX)")
-     $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, :got:#{operands[0].asmLabel}"
-     $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, :got_lo12:#{operands[0].asmLabel}]"
+     $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, :got:#{operands[0].asmLabel}"
+     $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, :got_lo12:#{operands[0].asmLabel}]"

      # Throw a compiler error everywhere else.
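Taken together, these offlineasm changes let pointer-width (p-suffixed) operations shrink to 32 bits on arm64_32 while register-width (q-suffixed) operations stay 64-bit. The C++ side mirrors that split with the CPURegister and UCPURegister types named in the ChangeLog. Below is a minimal sketch of plausible definitions, assuming the real ones in assembler/CPU.h key off the value representation (USE() is WTF's configuration macro):

    #include <cstdint>

    // Hypothetical stand-ins for the real types in assembler/CPU.h.
    #if USE(JSVALUE64) // also true on arm64_32, per the Platform.h change below
    using CPURegister = int64_t;   // a general-purpose register is 64 bits wide
    using UCPURegister = uint64_t;
    #else
    using CPURegister = int32_t;
    using UCPURegister = uint32_t;
    #endif

    // On arm64_32 this pair of facts is exactly what the patch encodes:
    //   sizeof(void*) == 4         (32-bit address space)
    //   sizeof(CPURegister) == 8   (but 64-bit registers)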
trunk/Source/JavaScriptCore/offlineasm/asm.rb
r229506 r237173
      lowLevelAST.validate
      emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
+         $currentSettings = concreteSettings
          $asm.inAsm {
              lowLevelAST.lower(backend)
trunk/Source/JavaScriptCore/offlineasm/ast.rb
r236434 r237173
          @index = index
          @scale = scale
-         raise unless [1, 2, 4, 8].member? @scale
          @offset = offset
      end

+     def scaleValue
+         raise unless [1, 2, 4, 8].member? scale.value
+         scale.value
+     end
+
      def scaleShift
-         case scale
+         case scaleValue
          when 1
              0
…
              3
          else
-             raise "Bad scale at #{codeOriginString}"
+             raise "Bad scale: #{scale.value} at #{codeOriginString}"
          end
      end
…
      def mapChildren
-         BaseIndex.new(codeOrigin, (yield @base), (yield @index), @scale, (yield @offset))
+         BaseIndex.new(codeOrigin, (yield @base), (yield @index), (yield @scale), (yield @offset))
      end

      def dump
-         "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale}]"
+         "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale.value}]"
      end
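scaleShift, now routed through the new scaleValue accessor so the scale can be an AST node instead of a bare integer, maps the four legal scales to shift amounts. The same mapping as a standalone C++ sketch (not JSC code):

    #include <cstdlib>

    // 1, 2, 4, 8 are the only scales the addressing modes accept;
    // anything else is a programming error, mirroring the Ruby raise.
    unsigned scaleShift(unsigned scaleValue)
    {
        switch (scaleValue) {
        case 1: return 0;
        case 2: return 1;
        case 4: return 2;
        case 8: return 3;
        default: std::abort(); // "Bad scale"
        }
    }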
trunk/Source/JavaScriptCore/offlineasm/backends.rb
r229482 r237173
      if backendName =~ /ARM.*/
          backendName.sub!(/ARMV7(S?)(.*)/) { | _ | 'ARMv7' + $1.downcase + $2 }
+         backendName = "ARM64" if backendName == "ARM64_32"
      end
      backendName = "X86" if backendName == "I386"
trunk/Source/JavaScriptCore/offlineasm/parser.rb
r236499 r237173
          result
      end
+
+     def parseConstExpr
+         if @tokens[@idx] == "constexpr"
+             @idx += 1
+             skipNewLine
+             if @tokens[@idx] == "("
+                 codeOrigin, text = parseTextInParens
+                 text = text.join
+             else
+                 codeOrigin, text = parseColonColon
+                 text = text.join("::")
+             end
+             ConstExpr.forName(codeOrigin, text)
+         else
+             parseError
+         end
+     end

      def parseAddress(offset)
…
          b = parseVariable
          if @tokens[@idx] == "]"
-             result = BaseIndex.new(codeOrigin, a, b, 1, offset)
+             result = BaseIndex.new(codeOrigin, a, b, Immediate.new(codeOrigin, 1), offset)
          else
              parseError unless @tokens[@idx] == ","
              @idx += 1
-             parseError unless ["1", "2", "4", "8"].member? @tokens[@idx].string
-             c = @tokens[@idx].string.to_i
-             @idx += 1
+             if ["1", "2", "4", "8"].member? @tokens[@idx].string
+                 c = Immediate.new(codeOrigin, @tokens[@idx].string.to_i)
+                 @idx += 1
+             elsif @tokens[@idx] == "constexpr"
+                 c = parseConstExpr
+             else
+                 c = parseVariable
+             end
              parseError unless @tokens[@idx] == "]"
              result = BaseIndex.new(codeOrigin, a, b, c, offset)
…
              Sizeof.forName(codeOrigin, names.join('::'))
          elsif @tokens[@idx] == "constexpr"
-             @idx += 1
-             skipNewLine
-             if @tokens[@idx] == "("
-                 codeOrigin, text = parseTextInParens
-                 text = text.join
-             else
-                 codeOrigin, text = parseColonColon
-                 text = text.join("::")
-             end
-             ConstExpr.forName(codeOrigin, text)
+             parseConstExpr
          elsif isLabel @tokens[@idx]
              result = LabelReference.new(@tokens[@idx].codeOrigin, Label.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string))
trunk/Source/JavaScriptCore/offlineasm/x86.rb
r230273 r237173
      def x86AddressOperand(addressKind)
          if !isIntelSyntax
-             "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scale})"
+             "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scaleValue})"
          else
-             "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scale}]"
+             "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scaleValue}]"
          end
      end
…
              x86AddressOperand(:ptr)
          else
-             "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scale}]"
+             "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scaleValue}]"
          end
      end
trunk/Source/JavaScriptCore/runtime/BasicBlockLocation.cpp
r192125 r237173
  {
      Vector<Gap> executedRanges = getExecutedRanges();
-     for (Gap gap : executedRanges)
-         dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:%zu\n", gap.first, gap.second, hasExecuted() ? "true" : "false", m_executionCount);
+     for (Gap gap : executedRanges) {
+         dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:", gap.first, gap.second, hasExecuted() ? "true" : "false");
+         dataLogLn(m_executionCount);
+     }
  }
…
  void BasicBlockLocation::emitExecuteCode(CCallHelpers& jit) const
  {
-     static_assert(sizeof(size_t) == 8, "Assuming size_t is 64 bits on 64 bit platforms.");
+     static_assert(sizeof(UCPURegister) == 8, "Assuming size_t is 64 bits on 64 bit platforms.");
      jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(&m_executionCount));
  }
trunk/Source/JavaScriptCore/runtime/BasicBlockLocation.h
r218794 r237173
      int m_startOffset;
      int m_endOffset;
-     size_t m_executionCount;
      Vector<Gap> m_gaps;
+     UCPURegister m_executionCount;
trunk/Source/JavaScriptCore/runtime/HasOwnPropertyCache.h
r230856 r237173
  class HasOwnPropertyCache {
      static const uint32_t size = 2 * 1024;
-     static_assert(!(size & (size - 1)), "size should be a power of two.");
+     static_assert(hasOneBitSet(size), "size should be a power of two.");
  public:
      static const uint32_t mask = size - 1;
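The rewritten assertion uses WTF's hasOneBitSet rather than spelling out the bit trick inline. The predicate is the classic power-of-two test; an assumed standalone equivalent, shown for illustration:

    #include <cstdint>

    // A value has exactly one bit set iff it is nonzero and clearing
    // its lowest set bit (v & (v - 1)) leaves zero.
    constexpr bool hasOneBitSet(uint32_t v)
    {
        return v && !(v & (v - 1));
    }
    static_assert(hasOneBitSet(2 * 1024), "2 KiB is a power of two");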
trunk/Source/JavaScriptCore/runtime/JSBigInt.cpp
r237080 r237173
  // Multiplies {this} with {factor} and adds {summand} to the result.
- inline void JSBigInt::inplaceMultiplyAdd(uintptr_t factor, uintptr_t summand)
- {
-     STATIC_ASSERT(sizeof(factor) == sizeof(Digit));
-     STATIC_ASSERT(sizeof(summand) == sizeof(Digit));
-
+ void JSBigInt::inplaceMultiplyAdd(Digit factor, Digit summand)
+ {
      internalMultiplyAdd(this, factor, summand, length(), this);
  }
…
      // This shifting produces a value which covers 0 < {s} <= (digitBits - 1) cases. {s} == digitBits never happen as we asserted.
      // Since {sZeroMask} clears the value in the case of {s} == 0, {s} == 0 case is also covered.
-     STATIC_ASSERT(sizeof(intptr_t) == sizeof(Digit));
-     Digit sZeroMask = static_cast<Digit>((-static_cast<intptr_t>(s)) >> (digitBits - 1));
+     STATIC_ASSERT(sizeof(CPURegister) == sizeof(Digit));
+     Digit sZeroMask = static_cast<Digit>((-static_cast<CPURegister>(s)) >> (digitBits - 1));
      static constexpr unsigned shiftMask = digitBits - 1;
      Digit un32 = (high << s) | ((low >> ((digitBits - s) & shiftMask)) & sZeroMask);
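The sZeroMask line is the subtle part of this hunk: it produces an all-ones mask when s != 0 and zero when s == 0, with no branch. The patch only retypes the signed intermediate as CPURegister so its width matches a Digit on every target. Isolated below, assuming 64-bit digits:

    #include <cstdint>

    using Digit = uint64_t;                  // assumption: 64-bit digits
    constexpr unsigned digitBits = sizeof(Digit) * 8;

    // -s has its sign bit set iff s != 0; an arithmetic right shift by
    // digitBits - 1 then smears that bit across the whole digit.
    // (Sign-extending right shift is implementation-defined before C++20
    // but universal in practice, which is what the original relies on.)
    Digit sZeroMask(unsigned s)
    {
        return static_cast<Digit>((-static_cast<int64_t>(s)) >> (digitBits - 1));
    }
    // sZeroMask(0) == 0, sZeroMask(17) == ~0ull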
trunk/Source/JavaScriptCore/runtime/JSBigInt.h
r236901 r237173
  #pragma once

+ #include "CPU.h"
  #include "ExceptionHelpers.h"
  #include "JSObject.h"
…
  private:
-     using Digit = uintptr_t;
+     using Digit = UCPURegister;
      static constexpr unsigned bitsPerByte = 8;
      static constexpr unsigned digitBits = sizeof(Digit) * bitsPerByte;
trunk/Source/JavaScriptCore/runtime/JSObject.h
r236697 r237173
      AuxiliaryBarrier<Butterfly*> m_butterfly;
- #if USE(JSVALUE32_64)
+ #if CPU(ADDRESS32)
      unsigned m_32BitPadding;
  #endif
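The condition changes because arm64_32 breaks the old equivalence between a 64-bit value representation and 64-bit pointers: it uses USE(JSVALUE64) with 4-byte pointers, so the butterfly slot still needs padding. A hypothetical layout sketch under that assumption (CPU() is WTF's configuration macro; the struct is a mock, not JSObject):

    // Keeping the field a fixed 8-byte slot regardless of pointer width.
    struct ObjectHeaderTail {
        void* butterfly;          // 4 bytes when addresses are 32-bit
    #if CPU(ADDRESS32)            // was USE(JSVALUE32_64); arm64_32 is the new case
        unsigned m_32BitPadding;  // pad the slot back out to 8 bytes
    #endif
    };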
trunk/Source/JavaScriptCore/runtime/Options.cpp
r236883 r237173
      return multiCorePriorityDelta;
  }

+ static bool jitEnabledByDefault()
+ {
+     return is32Bit() || isAddress64Bit();
+ }
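jitEnabledByDefault is the policy switch for arm64_32: the only configuration where both is32Bit() and isAddress64Bit() are false is a 64-bit CPU with a 32-bit address space. A sketch of the decision table, with hypothetical stand-in parameters rather than the real WTF predicates:

    // registers64, address64 -> JIT on by default?
    //   false,     any       -> true   (classic 32-bit targets)
    //   true,      true      -> true   (x86_64, arm64)
    //   true,      false     -> false  (arm64_32)
    constexpr bool jitEnabledByDefault(bool registers64, bool address64)
    {
        return !registers64 || address64;
    }
    static_assert(!jitEnabledByDefault(true, false), "arm64_32 starts LLInt-only");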
trunk/Source/JavaScriptCore/runtime/Options.h
r237108 r237173
      \
      v(bool, useLLInt, true, Normal, "allows the LLINT to be used if true") \
-     v(bool, useJIT, true, Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
+     v(bool, useJIT, jitEnabledByDefault(), Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
      v(bool, useBaselineJIT, true, Normal, "allows the baseline JIT to be used if true") \
      v(bool, useDFGJIT, true, Normal, "allows the DFG JIT to be used if true") \
trunk/Source/JavaScriptCore/runtime/RegExp.cpp
r235636 r237173
          snprintf(jit16BitMatchAddr, jitAddrSize, "---- ");
      } else {
-         snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchOnlyAddr()));
-         snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchOnlyAddr()));
-         snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchAddr()));
-         snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchAddr()));
+         snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchOnlyAddr()));
+         snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchOnlyAddr()));
+         snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchAddr()));
+         snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchAddr()));
      }
  #else
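The underlying portability point: uintptr_t is defined to be wide enough to round-trip a pointer on every data model, while unsigned long only coincidentally matches on LP64. A one-line check that holds on LP64 and ILP32 targets alike:

    #include <cstdint>

    // On arm64_32 (an ILP32 data model) uintptr_t is 32 bits, matching
    // the pointer; on LP64 it is 64 bits. Either way this holds:
    static_assert(sizeof(uintptr_t) == sizeof(void*), "uintptr_t always fits a pointer");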
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp
r236382 r237173
      if (isCFrame()) {
          RELEASE_ASSERT(!LLInt::isLLIntPC(frame()->callerFrame));
-         stackTrace[m_depth] = UnprocessedStackFrame(frame()->pc);
+         stackTrace[m_depth] = UnprocessedStackFrame(frame()->returnPC);
          m_depth++;
      } else
trunk/Source/JavaScriptCore/runtime/SlowPathReturnType.h
r206525 r237173
  #pragma once

+ #include "CPU.h"
  #include <wtf/StdLibExtras.h>
…
  // warnings, or worse, a change in the ABI used to return these types.
  struct SlowPathReturnType {
-     void* a;
-     void* b;
+     CPURegister a;
+     CPURegister b;
  };
+ static_assert(sizeof(SlowPathReturnType) >= sizeof(void*) * 2, "SlowPathReturnType should fit in two machine registers");

  inline SlowPathReturnType encodeResult(void* a, void* b)
  {
      SlowPathReturnType result;
-     result.a = a;
-     result.b = b;
+     result.a = reinterpret_cast<CPURegister>(a);
+     result.b = reinterpret_cast<CPURegister>(b);
      return result;
  }

  inline void decodeResult(SlowPathReturnType result, void*& a, void*& b)
  {
-     a = result.a;
-     b = result.b;
+     a = reinterpret_cast<void*>(result.a);
+     b = reinterpret_cast<void*>(result.b);
  }
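With a and b widened from void* to CPURegister, the struct still round-trips two pointers exactly, and now occupies two full machine registers even where pointers are 4 bytes. A small usage sketch of the helpers above (encodeResult and decodeResult are the real names; the surrounding code is hypothetical):

    #include <cassert>

    void demo()
    {
        int x = 1, y = 2;
        SlowPathReturnType packed = encodeResult(&x, &y);

        void* a = nullptr;
        void* b = nullptr;
        decodeResult(packed, a, b);
        assert(a == &x && b == &y); // pointers survive the integer round-trip
    }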
trunk/Source/JavaScriptCore/tools/SigillCrashAnalyzer.cpp
r234649 r237173
      char pcString[24];
      if (currentPC == machinePC) {
-         snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<unsigned long>(currentPC));
+         snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<uintptr_t>(currentPC));
          log("%20s: %s <=========================", pcString, m_arm64Opcode.disassemble(currentPC));
      } else {
-         snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<unsigned long>(currentPC));
+         snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<uintptr_t>(currentPC));
          log("%20s: %s", pcString, m_arm64Opcode.disassemble(currentPC));
      }
trunk/Source/WTF/ChangeLog
r237156 r237173
+ 2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+         Support arm64 CPUs with a 32-bit address space
+         https://bugs.webkit.org/show_bug.cgi?id=190273
+
+         Reviewed by Michael Saboff.
+
+         Use WTF_CPU_ADDRESS64/32 to decide if the system is running on arm64_32.
+
+         * wtf/MathExtras.h:
+         (getLSBSet):
+         * wtf/Platform.h:
+
  2018-10-15  Timothy Hatcher  <timothy@apple.com>
trunk/Source/WTF/wtf/MathExtras.h
r237099 r237173
  }

- template <typename T> inline unsigned getLSBSet(T value)
+ template <typename T> constexpr unsigned getLSBSet(T value)
  {
      typedef typename std::make_unsigned<T>::type UnsignedT;
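Making getLSBSet constexpr lets it appear in constant expressions, for example inside static_assert. Since the diff elides the body, the version below is an assumed standalone equivalent, shown only to demonstrate the new compile-time usability:

    // Index of the lowest set bit; precondition: value != 0.
    // (constexpr loops require C++14 or later.)
    template<typename T> constexpr unsigned getLSBSet(T value)
    {
        unsigned index = 0;
        while (!(value & 1)) {
            value >>= 1;
            ++index;
        }
        return index;
    }
    static_assert(getLSBSet(8u) == 3, "usable in constant expressions now");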
trunk/Source/WTF/wtf/Platform.h
r237136 r237173
  #if !defined(USE_JSVALUE64) && !defined(USE_JSVALUE32_64)
- #if CPU(ADDRESS64)
+ #if CPU(ADDRESS64) || CPU(ARM64)
  #define USE_JSVALUE64 1
  #else
…
  /* The JIT is enabled by default on all x86, x86-64, ARM & MIPS platforms except ARMv7k. */
  #if !defined(ENABLE_JIT) \
-     && (CPU(X86) || CPU(X86_64) || CPU(ARM) || (CPU(ARM64) && !defined(__ILP32__)) || CPU(MIPS)) \
+     && (CPU(X86) || CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)) \
      && !CPU(APPLE_ARMV7K)
  #define ENABLE_JIT 1
…
  #if !defined(ENABLE_WEBASSEMBLY)
- #if ENABLE(B3_JIT) && PLATFORM(COCOA)
+ #if ENABLE(B3_JIT) && PLATFORM(COCOA) && CPU(ADDRESS64)
  #define ENABLE_WEBASSEMBLY 1
  #else
trunk/Source/WebCore/ChangeLog
r237170 r237173
+ 2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+         Support arm64 CPUs with a 32-bit address space
+         https://bugs.webkit.org/show_bug.cgi?id=190273
+
+         Reviewed by Michael Saboff.
+
+         Fix missing namespace annotation.
+
+         * cssjit/SelectorCompiler.cpp:
+         (WebCore::SelectorCompiler::SelectorCodeGenerator::generateAddStyleRelation):
+
  2018-10-15  Justin Fan  <justin_fan@apple.com>
trunk/Source/WebCore/cssjit/SelectorCompiler.cpp
r236228 r237173
      m_assembler.lshiftPtr(Assembler::TrustedImm32(4), sizeAndTarget);
  #else
-     m_assembler.mul32(TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
+     m_assembler.mul32(Assembler::TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
  #endif
      m_assembler.addPtr(dataAddress, sizeAndTarget);