Changeset 170428 in webkit
- Timestamp: Jun 25, 2014 9:37:19 AM
- Location: trunk/Source
- Files: 1 added, 18 edited
trunk/Source/JavaScriptCore/ChangeLog
r170425 r170428 1 2014-06-25 peavo@outlook.com <peavo@outlook.com> 2 3 [Win64] ASM LLINT is not enabled. 4 https://bugs.webkit.org/show_bug.cgi?id=130638 5 6 This patch adds a new LLINT assembler backend for Win64, and implements it. 7 It makes adjustments to follow the Win64 ABI spec. where it's found to be needed. 8 Also, LLINT and JIT is enabled for Win64. 9 10 Reviewed by Mark Lam. 11 12 * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj: Added JITStubsMSVC64.asm. 13 * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters: Ditto. 14 * JavaScriptCore/JavaScriptCore.vcxproj/jsc/jscCommon.props: Increased stack size to avoid stack overflow in tests. 15 * JavaScriptCore.vcxproj/LLInt/LLIntAssembly/build-LLIntAssembly.sh: Generate assembler source file for Win64. 16 * assembler/MacroAssemblerX86_64.h: 17 (JSC::MacroAssemblerX86_64::call): Follow Win64 ABI spec. 18 * jit/JITStubsMSVC64.asm: Added. 19 * jit/Repatch.cpp: 20 (JSC::emitPutTransitionStub): Compile fix. 21 * jit/ThunkGenerators.cpp: 22 (JSC::nativeForGenerator): Follow Win64 ABI spec. 23 * llint/LLIntData.cpp: 24 (JSC::LLInt::Data::performAssertions): Ditto. 25 * llint/LLIntOfflineAsmConfig.h: Enable new llint backend for Win64. 26 * llint/LowLevelInterpreter.asm: Implement new Win64 backend, and follow Win64 ABI spec. 27 * llint/LowLevelInterpreter64.asm: Ditto. 28 * offlineasm/asm.rb: Compile fix. 29 * offlineasm/backends.rb: Add new llint backend for Win64. 30 * offlineasm/settings.rb: Compile fix. 31 * offlineasm/x86.rb: Implement new llint Win64 backend. 32 1 33 2014-06-25 Laszlo Gombos <l.gombos@samsung.com> 2 34 -
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
r170130 r170428 1592 1592 <UseSafeExceptionHandlers Condition="'$(Configuration)|$(Platform)'=='Release_WinCairo|Win32'">true</UseSafeExceptionHandlers> 1593 1593 </MASM> 1594 <MASM Include="..\jit\JITStubsMSVC64.asm"> 1595 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Debug_WinCairo|Win32'">true</ExcludedFromBuild> 1596 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">true</ExcludedFromBuild> 1597 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='DebugSuffix|Win32'">true</ExcludedFromBuild> 1598 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Release_WinCairo|Win32'">true</ExcludedFromBuild> 1599 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Production|Win32'">true</ExcludedFromBuild> 1600 <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">true</ExcludedFromBuild> 1601 </MASM> 1594 1602 </ItemGroup> 1595 1603 <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" /> -
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
r170130 r170428 3715 3715 <ItemGroup> 3716 3716 <MASM Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\LowLevelInterpreterWin.asm" /> 3717 <MASM Include="..\jit\JITStubsMSVC64.asm"> 3718 <Filter>jit</Filter> 3719 </MASM> 3717 3720 </ItemGroup> 3718 3721 </Project> -
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/LLInt/LLIntAssembly/build-LLIntAssembly.sh
r167094 r170428 26 26 printf "END" > LowLevelInterpreterWin.asm 27 27 28 # Win32 is using the LLINT x86 backend, and should generate an assembler file. 29 # Win64 is using the LLINT C backend, and should generate a header file. 28 # If you want to enable the LLINT C loop, set OUTPUTFILENAME to "LLIntAssembly.h" 30 29 31 if [ "${PLATFORMARCHITECTURE}" == "32" ]; then 32 OUTPUTFILENAME="LowLevelInterpreterWin.asm" 33 else 34 OUTPUTFILENAME="LLIntAssembly.h" 35 fi 30 OUTPUTFILENAME="LowLevelInterpreterWin.asm" 36 31 37 32 /usr/bin/env ruby "${SRCROOT}/offlineasm/asm.rb" "-I." "${SRCROOT}/llint/LowLevelInterpreter.asm" "${BUILT_PRODUCTS_DIR}/LLIntOffsetsExtractor/LLIntOffsetsExtractor${3}.exe" "${OUTPUTFILENAME}" || exit 1 -
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/jsc/jscCommon.props
r159499 r170428 17 17 </ModuleDefinitionFile> 18 18 <SubSystem>Console</SubSystem> 19 <StackReserveSize>2097152</StackReserveSize> 19 20 </Link> 20 21 </ItemDefinitionGroup> -
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
r169942 r170428 154 154 Call call() 155 155 { 156 #if OS(WINDOWS) 157 // JIT relies on the CallerFrame (frame pointer) being put on the stack, 158 // On Win64 we need to manually copy the frame pointer to the stack, since MSVC may not maintain a frame pointer on 64-bit. 159 // See http://msdn.microsoft.com/en-us/library/9z1stfyw.aspx where it's stated that rbp MAY be used as a frame pointer. 160 store64(X86Registers::ebp, Address(X86Registers::esp, -16)); 161 162 // On Windows we need to copy the arguments that don't fit in registers to the stack location where the callee expects to find them. 163 // We don't know the number of arguments at this point, so the arguments (5, 6, ...) should always be copied. 164 165 // Copy argument 5 166 load64(Address(X86Registers::esp, 4 * sizeof(int64_t)), scratchRegister); 167 store64(scratchRegister, Address(X86Registers::esp, -4 * sizeof(int64_t))); 168 169 // Copy argument 6 170 load64(Address(X86Registers::esp, 5 * sizeof(int64_t)), scratchRegister); 171 store64(scratchRegister, Address(X86Registers::esp, -3 * sizeof(int64_t))); 172 173 // We also need to allocate the shadow space on the stack for the 4 parameter registers. 174 // Also, we should allocate 16 bytes for the frame pointer, and return address (not populated). 175 // In addition, we need to allocate 16 bytes for two more parameters, since the call can have up to 6 parameters. 176 sub64(TrustedImm32(8 * sizeof(int64_t)), X86Registers::esp); 177 #endif 156 178 DataLabelPtr label = moveWithPatch(TrustedImmPtr(0), scratchRegister); 157 179 Call result = Call(m_assembler.call(scratchRegister), Call::Linkable); 180 #if OS(WINDOWS) 181 add64(TrustedImm32(8 * sizeof(int64_t)), X86Registers::esp); 182 #endif 158 183 ASSERT_UNUSED(label, differenceBetween(label, result) == REPTACH_OFFSET_CALL_R11); 159 184 return result; -
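The `sub64(TrustedImm32(8 * sizeof(int64_t)), esp)` adjustment in the `call()` change above packs several Win64 ABI requirements into a single stack subtraction. As a hedged illustration (plain arithmetic restating the comments in the diff, not WebKit code), the 64 bytes decompose as:

```ruby
# Sketch of the Win64 stack adjustment described in MacroAssemblerX86_64::call().
# All names here are illustrative; only the byte counts come from the diff comments.
INT64_SIZE = 8
SHADOW_SPACE     = 4 * INT64_SIZE  # home slots the callee may use for rcx, rdx, r8, r9
FRAME_AND_RETURN = 2 * INT64_SIZE  # manually stored frame pointer + (unpopulated) return-address slot
EXTRA_PARAMS     = 2 * INT64_SIZE  # spilled arguments 5 and 6, copied below the stack pointer

WIN64_CALL_ADJUSTMENT = SHADOW_SPACE + FRAME_AND_RETURN + EXTRA_PARAMS
puts WIN64_CALL_ADJUSTMENT  # 64, i.e. 8 * sizeof(int64_t)
```

The matching `add64` after the call undoes the same 64-byte adjustment, keeping the stack balanced around the patched call site.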
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r169853 r170428 1083 1083 ASSERT(oldStructure->typeInfo().inlineTypeFlags() == structure->typeInfo().inlineTypeFlags()); 1084 1084 ASSERT(oldStructure->indexingType() == structure->indexingType()); 1085 stubJit.store32(MacroAssembler::TrustedImm32(reinterpret_cast<uint32_t>(structure->id())), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset())); 1085 #if USE(JSVALUE64) 1086 uint32_t val = structure->id(); 1087 #else 1088 uint32_t val = reinterpret_cast<uint32_t>(structure->id()); 1089 #endif 1090 stubJit.store32(MacroAssembler::TrustedImm32(val), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset())); 1086 1091 #if USE(JSVALUE64) 1087 1092 if (isInlineOffset(slot.cachedOffset())) -
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r168776 r170428 316 316 jit.move(JSInterfaceJIT::callFrameRegister, X86Registers::ecx); 317 317 318 // Leave space for the callee parameter home addresses and align the stack. 319 jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t) + 16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); 318 // Leave space for the callee parameter home addresses. 319 // At this point the stack is aligned to 16 bytes, but if this changes at some point, we need to emit code to align it. 320 jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); 320 321 321 322 jit.emitGetFromCallFrameHeaderPtr(JSStack::Callee, X86Registers::edx); … … 323 324 jit.call(JSInterfaceJIT::Address(X86Registers::r9, executableOffsetToFunction)); 324 325 325 jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t) + 16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);326 jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); 326 327 #endif 327 328 … … 399 400 jit.push(JSInterfaceJIT::regT0); 400 401 #else 402 #if OS(WINDOWS) 403 // Allocate space on stack for the 4 parameter registers. 404 jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); 405 #endif 401 406 jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister), JSInterfaceJIT::argumentGPR0); 402 407 #endif … … 405 410 #if CPU(X86) && USE(JSVALUE32_64) 406 411 jit.addPtr(JSInterfaceJIT::TrustedImm32(16), JSInterfaceJIT::stackPointerRegister); 412 #elif OS(WINDOWS) 413 jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister); 407 414 #endif 408 415 -
trunk/Source/JavaScriptCore/llint/LLIntData.cpp
r170147 r170428 124 124 ASSERT(ValueNull == TagBitTypeOther); 125 125 #endif 126 #if CPU(X86_64) || CPU(ARM64) || !ENABLE(JIT)126 #if (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64) || !ENABLE(JIT) 127 127 ASSERT(!maxFrameExtentForSlowPathCall); 128 128 #elif CPU(ARM) || CPU(SH4) … … 130 130 #elif CPU(X86) || CPU(MIPS) 131 131 ASSERT(maxFrameExtentForSlowPathCall == 40); 132 #elif CPU(X86_64) && OS(WINDOWS) 133 ASSERT(maxFrameExtentForSlowPathCall == 64); 132 134 #endif 133 135 ASSERT(StringType == 5); -
trunk/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
r170147 r170428 40 40 #define OFFLINE_ASM_ARM64 0 41 41 #define OFFLINE_ASM_X86_64 0 42 #define OFFLINE_ASM_X86_64_WIN 0 42 43 #define OFFLINE_ASM_ARMv7s 0 43 44 #define OFFLINE_ASM_MIPS 0 … … 85 86 #endif 86 87 87 #if CPU(X86_64) 88 #if CPU(X86_64) && !PLATFORM(WIN) 88 89 #define OFFLINE_ASM_X86_64 1 89 90 #else 90 91 #define OFFLINE_ASM_X86_64 0 92 #endif 93 94 #if CPU(X86_64) && PLATFORM(WIN) 95 #define OFFLINE_ASM_X86_64_WIN 1 96 #else 97 #define OFFLINE_ASM_X86_64_WIN 0 91 98 #endif 92 99 -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r169703 r170428 85 85 elsif MIPS 86 86 const maxFrameExtentForSlowPathCall = 40 87 elsif X86_64_WIN 88 const maxFrameExtentForSlowPathCall = 64 87 89 end 88 90 … 249 251 push lr 250 252 push cfr 251 elsif X86 or X86_WIN or X86_64 253 elsif X86 or X86_WIN or X86_64 or X86_64_WIN 252 254 push cfr 253 255 elsif ARM64 … 264 266 pop cfr 265 267 pop lr 266 elsif X86 or X86_WIN or X86_64 268 elsif X86 or X86_WIN or X86_64 or X86_64_WIN 267 269 pop cfr 268 270 elsif ARM64 … 275 277 # In C_LOOP case, we're only preserving the bytecode vPC. 276 278 move lr, destinationRegister 277 elsif X86 or X86_WIN or X86_64 279 elsif X86 or X86_WIN or X86_64 or X86_64_WIN 278 280 pop destinationRegister 279 281 else … 286 288 # In C_LOOP case, we're only restoring the bytecode vPC. 287 289 move sourceRegister, lr 288 elsif X86 or X86_WIN or X86_64 290 elsif X86 or X86_WIN or X86_64 or X86_64_WIN 289 291 push sourceRegister 290 292 else … 294 296 295 297 macro functionPrologue() 296 if X86 or X86_WIN or X86_64 298 if X86 or X86_WIN or X86_64 or X86_64_WIN 297 299 push cfr 298 300 elsif ARM64 … 306 308 307 309 macro functionEpilogue() 308 if X86 or X86_WIN or X86_64 310 if X86 or X86_WIN or X86_64 or X86_64_WIN 309 311 pop cfr 310 312 elsif ARM64 … 317 319 318 320 macro callToJavaScriptPrologue() 319 if X86_64 321 if X86_64 or X86_64_WIN 320 322 push cfr 321 323 push t0 … 372 374 373 375 popCalleeSaves 374 if X86_64 376 if X86_64 or X86_64_WIN 375 377 pop t2 376 378 pop cfr … 657 659 const address = t1 658 660 const zeroValue = t0 661 elsif X86_64_WIN 662 const vm = t2 663 const address = t1 664 const zeroValue = t0 659 665 elsif X86 or X86_WIN 660 666 const vm = t2 … 693 699 else 694 700 macro initPCRelative(pcBase) 695 if X86_64 701 if X86_64 or X86_64_WIN 696 702 call _relativePCBase 697 703 _relativePCBase: … 726 732 move index, t2 727 733 storep t0, [t4, t2, 8] 734 elsif X86_64_WIN 735 leap (label - _relativePCBase)[t1], t0 736 move index, t4 737 storep t0, [t2, t4, 8] 728 738 elsif X86 or X86_WIN 729 739 leap (label - _relativePCBase)[t1], t0 -
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r169751 r170428 58 58 move arg2, t5 59 59 call function 60 elsif X86_64_WIN 61 # Note: this implementation is only correct if the return type size is > 8 bytes. 62 # See macro cCall2Void for an implementation when the return type <= 8 bytes. 63 # On Win64, when the return type is larger than 8 bytes, we need to allocate space on the stack for the return value. 64 # On entry rcx (t2), should contain a pointer to this stack space. The other parameters are shifted to the right, 65 # rdx (t1) should contain the first argument, and r8 (t6) should contain the second argument. 66 # On return, rax contains a pointer to this stack value, and we then need to copy the 16 byte return value into rax (t0) and rdx (t1) 67 # since the return value is expected to be split between the two. 68 # See http://msdn.microsoft.com/en-us/library/7572ztz4.aspx 69 move arg1, t1 70 move arg2, t6 71 subp 48, sp 72 move sp, t2 73 addp 32, t2 74 call function 75 addp 48, sp 76 move 8[t0], t1 77 move [t0], t0 60 78 elsif ARM64 61 79 move arg1, t0 … 72 90 if C_LOOP 73 91 cloopCallSlowPathVoid function, arg1, arg2 92 elsif X86_64_WIN 93 # Note: we cannot use the cCall2 macro for Win64 in this case, 94 # as the Win64 cCall2 implementation is only correct when the return type size is > 8 bytes. 95 # On Win64, rcx and rdx are used for passing the first two parameters. 96 # We also need to make room on the stack for all four parameter registers. 97 # See http://msdn.microsoft.com/en-us/library/ms235286.aspx 98 move arg2, t1 99 move arg1, t2 100 subp 32, sp 101 call function 102 addp 32, sp 74 103 else 75 104 cCall2(function, arg1, arg2) … 86 115 move arg4, t2 87 116 call function 117 elsif X86_64_WIN 118 # On Win64, rcx, rdx, r8, and r9 are used for passing the first four parameters. 119 # We also need to make room on the stack for all four parameter registers. 120 # See http://msdn.microsoft.com/en-us/library/ms235286.aspx 121 move arg1, t2 122 move arg2, t1 123 move arg3, t6 124 move arg4, t7 125 subp 32, sp 126 call function 127 addp 32, sp 88 128 elsif ARM64 89 129 move arg1, t0 … 110 150 const temp2 = t3 111 151 const temp3 = t6 152 elsif X86_64_WIN 153 const entry = t2 154 const vm = t1 155 const protoCallFrame = t6 156 157 const previousCFR = t0 158 const previousPC = t4 159 const temp1 = t0 160 const temp2 = t3 161 const temp3 = t7 112 162 elsif ARM64 or C_LOOP 113 163 const entry = a0 … 127 177 loadp 7*8[sp], previousPC 128 178 move 6*8[sp], previousCFR 179 elsif X86_64_WIN 180 # Win64 pushes two more registers 181 loadp 9*8[sp], previousPC 182 move 8*8[sp], previousCFR 129 183 elsif ARM64 130 184 move cfr, previousCFR … 143 197 storep temp2, ScopeChain[cfr] 144 198 storep 1, CodeBlock[cfr] 145 if X86_64 146 loadp 7*8[sp], previousPC 147 loadp 6*8[sp], previousCFR 148 end 199 149 200 storep previousPC, ReturnPC[cfr] 150 201 storep previousCFR, CallerFrame[cfr] … 239 290 checkStackPointerAlignment(temp3, 0xbad0dc04) 240 291 241 if X86_64 292 if X86_64 or X86_64_WIN 242 293 pop t5 243 294 end … 263 314 if X86_64 264 315 move sp, t4 316 elsif X86_64_WIN 317 move sp, t2 265 318 elsif ARM64 or C_LOOP 266 319 move sp, a0 … 270 323 storep lr, 8[sp] 271 324 cloopCallNative temp 325 elsif X86_64_WIN 326 # For a host function call, JIT relies on that the CallerFrame (frame pointer) is put on the stack, 327 # On Win64 we need to manually copy the frame pointer to the stack, since MSVC may not maintain a frame pointer on 64-bit. 328 # See http://msdn.microsoft.com/en-us/library/9z1stfyw.aspx where it's stated that rbp MAY be used as a frame pointer. 329 storep cfr, [sp] 330 331 # We need to allocate 32 bytes on the stack for the shadow space. 332 subp 32, sp 333 call temp 334 addp 32, sp 272 335 else 273 addp 16, sp 336 addp 16, sp 274 337 call temp 275 338 subp 16, sp … 958 1021 _llint_op_div: 959 1022 traceExecution() 960 if X86_64 1023 if X86_64 or X86_64_WIN 961 1024 binaryOpCustomStore( 962 1025 macro (left, right, slow, index) … 1969 2032 functionPrologue() 1970 2033 storep 0, CodeBlock[cfr] 1971 if X86_64 2034 if X86_64 or X86_64_WIN 2035 if X86_64 2036 const arg1 = t4 # t4 = rdi 2037 const arg2 = t5 # t5 = rsi 2038 const temp = t1 2039 elsif X86_64_WIN 2040 const arg1 = t2 # t2 = rcx 2041 const arg2 = t1 # t1 = rdx 2042 const temp = t0 2043 end 1972 2044 loadp ScopeChain[cfr], t0 1973 2045 andp MarkedBlockMask, t0 … 1977 2049 loadq ScopeChain[t0], t1 1978 2050 storeq t1, ScopeChain[cfr] 1979 move cfr, t4 # t4 = rdi 1980 loadp Callee[cfr], t5 # t5 = rsi 1981 loadp JSFunction::m_executable[t5], t1 2051 move cfr, arg1 2052 loadp Callee[cfr], arg2 2053 loadp JSFunction::m_executable[arg2], temp 1982 2054 checkStackPointerAlignment(t3, 0xdead0001) 1983 call executableOffsetToFunction[t1] 2055 if X86_64_WIN 2056 subp 32, sp 2057 end 2058 call executableOffsetToFunction[temp] 2059 if X86_64_WIN 2060 addp 32, sp 2061 end 1984 2062 loadp ScopeChain[cfr], t3 1985 2063 andp MarkedBlockMask, t3 -
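The cCall2/cCall2Void comments in the LowLevelInterpreter64.asm diff above hinge on the Win64 rule that a return value comes back in RAX only when its size is 1, 2, 4, or 8 bytes; anything larger is written to caller-allocated stack memory whose address the caller passes in RCX, shifting the real arguments one register to the right. A small illustrative sketch of that size rule (the predicate name is made up for this example and simplifies away the additional POD requirement):

```ruby
# Hedged sketch of the Win64 return-value convention referenced by the diff's
# MSDN link: sizes 1, 2, 4, and 8 fit in RAX; larger values go through a
# hidden pointer passed in RCX. Illustration only, not a WebKit or MSDN API.
def returned_via_hidden_pointer?(size_in_bytes)
  ![1, 2, 4, 8].include?(size_in_bytes)
end

puts returned_via_hidden_pointer?(8)   # false: fits in RAX
puts returned_via_hidden_pointer?(16)  # true: e.g. a 16-byte slow-path return pair
```

This is why the patch keeps two variants: cCall2 reserves 48 bytes and loads the split 16-byte result from the returned pointer, while cCall2Void only needs the 32-byte shadow space.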
trunk/Source/JavaScriptCore/offlineasm/asm.rb
r167094 r170428 263 263 @commentState = :many 264 264 when :many 265 @outp.puts "//#{text}" if $enableCodeOriginComments265 @outp.puts $commentPrefix + " #{text}" if $enableCodeOriginComments 266 266 else 267 267 raise -
trunk/Source/JavaScriptCore/offlineasm/backends.rb
r167094 r170428 36 36 "X86_WIN", 37 37 "X86_64", 38 "X86_64_WIN", 38 39 "ARM", 39 40 "ARMv7", … … 55 56 "X86_WIN", 56 57 "X86_64", 58 "X86_64_WIN", 57 59 "ARM", 58 60 "ARMv7", -
trunk/Source/JavaScriptCore/offlineasm/settings.rb
r167094 r170428 178 178 $output.puts cppSettingsTest(concreteSettings) 179 179 else 180 $output.puts ".MODEL FLAT, C" 180 if backend == "X86_WIN" 181 $output.puts ".MODEL FLAT, C" 182 end 181 183 $output.puts "INCLUDE #{File.basename($output.path)}.sym" 182 184 $output.puts "_TEXT SEGMENT" -
trunk/Source/JavaScriptCore/offlineasm/x86.rb
r167094 r170428 33 33 when "X86_64" 34 34 true 35 when "X86_64_WIN" 36 true 35 37 else 36 38 raise "bad value for $activeBackend: #{$activeBackend}" … 45 47 true 46 48 when "X86_64" 49 false 50 when "X86_64_WIN" 47 51 false 48 52 else … 101 105 size = "dword" 102 106 when :ptr 103 size = "dword"107 size = isX64 ? "qword" : "dword" 104 108 when :double 105 109 size = "qword" 110 when :quad 111 size = "qword" 106 112 else 107 raise 113 raise "Invalid kind #{kind}" 108 114 end 109 115 … 117 123 case kind 118 124 when :half 119 "%" + @name + "w"125 register(@name + "w") 120 126 when :int 121 "%" + @name + "d"127 register(@name + "d") 122 128 when :ptr 123 "%" + @name129 register(@name) 124 130 when :quad 125 "%" + @name131 register(@name) 126 132 else 127 133 raise … 285 291 case kind 286 292 when :half 287 "%r8w"293 register("r8w") 288 294 when :int 289 "%r8d"295 register("r8d") 290 296 when :ptr 291 "%r8"297 register("r8") 292 298 when :quad 293 "%r8"299 register("r8") 294 300 end 295 301 when "t7" … 297 303 case kind 298 304 when :half 299 "%r9w"305 register("r9w") 300 306 when :int 301 "%r9d"307 register("r9d") 302 308 when :ptr 303 "%r9"309 register("r9") 304 310 when :quad 305 "%r9"311 register("r9") 306 312 end 307 313 when "csr1" … 309 315 case kind 310 316 when :half 311 "%r14w"317 register("r14w") 312 318 when :int 313 "%r14d"319 register("r14d") 314 320 when :ptr 315 "%r14"321 register("r14") 316 322 when :quad 317 "%r14"323 register("r14") 318 324 end 319 325 when "csr2" … 321 327 case kind 322 328 when :half 323 "%r15w"329 register("r15w") 324 330 when :int 325 "%r15d"331 register("r15d") 326 332 when :ptr 327 "%r15"333 register("r15") 328 334 when :quad 329 "%r15"335 register("r15") 330 336 end 331 337 else … 344 350 case name 345 351 when "ft0", "fa0", "fr" 346 "%xmm0"352 register("xmm0") 347 353 when "ft1", "fa1" 348 "%xmm1"354 register("xmm1") 349 355 when "ft2", "fa2" 350 "%xmm2"356 register("xmm2") 351 357 when "ft3", "fa3" 352 "%xmm3"358 register("xmm3") 353 359 when "ft4" 354 "%xmm4"360 register("xmm4") 355 361 when "ft5" 356 "%xmm5"362 register("xmm5") 357 363 else 358 364 raise "Bad register #{name} for X86 at #{codeOriginString}" … 511 517 return newList 512 518 end 519 def getModifiedListX86_64_WIN 520 getModifiedListX86_64 521 end 513 522 end 514 523 … 605 614 case mode 606 615 when :normal 607 $asm.puts "ucomisd #{operands[1].x86Operand(:double)}, #{operands[0].x86Operand(:double)}"616 $asm.puts "ucomisd #{orderOperands(operands[1].x86Operand(:double), operands[0].x86Operand(:double))}" 608 617 when :reverse 609 $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"618 $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}" 610 619 else 611 620 raise mode.inspect … 846 855 def lowerX86_64 847 856 raise unless $activeBackend == "X86_64" 857 lowerX86Common 858 end 859 860 def lowerX86_64_WIN 861 raise unless $activeBackend == "X86_64_WIN" 848 862 lowerX86Common 849 863 end … 920 934 when "loadis" 921 935 if isX64 922 $asm.puts "movslq #{x86Operands(:int, :quad)}" 936 if !isIntelSyntax 937 $asm.puts "movslq #{x86Operands(:int, :quad)}" 938 else 939 $asm.puts "movsxd #{x86Operands(:int, :quad)}" 940 end 923 941 else 924 942 $asm.puts "mov#{x86Suffix(:int)} #{x86Operands(:int, :int)}" … 1022 1040 $asm.puts "fstp #{operands[1].x87Operand(1)}" 1023 1041 else 1024 $asm.puts "cvtsi2sd #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:double)}"1042 $asm.puts "cvtsi2sd #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:double))}" 1025 1043 when "bdeq" … 1028 1046 handleX87Compare(:normal) 1029 1047 else 1030 $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"1048 $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}" 1031 1049 if operands[0] == operands[1] … 1055 1073 handleX87Compare(:normal) 1056 1074 else 1057 $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"1075 $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}" 1058 1076 if operands[0] == operands[1] … 1131 1149 when "popCalleeSaves" 1132 1150 if isX64 1133 $asm.puts "pop %rbx" 1134 $asm.puts "pop %r15" 1135 $asm.puts "pop %r14" 1136 $asm.puts "pop %r13" 1137 $asm.puts "pop %r12" 1151 if isMSVC 1152 $asm.puts "pop " + register("rsi") 1153 $asm.puts "pop " + register("rdi") 1154 end 1155 $asm.puts "pop " + register("rbx") 1156 $asm.puts "pop " + register("r15") 1157 $asm.puts "pop " + register("r14") 1158 $asm.puts "pop " + register("r13") 1159 $asm.puts "pop " + register("r12") 1138 1160 else 1139 1161 $asm.puts "pop " + register("ebx") … 1143 1165 when "pushCalleeSaves" 1144 1166 if isX64 1145 $asm.puts "push %r12" 1146 $asm.puts "push %r13" 1147 $asm.puts "push %r14" 1148 $asm.puts "push %r15" 1149 $asm.puts "push %rbx" 1167 $asm.puts "push " + register("r12") 1168 $asm.puts "push " + register("r13") 1169 $asm.puts "push " + register("r14") 1170 $asm.puts "push " + register("r15") 1171 $asm.puts "push " + register("rbx") 1172 if isMSVC 1173 $asm.puts "push " + register("rdi") 1174 $asm.puts "push " + register("rsi") 1175 end 1150 1176 else 1151 1177 $asm.puts "push " + register("esi") … 1156 1182 handleMove 1157 1183 when "sxi2q" 1158 $asm.puts "movslq #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:quad)}" 1184 if !isIntelSyntax 1185 $asm.puts "movslq #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:quad)}" 1186 else 1187 $asm.puts "movsxd #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:quad))}" 1188 end 1159 1189 when "zxi2q" 1160 1190 $asm.puts "mov#{x86Suffix(:int)} #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:int))}" … 1442 1472 $asm.puts "cdq" 1443 1473 when "idivi" 1444 $asm.puts "idivl #{operands[0].x86Operand(:int)}"1474 $asm.puts "idiv#{x86Suffix(:int)} #{operands[0].x86Operand(:int)}" 1445 1475 when "fii2d" 1446 1476 if useX87 … 1480 1510 $asm.puts "fstp #{operands[1].x87Operand(1)}" 1481 1511 else 1482 $asm.puts "movq #{operands[0].x86Operand(:quad)}, #{operands[1].x86Operand(:double)}" 1512 if !isIntelSyntax 1513 $asm.puts "movq #{operands[0].x86Operand(:quad)}, #{operands[1].x86Operand(:double)}" 1514 else 1515 # MASM does not accept register operands with movq. 1516 # Debugging shows that movd actually moves a qword when using MASM. 1517 $asm.puts "movd #{operands[1].x86Operand(:double)}, #{operands[0].x86Operand(:quad)}" 1518 end 1483 1519 end 1484 1520 when "fd2q" … 1493 1529 $asm.puts "movq -8(#{sp.x86Operand(:ptr)}), #{operands[1].x86Operand(:quad)}" 1494 1530 else 1495 $asm.puts "movq #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:quad)}" 1531 if !isIntelSyntax 1532 $asm.puts "movq #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:quad)}" 1533 else 1534 # MASM does not accept register operands with movq. 1535 # Debugging shows that movd actually moves a qword when using MASM. 1536 $asm.puts "movd #{operands[1].x86Operand(:quad)}, #{operands[0].x86Operand(:double)}" 1537 end 1538 end 1496 1539 when "bo" -
trunk/Source/WTF/ChangeLog
r170425 r170428 1 2014-06-25 peavo@outlook.com <peavo@outlook.com> 2 3 [Win64] ASM LLINT is not enabled. 4 https://bugs.webkit.org/show_bug.cgi?id=130638 5 6 Reviewed by Mark Lam. 7 8 * wtf/Platform.h: Enable LLINT and JIT for Win64. 9 1 10 2014-06-25 Laszlo Gombos <l.gombos@samsung.com> 2 11 -
trunk/Source/WTF/wtf/Platform.h
r170353 r170428 641 641 && (CPU(X86) || CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)) \ 642 642 && !CPU(APPLE_ARMV7K) \ 643 && !OS(WINCE) \ 644 && !(OS(WINDOWS) && CPU(X86_64)) 643 && !OS(WINCE) 645 644 #define ENABLE_JIT 1 646 645 #endif