Changeset 279256 in webkit

- Timestamp: Jun 24, 2021, 5:06:56 PM
- Location: trunk/Source/JavaScriptCore
- Files: 1 added, 17 edited
trunk/Source/JavaScriptCore/CMakeLists.txt
(r278696 → r279256)

     jit/AssemblyHelpers.h
+    jit/AssemblyHelpersSpoolers.h
     jit/CCallHelpers.h
     jit/ExecutableAllocator.h
trunk/Source/JavaScriptCore/ChangeLog
(r279253 → r279256)

2021-06-24  Mark Lam  <mark.lam@apple.com>

        Use ldp and stp more for saving / restoring registers on ARM64.
        https://bugs.webkit.org/show_bug.cgi?id=227039
        rdar://79354736

        Reviewed by Saam Barati.

        This patch introduces a spooler abstraction in AssemblyHelpers. The spooler
        batches up load / store operations and emits them as pair instructions
        where appropriate.

        There are 4 spooler classes:
        a. Spooler
           - template base class for LoadRegSpooler and StoreRegSpooler.
           - encapsulates the batching strategy for load / store pairs.

        b. LoadRegSpooler - specializes Spooler to handle load pairs.
        c. StoreRegSpooler - specializes Spooler to handle store pairs.

        d. CopySpooler
           - handles matching loads with stores.
           - tries to emit loads as load pairs if possible.
           - tries to emit stores as store pairs if possible.
           - ensures that prerequisite loads are emitted before stores are emitted.
           - besides loads, also supports constants and registers as sources of values
             to be stored. This is useful in OSR exit ramps, where we may materialize a
             stack value to store from constants or registers in addition to values we
             load from the old stack frame or from a scratch buffer.

        In this patch, we also do the following:

        1. Use spoolers in many places so that we can emit load / store pairs instead of
           single loads / stores. This helps shrink JIT code size, and also potentially
           improves performance.

        2. In DFG::OSRExit::compileExit(), we used to recover constants into a scratch
           buffer, and then later, load from that scratch buffer to store into the
           new stack frame(s).

           This patch changes it so that we defer constant recovery until the final
           loop where we store the recovered value directly into the new stack frame(s).
           This saves us the work (and JIT code space) of storing into a scratch buffer
           and then reloading from the scratch buffer.

           There is one exception: tmp values used by active checkpoints. We need to call
           operationMaterializeOSRExitSideState() to materialize the active checkpoint
           side state before the final loop where we now recover constants. Hence, we
           need these tmp values recovered beforehand.

           So, we check upfront whether we have active checkpoint side state to materialize.
           If so, we eagerly recover the constants for initializing those tmps.

           We also use the CopySpooler in the final loop to emit load / store pairs for
           filling in the new stack frame(s).

           One more thing: it turns out that the vast majority of constants to be recovered
           are simply the undefined value. So, as an optimization, the final loop keeps
           the undefined value in a register, and has the spooler store directly from
           that register when appropriate. This saves on JIT code to repeatedly materialize
           the undefined JSValue constant.

        3. In reifyInlinedCallFrames(), replace the use of GPRInfo::nonArgGPR0 with
           GPRInfo::regT4. nonArgGPRs sometimes map to certain regTXs on certain ports.
           Replacing with regT4 makes it easier to ensure that we're not trashing the
           register when we use more temp registers.

           reifyInlinedCallFrames() will be using emitSaveOrCopyLLIntBaselineCalleeSavesFor()
           later, where we need more temp registers.

        4. Move the following functions to AssemblyHelpers.cpp. They don't need to be
           inline functions. Speedometer2 and JetStream2 show that making these
           non-inline does not hurt performance:

           AssemblyHelpers::emitSave(const RegisterAtOffsetList&);
           AssemblyHelpers::emitRestore(const RegisterAtOffsetList&);
           AssemblyHelpers::emitSaveCalleeSavesFor(const RegisterAtOffsetList*);
           AssemblyHelpers::emitSaveOrCopyCalleeSavesFor(...);
           AssemblyHelpers::emitRestoreCalleeSavesFor(const RegisterAtOffsetList*);
           AssemblyHelpers::copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(...);

           Also renamed emitSaveOrCopyCalleeSavesFor() to emitSaveOrCopyLLIntBaselineCalleeSavesFor()
           because it is only used with baseline codeBlocks.

        Results:
        Cumulative LinkBuffer profile sizes shrunk by ~2M in aggregate:

                                  base                            new
                                  ====                            ===
           BaselineJIT:           83827048 (79.943703 MB)  =>  83718736 (79.840408 MB)
           DFG:                   56594836 (53.973042 MB)  =>  56603508 (53.981312 MB)
           InlineCache:           33923900 (32.352352 MB)  =>  33183156 (31.645924 MB)
           FTL:                    6770956 (6.457287 MB)   =>   6568964 (6.264652 MB)
           DFGOSRExit:             5212096 (4.970642 MB)   =>   3728088 (3.555382 MB)
           CSSJIT:                  748428 (730.886719 KB) =>    748428 (730.886719 KB)
           FTLOSRExit:              692276 (676.050781 KB) =>    656884 (641.488281 KB)
           YarrJIT:                 445280 (434.843750 KB) =>    512988 (500.964844 KB)
           FTLThunk:                 22908 (22.371094 KB)  =>     22556 (22.027344 KB)
           BoundFunctionThunk:        8400 (8.203125 KB)   =>     10088 (9.851562 KB)
           ExtraCTIThunk:             6952 (6.789062 KB)   =>      6824 (6.664062 KB)
           SpecializedThunk:          4508 (4.402344 KB)   =>      4508 (4.402344 KB)
           Thunk:                     3912 (3.820312 KB)   =>      3784 (3.695312 KB)
           LLIntThunk:                2908 (2.839844 KB)   =>      2908 (2.839844 KB)
           VirtualThunk:              1248 (1.218750 KB)   =>      1248 (1.218750 KB)
           DFGThunk:                  1084 (1.058594 KB)   =>       444
           DFGOSREntry:                216                  =>       184
           JumpIsland:                   0
           WasmThunk:                    0
           Wasm:                         0
           Uncategorized:                0
           Total:                188266956 (179.545361 MB) => 185773296 (177.167221 MB)

        Speedometer2 and JetStream2 results show that performance is neutral for this
        patch (as measured on an M1 Mac):

        Speedometer2:
        ----------------------------------------------------------------------------------------------------------
        | subtest                              | ms         | ms         | b / a    | pValue (significance using |
        |                                      |            |            |          | False Discovery Rate)      |
        ----------------------------------------------------------------------------------------------------------
        | Elm-TodoMVC                          | 129.037500 | 127.212500 | 0.985857 | 0.012706 |
        | VueJS-TodoMVC                        | 28.312500  | 27.525000  | 0.972185 | 0.240315 |
        | EmberJS-TodoMVC                      | 132.550000 | 132.025000 | 0.996039 | 0.538034 |
        | Flight-TodoMVC                       | 80.762500  | 80.875000  | 1.001393 | 0.914749 |
        | BackboneJS-TodoMVC                   | 51.637500  | 51.175000  | 0.991043 | 0.285427 |
        | Preact-TodoMVC                       | 21.025000  | 22.075000  | 1.049941 | 0.206140 |
        | AngularJS-TodoMVC                    | 142.900000 | 142.887500 | 0.999913 | 0.990681 |
        | Inferno-TodoMVC                      | 69.300000  | 69.775000  | 1.006854 | 0.505201 |
        | Vanilla-ES2015-TodoMVC               | 71.500000  | 71.225000  | 0.996154 | 0.608650 |
        | Angular2-TypeScript-TodoMVC          | 43.287500  | 43.275000  | 0.999711 | 0.987926 |
        | VanillaJS-TodoMVC                    | 57.212500  | 57.812500  | 1.010487 | 0.333357 |
        | jQuery-TodoMVC                       | 276.150000 | 276.775000 | 1.002263 | 0.614404 |
        | EmberJS-Debug-TodoMVC                | 353.612500 | 352.762500 | 0.997596 | 0.518836 |
        | React-TodoMVC                        | 93.637500  | 92.637500  | 0.989321 | 0.036277 |
        | React-Redux-TodoMVC                  | 158.237500 | 156.587500 | 0.989573 | 0.042154 |
        | Vanilla-ES2015-Babel-Webpack-TodoMVC | 68.050000  | 68.087500  | 1.000551 | 0.897149 |
        ----------------------------------------------------------------------------------------------------------

        a mean = 236.26950
        b mean = 236.57964
        pValue = 0.7830785938
        (Bigger means are better.)
        1.001 times better
        Results ARE NOT significant

        JetStream2:
        ----------------------------------------------------------------------------------------------------------
        | subtest                   | pts         | pts         | b / a    | pValue (significance using          |
        |                           |             |             |          | False Discovery Rate)               |
        ----------------------------------------------------------------------------------------------------------
        | gaussian-blur             | 542.570057  | 542.671885  | 1.000188 | 0.982573 |
        | HashSet-wasm              | 57.710498   | 64.406371   | 1.116025 | 0.401424 |
        | gcc-loops-wasm            | 44.516009   | 44.453535   | 0.998597 | 0.973651 |
        | json-parse-inspector      | 241.275085  | 240.720491  | 0.997701 | 0.704732 |
        | prepack-wtb               | 62.640114   | 63.754878   | 1.017796 | 0.205840 |
        | date-format-xparb-SP      | 416.976817  | 448.921409  | 1.076610 | 0.052977 |
        | WSL                       | 1.555257    | 1.570233    | 1.009629 | 0.427924 |
        | OfflineAssembler          | 177.052352  | 179.746511  | 1.015217 | 0.112114 |
        | cdjs                      | 192.517586  | 194.598906  | 1.010811 | 0.025807 |
        | UniPoker                  | 514.023694  | 526.111500  | 1.023516 | 0.269892 |
        | json-stringify-inspector  | 227.584725  | 223.619390  | 0.982576 | 0.102714 |
        | crypto-sha1-SP            | 980.728788  | 984.192104  | 1.003531 | 0.838618 |
        | Basic                     | 685.148483  | 711.590247  | 1.038593 | 0.142952 |
        | chai-wtb                  | 106.256376  | 106.590318  | 1.003143 | 0.865894 |
        | crypto-aes-SP             | 722.308829  | 728.702310  | 1.008851 | 0.486766 |
        | Babylon                   | 655.857561  | 654.204901  | 0.997480 | 0.931520 |
        | string-unpack-code-SP     | 407.837271  | 405.710752  | 0.994786 | 0.729122 |
        | stanford-crypto-aes       | 456.906021  | 449.993856  | 0.984872 | 0.272994 |
        | raytrace                  | 883.911335  | 902.887238  | 1.021468 | 0.189785 |
        | multi-inspector-code-load | 409.997347  | 405.643639  | 0.989381 | 0.644447 |
        | hash-map                  | 593.590160  | 601.576332  | 1.013454 | 0.249414 |
        | stanford-crypto-pbkdf2    | 722.178638  | 728.283532  | 1.008453 | 0.661195 |
        | coffeescript-wtb          | 42.393544   | 41.869545   | 0.987640 | 0.197441 |
        | Box2D                     | 452.034685  | 454.104868  | 1.004580 | 0.535342 |
        | richards-wasm             | 140.873688  | 148.394050  | 1.053384 | 0.303651 |
        | lebab-wtb                 | 61.671318   | 62.119403   | 1.007266 | 0.620998 |
        | tsf-wasm                  | 108.592794  | 119.498398  | 1.100427 | 0.504710 |
        | base64-SP                 | 629.744643  | 603.425565  | 0.958207 | 0.049997 |
        | navier-stokes             | 740.588523  | 739.951662  | 0.999140 | 0.871445 |
        | jshint-wtb                | 51.938359   | 52.651104   | 1.013723 | 0.217137 |
        | regex-dna-SP              | 459.251148  | 463.492489  | 1.009235 | 0.371891 |
        | async-fs                  | 235.853820  | 236.031189  | 1.000752 | 0.938459 |
        | first-inspector-code-load | 275.298325  | 274.172125  | 0.995909 | 0.623403 |
        | segmentation              | 44.002842   | 43.445960   | 0.987344 | 0.207134 |
        | typescript                | 26.360161   | 26.458820   | 1.003743 | 0.609942 |
        | octane-code-load          | 1126.749036 | 1087.132024 | 0.964840 | 0.524171 |
        | float-mm.c                | 16.691935   | 16.721354   | 1.001762 | 0.194425 |
        | quicksort-wasm            | 461.630091  | 450.161127  | 0.975156 | 0.371394 |
        | Air                       | 392.442375  | 412.201810  | 1.050350 | 0.046887 |
        | splay                     | 510.111886  | 475.131657  | 0.931426 | 0.024732 |
        | ai-astar                  | 607.966974  | 626.573181  | 1.030604 | 0.468711 |
        | acorn-wtb                 | 67.510766   | 68.143956   | 1.009379 | 0.481663 |
        | gbemu                     | 144.133842  | 145.620304  | 1.010313 | 0.802154 |
        | richards                  | 963.475078  | 946.658879  | 0.982546 | 0.231189 |
        | 3d-cube-SP                | 549.426784  | 550.479154  | 1.001915 | 0.831307 |
        | espree-wtb                | 68.707483   | 73.762202   | 1.073569 | 0.033603 |
        | bomb-workers              | 96.882596   | 96.116121   | 0.992089 | 0.687952 |
        | tagcloud-SP               | 309.888767  | 303.538511  | 0.979508 | 0.187768 |
        | mandreel                  | 133.667031  | 135.009929  | 1.010047 | 0.075232 |
        | 3d-raytrace-SP            | 491.967649  | 492.528992  | 1.001141 | 0.957842 |
        | delta-blue                | 1066.718312 | 1080.230772 | 1.012667 | 0.549382 |
        | ML                        | 139.617293  | 140.088630  | 1.003376 | 0.661651 |
        | regexp                    | 351.773956  | 351.075935  | 0.998016 | 0.769250 |
        | crypto                    | 1510.474663 | 1519.218842 | 1.005789 | 0.638420 |
        | crypto-md5-SP             | 795.447899  | 774.082493  | 0.973140 | 0.079728 |
        | earley-boyer              | 812.574545  | 870.678372  | 1.071506 | 0.044081 |
        | octane-zlib               | 25.162470   | 25.660261   | 1.019783 | 0.554591 |
        | date-format-tofte-SP      | 395.296135  | 398.008992  | 1.006863 | 0.650475 |
        | n-body-SP                 | 1165.386611 | 1150.525110 | 0.987248 | 0.227908 |
        | pdfjs                     | 189.060252  | 191.015628  | 1.010343 | 0.633777 |
        | FlightPlanner             | 908.426192  | 903.636642  | 0.994728 | 0.838821 |
        | uglify-js-wtb             | 34.029399   | 34.164342   | 1.003965 | 0.655652 |
        | babylon-wtb               | 81.329869   | 80.855680   | 0.994170 | 0.854393 |
        | stanford-crypto-sha256    | 826.850533  | 838.494164  | 1.014082 | 0.579636 |
        ----------------------------------------------------------------------------------------------------------

        a mean = 237.91084
        b mean = 239.92670
        pValue = 0.0657710897
        (Bigger means are better.)
        1.008 times better
        Results ARE NOT significant

        * CMakeLists.txt:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/MacroAssembler.h:
        (JSC::MacroAssembler::pushToSaveByteOffset):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::pushToSaveByteOffset):
        * dfg/DFGOSRExit.cpp:
        (JSC::DFG::OSRExit::compileExit):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::reifyInlinedCallFrames):
        * dfg/DFGThunks.cpp:
        (JSC::DFG::osrExitGenerationThunkGenerator):
        * ftl/FTLSaveRestore.cpp:
        (JSC::FTL::saveAllRegisters):
        (JSC::FTL::restoreAllRegisters):
        * ftl/FTLSaveRestore.h:
        * ftl/FTLThunks.cpp:
        (JSC::FTL::genericGenerationThunkGenerator):
        (JSC::FTL::slowPathCallThunkGenerator):
        * jit/AssemblyHelpers.cpp:
        (JSC::AssemblyHelpers::restoreCalleeSavesFromEntryFrameCalleeSavesBuffer):
        (JSC::AssemblyHelpers::copyCalleeSavesToEntryFrameCalleeSavesBufferImpl):
        (JSC::AssemblyHelpers::emitSave):
        (JSC::AssemblyHelpers::emitRestore):
        (JSC::AssemblyHelpers::emitSaveCalleeSavesFor):
        (JSC::AssemblyHelpers::emitRestoreCalleeSavesFor):
        (JSC::AssemblyHelpers::copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer):
        (JSC::AssemblyHelpers::emitSaveOrCopyLLIntBaselineCalleeSavesFor):
        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer):
        (JSC::AssemblyHelpers::emitSave): Deleted.
        (JSC::AssemblyHelpers::emitRestore): Deleted.
        (JSC::AssemblyHelpers::emitSaveOrCopyCalleeSavesFor): Deleted.
        * jit/AssemblyHelpersSpoolers.h: Added.
        (JSC::AssemblyHelpers::Spooler::Spooler):
        (JSC::AssemblyHelpers::Spooler::handleGPR):
        (JSC::AssemblyHelpers::Spooler::finalizeGPR):
        (JSC::AssemblyHelpers::Spooler::handleFPR):
        (JSC::AssemblyHelpers::Spooler::finalizeFPR):
        (JSC::AssemblyHelpers::Spooler::op):
        (JSC::AssemblyHelpers::LoadRegSpooler::LoadRegSpooler):
        (JSC::AssemblyHelpers::LoadRegSpooler::loadGPR):
        (JSC::AssemblyHelpers::LoadRegSpooler::finalizeGPR):
        (JSC::AssemblyHelpers::LoadRegSpooler::loadFPR):
        (JSC::AssemblyHelpers::LoadRegSpooler::finalizeFPR):
        (JSC::AssemblyHelpers::LoadRegSpooler::handlePair):
        (JSC::AssemblyHelpers::LoadRegSpooler::handleSingle):
        (JSC::AssemblyHelpers::StoreRegSpooler::StoreRegSpooler):
        (JSC::AssemblyHelpers::StoreRegSpooler::storeGPR):
        (JSC::AssemblyHelpers::StoreRegSpooler::finalizeGPR):
        (JSC::AssemblyHelpers::StoreRegSpooler::storeFPR):
        (JSC::AssemblyHelpers::StoreRegSpooler::finalizeFPR):
        (JSC::AssemblyHelpers::StoreRegSpooler::handlePair):
        (JSC::AssemblyHelpers::StoreRegSpooler::handleSingle):
        (JSC::RegDispatch<GPRReg>::get):
        (JSC::RegDispatch<GPRReg>::temp1):
        (JSC::RegDispatch<GPRReg>::temp2):
        (JSC::RegDispatch<GPRReg>::regToStore):
        (JSC::RegDispatch<GPRReg>::invalid):
        (JSC::RegDispatch<GPRReg>::regSize):
        (JSC::RegDispatch<GPRReg>::isValidLoadPairImm):
        (JSC::RegDispatch<GPRReg>::isValidStorePairImm):
        (JSC::RegDispatch<FPRReg>::get):
        (JSC::RegDispatch<FPRReg>::temp1):
        (JSC::RegDispatch<FPRReg>::temp2):
        (JSC::RegDispatch<FPRReg>::regToStore):
        (JSC::RegDispatch<FPRReg>::invalid):
        (JSC::RegDispatch<FPRReg>::regSize):
        (JSC::RegDispatch<FPRReg>::isValidLoadPairImm):
        (JSC::RegDispatch<FPRReg>::isValidStorePairImm):
        (JSC::AssemblyHelpers::CopySpooler::Source::getReg):
        (JSC::AssemblyHelpers::CopySpooler::CopySpooler):
        (JSC::AssemblyHelpers::CopySpooler::temp1 const):
        (JSC::AssemblyHelpers::CopySpooler::temp2 const):
        (JSC::AssemblyHelpers::CopySpooler::regToStore):
        (JSC::AssemblyHelpers::CopySpooler::invalid):
        (JSC::AssemblyHelpers::CopySpooler::regSize):
        (JSC::AssemblyHelpers::CopySpooler::isValidLoadPairImm):
        (JSC::AssemblyHelpers::CopySpooler::isValidStorePairImm):
        (JSC::AssemblyHelpers::CopySpooler::load):
        (JSC::AssemblyHelpers::CopySpooler::move):
        (JSC::AssemblyHelpers::CopySpooler::copy):
        (JSC::AssemblyHelpers::CopySpooler::store):
        (JSC::AssemblyHelpers::CopySpooler::flush):
        (JSC::AssemblyHelpers::CopySpooler::loadGPR):
        (JSC::AssemblyHelpers::CopySpooler::copyGPR):
        (JSC::AssemblyHelpers::CopySpooler::moveConstant):
        (JSC::AssemblyHelpers::CopySpooler::storeGPR):
        (JSC::AssemblyHelpers::CopySpooler::finalizeGPR):
        (JSC::AssemblyHelpers::CopySpooler::loadFPR):
        (JSC::AssemblyHelpers::CopySpooler::copyFPR):
        (JSC::AssemblyHelpers::CopySpooler::storeFPR):
        (JSC::AssemblyHelpers::CopySpooler::finalizeFPR):
        (JSC::AssemblyHelpers::CopySpooler::loadPair):
        (JSC::AssemblyHelpers::CopySpooler::storePair):
        * jit/ScratchRegisterAllocator.cpp:
        (JSC::ScratchRegisterAllocator::preserveReusedRegistersByPushing):
        (JSC::ScratchRegisterAllocator::restoreReusedRegistersByPopping):
        (JSC::ScratchRegisterAllocator::preserveRegistersToStackForCall):
        (JSC::ScratchRegisterAllocator::restoreRegistersFromStackForCall):
        * jit/ScratchRegisterAllocator.h:
        * wasm/WasmAirIRGenerator.cpp:
        (JSC::Wasm::AirIRGenerator::addReturn):
        * wasm/WasmB3IRGenerator.cpp:
        (JSC::Wasm::B3IRGenerator::addReturn):

2021-06-24  Yusuke Suzuki  <ysuzuki@apple.com>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
(r278942 → r279256)

     6A38CFAA1E32B5AB0060206F /* AsyncStackTrace.h in Headers */ = {isa = PBXBuildFile; fileRef = 6A38CFA81E32B58B0060206F /* AsyncStackTrace.h */; };
     6AD2CB4D19B9140100065719 /* DebuggerEvalEnabler.h in Headers */ = {isa = PBXBuildFile; fileRef = 6AD2CB4C19B9140100065719 /* DebuggerEvalEnabler.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    6B767E7B26791F270017F8D1 /* AssemblyHelpersSpoolers.h in Headers */ = {isa = PBXBuildFile; fileRef = 6B767E7A26791F270017F8D1 /* AssemblyHelpersSpoolers.h */; settings = {ATTRIBUTES = (Private, ); }; };
     6BCCEC0425D1FA27000F391D /* VerifierSlotVisitorInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 6BCCEC0325D1FA27000F391D /* VerifierSlotVisitorInlines.h */; };
     70113D4C1A8DB093003848C4 /* IteratorOperations.h in Headers */ = {isa = PBXBuildFile; fileRef = 70113D4A1A8DB093003848C4 /* IteratorOperations.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
     6AD2CB4C19B9140100065719 /* DebuggerEvalEnabler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebuggerEvalEnabler.h; sourceTree = "<group>"; };
     6B731CC02647A8370014646F /* SlowPathCall.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = SlowPathCall.cpp; sourceTree = "<group>"; };
+    6B767E7A26791F270017F8D1 /* AssemblyHelpersSpoolers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AssemblyHelpersSpoolers.h; sourceTree = "<group>"; };
     6BA93C9590484C5BAD9316EA /* JSScriptFetcher.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSScriptFetcher.h; sourceTree = "<group>"; };
     6BCCEC0325D1FA27000F391D /* VerifierSlotVisitorInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VerifierSlotVisitorInlines.h; sourceTree = "<group>"; };
…
     0F24E53B17EA9F5900ABB217 /* AssemblyHelpers.cpp */,
     0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */,
+    6B767E7A26791F270017F8D1 /* AssemblyHelpersSpoolers.h */,
     723998F6265DBCDB0057867F /* BaselineJITPlan.cpp */,
     723998F5265DBCDB0057867F /* BaselineJITPlan.h */,
…
     534E034E1E4D4B1600213F64 /* AccessCase.h in Headers */,
     E3BFD0BC1DAF808E0065DEA2 /* AccessCaseSnippetParams.h in Headers */,
+    6B767E7B26791F270017F8D1 /* AssemblyHelpersSpoolers.h in Headers */,
     5370B4F61BF26205005C40FC /* AdaptiveInferredPropertyValueWatchpointBase.h in Headers */,
     9168BD872447BA4E0080FFB4 /* AggregateError.h in Headers */,
trunk/Source/JavaScriptCore/assembler/MacroAssembler.h
(r277936 → r279256)

     }
 
-    static ptrdiff_t pushToSaveByteOffset() { return sizeof(void*); }
+    static constexpr ptrdiff_t pushToSaveByteOffset() { return sizeof(void*); }
 #endif // !CPU(ARM64)
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
(r279249 → r279256)

     }
 
-    static ptrdiff_t pushToSaveByteOffset() { return 16; }
+    static constexpr ptrdiff_t pushToSaveByteOffset() { return 16; }
 
     // Register move operations:
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
(r278875 → r279256)

 #if ENABLE(DFG_JIT)
 
-#include "AssemblyHelpers.h"
+#include "AssemblyHelpersSpoolers.h"
 #include "BytecodeStructs.h"
 #include "CheckpointOSRExitSideState.h"
…
     // It makes things simpler later.
 
+    bool inlineStackContainsActiveCheckpoint = exit.m_codeOrigin.inlineStackContainsActiveCheckpoint();
+    size_t firstTmpToRestoreEarly = operands.size() - operands.numberOfTmps();
+    if (!inlineStackContainsActiveCheckpoint)
+        firstTmpToRestoreEarly = operands.size(); // Don't eagerly restore.
+
     // The tag registers are needed to materialize recoveries below.
     jit.emitMaterializeTagCheckRegisters();
…
         const ValueRecovery& recovery = operands[index];
 
-        switch (recovery.technique()) {
+        auto currentTechnique = recovery.technique();
+        switch (currentTechnique) {
         case DisplacedInJSStack:
 #if USE(JSVALUE64)
…
         case Constant: {
 #if USE(JSVALUE64)
-            jit.move(AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())), GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, scratch + index);
-#else
+            if (index >= firstTmpToRestoreEarly) {
+                ASSERT(operands.operandForIndex(index).isTmp());
+                jit.move(AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())), GPRInfo::regT0);
+                jit.store64(GPRInfo::regT0, scratch + index);
+            }
+#else // not USE(JSVALUE64)
             jit.store32(
                 AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
…
     jit.copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm.topEntryFrame);
 
-    if (exit.m_codeOrigin.inlineStackContainsActiveCheckpoint()) {
+    if (inlineStackContainsActiveCheckpoint) {
         EncodedJSValue* tmpScratch = scratch + operands.tmpIndex(0);
         jit.setupArguments<decltype(operationMaterializeOSRExitSideState)>(&vm, &exit, tmpScratch);
…
     // Do all data format conversions and store the results into the stack.
 
+#if USE(JSVALUE64)
+    constexpr GPRReg srcBufferGPR = GPRInfo::regT2;
+    constexpr GPRReg destBufferGPR = GPRInfo::regT3;
+    constexpr GPRReg undefinedGPR = GPRInfo::regT4;
+    bool undefinedGPRIsInitialized = false;
+
+    jit.move(CCallHelpers::TrustedImmPtr(scratch), srcBufferGPR);
+    jit.move(CCallHelpers::framePointerRegister, destBufferGPR);
+    CCallHelpers::CopySpooler spooler(CCallHelpers::CopySpooler::BufferRegs::AllowModification, jit, srcBufferGPR, destBufferGPR, GPRInfo::regT0, GPRInfo::regT1);
+#endif
     for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
…
         switch (recovery.technique()) {
+        case Constant: {
+#if USE(JSVALUE64)
+            EncodedJSValue currentConstant = JSValue::encode(recovery.constant());
+            if (currentConstant == encodedJSUndefined()) {
+                if (!undefinedGPRIsInitialized) {
+                    jit.move(CCallHelpers::TrustedImm64(encodedJSUndefined()), undefinedGPR);
+                    undefinedGPRIsInitialized = true;
+                }
+                spooler.copyGPR(undefinedGPR);
+            } else
+                spooler.moveConstant(currentConstant);
+            spooler.storeGPR(operand.virtualRegister().offset() * sizeof(CPURegister));
+            break;
+#else
+            FALLTHROUGH;
+#endif
+        }
         case DisplacedInJSStack:
         case BooleanDisplacedInJSStack:
…
         case UnboxedCellInGPR:
         case UnboxedDoubleInFPR:
-        case Constant:
         case InFPR:
 #if USE(JSVALUE64)
…
         case UnboxedStrictInt52InGPR:
         case StrictInt52DisplacedInJSStack:
-            jit.load64(scratch + index, GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            spooler.loadGPR(index * sizeof(CPURegister));
+            spooler.storeGPR(operand.virtualRegister().offset() * sizeof(CPURegister));
             break;
 #else // not USE(JSVALUE64)
…
         }
     }
+#if USE(JSVALUE64)
+    spooler.finalizeGPR();
+#endif
 
     // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
(r277680 → r279256)

     jit.untagPtr(GPRInfo::regT2, GPRInfo::regT3);
     jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->returnPCOffset() + sizeof(void*)), GPRInfo::callFrameRegister, GPRInfo::regT2);
-    jit.validateUntaggedPtr(GPRInfo::regT3, GPRInfo::nonArgGPR0);
+    jit.validateUntaggedPtr(GPRInfo::regT3, GPRInfo::regT4);
     jit.tagPtr(GPRInfo::regT2, GPRInfo::regT3);
 #endif
…
 #if CPU(ARM64E)
     jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->returnPCOffset() + sizeof(void*)), GPRInfo::callFrameRegister, GPRInfo::regT2);
-    jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget.untaggedExecutableAddress()), GPRInfo::nonArgGPR0);
-    jit.tagPtr(GPRInfo::regT2, GPRInfo::nonArgGPR0);
-    jit.storePtr(GPRInfo::nonArgGPR0, AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
+    jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget.untaggedExecutableAddress()), GPRInfo::regT4);
+    jit.tagPtr(GPRInfo::regT2, GPRInfo::regT4);
+    jit.storePtr(GPRInfo::regT4, AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
 #else
     jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget.untaggedExecutableAddress()), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
…
     // If this inlined frame is a tail call that will return back to the original caller, we need to
     // copy the prior contents of the tag registers already saved for the outer frame to this frame.
-    jit.emitSaveOrCopyCalleeSavesFor(
+    jit.emitSaveOrCopyLLIntBaselineCalleeSavesFor(
         baselineCodeBlock,
         static_cast<VirtualRegister>(inlineCallFrame->stackOffset),
         trueCaller ? AssemblyHelpers::UseExistingTagRegisterContents : AssemblyHelpers::CopyBaselineCalleeSavedRegistersFromBaseFrame,
-        GPRInfo::regT2);
+        GPRInfo::regT2, GPRInfo::regT1, GPRInfo::regT4);
 
     if (callerIsLLInt) {
trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp
(r278875 → r279256)

 #if ENABLE(DFG_JIT)
 
+#include "AssemblyHelpersSpoolers.h"
 #include "CCallHelpers.h"
 #include "DFGJITCode.h"
…
     ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);
     EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
 
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+#if CPU(ARM64)
+    constexpr GPRReg bufferGPR = CCallHelpers::memoryTempRegister;
+    constexpr unsigned firstGPR = 0;
+#elif CPU(X86_64)
+    GPRReg bufferGPR = jit.scratchRegister();
+    constexpr unsigned firstGPR = 0;
+#else
+    GPRReg bufferGPR = GPRInfo::toRegister(0);
+    constexpr unsigned firstGPR = 1;
+#endif
+
+    if constexpr (firstGPR) {
+        // We're using the firstGPR as the bufferGPR, and need to save it manually.
+        RELEASE_ASSERT(GPRInfo::numberOfRegisters >= 1);
+        RELEASE_ASSERT(bufferGPR == GPRInfo::toRegister(0));
 #if USE(JSVALUE64)
-        jit.store64(GPRInfo::toRegister(i), buffer + i);
+        jit.store64(bufferGPR, buffer);
 #else
-        jit.store32(GPRInfo::toRegister(i), buffer + i);
+        jit.store32(bufferGPR, buffer);
 #endif
     }
+
+    jit.move(CCallHelpers::TrustedImmPtr(buffer), bufferGPR);
+
+    CCallHelpers::StoreRegSpooler storeSpooler(jit, bufferGPR);
+
+    for (unsigned i = firstGPR; i < GPRInfo::numberOfRegisters; ++i) {
+        ptrdiff_t offset = i * sizeof(CPURegister);
+        storeSpooler.storeGPR({ GPRInfo::toRegister(i), offset });
+    }
+    storeSpooler.finalizeGPR();
+
     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0));
+        ptrdiff_t offset = (GPRInfo::numberOfRegisters + i) * sizeof(double);
+        storeSpooler.storeFPR({ FPRInfo::toRegister(i), offset });
     }
+    storeSpooler.finalizeFPR();
 
     // Set up one argument.
     jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
…
     MacroAssembler::Call functionCall = jit.call(OperationPtrTag);
 
+    jit.move(CCallHelpers::TrustedImmPtr(buffer), bufferGPR);
+    CCallHelpers::LoadRegSpooler loadSpooler(jit, bufferGPR);
+
+    for (unsigned i = firstGPR; i < GPRInfo::numberOfRegisters; ++i) {
+        ptrdiff_t offset = i * sizeof(CPURegister);
+        loadSpooler.loadGPR({ GPRInfo::toRegister(i), offset });
+    }
+    loadSpooler.finalizeGPR();
+
     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i));
+        ptrdiff_t offset = (GPRInfo::numberOfRegisters + i) * sizeof(double);
+        loadSpooler.loadFPR({ FPRInfo::toRegister(i), offset });
     }
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+    loadSpooler.finalizeFPR();
+
+    if constexpr (firstGPR) {
+        // We're using the firstGPR as the bufferGPR, and need to restore it manually.
+        ASSERT(bufferGPR == GPRInfo::toRegister(0));
 #if USE(JSVALUE64)
-        jit.load64(buffer + i, GPRInfo::toRegister(i));
+        jit.load64(buffer, bufferGPR);
 #else
-        jit.load32(buffer + i, GPRInfo::toRegister(i));
+        jit.load32(buffer, bufferGPR);
 #endif
     }
trunk/Source/JavaScriptCore/ftl/FTLSaveRestore.cpp
(r165293 → r279256)

 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2021 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #if ENABLE(FTL_JIT)
 
+#include "AssemblyHelpersSpoolers.h"
 #include "FPRInfo.h"
 #include "GPRInfo.h"
-#include "MacroAssembler.h"
+#include "Reg.h"
 #include "RegisterSet.h"
…
         special = RegisterSet::stackRegisters();
         special.merge(RegisterSet::reservedHardwareRegisters());
 
         first = MacroAssembler::firstRegister();
         while (special.get(first))
             first = MacroAssembler::nextRegister(first);
-        second = MacroAssembler::nextRegister(first);
-        while (special.get(second))
-            second = MacroAssembler::nextRegister(second);
     }
 
+    GPRReg nextRegister(GPRReg current)
+    {
+        auto next = MacroAssembler::nextRegister(current);
+        while (special.get(next))
+            next = MacroAssembler::nextRegister(next);
+        return next;
+    }
+
+    FPRReg nextFPRegister(FPRReg current)
+    {
+        auto next = MacroAssembler::nextFPRegister(current);
+        while (special.get(next))
+            next = MacroAssembler::nextFPRegister(next);
+        return next;
+    }
+
     RegisterSet special;
     GPRReg first;
-    GPRReg second;
 };
 
 } // anonymous namespace
 
-void saveAllRegisters(MacroAssembler& jit, char* scratchMemory)
+void saveAllRegisters(AssemblyHelpers& jit, char* scratchMemory)
 {
     Regs regs;
 
     // Get the first register out of the way, so that we can use it as a pointer.
-    jit.poke64(regs.first, 0);
-    jit.move(MacroAssembler::TrustedImmPtr(scratchMemory), regs.first);
-
+    GPRReg baseGPR = regs.first;
+#if CPU(ARM64)
+    GPRReg nextGPR = regs.nextRegister(baseGPR);
+    GPRReg firstToSaveGPR = regs.nextRegister(nextGPR);
+    ASSERT(baseGPR == ARM64Registers::x0);
+    ASSERT(nextGPR == ARM64Registers::x1);
+#else
+    GPRReg firstToSaveGPR = regs.nextRegister(baseGPR);
+#endif
+    jit.poke64(baseGPR, 0);
+    jit.move(MacroAssembler::TrustedImmPtr(scratchMemory), baseGPR);
+
+    AssemblyHelpers::StoreRegSpooler spooler(jit, baseGPR);
+
     // Get all of the other GPRs out of the way.
-    for (MacroAssembler::RegisterID reg = regs.second; reg <= MacroAssembler::lastRegister(); reg = MacroAssembler::nextRegister(reg)) {
-        if (regs.special.get(reg))
-            continue;
-        jit.store64(reg, MacroAssembler::Address(regs.first, offsetOfGPR(reg)));
-    }
+    for (GPRReg reg = firstToSaveGPR; reg <= MacroAssembler::lastRegister(); reg = regs.nextRegister(reg))
+        spooler.storeGPR({ reg, static_cast<ptrdiff_t>(offsetOfGPR(reg)) });
+    spooler.finalizeGPR();
 
     // Restore the first register into the second one and save it.
-    jit.peek64(regs.second, 0);
-    jit.store64(regs.second, MacroAssembler::Address(regs.first, offsetOfGPR(regs.first)));
+    jit.peek64(firstToSaveGPR, 0);
+#if CPU(ARM64)
+    jit.storePair64(firstToSaveGPR, nextGPR, baseGPR, AssemblyHelpers::TrustedImm32(offsetOfGPR(baseGPR)));
+#else
+    jit.store64(firstToSaveGPR, MacroAssembler::Address(baseGPR, offsetOfGPR(baseGPR)));
+#endif
 
     // Finally save all FPR's.
-    for (MacroAssembler::FPRegisterID reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = MacroAssembler::nextFPRegister(reg)) {
-        if (regs.special.get(reg))
-            continue;
-        jit.storeDouble(reg, MacroAssembler::Address(regs.first, offsetOfFPR(reg)));
-    }
+    for (MacroAssembler::FPRegisterID reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = regs.nextFPRegister(reg))
+        spooler.storeFPR({ reg, static_cast<ptrdiff_t>(offsetOfFPR(reg)) });
+    spooler.finalizeFPR();
 }
 
-void restoreAllRegisters(MacroAssembler& jit, char* scratchMemory)
+void restoreAllRegisters(AssemblyHelpers& jit, char* scratchMemory)
 {
     Regs regs;
 
     // Give ourselves a pointer to the scratch memory.
-    jit.move(MacroAssembler::TrustedImmPtr(scratchMemory), regs.first);
+    GPRReg baseGPR = regs.first;
+    jit.move(MacroAssembler::TrustedImmPtr(scratchMemory), baseGPR);
 
+    AssemblyHelpers::LoadRegSpooler spooler(jit, baseGPR);
+
     // Restore all FPR's.
131 for (MacroAssembler::FPRegisterID reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = MacroAssembler::nextFPRegister(reg)) { 132 if (regs.special.get(reg)) 133 continue; 134 jit.loadDouble(MacroAssembler::Address(regs.first, offsetOfFPR(reg)), reg); 135 } 158 for (MacroAssembler::FPRegisterID reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = regs.nextFPRegister(reg)) 159 spooler.loadFPR({ reg, static_cast<ptrdiff_t>(offsetOfFPR(reg)) }); 160 spooler.finalizeFPR(); 136 161 137 for (MacroAssembler::RegisterID reg = regs.second; reg <= MacroAssembler::lastRegister(); reg = MacroAssembler::nextRegister(reg)) { 138 if (regs.special.get(reg)) 139 continue; 140 jit.load64(MacroAssembler::Address(regs.first, offsetOfGPR(reg)), reg); 141 } 142 143 jit.load64(MacroAssembler::Address(regs.first, offsetOfGPR(regs.first)), regs.first); 162 #if CPU(ARM64) 163 GPRReg nextGPR = regs.nextRegister(baseGPR); 164 GPRReg firstToRestoreGPR = regs.nextRegister(nextGPR); 165 ASSERT(baseGPR == ARM64Registers::x0); 166 ASSERT(nextGPR == ARM64Registers::x1); 167 #else 168 GPRReg firstToRestoreGPR = regs.nextRegister(baseGPR); 169 #endif 170 for (MacroAssembler::RegisterID reg = firstToRestoreGPR; reg <= MacroAssembler::lastRegister(); reg = regs.nextRegister(reg)) 171 spooler.loadGPR({ reg, static_cast<ptrdiff_t>(offsetOfGPR(reg)) }); 172 spooler.finalizeGPR(); 173 174 #if CPU(ARM64) 175 jit.loadPair64(baseGPR, AssemblyHelpers::TrustedImm32(offsetOfGPR(baseGPR)), baseGPR, nextGPR); 176 #else 177 jit.load64(MacroAssembler::Address(baseGPR, offsetOfGPR(baseGPR)), baseGPR); 178 #endif 144 179 } 145 180 -
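The FTLSaveRestore changes above depend on two invariants: the scratch buffer places each GPR at `RegisterID * sizeof(int64_t)` (as the header comment states), and the new `Regs::nextRegister` / `nextFPRegister` helpers walk register IDs while skipping the "special" set (stack and reserved hardware registers). A simplified sketch of both, with illustrative ARM64 register counts and an illustrative special set; the real offsets come from `offsetOfGPR`/`offsetOfFPR` in `FTLSaveRestore.cpp`, and the assumption that FPRs follow the GPR block is mine, not stated in this hunk.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <set>

// Illustrative ARM64 GPR count; the real value comes from the MacroAssembler.
constexpr unsigned numGPRs = 32;

// GPRs at RegisterID * sizeof(int64_t); FPRs assumed to follow the GPR block.
constexpr size_t offsetOfGPR(unsigned id) { return id * sizeof(int64_t); }
constexpr size_t offsetOfFPR(unsigned id) { return numGPRs * sizeof(int64_t) + id * sizeof(double); }

// Analogue of Regs::nextRegister(): advance to the next register ID,
// skipping any ID in the "special" set (stack / reserved hardware registers).
unsigned nextRegister(unsigned current, const std::set<unsigned>& special)
{
    unsigned next = current + 1;
    while (special.count(next))
        ++next;
    return next;
}
```

Note how the loops in the hunk fold the old `if (regs.special.get(reg)) continue;` check into the increment itself, so the spooler only ever sees saveable registers and can pair them freely.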
trunk/Source/JavaScriptCore/ftl/FTLSaveRestore.h
r206525 r279256 1 1 /* 2 * Copyright (C) 2013 Apple Inc. All rights reserved.2 * Copyright (C) 2013-2021 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 34 34 namespace JSC { 35 35 36 class MacroAssembler;36 class AssemblyHelpers; 37 37 38 38 namespace FTL { … … 47 47 // the registers into the scratch buffer such that RegisterID * sizeof(int64_t) is the 48 48 // offset of every register. 49 void saveAllRegisters( MacroAssembler& jit, char* scratchMemory);49 void saveAllRegisters(AssemblyHelpers& jit, char* scratchMemory); 50 50 51 void restoreAllRegisters( MacroAssembler& jit, char* scratchMemory);51 void restoreAllRegisters(AssemblyHelpers& jit, char* scratchMemory); 52 52 53 53 } } // namespace JSC::FTL -
trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp
r278875 r279256 29 29 #if ENABLE(FTL_JIT) 30 30 31 #include "AssemblyHelpers .h"31 #include "AssemblyHelpersSpoolers.h" 32 32 #include "DFGOSRExitCompilerCommon.h" 33 33 #include "FTLOSRExitCompiler.h" … … 57 57 58 58 // Note that the "return address" will be the ID that we pass to the generation function. 59 60 ptrdiff_t stackMisalignment = MacroAssembler::pushToSaveByteOffset(); 61 59 60 constexpr GPRReg stackPointerRegister = MacroAssembler::stackPointerRegister; 61 constexpr GPRReg framePointerRegister = MacroAssembler::framePointerRegister; 62 constexpr ptrdiff_t pushToSaveByteOffset = MacroAssembler::pushToSaveByteOffset(); 63 ptrdiff_t stackMisalignment = pushToSaveByteOffset; 64 62 65 // Pretend that we're a C call frame. 63 jit.pushToSave( MacroAssembler::framePointerRegister);64 jit.move( MacroAssembler::stackPointerRegister, MacroAssembler::framePointerRegister);65 stackMisalignment += MacroAssembler::pushToSaveByteOffset();66 66 jit.pushToSave(framePointerRegister); 67 jit.move(stackPointerRegister, framePointerRegister); 68 stackMisalignment += pushToSaveByteOffset; 69 67 70 // Now create ourselves enough stack space to give saveAllRegisters() a scratch slot. 
68 71 unsigned numberOfRequiredPops = 0; 69 72 do { 70 jit.pushToSave(GPRInfo::regT0); 71 stackMisalignment += MacroAssembler::pushToSaveByteOffset(); 73 stackMisalignment += pushToSaveByteOffset; 72 74 numberOfRequiredPops++; 73 75 } while (stackMisalignment % stackAlignmentBytes()); 74 76 jit.subPtr(MacroAssembler::TrustedImm32(numberOfRequiredPops * pushToSaveByteOffset), stackPointerRegister); 77 75 78 ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(requiredScratchMemorySizeInBytes()); 76 79 char* buffer = static_cast<char*>(scratchBuffer->dataBuffer()); … … 78 81 saveAllRegisters(jit, buffer); 79 82 80 jit.loadPtr( GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);83 jit.loadPtr(framePointerRegister, GPRInfo::argumentGPR0); 81 84 jit.peek( 82 85 GPRInfo::argumentGPR1, 83 (stackMisalignment - MacroAssembler::pushToSaveByteOffset()) / sizeof(void*));86 (stackMisalignment - pushToSaveByteOffset) / sizeof(void*)); 84 87 jit.prepareCallOperation(vm); 85 88 MacroAssembler::Call functionCall = jit.call(OperationPtrTag); … … 93 96 94 97 // Prepare for tail call. 95 while (numberOfRequiredPops--) 96 jit.popToRestore(GPRInfo::regT1); 97 jit.popToRestore(MacroAssembler::framePointerRegister); 98 99 // When we came in here, there was an additional thing pushed to the stack. Some clients want it 100 // popped before proceeding. 101 while (extraPopsToRestore--) 102 jit.popToRestore(GPRInfo::regT1); 98 99 jit.loadPtr(MacroAssembler::Address(stackPointerRegister, numberOfRequiredPops * pushToSaveByteOffset), framePointerRegister); 100 101 // When we came in here, there was an additional thing pushed to the stack (extraPopsToRestore). 102 // Some clients want it popped before proceeding. Also add 1 for the pushToSave of the framePointerRegister. 
103 numberOfRequiredPops += 1 + extraPopsToRestore; 104 jit.addPtr(MacroAssembler::TrustedImm32(numberOfRequiredPops * pushToSaveByteOffset), stackPointerRegister); 103 105 104 106 // Put the return address wherever the return instruction wants it. On all platforms, this … … 178 180 currentOffset += sizeof(void*); 179 181 #endif 180 182 183 AssemblyHelpers::StoreRegSpooler storeSpooler(jit, MacroAssembler::stackPointerRegister); 184 181 185 for (MacroAssembler::RegisterID reg = MacroAssembler::firstRegister(); reg <= MacroAssembler::lastRegister(); reg = static_cast<MacroAssembler::RegisterID>(reg + 1)) { 182 186 if (!key.usedRegisters().get(reg)) 183 187 continue; 184 jit.storePtr(reg, AssemblyHelpers::Address(MacroAssembler::stackPointerRegister, currentOffset));188 storeSpooler.storeGPR({ reg, static_cast<ptrdiff_t>(currentOffset) }); 185 189 currentOffset += sizeof(void*); 186 190 } 187 191 storeSpooler.finalizeGPR(); 192 188 193 for (MacroAssembler::FPRegisterID reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = static_cast<MacroAssembler::FPRegisterID>(reg + 1)) { 189 194 if (!key.usedRegisters().get(reg)) 190 195 continue; 191 jit.storeDouble(reg, AssemblyHelpers::Address(MacroAssembler::stackPointerRegister, currentOffset));196 storeSpooler.storeFPR({ reg, static_cast<ptrdiff_t>(currentOffset) }); 192 197 currentOffset += sizeof(double); 193 198 } 194 199 storeSpooler.finalizeFPR(); 200 195 201 jit.preserveReturnAddressAfterCall(GPRInfo::nonArgGPR1); 196 202 jit.storePtr(GPRInfo::nonArgGPR1, AssemblyHelpers::Address(MacroAssembler::stackPointerRegister, key.offset())); … … 211 217 jit.restoreReturnAddressBeforeReturn(GPRInfo::nonPreservedNonReturnGPR); 212 218 219 AssemblyHelpers::LoadRegSpooler loadSpooler(jit, MacroAssembler::stackPointerRegister); 220 213 221 for (MacroAssembler::FPRegisterID reg = MacroAssembler::lastFPRegister(); ; reg = static_cast<MacroAssembler::FPRegisterID>(reg - 1)) { 214 222 if 
(key.usedRegisters().get(reg)) { 215 223 currentOffset -= sizeof(double); 216 jit.loadDouble(AssemblyHelpers::Address(MacroAssembler::stackPointerRegister, currentOffset), reg);224 loadSpooler.loadFPR({ reg, static_cast<ptrdiff_t>(currentOffset) }); 217 225 } 218 226 if (reg == MacroAssembler::firstFPRegister()) 219 227 break; 220 228 } 221 229 loadSpooler.finalizeFPR(); 230 222 231 for (MacroAssembler::RegisterID reg = MacroAssembler::lastRegister(); ; reg = static_cast<MacroAssembler::RegisterID>(reg - 1)) { 223 232 if (key.usedRegisters().get(reg)) { 224 233 currentOffset -= sizeof(void*); 225 jit.loadPtr(AssemblyHelpers::Address(MacroAssembler::stackPointerRegister, currentOffset), reg);234 loadSpooler.loadGPR({ reg, static_cast<ptrdiff_t>(currentOffset) }); 226 235 } 227 236 if (reg == MacroAssembler::firstRegister()) 228 237 break; 229 238 } 230 239 loadSpooler.finalizeGPR(); 240 231 241 jit.ret(); 232 242 -
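The FTLThunks hunk above reworks the prologue so the alignment padding is reserved with a single `subPtr` instead of repeated `pushToSave` calls, but the pop count is still derived from the same misalignment loop. A small sketch of that computation under stated assumptions: the incoming "return address" slot and the saved frame pointer each contribute one `pushToSaveByteOffset`, and slots are added until the total is stack-aligned. The concrete byte values are illustrative (ARM64 typically uses a 16-byte `pushToSaveByteOffset`; 8 is used in the test to exercise a nontrivial loop).

```cpp
#include <cassert>
#include <cstddef>

// Count how many pushToSave-sized slots must be reserved so that the stack
// ends up aligned, mirroring the do/while in the FTL OSR-exit thunk prologue.
unsigned requiredPops(std::ptrdiff_t pushToSaveByteOffset, unsigned stackAlignmentBytes)
{
    std::ptrdiff_t stackMisalignment = pushToSaveByteOffset; // the "return address" slot
    stackMisalignment += pushToSaveByteOffset;               // pushToSave(framePointerRegister)
    unsigned numberOfRequiredPops = 0;
    do {
        stackMisalignment += pushToSaveByteOffset;
        numberOfRequiredPops++;
    } while (stackMisalignment % stackAlignmentBytes);
    return numberOfRequiredPops;
}
```

The epilogue change is the mirror image: instead of `numberOfRequiredPops` individual `popToRestore`s, one `addPtr` of `numberOfRequiredPops * pushToSaveByteOffset` (plus the extra pops and the saved frame pointer) unwinds the whole reservation at once.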
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r279253 r279256 30 30 31 31 #include "AccessCase.h" 32 #include "AssemblyHelpersSpoolers.h" 32 33 #include "JITOperations.h" 33 34 #include "JSArrayBufferView.h" … … 607 608 GPRReg scratch = InvalidGPRReg; 608 609 unsigned scratchGPREntryIndex = 0; 609 610 // Use the first GPR entry's register as our scratch. 610 #if CPU(ARM64) 611 // We don't need a second scratch GPR, but we'll also defer restoring this 612 // GPR (in the next slot after the scratch) so that we can restore them together 613 // later using a loadPair64. 614 GPRReg unusedNextSlotGPR = InvalidGPRReg; 615 #endif 616 617 // Use the first GPR entry's register as our baseGPR. 611 618 for (unsigned i = 0; i < registerCount; i++) { 612 619 RegisterAtOffset entry = allCalleeSaves->at(i); 613 if (dontRestoreRegisters. get(entry.reg()))620 if (dontRestoreRegisters.contains(entry.reg())) 614 621 continue; 615 622 if (entry.reg().isGPR()) { 623 #if CPU(ARM64) 624 if (i + 1 < registerCount) { 625 RegisterAtOffset entry2 = allCalleeSaves->at(i + 1); 626 if (!dontRestoreRegisters.contains(entry2.reg()) 627 && entry2.reg().isGPR() 628 && entry2.offset() == entry.offset() + static_cast<ptrdiff_t>(sizeof(CPURegister))) { 629 scratchGPREntryIndex = i; 630 scratch = entry.reg().gpr(); 631 unusedNextSlotGPR = entry2.reg().gpr(); 632 break; 633 } 634 } 635 #else 616 636 scratchGPREntryIndex = i; 617 637 scratch = entry.reg().gpr(); 618 638 break; 639 #endif 619 640 } 620 641 } 621 642 ASSERT(scratch != InvalidGPRReg); 643 644 RegisterSet skipList; 645 skipList.set(dontRestoreRegisters); 646 647 // Skip the scratch register(s). We'll restore them later. 
648 skipList.add(scratch); 649 #if CPU(ARM64) 650 RELEASE_ASSERT(unusedNextSlotGPR != InvalidGPRReg); 651 skipList.add(unusedNextSlotGPR); 652 #endif 622 653 623 654 loadPtr(&topEntryFrame, scratch); 624 655 addPtr(TrustedImm32(EntryFrame::calleeSaveRegistersBufferOffset()), scratch); 625 656 657 LoadRegSpooler spooler(*this, scratch); 658 626 659 // Restore all callee saves except for the scratch. 627 for (unsigned i = 0; i < registerCount; i++) { 660 unsigned i = 0; 661 for (; i < registerCount; i++) { 628 662 RegisterAtOffset entry = allCalleeSaves->at(i); 629 if ( dontRestoreRegisters.get(entry.reg()))663 if (skipList.contains(entry.reg())) 630 664 continue; 631 if (i == scratchGPREntryIndex) 665 if (!entry.reg().isGPR()) 666 break; 667 spooler.loadGPR(entry); 668 } 669 spooler.finalizeGPR(); 670 for (; i < registerCount; i++) { 671 RegisterAtOffset entry = allCalleeSaves->at(i); 672 if (skipList.contains(entry.reg())) 632 673 continue; 633 loadReg(Address(scratch, entry.offset()), entry.reg()); 634 } 674 ASSERT(!entry.reg().isGPR()); 675 spooler.loadFPR(entry); 676 } 677 spooler.finalizeFPR(); 635 678 636 679 // Restore the callee save value of the scratch. 
… … 639 682 ASSERT(entry.reg().isGPR()); 640 683 ASSERT(scratch == entry.reg().gpr()); 641 loadReg(Address(scratch, entry.offset()), scratch); 684 #if CPU(ARM64) 685 RegisterAtOffset entry2 = allCalleeSaves->at(scratchGPREntryIndex + 1); 686 ASSERT_UNUSED(entry2, !dontRestoreRegisters.get(entry2.reg())); 687 ASSERT(entry2.reg().isGPR()); 688 ASSERT(unusedNextSlotGPR == entry2.reg().gpr()); 689 loadPair64(scratch, TrustedImm32(entry.offset()), scratch, unusedNextSlotGPR); 690 #else 691 loadPtr(Address(scratch, entry.offset()), scratch); 692 #endif 693 642 694 #else 643 695 UNUSED_PARAM(topEntryFrame); 644 #endif 696 #endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 645 697 } 646 698 … … 1000 1052 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 1001 1053 unsigned registerCount = allCalleeSaves->size(); 1002 1003 for (unsigned i = 0; i < registerCount; i++) { 1054 1055 StoreRegSpooler spooler(*this, calleeSavesBuffer); 1056 1057 unsigned i = 0; 1058 for (; i < registerCount; i++) { 1004 1059 RegisterAtOffset entry = allCalleeSaves->at(i); 1005 if (dontCopyRegisters. 
get(entry.reg()))1060 if (dontCopyRegisters.contains(entry.reg())) 1006 1061 continue; 1007 storeReg(entry.reg(), Address(calleeSavesBuffer, entry.offset())); 1008 } 1062 if (!entry.reg().isGPR()) 1063 break; 1064 spooler.storeGPR(entry); 1065 } 1066 spooler.finalizeGPR(); 1067 for (; i < registerCount; i++) { 1068 RegisterAtOffset entry = allCalleeSaves->at(i); 1069 if (dontCopyRegisters.contains(entry.reg())) 1070 continue; 1071 spooler.storeFPR(entry); 1072 } 1073 spooler.finalizeFPR(); 1074 1009 1075 #else 1010 1076 UNUSED_PARAM(calleeSavesBuffer); … … 1104 1170 } 1105 1171 1172 void AssemblyHelpers::emitSave(const RegisterAtOffsetList& list) 1173 { 1174 StoreRegSpooler spooler(*this, framePointerRegister); 1175 1176 size_t listSize = list.size(); 1177 size_t i = 0; 1178 for (; i < listSize; i++) { 1179 auto entry = list.at(i); 1180 if (!entry.reg().isGPR()) 1181 break; 1182 spooler.storeGPR(entry); 1183 } 1184 spooler.finalizeGPR(); 1185 1186 for (; i < listSize; i++) 1187 spooler.storeFPR(list.at(i)); 1188 spooler.finalizeFPR(); 1189 } 1190 1191 void AssemblyHelpers::emitRestore(const RegisterAtOffsetList& list) 1192 { 1193 LoadRegSpooler spooler(*this, framePointerRegister); 1194 1195 size_t listSize = list.size(); 1196 size_t i = 0; 1197 for (; i < listSize; i++) { 1198 auto entry = list.at(i); 1199 if (!entry.reg().isGPR()) 1200 break; 1201 spooler.loadGPR(entry); 1202 } 1203 spooler.finalizeGPR(); 1204 1205 for (; i < listSize; i++) 1206 spooler.loadFPR(list.at(i)); 1207 spooler.finalizeFPR(); 1208 } 1209 1210 void AssemblyHelpers::emitSaveCalleeSavesFor(const RegisterAtOffsetList* calleeSaves) 1211 { 1212 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters()); 1213 unsigned registerCount = calleeSaves->size(); 1214 1215 StoreRegSpooler spooler(*this, framePointerRegister); 1216 1217 unsigned i = 0; 1218 for (; i < registerCount; i++) { 1219 RegisterAtOffset entry = calleeSaves->at(i); 1220 if (entry.reg().isFPR()) 1221 break; 1222 if 
(dontSaveRegisters.contains(entry.reg())) 1223 continue; 1224 spooler.storeGPR(entry); 1225 } 1226 spooler.finalizeGPR(); 1227 for (; i < registerCount; i++) { 1228 RegisterAtOffset entry = calleeSaves->at(i); 1229 if (dontSaveRegisters.contains(entry.reg())) 1230 continue; 1231 spooler.storeFPR(entry); 1232 } 1233 spooler.finalizeFPR(); 1234 } 1235 1236 void AssemblyHelpers::emitRestoreCalleeSavesFor(const RegisterAtOffsetList* calleeSaves) 1237 { 1238 RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters()); 1239 unsigned registerCount = calleeSaves->size(); 1240 1241 LoadRegSpooler spooler(*this, framePointerRegister); 1242 1243 unsigned i = 0; 1244 for (; i < registerCount; i++) { 1245 RegisterAtOffset entry = calleeSaves->at(i); 1246 if (entry.reg().isFPR()) 1247 break; 1248 if (dontRestoreRegisters.get(entry.reg())) 1249 continue; 1250 spooler.loadGPR(entry); 1251 } 1252 spooler.finalizeGPR(); 1253 for (; i < registerCount; i++) { 1254 RegisterAtOffset entry = calleeSaves->at(i); 1255 if (dontRestoreRegisters.get(entry.reg())) 1256 continue; 1257 spooler.loadFPR(entry); 1258 } 1259 spooler.finalizeFPR(); 1260 } 1261 1262 void AssemblyHelpers::copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(EntryFrame*& topEntryFrame, const TempRegisterSet& usedRegisters) 1263 { 1264 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 1265 // Copy saved calleeSaves on stack or unsaved calleeSaves in register to vm calleeSave buffer 1266 GPRReg destBufferGPR = usedRegisters.getFreeGPR(0); 1267 GPRReg temp1 = usedRegisters.getFreeGPR(1); 1268 FPRReg fpTemp1 = usedRegisters.getFreeFPR(0); 1269 GPRReg temp2 = isARM64() ? usedRegisters.getFreeGPR(2) : InvalidGPRReg; 1270 FPRReg fpTemp2 = isARM64() ? 
usedRegisters.getFreeFPR(1) : InvalidFPRReg; 1271 1272 loadPtr(&topEntryFrame, destBufferGPR); 1273 addPtr(TrustedImm32(EntryFrame::calleeSaveRegistersBufferOffset()), destBufferGPR); 1274 1275 CopySpooler spooler(*this, framePointerRegister, destBufferGPR, temp1, temp2, fpTemp1, fpTemp2); 1276 1277 RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets(); 1278 const RegisterAtOffsetList* currentCalleeSaves = &RegisterAtOffsetList::llintBaselineCalleeSaveRegisters(); 1279 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 1280 unsigned registerCount = allCalleeSaves->size(); 1281 1282 unsigned i = 0; 1283 for (; i < registerCount; i++) { 1284 RegisterAtOffset entry = allCalleeSaves->at(i); 1285 if (dontCopyRegisters.contains(entry.reg())) 1286 continue; 1287 RegisterAtOffset* currentFrameEntry = currentCalleeSaves->find(entry.reg()); 1288 1289 if (!entry.reg().isGPR()) 1290 break; 1291 if (currentFrameEntry) 1292 spooler.loadGPR(currentFrameEntry->offset()); 1293 else 1294 spooler.copyGPR(entry.reg().gpr()); 1295 spooler.storeGPR(entry.offset()); 1296 } 1297 spooler.finalizeGPR(); 1298 1299 for (; i < registerCount; i++) { 1300 RegisterAtOffset entry = allCalleeSaves->at(i); 1301 if (dontCopyRegisters.get(entry.reg())) 1302 continue; 1303 RegisterAtOffset* currentFrameEntry = currentCalleeSaves->find(entry.reg()); 1304 1305 RELEASE_ASSERT(entry.reg().isFPR()); 1306 if (currentFrameEntry) 1307 spooler.loadFPR(currentFrameEntry->offset()); 1308 else 1309 spooler.copyFPR(entry.reg().fpr()); 1310 spooler.storeFPR(entry.offset()); 1311 } 1312 spooler.finalizeFPR(); 1313 1314 #else 1315 UNUSED_PARAM(topEntryFrame); 1316 UNUSED_PARAM(usedRegisters); 1317 #endif 1318 } 1319 1320 void AssemblyHelpers::emitSaveOrCopyLLIntBaselineCalleeSavesFor(CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, RestoreTagRegisterMode tagRegisterMode, GPRReg temp1, GPRReg temp2, GPRReg temp3) 1321 { 1322 ASSERT_UNUSED(codeBlock, codeBlock); 1323 
ASSERT(JITCode::isBaselineCode(codeBlock->jitType())); 1324 ASSERT(codeBlock->calleeSaveRegisters() == &RegisterAtOffsetList::llintBaselineCalleeSaveRegisters()); 1325 1326 const RegisterAtOffsetList* calleeSaves = &RegisterAtOffsetList::llintBaselineCalleeSaveRegisters(); 1327 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters()); 1328 unsigned registerCount = calleeSaves->size(); 1329 1330 GPRReg dstBufferGPR = temp1; 1331 addPtr(TrustedImm32(offsetVirtualRegister.offsetInBytes()), framePointerRegister, dstBufferGPR); 1332 1333 CopySpooler spooler(*this, framePointerRegister, dstBufferGPR, temp2, temp3); 1334 1335 for (unsigned i = 0; i < registerCount; i++) { 1336 RegisterAtOffset entry = calleeSaves->at(i); 1337 if (dontSaveRegisters.get(entry.reg())) 1338 continue; 1339 RELEASE_ASSERT(entry.reg().isGPR()); 1340 1341 #if USE(JSVALUE32_64) 1342 UNUSED_PARAM(tagRegisterMode); 1343 #else 1344 if (tagRegisterMode == CopyBaselineCalleeSavedRegistersFromBaseFrame) 1345 spooler.loadGPR(entry.offset()); 1346 else 1347 #endif 1348 spooler.copyGPR(entry.reg().gpr()); 1349 spooler.storeGPR(entry.offset()); 1350 } 1351 spooler.finalizeGPR(); 1352 } 1353 1106 1354 } // namespace JSC 1107 1355 -
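The `copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer` rewrite above is the first use of the `CopySpooler`, which per the ChangeLog "handles matching loads with stores" and "ensures that pre-requisite loads are emitted before stores are emitted", with registers and constants also allowed as sources. A hypothetical, much-simplified model of that contract follows; the real class additionally batches adjacent accesses into ldp/stp pairs, which is omitted here, and the emitted strings are purely illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Each destination slot's value comes from a source-frame load, an existing
// register, or a materialized constant; any prerequisite load is emitted
// before the matching store.
struct MiniCopySpooler {
    std::vector<std::string> emitted;
    std::string pendingSource;
    bool pendingNeedsLoad = false;
    std::ptrdiff_t pendingLoadOffset = 0;

    // Source is a slot in the source frame: a load into a temp is required.
    void loadGPR(std::ptrdiff_t srcOffset)
    {
        pendingSource = "temp";
        pendingNeedsLoad = true;
        pendingLoadOffset = srcOffset;
    }

    // Source is already live in a register: no load required.
    void copyGPR(const std::string& reg)
    {
        pendingSource = reg;
        pendingNeedsLoad = false;
    }

    void storeGPR(std::ptrdiff_t dstOffset)
    {
        if (pendingNeedsLoad) // the prerequisite load precedes the store
            emitted.push_back("load temp <- [src, #" + std::to_string(pendingLoadOffset) + "]");
        emitted.push_back("store " + pendingSource + " -> [dst, #" + std::to_string(dstOffset) + "]");
    }
};
```

This load/copy-then-store pairing is exactly the shape of the loops above: `spooler.loadGPR(currentFrameEntry->offset())` when the callee save was spilled to the frame, `spooler.copyGPR(entry.reg().gpr())` when it is still live in its register, followed by `spooler.storeGPR(entry.offset())` either way.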
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r279049 r279256 296 296 #endif 297 297 } 298 298 299 template<typename Op> class Spooler; 300 class LoadRegSpooler; 301 class StoreRegSpooler; 302 class CopySpooler; 303 299 304 Address addressFor(const RegisterAtOffset& entry) 300 305 { 301 306 return Address(GPRInfo::callFrameRegister, entry.offset()); 302 307 } 303 304 void emitSave(const RegisterAtOffsetList& list) 305 { 306 for (const RegisterAtOffset& entry : list) { 307 if (entry.reg().isGPR()) 308 storePtr(entry.reg().gpr(), addressFor(entry)); 309 else 310 storeDouble(entry.reg().fpr(), addressFor(entry)); 311 } 312 } 313 314 void emitRestore(const RegisterAtOffsetList& list) 315 { 316 for (const RegisterAtOffset& entry : list) { 317 if (entry.reg().isGPR()) 318 loadPtr(addressFor(entry), entry.reg().gpr()); 319 else 320 loadDouble(addressFor(entry), entry.reg().fpr()); 321 } 322 } 308 309 void emitSave(const RegisterAtOffsetList&); 310 void emitRestore(const RegisterAtOffsetList&); 323 311 324 312 void emitSaveCalleeSavesFor(CodeBlock* codeBlock) … … 327 315 328 316 const RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 329 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters()); 330 unsigned registerCount = calleeSaves->size(); 331 332 for (unsigned i = 0; i < registerCount; i++) { 333 RegisterAtOffset entry = calleeSaves->at(i); 334 if (dontSaveRegisters.get(entry.reg())) 335 continue; 336 storeReg(entry.reg(), Address(framePointerRegister, entry.offset())); 337 } 338 } 317 emitSaveCalleeSavesFor(calleeSaves); 318 } 319 320 void emitSaveCalleeSavesFor(const RegisterAtOffsetList* calleeSaves); 339 321 340 322 enum RestoreTagRegisterMode { UseExistingTagRegisterContents, CopyBaselineCalleeSavedRegistersFromBaseFrame }; 341 323 342 void emitSaveOrCopyCalleeSavesFor(CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, RestoreTagRegisterMode tagRegisterMode, GPRReg temp) 343 { 344 ASSERT(codeBlock); 345 ASSERT(JITCode::isBaselineCode(codeBlock->jitType())); 346 
347 const RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters(); 348 RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters()); 349 unsigned registerCount = calleeSaves->size(); 350 351 #if USE(JSVALUE64) 352 RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters(); 353 #endif 354 355 for (unsigned i = 0; i < registerCount; i++) { 356 RegisterAtOffset entry = calleeSaves->at(i); 357 if (dontSaveRegisters.get(entry.reg())) 358 continue; 359 RELEASE_ASSERT(entry.reg().isGPR()); 360 361 GPRReg registerToWrite; 362 363 #if USE(JSVALUE32_64) 364 UNUSED_PARAM(tagRegisterMode); 365 UNUSED_PARAM(temp); 366 #else 367 if (tagRegisterMode == CopyBaselineCalleeSavedRegistersFromBaseFrame && baselineCalleeSaves.get(entry.reg())) { 368 registerToWrite = temp; 369 loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, entry.offset()), registerToWrite); 370 } else 371 #endif 372 registerToWrite = entry.reg().gpr(); 373 374 storePtr(registerToWrite, Address(framePointerRegister, offsetVirtualRegister.offsetInBytes() + entry.offset())); 375 } 376 } 324 void emitSaveOrCopyLLIntBaselineCalleeSavesFor(CodeBlock*, VirtualRegister offsetVirtualRegister, RestoreTagRegisterMode, GPRReg temp1, GPRReg temp2, GPRReg temp3); 377 325 378 326 void emitRestoreCalleeSavesFor(CodeBlock* codeBlock) … … 384 332 } 385 333 386 void emitRestoreCalleeSavesFor(const RegisterAtOffsetList* calleeSaves) 387 { 388 RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters()); 389 unsigned registerCount = calleeSaves->size(); 390 391 for (unsigned i = 0; i < registerCount; i++) { 392 RegisterAtOffset entry = calleeSaves->at(i); 393 if (dontRestoreRegisters.get(entry.reg())) 394 continue; 395 loadReg(Address(framePointerRegister, entry.offset()), entry.reg()); 396 } 397 } 334 void emitRestoreCalleeSavesFor(const RegisterAtOffsetList* calleeSaves); 398 335 399 336 void emitSaveCalleeSaves() … … 466 403 void 
restoreCalleeSavesFromEntryFrameCalleeSavesBuffer(EntryFrame*&); 467 404 468 void copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(EntryFrame*& topEntryFrame, const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() }) 469 { 470 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0 471 GPRReg temp1 = usedRegisters.getFreeGPR(0); 472 GPRReg temp2 = usedRegisters.getFreeGPR(1); 473 FPRReg fpTemp = usedRegisters.getFreeFPR(); 474 ASSERT(temp2 != InvalidGPRReg); 475 476 // Copy saved calleeSaves on stack or unsaved calleeSaves in register to vm calleeSave buffer 477 loadPtr(&topEntryFrame, temp1); 478 addPtr(TrustedImm32(EntryFrame::calleeSaveRegistersBufferOffset()), temp1); 479 480 RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets(); 481 const RegisterAtOffsetList* currentCalleeSaves = &RegisterAtOffsetList::llintBaselineCalleeSaveRegisters(); 482 RegisterSet dontCopyRegisters = RegisterSet::stackRegisters(); 483 unsigned registerCount = allCalleeSaves->size(); 484 485 for (unsigned i = 0; i < registerCount; i++) { 486 RegisterAtOffset entry = allCalleeSaves->at(i); 487 if (dontCopyRegisters.get(entry.reg())) 488 continue; 489 RegisterAtOffset* currentFrameEntry = currentCalleeSaves->find(entry.reg()); 490 491 if (entry.reg().isGPR()) { 492 GPRReg regToStore; 493 if (currentFrameEntry) { 494 // Load calleeSave from stack into temp register 495 regToStore = temp2; 496 loadPtr(Address(framePointerRegister, currentFrameEntry->offset()), regToStore); 497 } else 498 // Just store callee save directly 499 regToStore = entry.reg().gpr(); 500 501 storePtr(regToStore, Address(temp1, entry.offset())); 502 } else { 503 FPRReg fpRegToStore; 504 if (currentFrameEntry) { 505 // Load calleeSave from stack into temp register 506 fpRegToStore = fpTemp; 507 loadDouble(Address(framePointerRegister, currentFrameEntry->offset()), fpRegToStore); 508 } else 509 // Just store callee save directly 510 fpRegToStore = 
entry.reg().fpr(); 511 512 storeDouble(fpRegToStore, Address(temp1, entry.offset())); 513 } 514 } 515 #else 516 UNUSED_PARAM(topEntryFrame); 517 UNUSED_PARAM(usedRegisters); 518 #endif 519 } 405 void copyLLIntBaselineCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(EntryFrame*&, const TempRegisterSet& usedRegisters = { RegisterSet::stubUnavailableRegisters() }); 520 406 521 407 void emitMaterializeTagCheckRegisters() -
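Several routines in this patch (`emitSave`, `emitRestore`, `emitSaveCalleeSavesFor`, and the restore path above) share one loop shape: iterate the `RegisterAtOffsetList` with a first loop that spools GPRs until it meets an FPR, call `finalizeGPR()`, then spool the remaining FPRs. This only works under the assumption, implicit in the hunks, that the list sorts all GPR entries before all FPR entries. A minimal sketch of that two-phase iteration (counting entries in place of real spooler calls):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Stand-in for RegisterAtOffset: which bank the entry is in, and its offset.
struct Entry { bool isGPR; std::ptrdiff_t offset; };

// Assumes GPR entries precede FPR entries, as the patch's loops do.
std::pair<unsigned, unsigned> spoolInTwoPhases(const std::vector<Entry>& list)
{
    unsigned gprs = 0, fprs = 0;
    size_t i = 0;
    for (; i < list.size(); ++i) {
        if (!list[i].isGPR)
            break; // first FPR: every GPR has been seen
        ++gprs;    // spooler.storeGPR(list[i]) in the real code
    }
    // spooler.finalizeGPR() would flush any unpaired GPR access here.
    for (; i < list.size(); ++i)
        ++fprs;    // spooler.storeFPR(list[i])
    // spooler.finalizeFPR()
    return { gprs, fprs };
}
```

Splitting the banks matters because GPR and FPR pair instructions cannot be mixed: finalizing between the phases guarantees no GPR access is left buffered when FPR spooling begins.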
trunk/Source/JavaScriptCore/jit/ScratchRegisterAllocator.cpp
r278875 r279256 1 1 /* 2 * Copyright (C) 2014-20 17Apple Inc. All rights reserved.2 * Copyright (C) 2014-2021 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 29 29 #if ENABLE(JIT) 30 30 31 #include "AssemblyHelpersSpoolers.h" 31 32 #include "MaxFrameExtentForSlowPathCall.h" 32 33 #include "VM.h" … … 102 103 FPRReg ScratchRegisterAllocator::allocateScratchFPR() { return allocateScratch<FPRInfo>(); } 103 104 104 ScratchRegisterAllocator::PreservedState ScratchRegisterAllocator::preserveReusedRegistersByPushing( MacroAssembler& jit, ExtraStackSpace extraStackSpace)105 ScratchRegisterAllocator::PreservedState ScratchRegisterAllocator::preserveReusedRegistersByPushing(AssemblyHelpers& jit, ExtraStackSpace extraStackSpace) 105 106 { 106 107 if (!didReuseRegisters()) … … 125 126 } 126 127 127 void ScratchRegisterAllocator::restoreReusedRegistersByPopping( MacroAssembler& jit, const ScratchRegisterAllocator::PreservedState& preservedState)128 void ScratchRegisterAllocator::restoreReusedRegistersByPopping(AssemblyHelpers& jit, const ScratchRegisterAllocator::PreservedState& preservedState) 128 129 { 129 130 RELEASE_ASSERT(preservedState); … … 162 163 } 163 164 164 unsigned ScratchRegisterAllocator::preserveRegistersToStackForCall( MacroAssembler& jit, const RegisterSet& usedRegisters, unsigned extraBytesAtTopOfStack)165 unsigned ScratchRegisterAllocator::preserveRegistersToStackForCall(AssemblyHelpers& jit, const RegisterSet& usedRegisters, unsigned extraBytesAtTopOfStack) 165 166 { 166 167 RELEASE_ASSERT(extraBytesAtTopOfStack % sizeof(void*) == 0); … … 175 176 MacroAssembler::stackPointerRegister); 176 177 178 AssemblyHelpers::StoreRegSpooler spooler(jit, MacroAssembler::stackPointerRegister); 179 177 180 unsigned count = 0; 178 181 for (GPRReg reg = MacroAssembler::firstRegister(); reg <= MacroAssembler::lastRegister(); reg = MacroAssembler::nextRegister(reg)) { 179 182 if (usedRegisters.get(reg)) { 180 
-            jit.storePtr(reg, MacroAssembler::Address(MacroAssembler::stackPointerRegister, extraBytesAtTopOfStack + (count * sizeof(EncodedJSValue))));
-            count++;
-        }
-    }
+            spooler.storeGPR({ reg, static_cast<ptrdiff_t>(extraBytesAtTopOfStack + (count * sizeof(EncodedJSValue))) });
+            count++;
+        }
+    }
+    spooler.finalizeGPR();
+
     for (FPRReg reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = MacroAssembler::nextFPRegister(reg)) {
         if (usedRegisters.get(reg)) {
-            jit.storeDouble(reg, MacroAssembler::Address(MacroAssembler::stackPointerRegister, extraBytesAtTopOfStack + (count * sizeof(EncodedJSValue))));
-            count++;
-        }
-    }
+            spooler.storeFPR({ reg, static_cast<ptrdiff_t>(extraBytesAtTopOfStack + (count * sizeof(EncodedJSValue))) });
+            count++;
+        }
+    }
+    spooler.finalizeFPR();

     RELEASE_ASSERT(count == usedRegisters.numberOfSetRegisters());
…
 }

-void ScratchRegisterAllocator::restoreRegistersFromStackForCall(MacroAssembler& jit, const RegisterSet& usedRegisters, const RegisterSet& ignore, unsigned numberOfStackBytesUsedForRegisterPreservation, unsigned extraBytesAtTopOfStack)
+void ScratchRegisterAllocator::restoreRegistersFromStackForCall(AssemblyHelpers& jit, const RegisterSet& usedRegisters, const RegisterSet& ignore, unsigned numberOfStackBytesUsedForRegisterPreservation, unsigned extraBytesAtTopOfStack)
 {
     RELEASE_ASSERT(extraBytesAtTopOfStack % sizeof(void*) == 0);
…
     }

+    AssemblyHelpers::LoadRegSpooler spooler(jit, MacroAssembler::stackPointerRegister);
+
     unsigned count = 0;
     for (GPRReg reg = MacroAssembler::firstRegister(); reg <= MacroAssembler::lastRegister(); reg = MacroAssembler::nextRegister(reg)) {
         if (usedRegisters.get(reg)) {
             if (!ignore.get(reg))
-                jit.loadPtr(MacroAssembler::Address(MacroAssembler::stackPointerRegister, extraBytesAtTopOfStack + (sizeof(EncodedJSValue) * count)), reg);
-            count++;
-        }
-    }
+                spooler.loadGPR({ reg, static_cast<ptrdiff_t>(extraBytesAtTopOfStack + (sizeof(EncodedJSValue) * count)) });
+            count++;
+        }
+    }
+    spooler.finalizeGPR();
+
     for (FPRReg reg = MacroAssembler::firstFPRegister(); reg <= MacroAssembler::lastFPRegister(); reg = MacroAssembler::nextFPRegister(reg)) {
         if (usedRegisters.get(reg)) {
             if (!ignore.get(reg))
-                jit.loadDouble(MacroAssembler::Address(MacroAssembler::stackPointerRegister, extraBytesAtTopOfStack + (sizeof(EncodedJSValue) * count)), reg);
-            count++;
-        }
-    }
+                spooler.loadFPR({ reg, static_cast<ptrdiff_t>(extraBytesAtTopOfStack + (sizeof(EncodedJSValue) * count)) });
+            count++;
+        }
+    }
+    spooler.finalizeFPR();

     unsigned stackOffset = (usedRegisters.numberOfSetRegisters()) * sizeof(EncodedJSValue);
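The diff above replaces individual stores and loads with spooler calls plus a `finalize*()` at the end of each loop. The batching idea can be illustrated with a small standalone sketch: each request is deferred until the next one arrives; two requests at adjacent stack offsets are merged into a single pair operation (modeling ARM64 `stp`), and a straggler is flushed as a single store. This is a hypothetical simplification for illustration only — the names `StoreSpooler`, `RegOffset`, and the string output are invented here, and the real `StoreRegSpooler` in `AssemblyHelpers` emits actual instructions through the macro assembler.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// A (register, stack offset) store request, mirroring the { reg, offset }
// pairs passed to spooler.storeGPR(...) in the patch.
struct RegOffset {
    int reg;
    int64_t offset;
};

// Hypothetical model of the spooler's batching strategy: hold one pending
// request; if the next request's offset is exactly one slot (8 bytes) above
// it, emit a pair op ("stp"); otherwise flush the pending one as a single
// op ("str"). finalize() flushes any leftover request.
class StoreSpooler {
public:
    explicit StoreSpooler(std::vector<std::string>& out) : m_out(out) { }

    void store(RegOffset op)
    {
        if (m_pending && op.offset == m_pending->offset + 8) {
            m_out.push_back("stp r" + std::to_string(m_pending->reg)
                + ", r" + std::to_string(op.reg)
                + ", [sp, #" + std::to_string(m_pending->offset) + "]");
            m_pending.reset();
            return;
        }
        flush();
        m_pending = op;
    }

    void finalize() { flush(); }

private:
    void flush()
    {
        if (m_pending) {
            m_out.push_back("str r" + std::to_string(m_pending->reg)
                + ", [sp, #" + std::to_string(m_pending->offset) + "]");
            m_pending.reset();
        }
    }

    std::vector<std::string>& m_out;
    std::optional<RegOffset> m_pending;
};
```

Under this model, storing r0 at offset 0, r1 at 8, and r2 at 16 yields one `stp` for the adjacent pair followed by one `str` for the leftover — three single stores shrink to two instructions, which is the JIT code-size win the ChangeLog describes.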
trunk/Source/JavaScriptCore/jit/ScratchRegisterAllocator.h
r278875 r279256

 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2021 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #if ENABLE(JIT)

-#include "MacroAssembler.h"
 #include "RegisterSet.h"
 #include "TempRegisterSet.h"
…
 namespace JSC {

+class AssemblyHelpers;
 struct ScratchBuffer;
…
-    PreservedState preserveReusedRegistersByPushing(MacroAssembler& jit, ExtraStackSpace);
-    void restoreReusedRegistersByPopping(MacroAssembler& jit, const PreservedState&);
+    PreservedState preserveReusedRegistersByPushing(AssemblyHelpers& jit, ExtraStackSpace);
+    void restoreReusedRegistersByPopping(AssemblyHelpers& jit, const PreservedState&);

     RegisterSet usedRegistersForCall() const;

     unsigned desiredScratchBufferSizeForCall() const;

-    static unsigned preserveRegistersToStackForCall(MacroAssembler& jit, const RegisterSet& usedRegisters, unsigned extraPaddingInBytes);
-    static void restoreRegistersFromStackForCall(MacroAssembler& jit, const RegisterSet& usedRegisters, const RegisterSet& ignore, unsigned numberOfStackBytesUsedForRegisterPreservation, unsigned extraPaddingInBytes);
+    static unsigned preserveRegistersToStackForCall(AssemblyHelpers& jit, const RegisterSet& usedRegisters, unsigned extraPaddingInBytes);
+    static void restoreRegistersFromStackForCall(AssemblyHelpers& jit, const RegisterSet& usedRegisters, const RegisterSet& ignore, unsigned numberOfStackBytesUsedForRegisterPreservation, unsigned extraPaddingInBytes);

 private:
trunk/Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp
r278338 r279256

 /*
- * Copyright (C) 2019-2020 Apple Inc. All rights reserved.
+ * Copyright (C) 2019-2021 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
         patch->setGenerator([] (CCallHelpers& jit, const B3::StackmapGenerationParams& params) {
             auto calleeSaves = params.code().calleeSaveRegisterAtOffsetList();
-
-            for (RegisterAtOffset calleeSave : calleeSaves)
-                jit.load64ToReg(CCallHelpers::Address(GPRInfo::callFrameRegister, calleeSave.offset()), calleeSave.reg());
-
+            jit.emitRestore(calleeSaves);
             jit.emitFunctionEpilogue();
             jit.ret();
trunk/Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp
r278340 r279256

 /*
- * Copyright (C) 2016-2020 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2021 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
         patch->setGenerator([] (CCallHelpers& jit, const B3::StackmapGenerationParams& params) {
             auto calleeSaves = params.code().calleeSaveRegisterAtOffsetList();
-
-            for (RegisterAtOffset calleeSave : calleeSaves)
-                jit.load64ToReg(CCallHelpers::Address(GPRInfo::callFrameRegister, calleeSave.offset()), calleeSave.reg());
-
+            jit.emitRestore(calleeSaves);
             jit.emitFunctionEpilogue();
             jit.ret();