Changeset 161927 in webkit
- Timestamp: Jan 13, 2014, 5:08:05 PM
- Location: branches/jsCStack/Source/JavaScriptCore
- Files: 14 edited
branches/jsCStack/Source/JavaScriptCore/ChangeLog
r161913 → r161927

2014-01-13  Mark Lam  <mark.lam@apple.com>

        CStack: Fix 64-bit C Loop LLINT.
        https://bugs.webkit.org/show_bug.cgi?id=126790

        Reviewed by Geoffrey Garen.

        1. Fixed miscellaneous bugs relevant to the C Loop LLINT (details below).

        2. Simplified CLoop::execute() by making it emulate CPU calls more
        closely. This is done by automatically synthesizing an opcode label at
        the return point after the call to JS code. The "lr" register (named
        after the ARM link register) is set to that return opcode label before
        the call. The call itself is implemented as an opcode dispatch.

        * heap/Heap.cpp:
        (JSC::Heap::markRoots):
        - Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.

        * interpreter/JSStack.cpp:
        (JSC::JSStack::gatherConservativeRoots):
        - Previously, we were declaring a span from baseOfStack() to
        topOfStack(). baseOfStack() points to the highest slot in the stack.
        topOfStack() points to the word below the lowest slot in the stack.
        The ConservativeRoots class will invert the high and low pointers to
        ensure that it iterates from low to high. However, with this span, the
        GC would not scan the highest slot in the stack, and would instead
        scan the slot below the stack, which is technically outside the stack.

        The span is now fixed to be from topOfStack() + 1 to highAddress().
        highAddress() points at the slot above the highest slot in the stack.
        This means the GC will now correctly scan the stack from its lowest to
        its highest slots (inclusive).

        (JSC::JSStack::sanitizeStack):
        - Similar to the gatherConservativeRoots() case, sanitizeStack() was
        nullifying a span of stack that starts at 2 past the lowest slot in
        the stack.

        This is because topOfStack() points at the slot below the lowest slot
        in the stack, and m_lastStackTop points to an old topOfStack(), i.e. it
        potentially points to a slot that is not in the region of memory
        allocated for the stack.
        We now add 1 to both of these values to ensure that we're zeroing a
        region that is in the stack's allocated memory, and stop at the slot
        (inclusive) just below the stack's current lowest used slot.

        * interpreter/JSStack.h:
        * interpreter/JSStackInlines.h:
        - Made topOfStack() public because CLoop::execute() needs it.
        - The LLINT assembly (in CLoop::execute()) now takes care of pushing
        and popping the stack. Hence, we no longer need JSStack's pushFrame,
        popFrame, and the stack fence infrastructure which relies on pushFrame
        and popFrame. These are now removed.

        * llint/LLIntOpcode.h:
        - Added new pseudo opcodes:
        - llint_return_to_host: this is the pseudo return address for returning
        to the host, i.e. returning from CLoop::execute().
        - llint_cloop_did_return_from_js_X: these are synthesized by the cloop
        offlineasm as needed, i.e. every time it needs to generate code for a
        cloopCallJSFunction "instruction". These are the opcodes that serve as
        the return points that we store in lr and return to when we see a
        "ret" opcode.

        While the offlineasm automatically generates these in LLIntAssembly.h,
        we have to manually add the declarations of these opcodes here. If we
        end up generating more or fewer in the future (due to changes in the
        LLINT assembly code), then we'll get compiler errors that will tell us
        to update this list.

        * llint/LLIntSlowPaths.cpp:
        (JSC::LLInt::llint_stack_check_at_vm_entry):
        * llint/LLIntSlowPaths.h:
        - This slow path isn't needed for the non C loop build because the
        stack is finite sized and grows itself on access. For the C loop, we
        need this check function to give the JSStack a chance to grow the
        stack.

        * llint/LowLevelInterpreter64.asm:
        - Added a call to llint_stack_check_at_vm_entry in doCallToJavaScript().
        - Fixed up calls and stack adjustments.
        - In makeHostFunctionCall(), the non C loop build will push lr and cfr
        when we call the host function. That's why we adjust the sp by 16
        before the call. For the C loop build, we need to set the lr and cfr
        ourselves here.

        - In nativeCallTrampoline(), unlike makeHostFunctionCall(), the return
        address and cfr have already been set in the frame. Hence, we didn't
        have to do anything extra. Also got rid of the distinct block for the
        C_LOOP and just reuse the block for ARM64 since it is, for the most
        part, exactly what the C_LOOP needs.

        * llint/LowLevelInterpreter.asm:
        - Added push/pop of lr and cfr as needed.
        - In callTargetFunction(), make the C_LOOP use the same code as other
        targets except for the call instruction.
        - Same for slowPathForCall().
        - In prologue(), exclude the OSR check code for the C_LOOP build since
        it is not needed. Also added popping of cfr and lr since I'm working
        through this logic already.

        * llint/LowLevelInterpreter.cpp:
        (JSC::CLoopRegister::operator Register*):
        (JSC::CLoop::execute):
        - Added TRACE_OPCODE() for debugging use only.
        - Simplified some CLoopRegister names.
        - Initialize the needed registers and incoming arguments before
        entering the interpreter loop.
        - Added llint_return_to_host as the exit opcode for returning from
        CLoop::execute(). We set it as the value of lr (the return address)
        before we enter the interpreter loop.
        - Updated the getHostCallReturnValue opcode handler to match the
        current getHostCallReturnValue and getHostCallReturnValueWithExecState
        code in JITOperations.cpp.

        * offlineasm/cloop.rb:
        - Updated C loop register names.
        - Added tracking of the number of cloop_did_return_from_js labels.
        - Added push and pop, and changed the implementation of the
        cloopCallJSFunction pseudo instruction to use the synthesized
        cloop_did_return_from_js opcodes / labels as return addresses.
        * runtime/Executable.cpp:
        - Fix C loop build breakage caused by a prior patch.

        * runtime/VM.cpp:
        (JSC::VM::VM):
        - Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.

2014-01-13  Michael Saboff  <msaboff@apple.com>
branches/jsCStack/Source/JavaScriptCore/heap/Heap.cpp
r161575 → r161927

 /*
- * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2013, 2014 Apple Inc. All rights reserved.
  * Copyright (C) 2007 Eric Seidel <eric@webkit.org>
  *
…
     }

-#if ENABLE(LLINT_CLOOP)
+#if ENABLE(LLINT_C_LOOP)
     ConservativeRoots stackRoots(&m_objectSpace.blocks(), &m_storageSpace);
     {
…
         visitor.donateAndDrain();
     }
-#if ENABLE(LLINT_CLOOP)
+#if ENABLE(LLINT_C_LOOP)
     {
         GCPHASE(VisitStackRoots);
branches/jsCStack/Source/JavaScriptCore/interpreter/JSStack.cpp
r161582 → r161927

 void JSStack::gatherConservativeRoots(ConservativeRoots& conservativeRoots)
 {
-    conservativeRoots.add(baseOfStack(), topOfStack());
+    conservativeRoots.add(topOfStack() + 1, highAddress());
 }

 void JSStack::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
 {
-    conservativeRoots.add(baseOfStack(), topOfStack(), jitStubRoutines, codeBlocks);
+    conservativeRoots.add(topOfStack() + 1, highAddress(), jitStubRoutines, codeBlocks);
 }
…
     if (m_lastStackTop < topOfStack()) {
-        char* begin = reinterpret_cast<char*>(m_lastStackTop);
-        char* end = reinterpret_cast<char*>(topOfStack());
+        char* begin = reinterpret_cast<char*>(m_lastStackTop + 1);
+        char* end = reinterpret_cast<char*>(topOfStack() + 1);
         memset(begin, 0, end - begin);
     }
branches/jsCStack/Source/JavaScriptCore/interpreter/JSStack.h
r161582 → r161927

 #include <wtf/PageReservation.h>
 #include <wtf/VMTags.h>
-
-#if ENABLE(LLINT_C_LOOP)
-#define ENABLE_DEBUG_JSSTACK 0
-#if !defined(NDEBUG) && !defined(ENABLE_DEBUG_JSSTACK)
-#define ENABLE_DEBUG_JSSTACK 1
-#endif
-#endif // ENABLE(LLINT_C_LOOP)

 namespace JSC {
…
     void setReservedZoneSize(size_t);

-    CallFrame* pushFrame(class CodeBlock*, JSScope*, int argsCount, JSObject* callee);
-
-    void popFrame(CallFrame*);
-
-#if ENABLE(DEBUG_JSSTACK)
-    void installFence(CallFrame*, const char *function = "", int lineNo = 0);
-    void validateFence(CallFrame*, const char *function = "", int lineNo = 0);
-    static const int FenceSize = 4;
-#else // !ENABLE(DEBUG_JSSTACK)
-    void installFence(CallFrame*, const char* = "", int = 0) { }
-    void validateFence(CallFrame*, const char* = "", int = 0) { }
-#endif // !ENABLE(DEBUG_JSSTACK)
+    inline Register* topOfStack();
 #endif // ENABLE(LLINT_C_LOOP)

 private:
-
-    inline Register* topOfFrameFor(CallFrame*);
-    inline Register* topOfStack();

 #if ENABLE(LLINT_C_LOOP)
…

 #if ENABLE(LLINT_C_LOOP)
+    inline Register* topOfFrameFor(CallFrame*);
+
     Register* reservationTop() const
     {
…
         return reinterpret_cast_ptr<Register*>(reservationTop);
     }
-
-#if ENABLE(DEBUG_JSSTACK)
-    static JSValue generateFenceValue(size_t argIndex);
-    void installTrapsAfterFrame(CallFrame*);
-    Register* startOfFrameFor(CallFrame*);
-#else
-    void installTrapsAfterFrame(CallFrame*) { }
-#endif

     bool grow(Register* newTopOfStack);
branches/jsCStack/Source/JavaScriptCore/interpreter/JSStackInlines.h
r161582 → r161927

 }

+#if ENABLE(LLINT_C_LOOP)
+
 inline Register* JSStack::topOfFrameFor(CallFrame* frame)
 {
…
 {
     return topOfFrameFor(m_topCallFrame);
-}
-
-#if ENABLE(LLINT_C_LOOP)
-
-#if ENABLE(DEBUG_JSSTACK)
-inline Register* JSStack::startOfFrameFor(CallFrame* frame)
-{
-    CallFrame* callerFrame = frame->callerFrameSkippingVMEntrySentinel();
-    return topOfFrameFor(callerFrame);
-}
-#endif // ENABLE(DEBUG_JSSTACK)
-
-inline CallFrame* JSStack::pushFrame(class CodeBlock* codeBlock, JSScope* scope, int argsCount, JSObject* callee)
-{
-    ASSERT(!!scope);
-    Register* oldEnd = topOfStack();
-
-    // Ensure that we have enough space for the parameters:
-    size_t paddedArgsCount = argsCount;
-    if (codeBlock) {
-        size_t numParameters = codeBlock->numParameters();
-        if (paddedArgsCount < numParameters)
-            paddedArgsCount = numParameters;
-    }
-
-    Register* newCallFrameSlot = oldEnd - paddedArgsCount - (2 * JSStack::CallFrameHeaderSize) + 1;
-
-#if ENABLE(DEBUG_JSSTACK)
-    newCallFrameSlot -= JSStack::FenceSize;
-#endif
-
-    Register* topOfStack = newCallFrameSlot;
-    if (!!codeBlock)
-        topOfStack += codeBlock->stackPointerOffset();
-
-    // Ensure that we have the needed stack capacity to push the new frame:
-    if (!grow(topOfStack))
-        return 0;
-
-    // Compute the address of the new VM sentinel frame for this invocation:
-    CallFrame* newVMEntrySentinelFrame = CallFrame::create(newCallFrameSlot + paddedArgsCount + JSStack::CallFrameHeaderSize);
-    ASSERT(!!newVMEntrySentinelFrame);
-
-    // Compute the address of the new frame for this invocation:
-    CallFrame* newCallFrame = CallFrame::create(newCallFrameSlot);
-    ASSERT(!!newCallFrame);
-
-    // The caller frame should always be the real previous frame on the stack,
-    // and not a potential GlobalExec that was passed in. Point callerFrame to
-    // the top frame on the stack.
-    CallFrame* callerFrame = m_topCallFrame;
-
-    // Initialize the VM sentinel frame header:
-    newVMEntrySentinelFrame->initializeVMEntrySentinelFrame(callerFrame);
-
-    // Initialize the callee frame header:
-    newCallFrame->init(codeBlock, 0, scope, newVMEntrySentinelFrame, argsCount, callee);
-
-    ASSERT(!!newCallFrame->scope());
-
-    // Pad additional args if needed:
-    // Note: we need to subtract 1 from argsCount and paddedArgsCount to
-    // exclude the this pointer.
-    for (size_t i = argsCount-1; i < paddedArgsCount-1; ++i)
-        newCallFrame->setArgument(i, jsUndefined());
-
-    installFence(newCallFrame, __FUNCTION__, __LINE__);
-    validateFence(newCallFrame, __FUNCTION__, __LINE__);
-    installTrapsAfterFrame(newCallFrame);
-
-    // Push the new frame:
-    m_topCallFrame = newCallFrame;
-
-    return newCallFrame;
-}
-
-inline void JSStack::popFrame(CallFrame* frame)
-{
-    validateFence(frame, __FUNCTION__, __LINE__);
-
-    // Pop off the callee frame and the sentinel frame.
-    CallFrame* callerFrame = frame->callerFrame()->vmEntrySentinelCallerFrame();
-
-    // Pop to the caller:
-    m_topCallFrame = callerFrame;
-
-    // If we are popping the very first frame from the stack i.e. no more
-    // frames before this, then we can now safely shrink the stack. In
-    // this case, we're shrinking all the way to the beginning since there
-    // are no more frames on the stack.
-    if (!callerFrame)
-        shrink(highAddress());
-
-    installTrapsAfterFrame(callerFrame);
 }
…
 }

-#if ENABLE(DEBUG_JSSTACK)
-inline JSValue JSStack::generateFenceValue(size_t argIndex)
-{
-    unsigned fenceBits = 0xfacebad0 | ((argIndex+1) & 0xf);
-    JSValue fenceValue = JSValue(fenceBits);
-    return fenceValue;
-}
-
-// The JSStack fences mechanism works as follows:
-// 1. A fence is a number (JSStack::FenceSize) of JSValues that are initialized
-//    with values generated by JSStack::generateFenceValue().
-// 2. When pushFrame() is called, the fence is installed after the max extent
-//    of the previous topCallFrame and the last arg of the new frame:
-//
-//                 |                                      |
-//                 |--------------------------------------|
-//                 |    Frame Header of previous frame    |
-//                 |--------------------------------------|
-// topCallFrame -->|                                      |
-//                 |      Locals of previous frame        |
-//                 |--------------------------------------|
-//                 |          *** the Fence ***           |
-//                 |--------------------------------------|
-//                 |    VM entry sentinel frame header    |
-//                 |--------------------------------------|
-//                 |          Args of new frame           |
-//                 |--------------------------------------|
-//                 |      Frame Header of new frame       |
-//                 |--------------------------------------|
-//        frame -->|        Locals of new frame           |
-//                 |                                      |
-//
-// 3. In popFrame() and elsewhere, we can call JSStack::validateFence() to
-//    assert that the fence contains the values we expect.
-
-inline void JSStack::installFence(CallFrame* frame, const char *function, int lineNo)
-{
-    UNUSED_PARAM(function);
-    UNUSED_PARAM(lineNo);
-    Register* startOfFrame = startOfFrameFor(frame);
-
-    // The last argIndex is at:
-    size_t maxIndex = frame->argIndexForRegister(startOfFrame) + 1;
-    size_t startIndex = maxIndex - FenceSize;
-    for (size_t i = startIndex; i < maxIndex; ++i) {
-        JSValue fenceValue = generateFenceValue(i);
-        frame->setArgument(i, fenceValue);
-    }
-}
-
-inline void JSStack::validateFence(CallFrame* frame, const char *function, int lineNo)
-{
-    UNUSED_PARAM(function);
-    UNUSED_PARAM(lineNo);
-    ASSERT(!!frame->scope());
-    Register* startOfFrame = startOfFrameFor(frame);
-    size_t maxIndex = frame->argIndexForRegister(startOfFrame) + 1;
-    size_t startIndex = maxIndex - FenceSize;
-    for (size_t i = startIndex; i < maxIndex; ++i) {
-        JSValue fenceValue = generateFenceValue(i);
-        JSValue actualValue = frame->getArgumentUnsafe(i);
-        ASSERT(fenceValue == actualValue);
-    }
-}
-
-// When debugging the JSStack, we install bad values after the extent of the
-// topCallFrame at the end of pushFrame() and popFrame(). The intention is
-// to trigger crashes in the event that memory in this supposedly unused
-// region is read and consumed without proper initialization. After the trap
-// words are installed, the stack looks like this:
-//
-//                 |                             |
-//                 |-----------------------------|
-//                 |    Frame Header of frame    |
-//                 |-----------------------------|
-// topCallFrame -->|                             |
-//                 |       Locals of frame       |
-//                 |-----------------------------|
-//                 |     *** Trap words ***      |
-//                 |-----------------------------|
-//                 |       Unused space ...      |
-//                 |             ...             |
-
-inline void JSStack::installTrapsAfterFrame(CallFrame* frame)
-{
-    Register* topOfFrame = topOfFrameFor(frame);
-    const int sizeOfTrap = 64;
-    int32_t* startOfTrap = reinterpret_cast<int32_t*>(topOfFrame);
-    int32_t* endOfTrap = startOfTrap - sizeOfTrap;
-    int32_t* endOfCommitedMemory = reinterpret_cast<int32_t*>(m_commitTop);
-
-    // Make sure we're not exceeding the amount of available memory to write to:
-    if (endOfTrap < endOfCommitedMemory)
-        endOfTrap = endOfCommitedMemory;
-
-    // Lay the traps:
-    int32_t* p = startOfTrap;
-    while (--p >= endOfTrap)
-        *p = 0xabadcafe; // A bad word to trigger a crash if deref'ed.
-}
-#endif // ENABLE(DEBUG_JSSTACK)
 #endif // ENABLE(LLINT_C_LOOP)
branches/jsCStack/Source/JavaScriptCore/llint/LLIntOpcode.h
r161219 → r161927

 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #define FOR_EACH_LLINT_NOJIT_NATIVE_HELPER(macro) \
     macro(getHostCallReturnValue, 1) \
+    macro(llint_return_to_host, 1) \
     macro(llint_call_to_javascript, 1) \
     macro(llint_call_to_native_function, 1) \
-    macro(handleUncaughtException, 1)
+    macro(handleUncaughtException, 1) \
+    \
+    macro(llint_cloop_did_return_from_js_1, 1) \
+    macro(llint_cloop_did_return_from_js_2, 1) \
+    macro(llint_cloop_did_return_from_js_3, 1) \
+    macro(llint_cloop_did_return_from_js_4, 1) \
+    macro(llint_cloop_did_return_from_js_5, 1) \
+    macro(llint_cloop_did_return_from_js_6, 1) \
+    macro(llint_cloop_did_return_from_js_7, 1) \

 #else // !ENABLE(LLINT_C_LOOP)
branches/jsCStack/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
r161575 → r161927

 /*
- * Copyright (C) 2011, 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 }

+#if ENABLE(LLINT_C_LOOP)
+SlowPathReturnType llint_stack_check_at_vm_entry(VM* vm, Register* newTopOfStack)
+{
+    bool success = vm->interpreter->stack().ensureCapacityFor(newTopOfStack);
+    return encodeResult(reinterpret_cast<void*>(success), 0);
+}
+#endif
+
 } } // namespace JSC::LLInt
branches/jsCStack/Source/JavaScriptCore/llint/LLIntSlowPaths.h
r161219 → r161927

 /*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_put_to_scope);
 extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM*, ProtoCallFrame*);
+#if ENABLE(LLINT_C_LOOP)
+extern "C" SlowPathReturnType llint_stack_check_at_vm_entry(VM*, Register*);
+#endif

 } } // namespace JSC::LLInt
branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r161913 → r161927

 macro checkStackPointerAlignment(tempReg, location)
-    if ARM64
+    if ARM64 or C_LOOP
         # ARM64 will check for us!
+        # C_LOOP does not need the alignment, and can use a little perf
+        # improvement from avoiding useless work.
     else
         andp sp, 0xf, tempReg
…
 macro preserveCallerPCAndCFR()
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
-        # In C_LOOP case, we're only preserving the bytecode vPC.
-        # FIXME: Need to fix for other ports
-        # move lr, destinationRegister
+        push lr
+        push cfr
+        move sp, cfr
     elsif X86 or X86_64
         push cfr
…
 macro restoreCallerPCAndCFR()
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
-        # In C_LOOP case, we're only preserving the bytecode vPC.
-        # FIXME: Need to fix for other ports
-        # move lr, destinationRegister
+        move cfr, sp
+        pop cfr
+        pop lr
     elsif X86 or X86_64
         move cfr, sp
…
     elsif ARM64
         pushLRAndFP
-    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+        push lr
         push cfr
-        push lr
     end
     move sp, cfr
…
     elsif ARM64
         popLRAndFP
-    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+        pop cfr
         pop lr
-        pop cfr
     end
 end
…
     elsif ARM64
         pushLRAndFP
-    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+        push lr
         push lr
         push cfr
…
     elsif ARM64
         popLRAndFP
-    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
+        pop cfr
         pop cfr
         pop lr
…
 macro callTargetFunction(callLinkInfo, calleeFramePtr)
+    move calleeFramePtr, sp
     if C_LOOP
         cloopCallJSFunction LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
     else
-        move calleeFramePtr, sp
         call LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
-        restoreStackPointerAfterCall()
-        dispatchAfterCall()
     end
+    restoreStackPointerAfterCall()
+    dispatchAfterCall()
 end
…
         slowPath,
         macro (callee)
+            btpz t1, .dontUpdateSP
+            addp CallerFrameAndPCSize, t1, sp
+        .dontUpdateSP:
             if C_LOOP
                 cloopCallJSFunction callee
             else
-                btpz t1, .dontUpdateSP
-                addp CallerFrameAndPCSize, t1, sp
-            .dontUpdateSP:
                 call callee
-                restoreStackPointerAfterCall()
-                dispatchAfterCall()
             end
+            restoreStackPointerAfterCall()
+            dispatchAfterCall()
         end)
 end
…
     end
     codeBlockGetter(t1)
+    if C_LOOP
+    else
     baddis 5, CodeBlock::m_llintExecuteCounter + ExecutionCounter::m_counter[t1], .continue
     cCall2(osrSlowPath, cfr, PC)
…
     if ARM64
         popLRAndFP
+    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
+        pop cfr
+        pop lr
     else
         pop cfr
…
     codeBlockGetter(t1)
 .continue:
+    end
+
     codeBlockSetter(t1)
branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
r161913 → r161927

 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #define OFFLINE_ASM_END

+#if ENABLE(OPCODE_TRACING)
+#define TRACE_OPCODE(opcode) dataLogF("   op %s\n", #opcode)
+#else
+#define TRACE_OPCODE(opcode)
+#endif
+
 // To keep compilers happy in case of unused labels, force usage of the label:
 #define USE_LABEL(label) \
…
     } while (false)

-#define OFFLINE_ASM_OPCODE_LABEL(opcode) DEFINE_OPCODE(opcode) USE_LABEL(opcode);
+#define OFFLINE_ASM_OPCODE_LABEL(opcode) DEFINE_OPCODE(opcode) USE_LABEL(opcode); TRACE_OPCODE(opcode);

 #if ENABLE(COMPUTED_GOTO_OPCODES)
…
 #endif // !USE(JSVALUE64)

+    intptr_t* ip;
     int8_t* i8p;
     void* vp;
+    CallFrame* callFrame;
     ExecState* execState;
     void* instruction;
…
     operator VM*() { return vm; }
     operator ProtoCallFrame*() { return protoCallFrame; }
+    operator Register*() { return reinterpret_cast<Register*>(vp); }

 #if USE(JSVALUE64)
…
     // 3. 64 bit result values will be in t0.

-    CLoopRegister t0, t1, t2, t3, t5, sp;
+    CLoopRegister t0, t1, t2, t3, t5, sp, cfr, lr, pc;
 #if USE(JSVALUE64)
-    CLoopRegister rBasePC, tagTypeNumber, tagMask;
+    CLoopRegister pcBase, tagTypeNumber, tagMask;
 #endif
-    CLoopRegister rRetVPC;
     CLoopDoubleRegister d0, d1;

-    Instruction* vPC;
-
-    // rPC is an alias for vPC. Set up the alias:
-    CLoopRegister& rPC = *CAST<CLoopRegister*>(&vPC);
+    lr.opcode = getOpcode(llint_return_to_host);
+    sp.vp = vm->interpreter->stack().topOfStack() + 1;
+    cfr.callFrame = vm->topCallFrame;
+#ifndef NDEBUG
+    void* startSP = sp.vp;
+    CallFrame* startCFR = cfr.callFrame;
+#endif

     // Initialize the incoming args for doCallToJavaScript:
…
 #endif // USE(JSVALUE64)

-    // cfr is an alias for callFrame. Set up this alias:
-    CallFrame* callFrame;
-    CLoopRegister& cfr = *CAST<CLoopRegister*>(&callFrame);
-
-    // Simulate a native return PC which should never be used:
-    rRetVPC.i = 0xbbadbeef;
-
     // Interpreter variables for value passing between opcodes and/or helpers:
     NativeFunction nativeFunc = 0;
     JSValue functionReturnValue;
     Opcode opcode = getOpcode(entryOpcodeID);
+
+#define PUSH(cloopReg) \
+    do { \
+        sp.ip--; \
+        *sp.ip = cloopReg.i; \
+    } while (false)
+
+#define POP(cloopReg) \
+    do { \
+        cloopReg.i = *sp.ip; \
+        sp.ip++; \
+    } while (false)

 #if ENABLE(OPCODE_STATS)
…
 #if USE(JSVALUE32_64)
-#define FETCH_OPCODE() vPC->u.opcode
+#define FETCH_OPCODE() pc.opcode
 #else // USE(JSVALUE64)
-#define FETCH_OPCODE() *bitwise_cast<Opcode*>(rBasePC.i8p + rPC.i * 8)
+#define FETCH_OPCODE() *bitwise_cast<Opcode*>(pcBase.i8p + pc.i * 8)
 #endif // USE(JSVALUE64)
…
 #include "LLIntAssembly.h"

+        OFFLINE_ASM_GLUE_LABEL(llint_return_to_host)
+        {
+            ASSERT(startSP == sp.vp);
+            ASSERT(startCFR == cfr.callFrame);
+#if USE(JSVALUE32_64)
+            return JSValue(t1.i, t0.i); // returning JSValue(tag, payload);
+#else
+            return JSValue::decode(t0.encodedJSValue);
+#endif
+        }
+
         // In the ASM llint, getHostCallReturnValue() is a piece of glue
-        // function provided by the JIT (see dfg/DFGOperations.cpp).
+        // function provided by the JIT (see jit/JITOperations.cpp).
         // We simulate it here with a pseduo-opcode handler.
         OFFLINE_ASM_GLUE_LABEL(getHostCallReturnValue)
         {
-            // The ASM part pops the frame:
-            callFrame = callFrame->callerFrame();
-
             // The part in getHostCallReturnValueWithExecState():
             JSValue result = vm->hostCallReturnValue;
…
             t0.encodedJSValue = JSValue::encode(result);
 #endif
-            goto doReturnHelper;
+            opcode = lr.opcode;
+            DISPATCH_OPCODE();
         }
…
     } // END bytecode handler cases.
-
-    //========================================================================
-    // Bytecode helpers:
-
-    doReturnHelper: {
-        ASSERT(!!callFrame);
-        if (callFrame->isVMEntrySentinel()) {
-#if USE(JSVALUE32_64)
-            return JSValue(t1.i, t0.i); // returning JSValue(tag, payload);
-#else
-            return JSValue::decode(t0.encodedJSValue);
-#endif
-        }
-
-        // The normal ASM llint call implementation returns to the caller as
-        // recorded in rRetVPC, and the caller would fetch the return address
-        // from ArgumentCount.tag() (see the dispatchAfterCall() macro used in
-        // the callTargetFunction() macro in the llint asm files).
-        //
-        // For the C loop, we don't have the JIT stub to do this work for us.
-        // So, we jump to llint_generic_return_point.
-
-        vPC = callFrame->currentVPC();
-
-#if USE(JSVALUE64)
-        // Based on LowLevelInterpreter64.asm's dispatchAfterCall():
-
-        // When returning from a native trampoline call, unlike the assembly
-        // LLInt, we can't simply return to the caller. In our case, we grab
-        // the caller's VPC and resume execution there. However, the caller's
-        // VPC returned by callFrame->currentVPC() is in the form of the real
-        // address of the target bytecode, but the 64-bit llint expects the
-        // VPC to be a bytecode offset. Hence, we need to map it back to a
-        // bytecode offset before we dispatch via the usual dispatch mechanism
-        // i.e. NEXT_INSTRUCTION():
-
-        CodeBlock* codeBlock = callFrame->codeBlock();
-        ASSERT(codeBlock);
-        rPC.vp = callFrame->currentVPC();
-        rPC.i = rPC.i8p - reinterpret_cast<int8_t*>(codeBlock->instructions().begin());
-        rPC.i >>= 3;
-
-        rBasePC.vp = codeBlock->instructions().begin();
-#endif // USE(JSVALUE64)
-
-        goto llint_generic_return_point;
-
-    } // END doReturnHelper.

 #if ENABLE(COMPUTED_GOTO_OPCODES)
branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r161913 → r161927

     if C_LOOP
-        # FIXME: Need to call stack check here to see if we can grow the stack.
-        # Will need to preserve registers so that we can recover if we do not end
-        # up throwing a StackOverflowError.
+        move entry, temp2
+        move vm, temp3
+        cloopCallSlowPath _llint_stack_check_at_vm_entry, vm, temp1
+        bpeq t0, 0, .stackCheckFailed
+        move temp2, entry
+        move temp3, vm
+        jmp .stackHeightOK
+
+    .stackCheckFailed:
+        move temp2, entry
+        move temp3, vm
     end
…
 macro makeJavaScriptCall(entry, temp)
     addp 16, sp
-    call entry
+    if C_LOOP
+        cloopCallJSFunction entry
+    else
+        call entry
+    end
     subp 16, sp
 end
…
         move sp, a0
     end
-    addp 16, sp
-    call temp
-    subp 16, sp
+    if C_LOOP
+        storep cfr, [sp]
+        storep lr, 8[sp]
+        cloopCallNative temp
+    else
+        addp 16, sp
+        call temp
+        subp 16, sp
+    end
 end
…
         andp MarkedBlockMask, t3
         loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
-    elsif ARM64
+    elsif ARM64 or C_LOOP
         loadp ScopeChain[cfr], t0
         andp MarkedBlockMask, t0
…
         loadp JSFunction::m_executable[t1], t1
         move t2, cfr # Restore cfr to avoid loading from stack
-        call executableOffsetToFunction[t1]
-        restoreReturnAddressBeforeReturn(t3)
-        loadp ScopeChain[cfr], t3
-        andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
-    elsif C_LOOP
-        loadp CallerFrame[cfr], t0
-        loadp ScopeChain[t0], t1
-        storep t1, ScopeChain[cfr]
-
-        loadp ScopeChain[cfr], t3
-        andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
-        storep cfr, VM::topCallFrame[t3]
-
-        move t0, t2
-        preserveReturnAddressAfterCall(t3)
-        storep t3, ReturnPC[cfr]
-        move cfr, t0
-        loadp Callee[cfr], t1
-        loadp JSFunction::m_executable[t1], t1
-        move t2, cfr
-        cloopCallNative executableOffsetToFunction[t1]
-
+        if C_LOOP
+            cloopCallNative executableOffsetToFunction[t1]
+        else
+            call executableOffsetToFunction[t1]
+        end
         restoreReturnAddressBeforeReturn(t3)
         loadp ScopeChain[cfr], t3
branches/jsCStack/Source/JavaScriptCore/offlineasm/cloop.rb
r161238 → r161927

-# Copyright (C) 2012 Apple Inc. All rights reserved.
+# Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
…
         "t3"
     when "t4"
-        "rPC"
+        "pc"
     when "t5"
         "t5"
     when "t6"
-        "rBasePC"
+        "pcBase"
     when "csr1"
         "tagTypeNumber"
…
         "cfr"
     when "lr"
-        "rRetVPC"
+        "lr"
     when "sp"
         "sp"
…
     $asm.putc "{"
     $asm.putc "    SlowPathReturnType result = #{operands[0].cLabel}(#{operands[1].clDump}, #{operands[2].clDump});"
-    $asm.putc "    decodeResult(result, t0.instruction, t1.vp);"
+    $asm.putc "    decodeResult(result, t0.vp, t1.vp);"
     $asm.putc "}"
 end

 class Instruction
+    @@didReturnFromJSLabelCounter = 0
+
     def lowerC_LOOP
         $asm.codeOrigin codeOriginString if $enableCodeOriginComments
…
         $asm.putc "CRASH(); // break instruction not implemented."
     when "ret"
-        $asm.putc "goto doReturnHelper;"
+        $asm.putc "opcode = lr.opcode;"
+        $asm.putc "DISPATCH_OPCODE();"

     when "cbeq"
…
     when "memfence"

+    when "push"
+        $asm.putc "PUSH(#{operands[0].clDump});"
     when "pop"
-        $asm.putc "RELEASE_ASSERT_NOT_REACHED(); // pop not implemented."
+        $asm.putc "POP(#{operands[0].clDump});"

     when "pushCalleeSaves"
…
     # as an opcode dispatch.
     when "cloopCallJSFunction"
+        @@didReturnFromJSLabelCounter += 1
+        $asm.putc "lr.opcode = getOpcode(llint_cloop_did_return_from_js_#{@@didReturnFromJSLabelCounter});"
         $asm.putc "opcode = #{operands[0].clValue(:opcode)};"
         $asm.putc "DISPATCH_OPCODE();"
+        $asm.putsLabel("llint_cloop_did_return_from_js_#{@@didReturnFromJSLabelCounter}")

     # We can't do generic function calls with an arbitrary set of args, but
branches/jsCStack/Source/JavaScriptCore/runtime/Executable.cpp
r161705 → r161927

 #include "Operations.h"
 #include "Parser.h"
+#include <wtf/CommaPrinter.h>
 #include <wtf/Vector.h>
 #include <wtf/text/StringBuilder.h>
branches/jsCStack/Source/JavaScriptCore/runtime/VM.cpp
r161863 → r161927

 /*
- * Copyright (C) 2008, 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2011, 2013, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
 #endif
     , m_stackLimit(0)
-#if ENABLE(LLINT_CLOOP)
+#if ENABLE(LLINT_C_LOOP)
     , m_jsStackLimit(0)
 #endif