Changeset 161927 in webkit


Timestamp:
Jan 13, 2014, 5:08:05 PM
Author:
mark.lam@apple.com
Message:

CStack: Fix 64-bit C Loop LLINT.
https://bugs.webkit.org/show_bug.cgi?id=126790.

Reviewed by Geoffrey Garen.

  1. Fixed miscellaneous bugs relevant to the C Loop LLINT (details below).
  2. Simplified CLoop::execute() by making it emulate CPU calls more closely. This is done by automatically synthesizing an opcode label at the return point after each call to JS code. The "lr" register (named after the ARM link register) is set to that return opcode label before the call. The call itself is implemented as an opcode dispatch (see the sketch below).
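
In C++ terms, here is roughly what the cloop offlineasm now emits for a JS call and for "ret" (a simplified, illustrative sketch of the generated LLIntAssembly.h code; "callee" and "N" stand in for the call target operand and the per-call-site counter tracked by cloop.rb):

    // Emitted for the Nth "cloopCallJSFunction callee" site:
    lr.opcode = getOpcode(llint_cloop_did_return_from_js_N); // synthesized return point
    opcode = callee;                                         // the call is just an opcode dispatch
    DISPATCH_OPCODE();
    // ...the label llint_cloop_did_return_from_js_N is emitted right here, so
    // execution resumes at this point once the callee executes "ret".

    // Emitted for "ret":
    opcode = lr.opcode; // "return" dispatches to the opcode label stored in lr
    DISPATCH_OPCODE();
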
  • heap/Heap.cpp:

(JSC::Heap::markRoots):

  • Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.
  • interpreter/JSStack.cpp:

(JSC::JSStack::gatherConservativeRoots):

  • Previously, we were declaring a span from baseOfStack() to topOfStack(). baseOfStack() points to the highest slot in the stack. topOfStack() points to the word below the lowest slot in the stack. The ConservativeRoots class will invert the high and low pointers to ensure that it iterates from low to high. However, with this span, the GC will not scan the highest slot in the stack, and will instead scan the slot below the stack, which is technically outside the stack.

The span is now fixed to be from topOfStack() + 1 to highAddress().
highAddress() points at the slot above the highest slot in the stack.
This means the GC will now correctly scan the stack from its lowest to its
highest slots (inclusive).
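
Concretely, the fixed span in JSStack::gatherConservativeRoots() (taken from the JSStack.cpp diff below, with explanatory comments added):

    void JSStack::gatherConservativeRoots(ConservativeRoots& conservativeRoots)
    {
        // Old span was add(baseOfStack(), topOfStack()):
        //   baseOfStack() == highest slot in the stack,
        //   topOfStack()  == one word below the lowest used slot,
        // so the highest slot was skipped and one word outside the stack was scanned.
        // New span covers every used slot, lowest to highest, inclusive:
        conservativeRoots.add(topOfStack() + 1, highAddress());
    }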

(JSC::JSStack::sanitizeStack):

  • Similar to the gatherConservativeRoots() case, sanitizeStack() was nullifying a span of stack that starts 2 slots past the lowest slot in the stack.

This is because topOfStack() points at the slot below the lowest slot
in the stack, and m_lastStackTop points to an old topOfStack() value, i.e. it
potentially points to a slot that is not in the region of memory
allocated for the stack.

We now add 1 to both of these values to ensure that we're zeroing a
region that is in the stack's allocated memory, and stop at the slot
(inclusive) just below the stack's current lowest used slot.
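
The relevant part of JSStack::sanitizeStack() after the fix (condensed from the JSStack.cpp diff below; surrounding code elided):

    void JSStack::sanitizeStack()
    {
        // ...
        if (m_lastStackTop < topOfStack()) {
            // Bump both bounds by one slot so the zeroed region stays inside the
            // stack's allocated memory and stops just below the lowest used slot.
            char* begin = reinterpret_cast<char*>(m_lastStackTop + 1);
            char* end = reinterpret_cast<char*>(topOfStack() + 1);
            memset(begin, 0, end - begin);
        }
        // ...
    }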

  • interpreter/JSStack.h:
  • interpreter/JSStackInlines.h:
  • Made topOfStack() public because CLoop::execute() needs it.
  • The LLINT assembly (in CLoop::execute()) now takes care of pushing and popping the stack. Hence, we no longer need JSStack's pushFrame, popFrame, and the stack fence infrastructure that relies on them. These are now removed.
  • llint/LLIntOpcode.h:
  • Added new pseudo opcodes:
    • llint_return_to_host: this is the pseudo return address for returning to the host, i.e. returning from CLoop::execute().
    • llint_cloop_did_return_from_js_X: these are synthesized by the cloop offlineasm as needed, i.e. every time it needs to generate code for a cloopCallJSFunction "instruction". These are the opcodes that serve as the return points that we store in lr and return to when we see a "ret" opcode.

While the offlineasm automatically generates these in LLIntAssembly.h,
we have to manually add the declarations of these opcodes here. If we end
up generating more or fewer in the future (due to changes in the LLINT
assembly code), then we'll get compiler errors that tell us to
update this list.
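
The resulting declarations in LLIntOpcode.h look like this (from the diff below; entries 3 through 6 elided here). The number of llint_cloop_did_return_from_js_N entries must stay in sync with the number of cloopCallJSFunction sites in the LLINT assembly:

    #define FOR_EACH_LLINT_NOJIT_NATIVE_HELPER(macro) \
        macro(getHostCallReturnValue, 1) \
        macro(llint_return_to_host, 1) \
        macro(llint_call_to_javascript, 1) \
        macro(llint_call_to_native_function, 1) \
        macro(handleUncaughtException, 1) \
        \
        macro(llint_cloop_did_return_from_js_1, 1) \
        macro(llint_cloop_did_return_from_js_2, 1) \
        /* ... */ \
        macro(llint_cloop_did_return_from_js_7, 1)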

  • llint/LLIntSlowPaths.cpp:

(JSC::LLInt::llint_stack_check_at_vm_entry):

  • llint/LLIntSlowPaths.h:
  • This slow path isn't needed for the non-C-loop build because the stack is finite-sized and grows itself on access. For the C loop, we need this check function to give the JSStack a chance to grow the stack.
  • llint/LowLevelInterpreter64.asm:
  • Added call to llint_stack_check_at_vm_entry in doCallToJavaScript().
  • Fixed up calls and stack adjustments.
  • In makeHostFunctionCall(), the non-C-loop build pushes lr and cfr when we call the host function. That's why we adjust sp by 16 before the call. For the C loop build, we need to set lr and cfr ourselves here.
  • In nativeCallTrampoline(), unlike makeHostFunctionCall(), the return address and cfr have already been set in the frame. Hence, we didn't have to do anything extra. Also got rid of the distinct block for the C_LOOP and just reused the block for ARM64, since it is, for the most part, exactly what the C_LOOP needs.
  • llint/LowLevelInterpreter.asm:
  • Added push/pop of lr and cfr as needed.
  • In callTargetFunction(), make the C_LOOP use the same code as other targets except for the call instruction.
  • Same for slowPathForCall().
  • In prologue(), exclude the OSR check code for the C_LOOP build since it is not needed. Also added popping of cfr and lr since I'm working through this logic already.
  • llint/LowLevelInterpreter.cpp:

(JSC::CLoopRegister::operator Register*):
(JSC::CLoop::execute):

  • Added TRACE_OPCODE() for debugging use only.
  • Simplified some CLoopRegister names.
  • Initialize the needed registers and incoming arguments before entering the interpreter loop.
  • Added llint_return_to_host as the exit opcode for returning from CLoop::execute(). We set it as the value for lr (the return address) before we enter the interpreter loop (see the sketch after this list).
  • Updated the getHostCallReturnValue opcode handler to match the current getHostCallReturnValue and getHostCallReturnValueWithExecState code in JITOperations.cpp.
  • offlineasm/cloop.rb:
  • Updated C loop register names.
  • Added tracking of the number of cloop_did_return_from_js labels.
  • Added push and pop, and changed the implementation of the cloopCallJSFunction pseudo instruction to use synthesized cloop_did_return_from_js opcodes / labels as return addresses.
  • runtime/Executable.cpp:
  • Fixed a C loop build breakage introduced by a prior patch.
  • runtime/VM.cpp:

(JSC::VM::VM):

  • Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.
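
How the pieces fit together in CLoop::execute(): the emulated registers are seeded before entering the interpreter loop, and llint_return_to_host is the opcode that hands the result back to the host. A condensed sketch, drawn from the LowLevelInterpreter.cpp changes in the diff below:

    // Before entering the interpreter loop:
    lr.opcode = getOpcode(llint_return_to_host);        // "return address" of the outermost frame
    sp.vp = vm->interpreter->stack().topOfStack() + 1;  // current top of the JSStack
    cfr.callFrame = vm->topCallFrame;

    // Exit path: a "ret" with lr == llint_return_to_host dispatches here and
    // returns the result to the host.
    OFFLINE_ASM_GLUE_LABEL(llint_return_to_host)
    {
    #if USE(JSVALUE32_64)
        return JSValue(t1.i, t0.i); // returning JSValue(tag, payload)
    #else
        return JSValue::decode(t0.encodedJSValue);
    #endif
    }
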
Location:
branches/jsCStack/Source/JavaScriptCore
Files:
14 edited

  • branches/jsCStack/Source/JavaScriptCore/ChangeLog

    r161913 r161927  
     12014-01-13  Mark Lam  <mark.lam@apple.com>
     2
     3        CStack: Fix 64-bit C Loop LLINT.
     4        https://bugs.webkit.org/show_bug.cgi?id=126790.
     5
     6        Reviewed by Geoffrey Garen.
     7
     8        1. Fixed miscellaneous bugs relevant for the C Loop LLINT (details below).
     9
     10        2. Simplified CLoop::execute() by making it more emulate CPU calls as well.
     11           This is done by automatically synthesizing an opcode label at the return
     12           point after the call to JS code. The "lr" register (named after the ARM
     13           link register) will be set to that return opcode label before the call.
     14           The call itself is implemented as an opcode dispatch.
     15
     16        * heap/Heap.cpp:
     17        (JSC::Heap::markRoots):
     18        - Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.
     19        * interpreter/JSStack.cpp:
     20        (JSC::JSStack::gatherConservativeRoots):
     21        - Previously, we were declaring a span from baseOfStack() to topOfStack().
     22          baseOfStack() points to the highest slot in the stack.
     23          topOfStack() points to the word below the lowest slot in the stack.
     24          The ConservativeRoots class will invert the high and low pointers to
     25          ensure that it iterates from low to high. However, with this span, the
     26          GC will miss not scan the highest slot in the stack, and will instead
     27          scan the slot below the stack which is technically outside the stack.
     28
     29          The span is now fixed to be from topOfStack() + 1 to highAddress().
     30          highAddress() points at the slot above the highest slot in the stack.
     31          This means GC will now correctly scan the stack from its lowest to its
     32          highest slots (inclusive).
     33
     34        (JSC::JSStack::sanitizeStack):
     35        - Similar to the gatherConservativeRoots() case, sanitizeStack() is
     36          nullifying a span of stack that starts at 2 past the lowest slot in
     37          the stack.
     38
     39          This is because topOfStack() points at the slot below the lowest slot
     40          in the stack. m_lastStackTop() points to an old topOfStack() i.e. it
     41          potentially points to a slot that is not in the region of memory
     42          allocated for the stack.
     43
     44          We now add 1 to both of these values to ensure that we're zeroing a
     45          region that is in the stack's allocated memory, and stop at the slot
     46          (inclusive) just below the stack's current lowest used slot.
     47
     48        * interpreter/JSStack.h:
     49        * interpreter/JSStackInlines.h:
     50        - Made topOfStack() public because CLoop::execute() needs it.
     51        - The LLINT assembly (in CLoop::execute()) now takes care of pushing
     52          and popping the stack. Hence, we no longer need JSStack's pushFrame,
     53          popFrame, and stack fence infrastruture which relies on pushFrame
     54          and popFrame. These are now removed.
     55
     56        * llint/LLIntOpcode.h:
     57        - Added new pseudo opcodes:
     58          - llint_return_to_host: this is the pseudo return address for returning
     59            to the host i.e. return from CLoop::execute().
     60          - llint_cloop_did_return_from_js_X: these are synthesized by the cloop offlineasm
     61            as needed i.e. every time it needs to generate code for a cloopCallJSFunction
     62            "instruction". These are the opcodes that will serve as the return point
     63            that we store in lr and return to when we see a "ret" opcode.
     64
     65            While the offlineasm automatically generates these in LLIntAssembly.h,
     66            we have to manually add the declaration of these opcodes here. If we end
     67            up generating more or less in the future (due to changes in the LLINT
     68            assembly code), then we'll get compiler errors that will tell us to
     69            update this list.
     70
     71        * llint/LLIntSlowPaths.cpp:
     72        (JSC::LLInt::llint_stack_check_at_vm_entry):
     73        * llint/LLIntSlowPaths.h:
     74        - This slow path isn't needed for the non C loop build because the stack is
     75          finite sized and grows itself on access. For the C loop, we need this check
     76          function to give the JSStack a chance to grow the stack.
     77
     78        * llint/LowLevelInterpreter64.asm:
     79        - Added call to llint_stack_check_at_vm_entry in doCallToJavaScript().
     80        - Fixed up calls and stack adjustments.
     81
     82        - In makeHostFunctionCall(), the non C loop build will push lr and cfr when
     83          we call the host function. That's why we adjust the sp by 16 before the call.
     84          For the C loop build, we need to set the lr and cfr ourselves here.
     85
     86        - In nativeCallTrampoline(), unlike makeHostFunctionCall(), the return address
     87          and cfr has already been set in the frame. Hence, we didn't have to do
     88          anything extra. Also got rid of the distinct block for the C_LOOP and just
     89          reuse the block for ARM64 since it is exactly what the C_LOOP needs for
     90          the most part.
     91
     92        * llint/LowLevelInterpreter.asm:
     93        - Added push/pop or lr and cfr as needed.
     94        - In callTargetFunction(), make the C_LOOP use the same code as other
     95          targets except for the call instruction.
     96        - Same for slowPathForCall().
     97        - In prologue(), exclude the OSR check code for the C_LOOP build since it
     98          is not needed. Also added popping of cfr and lr since I'm working through
     99          this logic already.
     100
     101        * llint/LowLevelInterpreter.cpp:
     102        (JSC::CLoopRegister::operator Register*):
     103        (JSC::CLoop::execute):
     104        - Added TRACE_OPCODE() for debugging use only.
     105        - Simplified some CLoopRegister names.
     106        - Initialize the needed registers and incoming arguments before entering
     107          the interpreter loop.
     108        - Added llint_return_to_host as the exit opcode for returning from
     109          CLoop::execute(). We set it as the value for lr (the return address)
     110          before we enter the interpreter loop.
     111        - Updated the getHostCallReturnValue opcode handler to match the current
     112          getHostCallReturnValue and getHostCallReturnValueWithExecState code in
     113          JITOperations.cpp.
     114
     115        * offlineasm/cloop.rb:
     116        - Updated C loop register names.
     117        - Added tracking of the number of cloop_did_return_from_js labels.
     118        - Added push, pop, and change the implementation of the cloopCallJSFunction
     119          pseudo instruction to use synthesized cloop_did_return_from_js opcodes
     120          / labels as return addresses.
     121
     122        * runtime/Executable.cpp:
     123        - Fix C loop build breaker by a prior patch.
     124
     125        * runtime/VM.cpp:
     126        (JSC::VM::VM):
     127        - Fixed typo: LLINT_CLOOP ==> LLINT_C_LOOP.
     128
    11292014-01-13  Michael Saboff  <msaboff@apple.com>
    2130
  • branches/jsCStack/Source/JavaScriptCore/heap/Heap.cpp

    r161575 r161927  
    11/*
    2  *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2013 Apple Inc. All rights reserved.
     2 *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2013, 2014 Apple Inc. All rights reserved.
    33 *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
    44 *
     
    460460    }
    461461
    462 #if ENABLE(LLINT_CLOOP)
     462#if ENABLE(LLINT_C_LOOP)
    463463    ConservativeRoots stackRoots(&m_objectSpace.blocks(), &m_storageSpace);
    464464    {
     
    500500            visitor.donateAndDrain();
    501501        }
    502 #if ENABLE(LLINT_CLOOP)
     502#if ENABLE(LLINT_C_LOOP)
    503503        {
    504504            GCPHASE(VisitStackRoots);
  • branches/jsCStack/Source/JavaScriptCore/interpreter/JSStack.cpp

    r161582 r161927  
    108108void JSStack::gatherConservativeRoots(ConservativeRoots& conservativeRoots)
    109109{
    110     conservativeRoots.add(baseOfStack(), topOfStack());
     110    conservativeRoots.add(topOfStack() + 1, highAddress());
    111111}
    112112
    113113void JSStack::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
    114114{
    115     conservativeRoots.add(baseOfStack(), topOfStack(), jitStubRoutines, codeBlocks);
     115    conservativeRoots.add(topOfStack() + 1, highAddress(), jitStubRoutines, codeBlocks);
    116116}
    117117
     
    121121   
    122122    if (m_lastStackTop < topOfStack()) {
    123         char* begin = reinterpret_cast<char*>(m_lastStackTop);
    124         char* end = reinterpret_cast<char*>(topOfStack());
     123        char* begin = reinterpret_cast<char*>(m_lastStackTop + 1);
     124        char* end = reinterpret_cast<char*>(topOfStack() + 1);
    125125        memset(begin, 0, end - begin);
    126126    }
  • branches/jsCStack/Source/JavaScriptCore/interpreter/JSStack.h

    r161582 r161927  
    3535#include <wtf/PageReservation.h>
    3636#include <wtf/VMTags.h>
    37 
    38 #if ENABLE(LLINT_C_LOOP)
    39 #define ENABLE_DEBUG_JSSTACK 0
    40 #if !defined(NDEBUG) && !defined(ENABLE_DEBUG_JSSTACK)
    41 #define ENABLE_DEBUG_JSSTACK 1
    42 #endif
    43 #endif // ENABLE(LLINT_C_LOOP)
    4437
    4538namespace JSC {
     
    110103        void setReservedZoneSize(size_t);
    111104
    112         CallFrame* pushFrame(class CodeBlock*, JSScope*, int argsCount, JSObject* callee);
    113 
    114         void popFrame(CallFrame*);
    115 
    116 #if ENABLE(DEBUG_JSSTACK)
    117         void installFence(CallFrame*, const char *function = "", int lineNo = 0);
    118         void validateFence(CallFrame*, const char *function = "", int lineNo = 0);
    119         static const int FenceSize = 4;
    120 #else // !ENABLE(DEBUG_JSSTACK)
    121         void installFence(CallFrame*, const char* = "", int = 0) { }
    122         void validateFence(CallFrame*, const char* = "", int = 0) { }
    123 #endif // !ENABLE(DEBUG_JSSTACK)
     105        inline Register* topOfStack();
    124106#endif // ENABLE(LLINT_C_LOOP)
    125107
    126108    private:
    127 
    128         inline Register* topOfFrameFor(CallFrame*);
    129         inline Register* topOfStack();
    130109
    131110#if ENABLE(LLINT_C_LOOP)
     
    145124
    146125#if ENABLE(LLINT_C_LOOP)
     126        inline Register* topOfFrameFor(CallFrame*);
     127
    147128        Register* reservationTop() const
    148129        {
     
    150131            return reinterpret_cast_ptr<Register*>(reservationTop);
    151132        }
    152 
    153 #if ENABLE(DEBUG_JSSTACK)
    154         static JSValue generateFenceValue(size_t argIndex);
    155         void installTrapsAfterFrame(CallFrame*);
    156         Register* startOfFrameFor(CallFrame*);
    157 #else
    158         void installTrapsAfterFrame(CallFrame*) { }
    159 #endif
    160133
    161134        bool grow(Register* newTopOfStack);
  • branches/jsCStack/Source/JavaScriptCore/interpreter/JSStackInlines.h

    r161582 r161927  
    4444}
    4545
     46#if ENABLE(LLINT_C_LOOP)
     47
    4648inline Register* JSStack::topOfFrameFor(CallFrame* frame)
    4749{
     
    5658{
    5759    return topOfFrameFor(m_topCallFrame);
    58 }
    59 
    60 #if ENABLE(LLINT_C_LOOP)
    61 
    62 #if ENABLE(DEBUG_JSSTACK)
    63 inline Register* JSStack::startOfFrameFor(CallFrame* frame)
    64 {
    65     CallFrame* callerFrame = frame->callerFrameSkippingVMEntrySentinel();
    66     return topOfFrameFor(callerFrame);
    67 }
    68 #endif // ENABLE(DEBUG_JSSTACK)
    69 
    70 inline CallFrame* JSStack::pushFrame(class CodeBlock* codeBlock, JSScope* scope, int argsCount, JSObject* callee)
    71 {
    72     ASSERT(!!scope);
    73     Register* oldEnd = topOfStack();
    74 
    75     // Ensure that we have enough space for the parameters:
    76     size_t paddedArgsCount = argsCount;
    77     if (codeBlock) {
    78         size_t numParameters = codeBlock->numParameters();
    79         if (paddedArgsCount < numParameters)
    80             paddedArgsCount = numParameters;
    81     }
    82 
    83     Register* newCallFrameSlot = oldEnd - paddedArgsCount - (2 * JSStack::CallFrameHeaderSize) + 1;
    84 
    85 #if ENABLE(DEBUG_JSSTACK)
    86     newCallFrameSlot -= JSStack::FenceSize;
    87 #endif
    88 
    89     Register* topOfStack = newCallFrameSlot;
    90     if (!!codeBlock)
    91         topOfStack += codeBlock->stackPointerOffset();
    92 
    93     // Ensure that we have the needed stack capacity to push the new frame:
    94     if (!grow(topOfStack))
    95         return 0;
    96 
    97     // Compute the address of the new VM sentinel frame for this invocation:
    98     CallFrame* newVMEntrySentinelFrame = CallFrame::create(newCallFrameSlot + paddedArgsCount + JSStack::CallFrameHeaderSize);
    99     ASSERT(!!newVMEntrySentinelFrame);
    100 
    101     // Compute the address of the new frame for this invocation:
    102     CallFrame* newCallFrame = CallFrame::create(newCallFrameSlot);
    103     ASSERT(!!newCallFrame);
    104 
    105     // The caller frame should always be the real previous frame on the stack,
    106     // and not a potential GlobalExec that was passed in. Point callerFrame to
    107     // the top frame on the stack.
    108     CallFrame* callerFrame = m_topCallFrame;
    109 
    110     // Initialize the VM sentinel frame header:
    111     newVMEntrySentinelFrame->initializeVMEntrySentinelFrame(callerFrame);
    112 
    113     // Initialize the callee frame header:
    114     newCallFrame->init(codeBlock, 0, scope, newVMEntrySentinelFrame, argsCount, callee);
    115 
    116     ASSERT(!!newCallFrame->scope());
    117 
    118     // Pad additional args if needed:
    119     // Note: we need to subtract 1 from argsCount and paddedArgsCount to
    120     // exclude the this pointer.
    121     for (size_t i = argsCount-1; i < paddedArgsCount-1; ++i)
    122         newCallFrame->setArgument(i, jsUndefined());
    123 
    124     installFence(newCallFrame, __FUNCTION__, __LINE__);
    125     validateFence(newCallFrame, __FUNCTION__, __LINE__);
    126     installTrapsAfterFrame(newCallFrame);
    127 
    128     // Push the new frame:
    129     m_topCallFrame = newCallFrame;
    130 
    131     return newCallFrame;
    132 }
    133 
    134 inline void JSStack::popFrame(CallFrame* frame)
    135 {
    136     validateFence(frame, __FUNCTION__, __LINE__);
    137 
    138     // Pop off the callee frame and the sentinel frame.
    139     CallFrame* callerFrame = frame->callerFrame()->vmEntrySentinelCallerFrame();
    140 
    141     // Pop to the caller:
    142     m_topCallFrame = callerFrame;
    143 
    144     // If we are popping the very first frame from the stack i.e. no more
    145     // frames before this, then we can now safely shrink the stack. In
    146     // this case, we're shrinking all the way to the beginning since there
    147     // are no more frames on the stack.
    148     if (!callerFrame)
    149         shrink(highAddress());
    150 
    151     installTrapsAfterFrame(callerFrame);
    15260}
    15361
     
    18492}
    18593
    186 #if ENABLE(DEBUG_JSSTACK)
    187 inline JSValue JSStack::generateFenceValue(size_t argIndex)
    188 {
    189     unsigned fenceBits = 0xfacebad0 | ((argIndex+1) & 0xf);
    190     JSValue fenceValue = JSValue(fenceBits);
    191     return fenceValue;
    192 }
    193 
    194 // The JSStack fences mechanism works as follows:
    195 // 1. A fence is a number (JSStack::FenceSize) of JSValues that are initialized
    196 //    with values generated by JSStack::generateFenceValue().
    197 // 2. When pushFrame() is called, the fence is installed after the max extent
    198 //    of the previous topCallFrame and the last arg of the new frame:
    199 //
    200 //                     | ...                                  |
    201 //                     |--------------------------------------|
    202 //                     | Frame Header of previous frame       |
    203 //                     |--------------------------------------|
    204 //    topCallFrame --> |                                      |
    205 //                     | Locals of previous frame             |
    206 //                     |--------------------------------------|
    207 //                     | *** the Fence ***                    |
    208 //                     |--------------------------------------|
    209 //                     | VM entry sentinel frame header       |
    210 //                     |--------------------------------------|
    211 //                     | Args of new frame                    |
    212 //                     |--------------------------------------|
    213 //                     | Frame Header of new frame            |
    214 //                     |--------------------------------------|
    215 //           frame --> | Locals of new frame                  |
    216 //                     |                                      |
    217 //
    218 // 3. In popFrame() and elsewhere, we can call JSStack::validateFence() to
    219 //    assert that the fence contains the values we expect.
    220 
    221 inline void JSStack::installFence(CallFrame* frame, const char *function, int lineNo)
    222 {
    223     UNUSED_PARAM(function);
    224     UNUSED_PARAM(lineNo);
    225     Register* startOfFrame = startOfFrameFor(frame);
    226 
    227     // The last argIndex is at:
    228     size_t maxIndex = frame->argIndexForRegister(startOfFrame) + 1;
    229     size_t startIndex = maxIndex - FenceSize;
    230     for (size_t i = startIndex; i < maxIndex; ++i) {
    231         JSValue fenceValue = generateFenceValue(i);
    232         frame->setArgument(i, fenceValue);
    233     }
    234 }
    235 
    236 inline void JSStack::validateFence(CallFrame* frame, const char *function, int lineNo)
    237 {
    238     UNUSED_PARAM(function);
    239     UNUSED_PARAM(lineNo);
    240     ASSERT(!!frame->scope());
    241     Register* startOfFrame = startOfFrameFor(frame);
    242     size_t maxIndex = frame->argIndexForRegister(startOfFrame) + 1;
    243     size_t startIndex = maxIndex - FenceSize;
    244     for (size_t i = startIndex; i < maxIndex; ++i) {
    245         JSValue fenceValue = generateFenceValue(i);
    246         JSValue actualValue = frame->getArgumentUnsafe(i);
    247         ASSERT(fenceValue == actualValue);
    248     }
    249 }
    250 
    251 // When debugging the JSStack, we install bad values after the extent of the
    252 // topCallFrame at the end of pushFrame() and popFrame(). The intention is
    253 // to trigger crashes in the event that memory in this supposedly unused
    254 // region is read and consumed without proper initialization. After the trap
    255 // words are installed, the stack looks like this:
    256 //
    257 //                     | ...                         |
    258 //                     |-----------------------------|
    259 //                     | Frame Header of frame       |
    260 //                     |-----------------------------|
    261 //    topCallFrame --> |                             |
    262 //                     | Locals of frame             |
    263 //                     |-----------------------------|
    264 //                     | *** Trap words ***          |
    265 //                     |-----------------------------|
    266 //                     | Unused space ...            |
    267 //                     | ...                         |
    268 
    269 inline void JSStack::installTrapsAfterFrame(CallFrame* frame)
    270 {
    271     Register* topOfFrame = topOfFrameFor(frame);
    272     const int sizeOfTrap = 64;
    273     int32_t* startOfTrap = reinterpret_cast<int32_t*>(topOfFrame);
    274     int32_t* endOfTrap = startOfTrap - sizeOfTrap;
    275     int32_t* endOfCommitedMemory = reinterpret_cast<int32_t*>(m_commitTop);
    276 
    277     // Make sure we're not exceeding the amount of available memory to write to:
    278     if (endOfTrap < endOfCommitedMemory)
    279         endOfTrap = endOfCommitedMemory;
    280 
    281     // Lay the traps:
    282     int32_t* p = startOfTrap;
    283     while (--p >= endOfTrap)
    284         *p = 0xabadcafe; // A bad word to trigger a crash if deref'ed.
    285 }
    286 #endif // ENABLE(DEBUG_JSSTACK)
    28794#endif // ENABLE(LLINT_C_LOOP)
    28895
  • branches/jsCStack/Source/JavaScriptCore/llint/LLIntOpcode.h

    r161219 r161927  
    11/*
    2  * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    3535#define FOR_EACH_LLINT_NOJIT_NATIVE_HELPER(macro) \
    3636    macro(getHostCallReturnValue, 1) \
     37    macro(llint_return_to_host, 1) \
    3738    macro(llint_call_to_javascript, 1) \
    3839    macro(llint_call_to_native_function, 1) \
    39     macro(handleUncaughtException, 1)
     40    macro(handleUncaughtException, 1) \
     41    \
     42    macro(llint_cloop_did_return_from_js_1, 1) \
     43    macro(llint_cloop_did_return_from_js_2, 1) \
     44    macro(llint_cloop_did_return_from_js_3, 1) \
     45    macro(llint_cloop_did_return_from_js_4, 1) \
     46    macro(llint_cloop_did_return_from_js_5, 1) \
     47    macro(llint_cloop_did_return_from_js_6, 1) \
     48    macro(llint_cloop_did_return_from_js_7, 1) \
    4049
    4150#else // !ENABLE(LLINT_C_LOOP)
  • branches/jsCStack/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp

    r161575 r161927  
    11/*
    2  * Copyright (C) 2011, 2012, 2013 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    14301430}
    14311431
     1432#if ENABLE(LLINT_C_LOOP)
     1433SlowPathReturnType llint_stack_check_at_vm_entry(VM* vm, Register* newTopOfStack)
     1434{
     1435    bool success = vm->interpreter->stack().ensureCapacityFor(newTopOfStack);
     1436    return encodeResult(reinterpret_cast<void*>(success), 0);
     1437}
     1438#endif
     1439
    14321440} } // namespace JSC::LLInt
    14331441
  • branches/jsCStack/Source/JavaScriptCore/llint/LLIntSlowPaths.h

    r161219 r161927  
    11/*
    2  * Copyright (C) 2011 Apple Inc. All rights reserved.
     2 * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    125125LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_put_to_scope);
    126126extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM*, ProtoCallFrame*);
     127#if ENABLE(LLINT_C_LOOP)
     128extern "C" SlowPathReturnType llint_stack_check_at_vm_entry(VM*, Register*);
     129#endif
    127130
    128131} } // namespace JSC::LLInt
  • branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter.asm

    r161913 r161927  
    218218
    219219macro checkStackPointerAlignment(tempReg, location)
    220     if ARM64
     220    if ARM64 or C_LOOP
    221221        # ARM64 will check for us!
     222        # C_LOOP does not need the alignment, and can use a little perf
     223        # improvement from avoiding useless work.
    222224    else
    223225        andp sp, 0xf, tempReg
     
    231233macro preserveCallerPCAndCFR()
    232234    if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
    233         # In C_LOOP case, we're only preserving the bytecode vPC.
    234         # FIXME: Need to fix for other ports
    235         # move lr, destinationRegister
     235        push lr
     236        push cfr
     237        move sp, cfr
    236238    elsif X86 or X86_64
    237239        push cfr
     
    247249macro restoreCallerPCAndCFR()
    248250    if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
    249         # In C_LOOP case, we're only preserving the bytecode vPC.
    250         # FIXME: Need to fix for other ports
    251         # move lr, destinationRegister
     251        move cfr, sp
     252        pop cfr
     253        pop lr
    252254    elsif X86 or X86_64
    253255        move cfr, sp
     
    286288    elsif ARM64
    287289        pushLRAndFP
    288     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     290    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     291        push lr
    289292        push cfr
    290         push lr
    291293    end
    292294    move sp, cfr
     
    298300    elsif ARM64
    299301        popLRAndFP
    300     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     302    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     303        pop cfr
    301304        pop lr
    302         pop cfr
    303305    end
    304306end
     
    311313    elsif ARM64
    312314        pushLRAndFP
    313     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     315    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     316        push lr
    314317        push lr
    315318        push cfr
     
    328331    elsif ARM64
    329332        popLRAndFP
    330     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     333    elsif C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
     334        pop cfr
    331335        pop cfr
    332336        pop lr
     
    353357
    354358macro callTargetFunction(callLinkInfo, calleeFramePtr)
     359    move calleeFramePtr, sp
    355360    if C_LOOP
    356361        cloopCallJSFunction LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
    357362    else
    358         move calleeFramePtr, sp
    359363        call LLIntCallLinkInfo::machineCodeTarget[callLinkInfo]
    360         restoreStackPointerAfterCall()
    361         dispatchAfterCall()
    362     end
     364    end
     365    restoreStackPointerAfterCall()
     366    dispatchAfterCall()
    363367end
    364368
     
    367371        slowPath,
    368372        macro (callee)
     373            btpz t1, .dontUpdateSP
     374            addp CallerFrameAndPCSize, t1, sp
     375        .dontUpdateSP:
    369376            if C_LOOP
    370377                cloopCallJSFunction callee
    371378            else
    372                 btpz t1, .dontUpdateSP
    373                 addp CallerFrameAndPCSize, t1, sp
    374             .dontUpdateSP:
    375379                call callee
    376                 restoreStackPointerAfterCall()
    377                 dispatchAfterCall()
    378380            end
     381            restoreStackPointerAfterCall()
     382            dispatchAfterCall()
    379383        end)
    380384end
     
    440444    end
    441445    codeBlockGetter(t1)
     446if C_LOOP
     447else
    442448    baddis 5, CodeBlock::m_llintExecuteCounter + ExecutionCounter::m_counter[t1], .continue
    443449    cCall2(osrSlowPath, cfr, PC)
     
    447453    if ARM64
    448454        popLRAndFP
     455    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
     456        pop cfr
     457        pop lr
    449458    else
    450459        pop cfr
     
    454463    codeBlockGetter(t1)
    455464.continue:
     465end
     466
    456467    codeBlockSetter(t1)
    457468   
  • branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp

    r161913 r161927  
    11/*
    2  * Copyright (C) 2012 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    9191#define OFFLINE_ASM_END
    9292
     93#if ENABLE(OPCODE_TRACING)
     94#define TRACE_OPCODE(opcode) dataLogF("   op %s\n", #opcode)
     95#else
     96#define TRACE_OPCODE(opcode)
     97#endif
     98
    9399// To keep compilers happy in case of unused labels, force usage of the label:
    94100#define USE_LABEL(label) \
     
    98104    } while (false)
    99105
    100 #define OFFLINE_ASM_OPCODE_LABEL(opcode) DEFINE_OPCODE(opcode) USE_LABEL(opcode);
     106#define OFFLINE_ASM_OPCODE_LABEL(opcode) DEFINE_OPCODE(opcode) USE_LABEL(opcode); TRACE_OPCODE(opcode);
    101107
    102108#if ENABLE(COMPUTED_GOTO_OPCODES)
     
    213219#endif // !USE(JSVALUE64)
    214220
     221        intptr_t* ip;
    215222        int8_t* i8p;
    216223        void* vp;
     224        CallFrame* callFrame;
    217225        ExecState* execState;
    218226        void* instruction;
     
    233241    operator VM*() { return vm; }
    234242    operator ProtoCallFrame*() { return protoCallFrame; }
     243    operator Register*() { return reinterpret_cast<Register*>(vp); }
    235244
    236245#if USE(JSVALUE64)
     
    314323    // 3. 64 bit result values will be in t0.
    315324
    316     CLoopRegister t0, t1, t2, t3, t5, sp;
     325    CLoopRegister t0, t1, t2, t3, t5, sp, cfr, lr, pc;
    317326#if USE(JSVALUE64)
    318     CLoopRegister rBasePC, tagTypeNumber, tagMask;
    319 #endif
    320     CLoopRegister rRetVPC;
     327    CLoopRegister pcBase, tagTypeNumber, tagMask;
     328#endif
    321329    CLoopDoubleRegister d0, d1;
    322330
    323     Instruction* vPC;
    324 
    325     // rPC is an alias for vPC. Set up the alias:
    326     CLoopRegister& rPC = *CAST<CLoopRegister*>(&vPC);
     331    lr.opcode = getOpcode(llint_return_to_host);
     332    sp.vp = vm->interpreter->stack().topOfStack() + 1;
     333    cfr.callFrame = vm->topCallFrame;
     334#ifndef NDEBUG
     335    void* startSP = sp.vp;
     336    CallFrame* startCFR = cfr.callFrame;
     337#endif
    327338
    328339    // Initialize the incoming args for doCallToJavaScript:
     
    338349#endif // USE(JSVALUE64)
    339350
    340     // cfr is an alias for callFrame. Set up this alias:
    341     CallFrame* callFrame;
    342     CLoopRegister& cfr = *CAST<CLoopRegister*>(&callFrame);
    343 
    344     // Simulate a native return PC which should never be used:
    345     rRetVPC.i = 0xbbadbeef;
    346 
    347351    // Interpreter variables for value passing between opcodes and/or helpers:
    348352    NativeFunction nativeFunc = 0;
    349353    JSValue functionReturnValue;
    350354    Opcode opcode = getOpcode(entryOpcodeID);
     355
     356    #define PUSH(cloopReg) \
     357        do { \
     358            sp.ip--; \
     359            *sp.ip = cloopReg.i; \
     360        } while (false)
     361   
     362    #define POP(cloopReg) \
     363        do { \
     364            cloopReg.i = *sp.ip; \
     365            sp.ip++; \
     366        } while (false)
    351367
    352368    #if ENABLE(OPCODE_STATS)
     
    358374
    359375    #if USE(JSVALUE32_64)
    360         #define FETCH_OPCODE() vPC->u.opcode
     376        #define FETCH_OPCODE() pc.opcode
    361377    #else // USE(JSVALUE64)
    362         #define FETCH_OPCODE() *bitwise_cast<Opcode*>(rBasePC.i8p + rPC.i * 8)
     378        #define FETCH_OPCODE() *bitwise_cast<Opcode*>(pcBase.i8p + pc.i * 8)
    363379    #endif // USE(JSVALUE64)
    364380
     
    409425        #include "LLIntAssembly.h"
    410426
     427        OFFLINE_ASM_GLUE_LABEL(llint_return_to_host)
     428        {
     429            ASSERT(startSP == sp.vp);
     430            ASSERT(startCFR == cfr.callFrame);
     431#if USE(JSVALUE32_64)
     432            return JSValue(t1.i, t0.i); // returning JSValue(tag, payload);
     433#else
     434            return JSValue::decode(t0.encodedJSValue);
     435#endif
     436        }
     437
    411438        // In the ASM llint, getHostCallReturnValue() is a piece of glue
    412         // function provided by the JIT (see dfg/DFGOperations.cpp).
     439        // function provided by the JIT (see jit/JITOperations.cpp).
    413440        // We simulate it here with a pseduo-opcode handler.
    414441        OFFLINE_ASM_GLUE_LABEL(getHostCallReturnValue)
    415442        {
    416             // The ASM part pops the frame:
    417             callFrame = callFrame->callerFrame();
    418 
    419443            // The part in getHostCallReturnValueWithExecState():
    420444            JSValue result = vm->hostCallReturnValue;
     
    425449            t0.encodedJSValue = JSValue::encode(result);
    426450#endif
    427             goto doReturnHelper;
     451            opcode = lr.opcode;
     452            DISPATCH_OPCODE();
    428453        }
    429454
     
    434459
    435460    } // END bytecode handler cases.
    436 
    437     //========================================================================
    438     // Bytecode helpers:
    439 
    440     doReturnHelper: {
    441         ASSERT(!!callFrame);
    442         if (callFrame->isVMEntrySentinel()) {
    443 #if USE(JSVALUE32_64)
    444             return JSValue(t1.i, t0.i); // returning JSValue(tag, payload);
    445 #else
    446             return JSValue::decode(t0.encodedJSValue);
    447 #endif
    448         }
    449 
    450         // The normal ASM llint call implementation returns to the caller as
    451         // recorded in rRetVPC, and the caller would fetch the return address
    452         // from ArgumentCount.tag() (see the dispatchAfterCall() macro used in
    453         // the callTargetFunction() macro in the llint asm files).
    454         //
    455         // For the C loop, we don't have the JIT stub to do this work for us. So,
    456         // we jump to llint_generic_return_point.
    457 
    458         vPC = callFrame->currentVPC();
    459 
    460 #if USE(JSVALUE64)
    461         // Based on LowLevelInterpreter64.asm's dispatchAfterCall():
    462 
    463         // When returning from a native trampoline call, unlike the assembly
    464         // LLInt, we can't simply return to the caller. In our case, we grab
    465         // the caller's VPC and resume execution there. However, the caller's
    466         // VPC returned by callFrame->currentVPC() is in the form of the real
    467         // address of the target bytecode, but the 64-bit llint expects the
    468         // VPC to be a bytecode offset. Hence, we need to map it back to a
    469         // bytecode offset before we dispatch via the usual dispatch mechanism
    470         // i.e. NEXT_INSTRUCTION():
    471 
    472         CodeBlock* codeBlock = callFrame->codeBlock();
    473         ASSERT(codeBlock);
    474         rPC.vp = callFrame->currentVPC();
    475         rPC.i = rPC.i8p - reinterpret_cast<int8_t*>(codeBlock->instructions().begin());
    476         rPC.i >>= 3;
    477 
    478         rBasePC.vp = codeBlock->instructions().begin();
    479 #endif // USE(JSVALUE64)
    480 
    481         goto llint_generic_return_point;
    482 
    483     } // END doReturnHelper.
    484 
    485461
    486462#if ENABLE(COMPUTED_GOTO_OPCODES)
  • branches/jsCStack/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

    r161913 r161927  
    151151
    152152    if C_LOOP
    153     # FIXME: Need to call stack check here to see if we can grow the stack.
    154     # Will need to preserve registers so that we can recover if we do not end
    155     # up throwing a StackOverflowError.
     153        move entry, temp2
     154        move vm, temp3
     155        cloopCallSlowPath _llint_stack_check_at_vm_entry, vm, temp1
     156        bpeq t0, 0, .stackCheckFailed
     157        move temp2, entry
     158        move temp3, vm
     159        jmp .stackHeightOK
     160
     161.stackCheckFailed:
     162        move temp2, entry
     163        move temp3, vm
    156164    end
    157165
     
    226234macro makeJavaScriptCall(entry, temp)
    227235    addp 16, sp
    228     call entry
     236    if C_LOOP
     237        cloopCallJSFunction entry
     238    else
     239        call entry
     240    end
    229241    subp 16, sp
    230242end
     
    238250        move sp, a0
    239251    end
    240     addp 16, sp
    241     call temp
    242     subp 16, sp
     252    if C_LOOP
     253        storep cfr, [sp]
     254        storep lr, 8[sp]
     255        cloopCallNative temp
     256    else
     257        addp 16, sp
     258        call temp
     259        subp 16, sp
     260    end
    243261end
    244262
     
    18961914        andp MarkedBlockMask, t3
    18971915        loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
    1898     elsif ARM64
     1916    elsif ARM64 or C_LOOP
    18991917        loadp ScopeChain[cfr], t0
    19001918        andp MarkedBlockMask, t0
     
    19101928        loadp JSFunction::m_executable[t1], t1
    19111929        move t2, cfr # Restore cfr to avoid loading from stack
    1912         call executableOffsetToFunction[t1]
    1913         restoreReturnAddressBeforeReturn(t3)
    1914         loadp ScopeChain[cfr], t3
    1915         andp MarkedBlockMask, t3
    1916         loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
    1917     elsif C_LOOP
    1918         loadp CallerFrame[cfr], t0
    1919         loadp ScopeChain[t0], t1
    1920         storep t1, ScopeChain[cfr]
    1921 
    1922         loadp ScopeChain[cfr], t3
    1923         andp MarkedBlockMask, t3
    1924         loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
    1925         storep cfr, VM::topCallFrame[t3]
    1926 
    1927         move t0, t2
    1928         preserveReturnAddressAfterCall(t3)
    1929         storep t3, ReturnPC[cfr]
    1930         move cfr, t0
    1931         loadp Callee[cfr], t1
    1932         loadp JSFunction::m_executable[t1], t1
    1933         move t2, cfr
    1934         cloopCallNative executableOffsetToFunction[t1]
    1935 
     1930        if C_LOOP
     1931            cloopCallNative executableOffsetToFunction[t1]
     1932        else
     1933            call executableOffsetToFunction[t1]
     1934        end
    19361935        restoreReturnAddressBeforeReturn(t3)
    19371936        loadp ScopeChain[cfr], t3
  • branches/jsCStack/Source/JavaScriptCore/offlineasm/cloop.rb

    r161238 r161927  
    1 # Copyright (C) 2012 Apple Inc. All rights reserved.
     1# Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
    22#
    33# Redistribution and use in source and binary forms, with or without
     
    8080            "t3"
    8181        when "t4"
    82             "rPC"
     82            "pc"
    8383        when "t5"
    8484            "t5"
    8585        when "t6"
    86             "rBasePC"
     86            "pcBase"
    8787        when "csr1"
    8888            "tagTypeNumber"
     
    9292            "cfr"
    9393        when "lr"
    94             "rRetVPC"
     94            "lr"
    9595        when "sp"
    9696            "sp"
     
    545545    $asm.putc "{"
    546546    $asm.putc "    SlowPathReturnType result = #{operands[0].cLabel}(#{operands[1].clDump}, #{operands[2].clDump});"
    547     $asm.putc "    decodeResult(result, t0.instruction, t1.vp);"
     547    $asm.putc "    decodeResult(result, t0.vp, t1.vp);"
    548548    $asm.putc "}"
    549549end
    550550
    551551class Instruction
     552    @@didReturnFromJSLabelCounter = 0
     553
    552554    def lowerC_LOOP
    553555        $asm.codeOrigin codeOriginString if $enableCodeOriginComments
     
    863865            $asm.putc "CRASH(); // break instruction not implemented."
    864866        when "ret"
    865             $asm.putc "goto doReturnHelper;"
     867            $asm.putc "opcode = lr.opcode;"
     868            $asm.putc "DISPATCH_OPCODE();"
    866869
    867870        when "cbeq"
     
    10841087           
    10851088        when "memfence"
     1089
     1090        when "push"
     1091            $asm.putc "PUSH(#{operands[0].clDump});"
    10861092        when "pop"
    1087             $asm.putc "RELEASE_ASSERT_NOT_REACHED(); // pop not implemented."
     1093            $asm.putc "POP(#{operands[0].clDump});"
    10881094
    10891095        when "pushCalleeSaves"
     
    11031109        # as an opcode dispatch.
    11041110        when "cloopCallJSFunction"
     1111            @@didReturnFromJSLabelCounter += 1
     1112            $asm.putc "lr.opcode = getOpcode(llint_cloop_did_return_from_js_#{@@didReturnFromJSLabelCounter});"
    11051113            $asm.putc "opcode = #{operands[0].clValue(:opcode)};"
    11061114            $asm.putc "DISPATCH_OPCODE();"
     1115            $asm.putsLabel("llint_cloop_did_return_from_js_#{@@didReturnFromJSLabelCounter}")
    11071116
    11081117        # We can't do generic function calls with an arbitrary set of args, but
  • branches/jsCStack/Source/JavaScriptCore/runtime/Executable.cpp

    r161705 r161927  
    3535#include "Operations.h"
    3636#include "Parser.h"
     37#include <wtf/CommaPrinter.h>
    3738#include <wtf/Vector.h>
    3839#include <wtf/text/StringBuilder.h>
  • branches/jsCStack/Source/JavaScriptCore/runtime/VM.cpp

    r161863 r161927  
    11/*
    2  * Copyright (C) 2008, 2011, 2013 Apple Inc. All rights reserved.
     2 * Copyright (C) 2008, 2011, 2013, 2014 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    222222#endif
    223223    , m_stackLimit(0)
    224 #if ENABLE(LLINT_CLOOP)
     224#if ENABLE(LLINT_C_LOOP)
    225225    , m_jsStackLimit(0)
    226226#endif